It's been a while since we got the CNC router up and running. What with real life, work and waiting for some new router bits to arrive, it's been a long time coming. But finally, this weekend we got our fingerboard routed to the point where it's actually usable on a guitar neck - and we mounted the neck onto a guitar to make a real, playable instrument.
[photo]
There are still a few "light leaks", but with a little bit of work with some fine-tooth files to get the LEDs to fit (where they've been hand-soldered onto the PCB with a little less care than they needed), we should have a fully sealed, ready-to-play fingerboard.
[photo]
So now we're onto designing a two-part PCB for the screen and controller.
We've got a working prototype on breadboard, so hopefully it's a straightforward translation to get it onto a PCB and embedded inside the guitar body.
Saturday, 30 April 2016
Friday, 29 April 2016
TQFP socket programmer for Atmega328 SMT chips
Coming from an "industrial" rather than a "hobby" background, I always try to incorporate some kind of ICSP header into my (final) product designs. It's just something I've always done.
There's nothing worse than being asked to support a piece of hardware years down the line, wanting to upgrade the firmware, and having to desolder microcontrollers off their controller boards; who knows what damage you do when taking them off - and can you be sure they go back on and are fully working?
If you de-solder the chip, change the firmware and reinstall it and it doesn't work, how do you trace the fault? It could be the new firmware doesn't work as expected - but it might be that you're a genius coder, got everything right first time, and just haven't quite soldered all the pins back correctly, or have a dry joint or something.
It's the main difference I see between "hobbyists" and "commercial" developers.
A hobby developer will "work forwards" - find the things that make something work and follow the route to a conclusion. Usually, a "commercial" developer (and particularly one with lots of experience of the "forward working" method) will "work backwards".
That means, start from a point of failure.
Put yourself in the shoes of the poor sap who, in four years' time, when the thing has failed or needs upgrading, is standing in front of your hardware, screwdriver in hand, cursing the one who could have made their life so much easier with just a simple tweak of the design. That poor sap might even be you one day! Commercial developers tend to start with failure and work back - if this goes wrong, how do I put it right? Before this can happen, what conditions have to be met? If these conditions are not met, should my device even be allowed to run?
And in the "industrial" microcontroller world - where PIC still reigns (or, at least, did up until about ten years ago, when I stopped being seriously involved with industrial electronics and become more of a "hobbyist" myself) that means one of two things:
Either put every chip in a socket.
That's the easiest method.
But it's also - particularly for a hobbyist trying out ideas - the most expensive, leading to larger, more cumbersome-looking designs (and in this age of miniaturisation, smaller is almost always preferable).
The other method is to put an ICSP header on everything.
That's what I normally do.
Any PIC-based design, I'll make sure my programming pins are easily accessible so, should the time ever come, it's dead easy to whack on a programmer and update the firmware. And, although ICSP programming isn't quite as common in Arduino-land, it's not unheard of.
Since using Arduino/AVR chips in a few projects this year, I've found the 0.8mm-pitch SMT chips are great for hand-soldering (0.5mm-pitch PICs, on the other hand, are a real pain to solder onto home-made PCBs). As smaller is better, and a few of the other nerds have been pushing for AVR- rather than PIC-based projects, I thought I'd try leaving off the programming header on a few homebrew projects, to get the design size down as small as possible (quite often my programming header is about the same size as the rest of the entire PCB!)
Which is where this little beauty comes in.
It's a 32-pin TQFP socket on a DIP-sized breakout board. We looked at these a few years back, and back then they were £70 or more. If you find them on Farnell, they're still stupid money.
Thankfully, the Chinese pirates are happy to knock them out at less than £15 and sell them on eBay. So it seemed like a no-brainer to get one to try out.
You can either program your AVR chips "properly", using an AVR programmer and Atmel Studio/avrdude, or you can find the serial and reset pins and program them using the bootloader over serial.
Since our chips are to be used for "one time" projects, we just burned the firmware directly onto the chip, without using the bootloader. It seemed a bit silly to burn the bootloader using the ICSP pins, then have to use a completely different set of pins to program the firmware, so we just dumped the firmware straight on.
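For reference, burning a compiled hex file straight onto the chip over the ICSP pins is a one-liner with avrdude. Something along these lines - assuming a USBasp-style programmer and a hex file called firmware.hex (m328p being avrdude's name for the atmega328p); swap in whatever programmer and filename you're actually using:

avrdude -c usbasp -p m328p -U flash:w:firmware.hex:i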
The only thing that wasn't quite so straightforward was the location of the pins.
We couldn't just assume that "pin one" on the breakout board (the top-left-most pin) was connected to pin one on the TQFP chip. In fact, it turned out not to be the case at all.
But after a bit of poking around with a multi-meter we managed to map out which of the socket pins went to which of the DIP pins on the breakout board.
Saturday, 23 April 2016
How to avoid setting off scanners at the airport
Last weekend a few of us took to the skies and went to set up an "installation" at Frankfurt. As I was "technical support" my work was done by Tuesday morning, so I flew home alone.
Feeling rather pleased with myself for knowing that an airport in Germany is a "Flughafen" and that the main train station is the "Hauptbahnhof", I felt pretty confident in getting to the airport without any problems. And so it turned out. So problem-free was my return to the airport that I had nearly four hours to kill.
So a couple of hours in, and after reading the papers (I still had Sunday's tomes with me to get through) I opened up the laptop, got a brew and a sticky bun and - thanks to the free wifi - set about a little web development.
After a couple of hours, the battery in the laptop gave up, so I got out the back-up and carried on cranking out some PHP and JavaScript on the little notebook.
Unlike British airports, where you go through security and then hang around in duty free, trying to avoid that cloying, sickly stench from the perfume counters, at Frankfurt each departure gate has its own dedicated security entrance. So it's only once your flight has been assigned a gate that you can pass through security.
So, with just an hour 'til lift-off, the gate opened, and I threw the notebook into my bag and dashed off to get through security. That's where things started to go wrong!
Firstly, I set off the body scanner.
Normally, I do this anyway. At Liverpool, Heathrow and Gatwick, minus shoes, belt, glasses and with empty pockets, I always set the scanner off. To this day, I still don't know why. A quick pat-down and everything is ok. So setting the body-scanner off at Frankfurt didn't worry me.
Except this time, there was a genuine reason.
But it took two goes to find out why - a couple of loose atmega328 surface mount chips had snagged in the lining of a pocket and it took a while to even realise that they were there.
Obviously, this didn't go down too well with the German Authorities. But, having explained that we'd just been at a conference, making and installing a custom electronics-based installation, they seemed to accept this as a reasonable excuse for carrying loose electronics components!
So, once through the scanner, I asked if I could have my rucksack.
It had been held back, as it had set off the heat detector.
A quick rummage through my dirty clothes and they found the offending notebook - still warm from being used just a few minutes earlier. Another, innocent, reason for setting off the detectors!
So I got my notebook back and the rucksack was passed through the scanner again.
And it failed the checks again. After asking if the bottle of water I was carrying could go through security (it couldn't), I'd left the bottle behind - but forgot to take out my toothpaste. And, as we all know by now, fluids on aeroplanes - however viscous - are frowned upon, especially if you say you're not carrying any.
A third check, a third, innocent excuse.
By now, I was starting to look like a hapless first-time traveller.
What started out as a humorous distraction was becoming a bit of an irritation. Unfortunately, the girl on the security desk was starting to think so too.
So I got my bag back, and my notebook, and left my toothpaste at the counter. Some loose electronic parts, undeclared fluids and something glowing white hot on the heat-sensor didn't appear suspicious at all... I put all my stuff back in my pockets, and casually asked if I could also have the laptop back, that they'd taken out of my bag.
And that's when things took a slightly darker turn.
As the girl on the security desk said "this one is not so funny".
With no further explanation, she said that "for this one, I need the police".
And went and got the police.
Not the friendly bobby in a slightly-too-tall hat with a silver knob on the top like we have "back home". This was the German Airport Security police. And not even the friendly-looking airport police with discreet holster guns. She went and got the federal police - like the army branch of the police (I'm still not exactly sure how different police forces work in Europe, but apparently there are different "grades" of police, and these were the big, scary police). And they wear body armour. And carry machine guns!
(note to self: real machine guns look surprisingly plastic-y in real life, more like a video games controller than an actual killing device!)
When two of these guys turned up, one pushed me in the chest telling me "you don't move" while the other, even more threatening-looking one (note the position of the leading finger in the image, almost on the trigger - this is how they hold their guns) barked something incomprehensible in German. I tried to talk to them via the girl on the security desk (whose English was excellent, but - now that she didn't want to be associated with me - had suddenly become less conversational, and her tone was more similar to the barking instructions coming from the police).
I desperately tried to think why they might confiscate my laptop. I'd been using it almost solidly for the last 36 hours - coding everything for the installation during the day, getting a bite to eat, then coding into the early hours of the morning. It hadn't left my sight (except during mealtimes and sleeping) for the entire time I was in Germany. I knew there was nothing wrong with it. And yet....
The laptop was wiped over, across the screen and keyboard, and the swab placed into a machine. A red light came on. The girl on the desk informed me that I had been singled out for an explosives test. And I'd failed. I didn't feel, at this point - with machine guns trained on me - that I should maybe point out that I was the least explosive person I knew and that the problem had, in fact, been with my laptop.
Then it came to me - a few weeks earlier, Jake at BuildBrighton had smashed open the old laptop battery; the replacement battery currently in the laptop, now causing a potential bomb-scare at Frankfurt airport, had obviously come from some dodgy manufacturer in China, via eBay. I tried to explain this to the guards and, as I went to remove the battery from the laptop, was told - rather more firmly this time - that I was not to move, and to touch nothing!
A few phone calls later - initially to "London" (I didn't hear who, but I distinctly heard them say "London") and I was taken away from the main counter, and into a little area around the corner - out of sight of everybody else. It was only really at this point, that something like panic started to set in.
I had my passport taken and every page copied.
Forms with "something something alerte" in block capitals on the top were filled in. My bank card details were taken. I was asked where the laptop was bought, and PC World in Hove were called. This lead to another phone call to somewhere else - who then advised the guards to phone Hewlett Packard (the manufacturers of the laptop). Following this, a call was made to "something something consulate". I was starting to really worry.
While all this was going on, the girl from the security desk took the battery out of the laptop, and took another swab from the screen and keyboard of the now open laptop (minus the potentially explosive battery). The swab went into the little machine and.... mirp, failed.
Once again I was asked where I'd been, what I'd been doing, both before and during my time in Germany. The laptop failed the "explosives test" for a third time. It was only now that I started to realise just how dodgy things looked - some loose electronics components in my pocket, a bag showing up hot in the heat scanner and failing to declare the presence of fluids in my hand-luggage. To put a tin hat on the whole affair, my laptop had now failed the test for the presence of "trace explosives" not just once, but three times!
Now I've learned to love visiting Germany.
And I've learned to love Germans. Their blunt directness can be both refreshing and funny. After finishing a meal at a restaurant (in between setting up the installation and going back to the hotel for yet more coding), a waitress told us one evening "you must now pay the bill and leave" once she realised we'd finished our meal - a directness we're still not sure is down to culture, or to a lack of nuance when translating into another language. Germans are - generally - pretty blunt, and very direct. Some people confuse this with being rude, possibly even aggressive (coupled with a language that sounds like they're shouting all the time). I'm sure they're not.
But all this means when they think you're potentially carrying explosive materials, the average German Airport Security guard, comes across as quite angry. And I don't blame them. I'd be angry too if some bastard tried to get explosives onto a 'plane I was responsible for guarding. And I'd certainly give them what-for if I caught them. Which meant, by now - and with just a few minutes until boarding - I was really starting to worry. People were just shouting and pointing guns at me, sounding more and more agitated as time went on. My flight was about to leave, and nobody knew I was in a little room away from the main waiting area. I had no idea what was being said but some pretty important-sounding people were being called.
A few more 'phone calls and another swab test. This time, and with no explanation, the light came up green. A further test also came up "clean". It seems that whatever had set the detector off was no longer present on my keyboard/screen.
So the guards turned on their heels and walked off!
The girl on the desk gave me my laptop back.
And then wished me a "good flight".
And that was that.
I'd gone from terrorist suspect #1 to "Mr Bean on holiday". And they let me get on the aeroplane.
Whether or not it's related, on returning to Heathrow, I couldn't help but notice a lot of armed police around the place. There are usually a couple of guards with guns at the airport. And maybe it's because of my experience that I was looking for them. But there were loads of armed guards around Heathrow on the afternoon I touched down.
To this day I'm not entirely sure what caused the red alert on the explosives sensor. I can appreciate that I didn't help my cause at the beginning, but trace elements of explosives?! On recounting the tale, Nick immediately suggested cinnamon.
Apparently, cinnamon (and, curiously, coriander) dust is highly volatile. It's quite possible that this is what caused the explosives sensor to flag an alert. After all, I'd been munching on a cinnamon bun (and slurping tea) just a few hours before, while using the laptop. Is it vaguely possible that traces of cinnamon were transferred onto the keyboard?
That's the only explanation I can think of. Which is why my advice for avoiding setting off the scanners when flying is:
- Don't be a pillock and walk through the body scanner with loose electronic components in your pockets
- Don't carry hot electronics devices through security
- Stay away from cinnamon buns at the airport café!
Wednesday, 13 April 2016
VB6 G-Code editor
After getting over-excited about getting VB6 to run on my Windows 10 laptop, I pulled the trigger and published the last post without actually demonstrating how the app works.
So here it is:
From the Inkscape-generated G-code file, it's now possible to select a region of the code and "export" it as a section. Before actually running the g-code, the VB6 app adds in the extremities, so when it comes to routing anything, we can double-check that it will fit inside the boundaries of the piece being milled.
The M0 command after each "preview move" halts program execution and allows us to take as long as necessary to check the position of the cutting head. The pause in the video is simply because I had to power up the spindle manually (since it can't yet take spin-start commands from Mach3).
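To make that a bit more concrete, the top of an exported section looks something like this (the co-ordinates below are made up purely for illustration - the real values are the extremes of whatever block of g-code was selected):

(preview moves - visit each extreme of the selection, pausing after each one)
G0 X12.500 Y0.000
M0
G0 X12.500 Y42.000
M0
G0 X85.000 Y42.000
M0
G0 X85.000 Y0.000
M0
(the selected g-code section follows from here)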
The routed section exactly matches the extreme positions of the shape as plotted at the very beginning. After routing the entire guitar fingerboard, we ended up with something like this....
Routing hardwoods doesn't half leave a mess on the cutting bed!
Tuesday, 12 April 2016
Welcome back VB6
It's only been a few weeks since I went nuclear on my laptop with a fresh install of Windows 10 (it behaves far better than it did following the 8-to-8.1-to-10 upgrade path but is still pretty shonky). It took only a day or two to get all my usual development tools on there - including weird stuff like HeidiSQL, WinSCP, NotePad++, WAMP as well as the usual PIC simulators, ExpressPCB and even Arduino.
But for a long while, something was missing.
I've been messing about with Raspberry Pi and Linux and Arduino and even a bit of Python. But it's taken until now to get VB6 installed on my machine. And it was like putting on an old pair of slippers. Sure, there's a hole in the toe and they're not cool enough to be seen in just popping down the road to the shops. But VB6 is an awesome bit of kit for bashing out apps in a couple of hours.
And Microsoft still haven't found the heart to kill it off.
.NET based applications are slow and bloated and buggy.
VB6 exes just work (and have done since the runtimes were included in Windows 2000). They're lightweight (just a few hundred kb), responsive (no watching the screen redraw, as often happens with .NET-based cack) and easy to deploy/distribute (just copy the .exe onto your machine and you're done - no stupid 1.5Gb framework downloads here!)
In fact, VB6 proved its worth in just 45 minutes after being installed. That's how long it took to hack together a quick g-code editor which has massively improved the usability of my CNC router, despite it not having any end stops.
My g-code editor app reads a g-code file and lists each line. You can select any line or lines and apply a "global nudge" - increase or decrease any of the values by a specific amount. You can add a bit to the Z depth (to increase a drill depth for example) or just overwrite the minimum and maximum values for the Z-axis entirely.
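The app itself is VB6, but the "global nudge" idea translates into pretty much anything. Here's a rough Python sketch of the same trick - the regular expression and axis handling are my own assumptions about simple Inkscape-style g-code, not a copy of the VB6 source:

import re

def nudge_gcode(lines, axis, offset):
    # add 'offset' to every co-ordinate on the chosen axis (e.g. 'Z') in the selected lines
    pattern = re.compile(r'([' + axis.upper() + axis.lower() + r'])(-?[0-9]*\.?[0-9]+)')
    def bump(match):
        return match.group(1) + ('%.4f' % (float(match.group(2)) + offset))
    return [pattern.sub(bump, line) for line in lines]

# e.g. deepen every Z move in the selected block by 0.5mm
selected = ['G1 X10.000 Y5.000 Z-1.000 F200', 'G1 X12.500 Y5.000 Z-1.000']
print(nudge_gcode(selected, 'Z', -0.5))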
But where it really shines is that it lets me select a block of g-code, plot a "bounding box" around the points, and create a second g-code file I can just load up into Mach3 Mill.
The new g-code file, before carrying out any of the actual instructions, moves to the top-most, left-most, right-most and bottom-most corners of the highlighted section of g-code, stopping at each point.
This gives me time to inspect the position of the cutting head to make sure it's within the boundaries of the piece I'm working on, before committing to the cut.
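For anyone who'd rather not resurrect VB6, the same idea is easy enough to knock together in Python. This is just a sketch of the concept (it only looks at the X/Y words in the selected lines, which is all the Inkscape-generated files contain), not a port of the actual app:

import re

def preview_gcode(selected_lines, safe_z=5.0):
    # collect every X and Y value in the selected block
    xs, ys = [], []
    for line in selected_lines:
        for axis, value in re.findall(r'([XY])(-?[0-9]*\.?[0-9]+)', line.upper()):
            (xs if axis == 'X' else ys).append(float(value))

    # visit each corner of the bounding box, pausing (M0) at every one
    preview = ['G0 Z%.3f' % safe_z]
    for x, y in [(min(xs), min(ys)), (min(xs), max(ys)),
                 (max(xs), max(ys)), (max(xs), min(ys))]:
        preview.append('G0 X%.3f Y%.3f' % (x, y))
        preview.append('M0')

    # then the original moves follow
    return preview + selected_lines

selected = ['G1 X10.000 Y5.000 Z-1.000 F200', 'G1 X62.500 Y18.000 Z-1.000']
print('\n'.join(preview_gcode(selected)))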
It's great for making sure I'm not about to plough through the edges of a guitar fingerboard, when routing the channels on the back. Without it, I'd just have to hope that I'd managed to line up my fingerboard absolutely perfectly when placing it on the cutting bed and I'd have to keep my fingers crossed after hitting the "go" button.
This way, I can take each section (that is cut just above each fret on the fingerboard) and make sure that the cutting head stays within the boundaries of the fingerboard. If it doesn't, I'll see it and have a chance to correct it (using the "global nudge" function) without wasting more valuable rosewood!
Monday, 11 April 2016
Creating an IR (infrared) tracking webcam
After we managed to get colour tracking with OpenCV working, we hit a snag during "real world" testing. While it works fine in principle, in a controlled environment, trying to put it to use under normal conditions wasn't quite so easy.
At first, everything worked fine - although we'd hard-coded the colour range to look for, we figured this could easily be made into a parameter. But during the course of the day, our colour detection started to behave differently.
At the start of the day, the detection routine correctly placed the centre-point right in the middle of the object it was tracking. But as the day wore on, the centre-point slowly drifted south! When it became noticeable, we looked at the raw input image and discovered that as the sunlight had shifted, coming into the room, it was casting shadows in different places on our tracking object.
We needed a way of filtering out the shadows.
As we can't control the light conditions of the room in which our object tracking will ultimately be used, it essentially means not using visible light. Which, of course, means IR (infrared).
Cheap webcams are great for tracking infrared light.
Most cameras on mobile phones can see infrared light - if you shine a remote control at your iPhone camera, you can see it blinking on and off. But some webcams don't. That's because they're usually fitted with an infrared filter, to try to maintain the right colour balance under different light sources.
Really cheap webcams simply use a cover over the sensor to take out infrared from any incoming image. It's this idea that we're going to exploit, to make our webcam see infrared.
Here's our really cheap webcam (about £6 off Amazon). We took off the lens to expose the image sensor. Shock horror, no IR filter....
Not to worry - it was fixed over the lens.
Simply removing the IR filter means that our webcam can now see infrared light. But that's only half the story. Seeing previously invisible light means we've actually got more content to track, not less. What we really need, now we can see the invisible IR light, is to remove all the visible light entering the webcam sensor.
You can use polaroid/polarising filters, but just as good (and more easily sourced) is developed camera film. Remember when you used to load a film into your camera, take your snaps, then have them developed at a chemist or camera shop? Well, it's that sort of film.
What we need is just a piece of photography film, fully opaque (black) and developed. The easiest way to achieve this is to get a black-and-white film, pull a length of it out and hold it out in the sunshine for about five minutes. Then take the film to be developed the "old school way".
We had to tell the developing company that we didn't actually want the photos (they'd just be bright white pieces of paper anyway) and to develop whatever was on the film - some chemists might return your film with "error" or some comment about it being faulty. Pre-warning them that the pictures are actually junk means that they don't waste time trying to put right something that isn't there!
Then we simply cut a small square of developed film and used it in place of the IR filter.
The result? Our webcam can see absolutely nothing. An entirely black image appeared on the screen. But when we flashed an IR LED in front of it....
Bingo!
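With the filter swapped over, tracking the LED becomes even simpler than the colour tracking - the spot is just the brightest thing in an otherwise black frame. A rough sketch of how that might look, using the same OpenCV/Python setup as our colour-tracking script (the threshold of 200 is a guess and will need tuning for your own webcam and LED):

import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # the image is essentially black apart from the IR LED, so just threshold
    # the brightness and take the centre of the resulting white blob
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(grey, 200, 255, cv2.THRESH_BINARY)

    moments = cv2.moments(mask)
    if moments['m00'] != 0:
        cx = int(moments['m10'] / moments['m00'])
        cy = int(moments['m01'] / moments['m00'])
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)   # mark the LED position in red

    cv2.imshow('ir tracking', frame)
    if cv2.waitKey(5) & 0xFF == 27:   # Esc to quit
        break

cv2.destroyAllWindows()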
Saturday, 9 April 2016
Routing a guitar fingerboard with desktop CNC
It's been a long time coming, but we finally managed to get our desktop CNC actually routing again this weekend. With a super-whizzy spindle that practically purrs (it's so quiet), it was quite exciting to see our little 1.5mm cutting bit carving into the wood, following our Inkscape designs.
Which is what we'll address first.
Creating g-code for CNCs is a doddle with Inkscape. While the software has its detractors (it's not exactly as nice to use as Adobe Illustrator, nor as fully featured as CorelDraw), for simple shape-based CAD it's really easy to use.
We drew our holes and cutting guide lines in Inkscape, then aligned them to make the whole image symmetrical. This way, we can flip the board over on the cutting bed and not worry about it needing to be re-centred, or about adding (or subtracting) an offset to compensate for it being flipped. Then - and this bit is important, or it might lock up the Python script - make sure all shapes are converted to paths.
With all the paths in the design selected, go to menu Extensions -> GCode tools -> Orientation Points
In the pop-up dialogue window, you can enter the total, final cutting depth of the tool, in the Z-axis.
After this has completed you should see an origin appear in the bottom left hand corner of the drawing. So now we need to add a tool/cutting bit. Menu Extensions -> GCodeTools again, and this time click the "tool library" and select "default".
Have a hunt around. Somewhere on your drawing should be a bright green panel, into which you can enter stuff about the cutting head.
The one we're interested in here is "depth step".
This tells Inkscape how many "passes" to create for the g-code. We're going to be plunging up to 2.5mm deep into our wood (that was the value set when we created the orientation points). But we might not want to do this all in one go. So we've set this panel up with a plunge depth of 1mm.
This means that our pattern will be routed to a depth of 1mm, then repeated, to a depth of 2mm, then finally a third time, to complete the cutting depth to make it 2.5mm in total.
Lastly, menu Extensions -> GCodeTools -> PathToGCode to actually generate the g-code file.
Here we've also selected the option to "sort paths to reduce rapid distance". This basically means that instead of routing the shapes in the order they were drawn, the cutting head will route one shape then move to the nearest - or whichever shape will reduce the overall travel of the cutting head over the entire job.
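That "nearest shape" ordering is essentially a greedy nearest-neighbour pass. Purely as an illustration of the idea (this isn't Inkscape's actual code - just the concept, treating each shape as a list of (x, y) points):

import math

def order_paths(shapes):
    # greedy ordering: always hop to the shape whose start point is closest to where we are
    remaining = list(shapes)
    position = (0.0, 0.0)       # assume the job starts at the origin
    ordered = []
    while remaining:
        nearest = min(remaining,
                      key=lambda s: math.hypot(s[0][0] - position[0], s[0][1] - position[1]))
        remaining.remove(nearest)
        ordered.append(nearest)
        position = nearest[-1]  # carry on from wherever that shape finished
    return ordered

shapes = [[(50, 5), (55, 5)], [(2, 3), (6, 3)], [(10, 20), (12, 22)]]
print(order_paths(shapes))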
Just to be sure, we load our generated g-code into the software that used to be called OpenSCAM, and now goes by the name CAMotics.
After splitting our original drawing into two different files, here's our "placemarker" pattern - short cuts at the edges of each fret wire on the fingerboard to help us line up the piece for routing.
Then we drew our holes on a separate drawing, and set them to a much deeper cutting depth.
The nice thing about OpenSCAM is that it includes a virtual cutting head. So you can play the cutting animation entirely from your generated g-code; the software not only shows you a 3d model of what your final piece will look like, but also animates the head and can also display the paths taken over the 3d image.
Convinced that we'd done a good job, we finally let our CNC run
[video]
And with the actual fingerboard in place, we drilled some holes.
Our fingerboard goes up to 7mm thick in the centre, so we don't want to try to rip all that material out in one go. In this video we probably erred on the side of caution, drilling in steps of just 1mm in the z-axis. Maybe next time we might up this to 2.5mm or even 3mm per pass.
It was only after drilling a couple of holes that we noticed our fingerboard was actually shifting as the router plunged through it. We need to use something a bit more substantial than just double-sided tape to hold the piece in place while routing! The only thing was, after drilling just two rows of holes, we noticed that they were on a definite downhill slant.
On the row above we noticed tiny metal filings as the last couple of holes were drilled (the piece was drilled in reverse order with the fifteenth fret drilled first, working back towards the top of the neck). This indicated that we'd gone too far "south" and were cutting into the fret wire.
It's quite possible that the fingerboard wasn't mounted entirely accurately, but it's also quite possible that it shifted because it wasn't held firmly enough in place. Either way, it takes only a fraction of a degree twist to stop the drilled holes from lining up in their final place; something we hadn't accounted for when we just slapped the fingerboard down with a bit of tape!
So we'll have another go at this tomorrow, but instead of marking the fret positions with the cutting head, we're going to drill lightly into the backing board that the fingerboard is mounted onto. The idea being that if we draw a line through the centre of the holes, extending it out to the sides, we'll know where to line the board up to, but also have a known position for each hole. After all, the CNC is supposed to be able to repeat cutting patterns to within 1/10th of a millimetre.
If we can see where the holes should go and place the fingerboard over them, then in theory, we can't go wrong..........
Friday, 8 April 2016
OpenCV Python and colour tracking
Despite our Unity asset being based on the OpenCV libraries (and supposedly OpenCV/java compatible) there are slight differences in the syntax that make it quite difficult to get working by porting online OpenCV examples from other languages.
So before we battled getting our Unity asset working, we thought we'd just try to get some OpenCV object detection working on a more common platform: Python.
There are loads of functions for object detection, but they're pretty CPU intensive. We did - eventually - get HoughCircle detection working, albeit quite crudely. A frame rate of about one frame every three seconds was possible, and the detection routines missed some of the biggest, most obvious circles in the image!
One of the fastest routines we managed to get working was object detection by colour.
To do this we created a threshold "hue" range that our colour should fall inside, applied this as a mask to our original webcam stream, then used the findContours function to find the mid-point of each coloured shape in the image. Finally, we chose only the largest shape from the image (to avoid picking up extraneous background noise as an object) and plotted the centre-point back onto the image.
The first step was to create a "colour threshold".
Doing this is a two-step process: first, change the RGB image into HSV.
Then, we create a range of HSV values to look for in the image.
This is done by creating two arrays of values.
To pick out, in this case, just blue colours, we look for pixels that match a specific range of hues, while at the same time, looking within a set of saturation (amount of colour compared to black & white) and value (brightness) ranges.
To find the HSV values for a given shade of blue, we used OpenCV's own function cvtColor in the Python IDE editor. This could easily be converted into a script if necessary, but we were just making a quick-n-dirty example, so entered commands into the Python interpreter directly.
Don't forget to import the libraries numpy and cv2 before starting!
Then define your colour to convert as a numpy array.
Note the colour is in BGR (not RGB) format.
my_colour = np.uint8([[[ b, g, r ]]])
Convert the array of colours from BGR to HSV using
hsv_colour = cv2.cvtColor(my_colour, cv2.COLOR_BGR2HSV)
And finally display the colour as HSV with the command
print hsv_colour
The result is an array of h, s and v values to put into the detection script.
Since we're only interested in the "hue" of the colour, we can ignore the s and v values - in our detection script, we'll set these as 100 at the lowest end of the range, and 255 at the highest. That means we're looking for any shade of our selected colour from "a little bit darker" all the way up to "almost white". (If your image contained the colour you were looking for in a dark environment, you might want to change these to something like 50 at the lowest and 150 at the upper end of the range.)
We created a hue range starting at h-10, up to h+10.
So in our example, the HSV value was 105, 228, 186.
So our HSV range in the detection script was from 90,100,100 to 110,255,255.
This results in a single black and white image as each pixel in the original is compared to the "threshold" value(s). Pixels either pass the threshold test (and turn white) or fail (and turn black). This is our mask.
OpenCV has a natty little routine that finds the contours of shapes in an image. You can think of it like "flood fill" in reverse - instead of filling an area of the same colour up to a boundary, it finds the boundary of an area of common colour.
After running this function, we end up with an array of shapes that OpenCV found in the image. So it's a simple case of looping through all shapes, finding the largest shape, and placing a dot in the middle of it.
Note that the script has placed the red dot in an area of black on the final merged image. This is quite acceptable - the shape of the blue region is two of Superman's legs, joined across the top. The centre-point of this entire region (from top-left to bottom-right) may well fall in an area that isn't actually blue in colour - if the two legs were separate, the script would place the red dot in the centre of the largest individual leg.
Because the contours function works on the entire perimeter of the shape, it allows relatively complex shapes to be detected well - even if they have "gaps" or holes in the colour mask.
Here's the final Python script:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
kernel = np.ones((5,5),np.uint8)

while(1):
    # Take each frame
    _, frame = cap.read()

    # resample (to reduce the amount of work that has to be done)
    frame = cv2.resize(frame, (0,0), fx=0.5, fy=0.5)

    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # define range of blue color in HSV
    lower_blue = np.array([90,100,100])
    upper_blue = np.array([120,255,255])

    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # these two lines swell then contract the region around
    # each contour/shape, then contract then swell it; the idea
    # is to remove areas of noise and little tiny shapes
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # now we find each shape in the image
    _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame,frame, mask= mask)

    # find the largest matching object
    max_area = 0
    r=0
    for cnt in contours:
        cnt_area = cv2.contourArea(cnt)
        if cnt_area > max_area:
            max_area = cnt_area
            best_cnt = cnt
            r=1
        # end if
    # next

    # and stick a dot in the centre of it
    if r==1:
        moments = cv2.moments(best_cnt) # Calculate moments
        if moments['m00']!=0:
            cx = int(moments['m10']/moments['m00']) # cx = M10/M00
            cy = int(moments['m01']/moments['m00']) # cy = M01/M00
            #moment_area = moments['m00'] # Contour area from moment
            # cv2.drawContours(res,[best_cnt],0,(0,255,0),1) # draw contours on final image
            cv2.circle(res,(cx,cy),5,(0,0,255),-1) # draw centroids in red color
        # end if
    # end if

    cv2.imshow('frame',frame)
    cv2.imshow('mask',mask)
    cv2.imshow('res',res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
Sunday, 3 April 2016
Pikey Programmer
Some of the nerds at BuildBrighton have commented on my battered old Transit van. Often it's filled with junk, either on its way to or from some auction or other. Regularly it's carrying paving slabs, a rotavator and all kinds of other allotment-related paraphernalia.
Steve even suggested I'm just a mongrel on a piece of blue nylon rope away from joining the "pikey brigade" as he so eloquently puts it (I know, I don't know what it means either).
So it seems quite appropriate to name my latest homebrew-pcb-etched project (a quick-and-dirty build just to use up some ferric chloride after I'd gone to the trouble of heating it for another project) the "Pikey Programmer". It's nothing more than a TQFP pad layout to a DIP-style breakout board.
Simply place one of those 32-pin atmega328 (or atmega168) chips with the 0.8mm pin pitch onto the pads (TQFP-packaged PICs have a 0.5mm pitch and are much trickier to handle when surface-mount soldering by hand) and line it up by eye.
Then squash it down with your finger while the programming takes place!
It works surprisingly well!
I did consider making it a dedicated AVR chip programmer but eventually settled on a simple breakout board and some breadboard; this way, different chips can be programmed (in different orientations if necessary) by simply swapping a few wires on the breadboard - a permanent, fixed-layout board would limit the possible uses for our "Pikey Programmer".
After testing, we found the board works equally well for both ICSP programming, and the more traditional bootloader-over-serial approach.
Friday, 1 April 2016
Unity and OpenCV
In recent weeks we've been exposed to quite a range of new platforms and programming languages. Like running Linux on a Raspberry Pi, coding in Python and hacking into OpenCV for a face detection project a little while back.
Following on from our Twitter app recently, we were asked if it could be updated to track an object or face around a screen. We're still not entirely sure about a practical use for face tracking with a tweet (although it was much easier to implement than simple object-tracking) but perhaps some kind of app with speech-bubbles might be on the cards?
The OpenCV asset for Unity is pretty spendy, but it offers a lot of potential for making cool interactive apps - not least of all AR (augmented reality). As we were looking for a "quick win" and to get something up and running quickly, we took the hit in the pocket and bought it, to see what it could do. The AR stuff would just have to wait - for now we wanted to track a canvas object against a moving shape in a webcam feed.
As it happens, the OpenCV asset comes with a load of examples, demonstrating what you can use it for. We didn't manage to get a shape-tracking example working (the ultimate aim of this particular project) but we did get face tracking working quite quickly.
The only thing we had trouble with was reflecting the co-ordinates from the OpenCV routines into co-ordinates for our Unity Canvas objects. Our OpenCV object works using a "top-left" co-ordinate system. But if we put our text box in the top left in Unity....
.... the y co-ordinate system retains the real-world orientation, where an increase in Y causes the text box to go "up", not down.
We can use positive numbers for the y-axis to position our text box, by setting the position of the box to "bottom-left".
But this means that all our y-values are relative to the bottom-left corner, not the top-left.
So we either need to use a fixed-height canvas, or know the canvas height so we can calculate the relative y-position for our text box. In the end we went for "go from the centre" and changed our OpenCV results to report the position of the centre of the found object (a face) relative to the middle of the image, instead of the top-left corner.
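The conversion itself is only a couple of subtractions. Stripped of the Unity and OpenCV specifics, this is the arithmetic we ended up with (sketched in Python here just for clarity - the real thing lives inside the Unity script):

def opencv_to_centred(cv_x, cv_y, image_width, image_height):
    # convert a top-left based OpenCV co-ordinate into one measured from the centre
    # of the image, with y increasing upwards (the way Unity positions canvas objects)
    centred_x = cv_x - (image_width / 2.0)
    centred_y = (image_height / 2.0) - cv_y   # flip the y-axis
    return centred_x, centred_y

# e.g. a face found at (400, 150) in a 640x480 webcam frame
print(opencv_to_centred(400, 150, 640, 480))   # (80.0, 90.0)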
The code was surprisingly simple to hack into the existing facial recognition examples.
Here's the result:
You can just about see the red dot drawn over the centre point of the detected face in the webcam stream. The text "tweens" to this point every 0.5 seconds, hence sometimes there's a bit of lag. The latency can be removed, by making the text jump straight to the new location, but this can make the text appear a bit "jumpy" if the centre point moves by a pixel or two, even when the target is perfectly still; tweening provides the best compromise between delay and smoothness.