It's taken a few weeks for the boards to finally get here - and a nerve-wracking few weeks at that, since the factory were unable to validate that they were sending the correct serial data out of the "data" pin - but today a large box arrived via DHL.
With over £600-worth of circuit boards still not fully tested (the factory confirmed that the hall sensors were going from high to low, but hadn't been able to demonstrate them working with the test software I emailed over), it was with some trepidation that we peeked inside.
Until the boards were actually hooked up and had failed, we could still cling to the idea that we hadn't just exchanged a load of cash for a boxful of expensive book covers!
"Sophia" from the factory remained supremely confident, and emailed to say she'd stuck a little something into the box...
We're still not exactly sure how a free Bluetooth headset is supposed to instil more confidence, but it was a nice touch anyway. A visual inspection of the boards was pretty encouraging - everything looked to be in order, each pair of boards was packed in its own anti-static bag, and the lot was well supported in the carton with plenty of polystyrene.
All that remained was to actually test the boards!
Since alignment of the connectors is absolutely critical, these have been left off - we'll solder our three-pin connectors on here ourselves (we haven't actually decided on the final design; so far it's looking like 0.1" pitch pin-header-type square sockets) using a jig to ensure that every connector is in exactly the right place (even a millimetre or two of deviation would cause board sections to go out of alignment, and we just couldn't have that!)
So on with the testing.....
We used exactly the same test rig as for our hand-soldered, homebrew boards.
100% success rate! Phew. That was a real relief.
We've now got enough boards to make up either 10 individual football pitches.... or one enormous, massive skirmish battlefield!
Saturday, 20 December 2014
Enhanced wifi controller for digital board game
Our digital board game idea depends heavily on a wi-fi based module, to transfer data from the game board to the tablet/smartphone/PC that is hosting the app running the game rules. We've spent a lot of time investigating lots of different methods of getting data from the board into some kind of controller, including:
- USB (suitable really only for PCs)
- Serial (suitable for PCs and some Android tablets)
- RF devices (a bit buggy and susceptible to noise)
- Audio (works with all tablets, even crazy Apple iDevices)
- Ethernet (works with everything but requires lots of wires)
Eventually, and thanks to the super-cheap ESP8266 modules that recently flooded onto the market, we went with wifi (prior to the ESP8266 we'd given up on wifi as too expensive, using either WiFly or HLK-RM04 modules).
Now there are a number of convoluted ways of getting your SSID and password into the wifi module (including setting it up as an access point, connecting your phone to it, running an app, entering the SSID/password combination then closing the app, disconnecting the phone from the AP, rebooting the wifi module in "client" mode, reconnecting your phone to your home router and starting another app, then using UDP broadcasts to find out the ip address of the wifi module) but in the end we went with something a little simpler - a screen and a rotary dial to "type in" your SSID and password into the wifi module (it retains this information during power cycles, so shouldn't need to be done too often).
That's been working fine for a while now, and we're at the point where we're able to share a few videos with people to gauge the level of interest in an electronically enhanced board game platform. Mostly, people have been really excited by it. But there have been maybe one or two people who didn't quite understand that the idea is to speed up tabletop gaming by having the smart device do all the dice rolling and conflict resolution for you - they wanted to be able to roll dice and somehow enter the results into the game.
At first, we just dismissed this idea as the ramblings of a few RPG-obsessed lunatics. But over time, we got to thinking - well, why not? So we've added a bit of extra code to our wifi module, to allow the user to select a dice roll result using the rotary dial and push button.
The first thing to do was draw some 1-bit bitmaps of dice. This wasn't as easy as we'd hoped (the plan was to grab some images off Google, resize them and reduce the colour depth) and we ended up drawing each dice face, pixel by pixel.
Of the two types, we preferred the "inverted" dice (though the "white" dice also look pretty nice - we may yet use those!) so we squashed them up together (to remove any unnecessary blank pixels and keep our RAM usage down to a minimum) then ran a simple VB script to create one big array of bitmap data.
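Our actual script was knocked up in VB, but the idea is simple enough that a rough AS3 equivalent looks something like the sketch below (diceStrip here is just a stand-in for the squashed-up 1-bit bitmap - the names and output format are ours to illustrate the idea, not the exact script we ran):

import flash.display.BitmapData;
import flash.utils.ByteArray;
// pack each row of a 1-bit image into bytes, most-significant bit first
function packBitmap(diceStrip:BitmapData):ByteArray{
var bytes:ByteArray=new ByteArray();
for(var y:int=0; y<diceStrip.height; y++){
var b:int=0;
var bitCount:int=0;
for(var x:int=0; x<diceStrip.width; x++){
b=(b<<1)|(diceStrip.getPixel(x,y)==0 ? 0 : 1); // black pixel = 0, anything else = 1
if(++bitCount==8){ bytes.writeByte(b); b=0; bitCount=0; }
}
if(bitCount>0){ bytes.writeByte(b<<(8-bitCount)); } // pad out the last byte of each row
}
return bytes;
}
// dump the packed data as a C-style array, ready to paste into the firmware
function toCArray(bytes:ByteArray):String{
var s:String="const unsigned char diceBitmaps[]={";
bytes.position=0;
while(bytes.bytesAvailable){
s+="0x"+bytes.readUnsignedByte().toString(16)+(bytes.bytesAvailable ? "," : "");
}
return s+"};";
}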
Here's a video of it in action:
While we've provided a method for players to roll dice and enter their results into the app, it's a little bit cumbersome and not very satisfying - stopping the game to interact with technology, then returning to the board game, breaks up the feeling of playing a tabletop game. Rolling dice and placing them over some kind of "reader", which could then automatically extract the result from the actual dice face, would be a far more "fluid" way of playing. It's just a shame that we can't do that just yet!
[blog post edit: Oh yes we can - http://nerdclub-uk.blogspot.co.uk/2014/12/dice-reading-photo-image-processing.html ]
Dice reading - photo image processing
One of the methods suggested for reading back dice rolls for our electronic board game is to make use of image processing. This seemed like massive overkill, as well as being quite nasty to implement. Sure, we're using a smart device/tablet with our board game and it has a camera in it - so holding a dice up to a camera and getting it to read the value off it makes sense. Sort of.
Until you consider how this would play out in the middle of a game. At key decision points in the game, roll a dice (or two or three) then show them, one at a time, to the camera on the smart device. Which may be a front-facing camera. Or it may be round the back. Or it might be off to one side, so you don't actually present the dice to the screen displaying the game details, but to a tiny lens on the opposite side and towards the edge of the device. Except in landscape mode, the camera isn't at that end, because the device is rotated 90 degrees the other way around..... it just gets really nasty, really quickly.
Steve pointed out that there are OV7670 UART-cameras available relatively cheaply all over the net and that at £2 each, they're only a little bit more expensive than a matrix of five reflective sensors. Our little 8-bit micro is unlikely to have the grunt-power to do much image processing (though, reading through the code below, if one could be found with enough RAM, it might just be possible) so the idea is to snap an image with the camera, stream it from the camera GRAM into the microcontroller, and send the data, byte-by-byte to the host app over wifi (using one of the ESP8266 UART-to-wifi modules).
Now we're working with the original v2 firmware on these devices, which runs at 115200bps. An image taken at 640x480 is 307,200 pixels. Even at a really low colour resolution - a single byte of 3G3R2B per pixel - that's a massive 307,200 bytes, which would take over 20 seconds to transfer for just one frame.
Luckily, the OV7670 supports a number of output modes, one of which - QQCIF - scales the captured image down to 88x72 pixels.
88x72 = 6336 pixels; at one byte per pixel that's 6336 bytes, or 50,688 bits, which could be transferred in less than half a second over the 115200bps UART-to-wifi link. Even on the "higher setting" of one byte per colour (three bytes per pixel) it's about one-and-a-half seconds to transfer the image; a much more reasonable delay time.
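The back-of-an-envelope sums (assuming raw bytes with no start/stop-bit or wifi overhead, so real-world times will be a little longer) work out like this:

// rough transfer-time sums at 115200bps
var baud:Number=115200;
var qqcifPixels:Number=88*72; // 6336 pixels
trace((qqcifPixels*8)/baud); // ~0.44s at one byte per pixel
trace((qqcifPixels*3*8)/baud); // ~1.32s at three bytes per pixel
trace((640*480*8)/baud); // ~21.3s for a full 640x480 frame at one byte per pixel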
So what does an 88x72 bitmap look like?
Here's a photo we took with a regular camera phone, of two dice on a clear plastic acrylic sheet, with the backlight on (to illuminate the face of the dice being photographed).
When scaled down and reduced to a 1-bit image (all our image processing will be done on single-bit data) it looks like this:
Although not perfect, it retains enough information to show which dots are showing on the dice. All we need to do is process the image and extract the dice values. Which is easier said than done!
Steve suggested OpenCV and an ANE to get existing processing routines running in Flash (we're coding our native apps in Flash, then compiling the same source code down for iOS, Android and PC). This took a lot of time to set up and understand, and fail to understand, and give up on. Eventually we decided to just code our own dot recognition routine!
We can't be sure that the dice in the image are "square" to the frame - they could be at any old angle. But irrespective of the angle, we're expecting to find one or more "clusters" of dots in the image. That is, one or more instances of a dot made up of a black point surrounded by white points. So the first thing we do is scan the entire bitmap (it's only 6336 bytes, remember: looping for(i=0; i<6336; i++) is actually pretty quick in AS3) and look for a black pixel with a white pixel above, a white pixel below, a white pixel to the left and a white pixel to the right.
Whenever we find this combination of points, we compare the centre pixel to the centre pixel of previously found "dots" on the dice. If it's within a few pixels, there's a very real chance that it's actually one of the dots we've already found, so it's ignored. If it is a new dot, however, we add it to an ongoing array.
During development, we wrote a routine to draw each discovered dot in our black-and-white bitmap image. Amazingly, it correctly drew the dice dots in the right place, first time!
After parsing the image once, we end up with an array of co-ordinates where our black-dots-surrounded-by-white appear. The trick now is to group these into "clusters" of dots, to work out what the actual face values are.
We give every set of co-ordinates in the array a "cluster number" - all co-ordinate groups begin set to zero. The first time we find a co-ordinate point without a cluster number, we give it the next cluster number (which is also the running count of dice found so far), then loop through all the other points, looking for another dot within a few pixels of this one that also has no cluster number. By repeatedly calling a couple of helper functions, we can give every dot in the image a cluster number by working out which other dots it's closest to.
Once all dots in the image have been given their "cluster number" the values of each dice are easy to read back. If we gave three dots a cluster number of one, dice one has the value three. If five dots were given the cluster number two, it means that dice two has the value five, and so on.
The great thing about this approach is that it automatically adapts to more than one or two dice: so long as the dice are separated so that the dots appear in definite, distinct groups, there's no reason why this routine can't detect the face values of three, four, five or more dice in a single image.
Here's the code
import flash.display.BitmapData;
import flash.display.Shape;
import flash.geom.Point;
// imgHolder is the on-stage instance holding the 88x72 black-and-white photo
var bmp:BitmapData=new BitmapData(imgHolder.width, imgHolder.height, true, 0x00000000);
var foundDots:Array = new Array();
var p0:int=0;
var p1:int=0;
var p2:int=0;
var p3:int=0;
var p4:int=0;
var clusterChar:int=0;
var clusterIndex:int=0;
var lastDot:Object;
function findDots(){
var idx:int=0;
bmp.draw(imgHolder);
// start at 3 so the x-3/y-3 sample points below never fall outside the bitmap
for(var y:int=3; y<imgHolder.height-2; y++){
for(var x:int=3; x<imgHolder.width-2; x++){
// check to see if you can find a dot
p0 = bmp.getPixel(x,y);
p1 = bmp.getPixel(x-3,y);
p2 = bmp.getPixel(x+2,y);
p3 = bmp.getPixel(x,y-3);
p4 = bmp.getPixel(x,y+2);
// if you find a dot, see if you've already got one within the very near vicinity
if(p0==0x00){
if(p1!=0x00 && p2!=0x00 && p3!=0x00 && p4!=0x00){
// this looks like a black dot on a white background
// but check the array to see if we've ever found a dot within a few
// pixels of this one (it might be the same dot)
// if so, skip this dot (you've already found it)
// otherwise add to the array of found dots
if(similarPixel(x,y)==false){
var o:Object = new Object();
o.coords=new Point(x,y);
o.cluster=0;
o.index=idx;
foundDots.push(o);
idx++;
}
}
}
}
}
}
function drawFoundDots(){
var circle:Shape = new Shape(); // a single Shape to draw all the dot markers onto
for(var i:int=0; i<foundDots.length; i++){
trace(foundDots[i].coords.x+","+foundDots[i].coords.y);
circle.graphics.beginFill(0x990000, 1); // fill each marker with dark red
circle.graphics.lineStyle(1, 0x000000); // black, 1 pixel thick outline
circle.graphics.drawCircle(foundDots[i].coords.x, foundDots[i].coords.y, 4); // small circle centred on the found dot
circle.graphics.endFill();
}
addChild(circle); // overlay the markers on top of the source image
circle.x=imgHolder.x;
circle.y=imgHolder.y;
}
function parseDots(){
clusterChar=0;
clusterIndex=0;
var dotValue:int=0;
var dots:Array=new Array();
// find the first dot in the array that doesn't have a cluster character
lastDot=getDotWithNoClusterChar();
while(lastDot){
clusterIndex++;
dotValue=1;
trace("found start of cluster "+clusterIndex+" at index "+lastDot.index);
lastDot.cluster=clusterIndex;
while(lastDot){
lastDot=getConnectedDotForCluster(clusterIndex);
if(lastDot){
trace("found another dot for cluster "+clusterIndex);
dotValue++;
lastDot.cluster=clusterIndex;
}
}
trace("dotValue = "+dotValue);
dots[clusterChar]=dotValue;
clusterChar++;
lastDot=getDotWithNoClusterChar();
}
for(var i:int=0; i<dots.length; i++){
trace("dice "+i+" value "+dots[i]);
}
}
function getConnectedDotForCluster(indx:int):Object{
var o:Object=null;
for(var i:int=0; i<foundDots.length; i++){
if(foundDots[i].cluster==indx){
for(var j:int=0; j<foundDots.length; j++){
if(j!=i && foundDots[j].cluster==0){
if(Math.abs(foundDots[j].coords.x-foundDots[i].coords.x)<=7 && Math.abs(foundDots[j].coords.y-foundDots[i].coords.y)<=7 ){
trace(i+" is connected to another dot in cluster "+indx);
o=foundDots[j];
break;
}
}
}
if(o){break;}
}
}
return(o);
}
function getDotWithNoClusterChar():Object{
var o:Object=null;
for(var i:int=0; i<foundDots.length; i++){
if(foundDots[i].cluster==0){
o=foundDots[i];
break;
}
}
return(o);
}
function similarPixel(ix:int, iy:int):Boolean {
var found:Boolean=false;
for(var i:int=0; i<foundDots.length; i++){
if(Math.abs(foundDots[i].coords.x-ix) < 4 && Math.abs(foundDots[i].coords.y-iy) < 4){
// found a similar pixel
found=true;
break;
}
}
return(found);
}
findDots();
drawFoundDots();
parseDots();
Below are the results of some of our testing. To date we've tested it on about a dozen photos of dice (all taken from the same distance, since any device using this approach would have a plate at a fixed height above a fixed-position camera) and each time it has correctly reported back the dice values on the faces in the photo.
Obviously, in real use, we'd need to subtract the dice face value from seven to infer the value that was face-up on the dice (since we're taking a photo of the dice face that is face-down on the clear surface) but that is just a trivial application of our dice-reading routine.
Friday, 19 December 2014
Dice reader for 18mm dice fail
It was BuildBrighton's Open Evening tonight and the reflective sensors had arrived from Farnell just minutes before we set out, along with some 18mm dice from eBay, so we took the whole lot along and designed our first dice reading device.
It didn't take long to discover that things weren't quite going to plan.
Namely, the dice are too small - or the sensors too large - for the two to work together:
We can get a line of three sensors to pretty-well line up with the dots on the dice. Where the two don't quite line up, we could take up any gaps by placing a laser-cut tray with the dice spots cut out between the dice and the sensors.
But when the dice is oriented the other way around, the spots and sensors simply refuse to line up.
We've already placed our sensors as close together as we can get them on a single-sided board (and even on a double-sided board, there's not much extra room between the pads).
No matter how much we twist the sensors around, we can't get the sensors and dice dots to line up. Which means we either need to come up with a different way of reading our dice - or maybe just buy some bigger dice to use with the game!
Thursday, 18 December 2014
Dice reading device
About a million years ago (and on another, earlier Nerd Club blog) Evil Ben was working on an electronic cube reader. The idea was to have a cube with a number of pre-printed faces on it, which could be placed into a dedicated cube reader and read back not just the face showing (by inferring it from which face was down) but also the orientation in the reader.
Having demonstrated how our electronic board game works to a few people even nerdier and geekier than we are, a few people have suggested that we might like to allow players to roll their dice and enter the results into the game rules.
Originally we dodged this as an idea- it just seemed like a whole world of hurt and extra work. But then we thought "hey, why not?" and so made a few changes to our wifi connector device.
Now, our board game was deliberately designed to use nothing more than a single serial pin for transferring data - specifically so that we can develop extra add-ons for the hardware without having to redesign it from the ground up. And one add-on that immediately springs to mind is a "dice reading module"; a module which can detect the presence of dice, read which face is down (and from that, infer which face is pointing upwards) and send this information back to the host via serial.
We've already built this into our first game app - pausing the game at key points and waiting for the user to enter the dice roll result using the enhanced wifi connector. Instead of using the rotary dial to select a dice result, we can simply send the same "data message" back to the host, over serial, but from a different device plugged into one of the game board pieces.
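Because it's exactly the same message either way, the host app doesn't need to care whether the value came from the rotary dial or from a dice-reading module. On the Flash/AIR side it boils down to something like the sketch below (the "DICE:" message format and the module address/port are made up purely for illustration - the real protocol is specific to our board):

import flash.events.ProgressEvent;
import flash.net.Socket;
var sock:Socket=new Socket();
sock.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);
sock.connect("192.168.4.1", 333); // hypothetical wifi module address and port
function onSocketData(e:ProgressEvent):void{
var msg:String=sock.readUTFBytes(sock.bytesAvailable);
// e.g. "DICE:5" - identical whether it came from the rotary dial or a dice reader
if(msg.indexOf("DICE:")==0){
resolveDiceRoll(int(msg.substr(5)));
}
}
function resolveDiceRoll(value:int):void{
trace("player rolled a "+value); // hand the result over to the game rules
}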
So how do we read the value of a dice face?
These clever little QRE1113 things from Farnell are just the ticket - they are reflective sensors and are often used by robotics hobbyists for black/white line following. It's simply an IR LED and an IR phototransistor in a single package. When the sensor "sees" IR light, it activates the internal transistor. We can use this, connecting an input pin (pulled high using internal pull-up resistors) to the phototransistor collector, with the emitter connected to ground:
The simple idea is that placing an array of these sensors on a board, and placing a dice over them will result in some sensors activating (where no spot is found immediately above the sensor) and some not (where a black dot appears above the sensor, the IR will not be reflected back).
Thinking about how the spots on a dice are laid out, and allowing for the dice to be "read" in any orientation, the first thing that springs to mind is a grid of 3x3 sensors. However, with a bit of thought, it turns out that we need only 5 sensors to be able to read every possible number on a dice face:
The image above shows how the spots on a dice could possibly be presented to a 3x3 grid of sensors. But if we use only the sensors as shown on the right, we can still identify each possible number uniquely (and still allow for the dice to be rotated either horizontally or vertically).
For example, if the number two were face down, either the bottom-right or top-right sensor would be "inactive" and all other sensors would see reflected IR light from the (white/light-coloured) face of the dice. If the number three were face down, either of these sensors would be inactive, along with the sensor in the "centre" of the 3x3 grid.
The patterns of active/inactive sensors are unique for each number on the die, as shown above. Now we're just waiting for our sensors to arrive from Farnell and we can put the theory into practice!
Saturday, 13 December 2014
Game board testing
Blog posts have been a little thin on the ground in recent weeks (compared to the usual volume of noise coming out of Nerd Towers). This has been due to a number of reasons - one of which is the inordinate amount of code we've been churning out.
Coding isn't exactly a great spectator sport - likewise, it doesn't always make for particularly interesting reading. But here's a quick video showing our pro-made PCBs interfacing with a dedicated fantasy football app.
We're placing the playing pieces on the "underside" of the board here - demonstrating that the board can be used either way up, allowing us to consider offering double-sided boards (perhaps with a space shooty-game on one side, and something like a football pitch on the other).
The app has the start of some interactive commentary in there - this needs redeveloping almost entirely, but the idea is quite fun; as you move your playing pieces across the board, two lip-synched characters give an audio description of what's actually going on, on the pitch - much like Motty and Lawro do on BBC football broadcasts!
We're hoping to have our fantasy football game finished in time for Xmas, and already GrumpyPaul(tm) is looking over some zombie co-op game rules, to see how they can be "electronified" for this system.
While we're already excited about getting our football game done, it does also mean that we're going to have to paint up about two dozen miniatures to be able to demonstrate it - on top of the forty or so zombies that Paul's no doubt got lined up for a game we've tentatively called "Last Night in Zombieville" (because the domain name was available!)
Wednesday, 10 December 2014
GW miniatures Blood Bowl
After painting up some Space Marines using, not exactly a speed-painting technique, but a faster-than-normal approach, and taking a break from writing mountains of trigonometry-based functions for line-of-sight and bullet-ricochet routines, we decided to have a go at some more GW miniatures - only this time, Blood Bowl players.
We used pretty much the same technique as for the Space Marines. Firstly, whack on a couple of base colours, keeping the palette simple. We're using Ash Grey for anything that's going to be white at the end, a funny orangey-yellow for anything that will eventually be yellow, and some Crystal Blue (the same colour we used to base-coat the Space Marines) for anything blue.
At this stage, the miniatures look awful. They look pretty much like the first time I tried painting a miniature, aged about 12 years old! A quick dash of Army Painter Quickshade and the miniature is transformed.
This time, because we're using mostly "warm" colours, and because we're expecting problems with painting yellow over a darker colour, we went with Strong (rather than Dark) Tone.
As with the Space Marines, the Quickshade not only picks out the shaded area but darkens the underlying colours.
After a good 36 hours drying time, a coat of Testors Dullcote kills the shine - and dulls down the "vibrancy" of the colours. The minis are looking ok at this point, but a bit dull and "dirty".
(this photo was taken after a little tidying up of the yellow. Yellow is a notoriously difficult colour to paint over miniatures. We really should have stuck with a blue-and-grey team, as both colours offer great coverage, even over darker base paints).
This team is going to be based on the "Bright Crusaders" from the very first Blood Bowl game I ever played. The Blood Bowl game came with a lot of "fluff" in the 80s. Example teams were given, and I just thought that painting this team as the Bright Crusaders might re-capture some of the initial excitement of seeing and playing the game, all those years ago.
Just like the Space Marines, the colours were "tidied up" by painting over the shaded colours, only this time, we went one shade brighter than the base coat. So instead of orangey-yellow, we used Sunshine Yellow (this takes two or three coats to get decent coverage). Instead of Crystal Blue, we painted the gloves and kneepads in Electric Blue. Instead of painting the white parts a pale grey and edge-highlighting in white, we just went for white from the off. This means no edge highlighting on the white parts - making the painting process a little quicker, but the finished result a little less interesting!
Finally, each miniature was finished off using our new-favourite-method for basing: a tiny dab of superglue on each foot, and glued to a clear acrylic disc!
After painting the shoulder pad and legs white, and tidying up the feet just a little (though not too much, to try to keep the appearance of muddy-white Nike trainers) we decided to keep the shirt grey instead of making that white also. This is going to be the basis for our team colours for all the other miniatures - for this team at least!
Unlike the Space Marines, which were plastic miniatures, we've glued a metal miniature to an acrylic disc. To make the bond we used Loctite Superglue (the real stuff, not the cheap 5-for-a-quid stuff from Poundland).
Putting superglue onto clear acrylic is a risky job. Superglue makes clear acrylic go cloudy, so it's really important that there's no excess squeezing out from under the feet. Obviously, we made sure that we used only a tiny drop of glue on each of the feet of the miniature. But, to discourage the glue from squelching out from under the feet, we held the miniature upside-down and let the disc rest on the feet of the miniature. This made sure that there was no real weight on the join, allowing it to go off without pushing any excess out of the sides.
Thursday, 4 December 2014
Space Marines - finished
Here's a squad of six Space Marines from Games Workshop, painted over the course of a weekend (with lots of drying time in between, so we've only totted up the total amount of time spent actually painting!). We've been looking at getting an actual game coded up for our electronic board game idea, and decided that we're more likely to finish a Fantasy Football game (or at least make more headway into it) than a space-shooty-type game. So before starting work on the few Blood Bowl miniatures we have here, Nick insisted that our Space Marines were finished off.
We deliberately kept the basic palette very small, to focus on getting the main bulk of the painting done in one go, not to spend hours and hours on the tiny, fiddly details, or spend too long with blending and highlighting and multiple glaze layers.
The basic approach on all the miniatures was:
- Spray the entire miniature with Army Painter Crystal Blue primer
- Paint the symbols on the shoulders, little skulls etc. in Ash Grey
- Shoulder pad trims and chest plate are painted in Bronze
- Back of legs and joints between armour plates are painted silver
- Splosh on a coat of Army Painter Quickshade (Dark Tone - the black-based pigment goes on over blue far better than the slightly brown-y coloured Strong Tone). Before this stage the miniature looks like a child's caricature, with garish, bright colours and no definition to any of its shape - don't worry, the Quickshade changes all that
(note how the colours have become much darker from their original shades)
- After the Quickshade has dried, remove the shine (and, sadly, some of the depth of colour) with Testors Dullcote
- Paint in Crystal Blue again on large surfaces, such as shoulder pads, armour plates, fingers, and areas around the helmet. Don't be afraid to let parts of the darkened areas show through. (As we're speed painting, we didn't bother trying to blend the two shades together)
- Paint white onto the greyed out areas (skulls, symbols on the shoulder pads etc)
- Paint the bronzed areas with Greedy Gold (on some of the chest plates we added a wash of black ink before painting, to allow us to pick out individual feathers on the winged armour)
(our camera makes this blue colour look much brighter than it actually is; it's bright - just not this bright!)
By now, the miniature should look quite "clean" but with heavily defined outlines - a bit like a drawing done with a black, felt-tipped-pen outline and coloured in using strong felt-tipped pen colours. (If this slightly cartoon look isn't what you're striving for, you should have backed off after the Dark Tone!)
The last stage is to "edge highlight" the blue colours using a much brighter blue colour again - we used Electric Blue, a strong, vibrant, pale blue colour - after all, if you're going to go to the trouble of highlighting your miniatures, you may as well be able to see it!
This meant painting a thin line between any two surfaces that met at a darkened edge. We also painted along the edge of the eyes on the helmet, the individual fingers on the hands, and any raised corner or edge.
Notice how the small image for this character looks far better than if you click and zoom in on the model. That's how this style of painting works - with no blending and not much detail painting, it maybe doesn't look the best up close. But when viewed at arm's length, the effects are quite striking.
Final touches (not yet done on the model above) include touching any details with a highlight colour. For example, on the seal on the leg above, the very rim of the red "rosette" would be edge highlighted with a bright orangey-red, and the cream-coloured paper stuck beneath it would be edged with pure white to give each part a little depth.
The backpacks and weapons are painted using the same technique(s) and then the whole model is assembled. Some people assemble their models before painting (we used to, and have done with our Tyranid/Genestealer aliens) but we just found because these clipped together so easily, painting them first would allow us to get into all the nooks and crannies around the weapons, to give a neater finish.
Note that the colours used are actually much brighter than the "recommended" colour scheme (by both Games Workshop, and many people on the G+ forums!) We found this worked well with our "cartoon style" painting approach; any darker, and the effect would be lost (although darker miniatures, well painted with blending and shading and drybrushing and all that stuff would probably look pretty realistic). We also had a few comments from people saying we'd used entirely the "wrong colours" (apparently, if a model has a skull here and a wing there and is looking at its feet and not into the sky, it's a something-or-other, which should only be coloured green with the Pantone colour of x. We just wanted to get some nice, blue Space Marines onto the tabletop. These are Space Marines. We painted them blue! </rant>)
Finally, the miniatures need basing.
We've had a discussion about this with other wargamers online; while our paintwork isn't going to win any awards, it's nice enough for a gaming standard, and the miniatures look really great on the tabletop.
We played about with different basing approaches a while back:
- http://nerdclub-uk.blogspot.co.uk/2013/10/creating-sci-fi-industrial-base-for-our.html
- http://nerdclub-uk.blogspot.co.uk/2013/11/more-miniature-painting-wild-west.html
- http://nerdclub-uk.blogspot.co.uk/2014/10/making-28mm-terrain-from-greenstuff-and.html
(some may argue otherwise, but we prefer the clear acrylic discs to the painted-and-modelled bases which, on their own, look fine, just not when used on a printed playing surface like this one!)
So we went for the easy option and laser-cut some 24mm discs out of clear 3mm acrylic. We added a 3.95mm hole in the centre, and jammed in some 4mm magnets. Because of the "kerf" on the cut edges, this means that the hole is 3.95mm on one side, and ever so slightly larger on the other - allowing the magnet to fit inside the hole, then to be jammed in place, without the use of glue or solvents.
Obviously, if these were not for our electronic board game, we wouldn't have bothered with the magnet in the middle and the final result would be a neater base. But all in all, we're quite pleased with the way these turned out.
Total time spent: 9hrs
At about an hour and a half per model (not including the time we spent going to the unit to laser cut the bases) we're not even sure if this could be classed as "speed painting" any more. But it's quicker than any other miniatures we've painted to this standard before. And it was quite fun to spend time over the weekend getting reacquainted with the miniature painting hobby. Maybe "faster painting" would be a more accurate title?
Sunday, 30 November 2014
Dynamic lip sync using AS3 and Flash with mp3 and Audacity
Sometimes it's nice to try flexing your coding muscles on something small and achievable in a single sitting, rather than having to load an entire, complex, multi-module application into your head, just to edit a few lines of code.
This weekend, we spent some time thinking about the kinds of games we'd like to create for our electronic board game system. Not just vague genres, but how to actually implement some of these ideas into a game.
For example, we're already decided on a fantasy football game. That's a given.
But wouldn't it be great to have some online commentary as you play? As a playing piece is moved down the pitch, and tackles attempted, the ball fumbled and so on, some disembodied radio-commentary-style voice could read out the action as it unfolds.
We're pretty much decided that such a thing would be really cool. But then - as often happens with good ideas - someone went and took it a stage further. Why not have an animated commentator, in the top of the screen somewhere? A John Motson looky-likey maybe, delivering his audio commentary into a microphone, on the screen?
That would be cool. But could prove quite tricky to implement.
After all, it was a long time ago that a few of us put together a perfectly lip-synched "Nothing Compares 2U" animation with a couple of 3D characters (using Hash Animation Master, if you're interested!). That was only three minutes long, and the animation took about three weeks of scrubbing, listening and manually setting key frames. Doing that for every line of commentary would be a massive task!
Michael (who had a hand in organising the phoneme shapes for our original animation) suggested an alternative, slightly-cheating-but-good-enough-for-a-bit-of-fun approach to lip-synching, using on-the-fly processing: moving the mouth to match the amplitude of the sound coming out of it.
This seemed like a bit of a cop-out, but it does ensure that the most important parts are at least in sync (i.e. the mouth is closed at the start, and after completing a sentence). The bit in between, where words are spewing out of the mouth, will be more of a "flappy-gob" style of animation. But it's better than hand-crafting the keyframes for every phrase.
Michael also suggested that if the lip sync is being done on-the-fly, then there's no reason why we couldn't make the audio interchangeable: simply load an mp3 file at runtime and play that, to make the mouth animate on-screen. Load a different mp3 file, get a different set of sounds, but re-use the same lip-sync routines to allow for custom, user-generated content.
This is already getting way more complicated than anything we wanted to put together. But there's also something about being given a seemingly impossible challenge and a tight deadline to complete it by!
So we set about planning how things might work, in our fantasy football game.
To begin with, we'd have one, long, single mp3 file, containing loads of stock footballing phrases and cliches. These would be "chopped up" at runtime, and different sections of the mp3 played at different points during the game.
While the mp3 is playing (the commentary is being "said" by our onscreen avatar) the actual sound byte data would be analysed, an average amplitude of the section calculated, and an appropriate mouth shape displayed on the screen.
So far so easy....
For the sake of testing, we mashed a few Homer Simpson phrases together into a single, larger, mp3 file in Audacity. This great, free, audio editor allows you to create a "label layer" and place labels at various points during the audio file.
It's worth noting that as you select a position in the label track, the "playhead" in the audio track is updated to the same point in the audio data: this helps line up the labels with the actual audio to be played.
We placed some simple titles at specific points in our audio file, to indicate which phrase was being said at which point in the file.
With all our labels in place, we exported the label track as a text file.
It is possible to stretch each label, to define a start and an end point. We didn't consider this necessary, since our mp3 track consists of lots of small phrases, all following each other with just a short break between them. For our use, we can say that the start point of the next phrase in the file is the end point of the previous one. So long as these points occur during the short silences between phrases, there should be no problems!
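For reference, the exported label file is just plain text: one label per line, with the start time, end time and label name separated by tabs (for point labels, the start and end times are identical). Reconstructed from the offsets we ended up using in the code below, ours would look something like this:
0.123875	0.123875	blah
1.907678	1.907678	bozo
3.815355	3.815355	trust
9.765493	9.765493	news
12.781134	12.781134	end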
In our Flash file, we created a single movieclip consisting of five frames, each with a different mouth shape.
Then we added a few buttons to simply set the playhead position of the audio playback.
Then a bit of actionscript does the rest....
import flash.events.Event;
import flash.events.MouseEvent;
import flash.events.SampleDataEvent;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;
import flash.utils.ByteArray;

var left:Number;
var playerObject:Object = new Object();
var mySound:Sound = new Sound();
var myChannel:SoundChannel = new SoundChannel();
mySound.addEventListener(Event.COMPLETE, onSoundLoaded);

var stopAudioAt:Number = 0;
var playHeadPos:Number = 0;
var samplesPerSecond:Number = 0;

function processSound(event:SampleDataEvent):void {
    var scale:Number = 0;
    var bytes:ByteArray = new ByteArray();

    // grab some audio data from the loaded mp3
    playerObject.sourceSnd.extract(bytes, 4096, playHeadPos);

    if (playHeadPos > stopAudioAt + (4096 * 2)) {
        // we've gone past the end of this phrase - stop playback (scale stays at zero, so the mouth closes)
        playerObject.outputSnd.removeEventListener(SampleDataEvent.SAMPLE_DATA, processSound);
        myChannel.stop();
    } else {
        // scan through the extracted data to get an amplitude value for this chunk
        bytes.position = 0;
        while (bytes.bytesAvailable > 0) {
            left = Math.abs(bytes.readFloat() * 128);
            scale = left * 2;
        }
        // send the audio data to the speaker
        event.data.writeBytes(bytes);
    }
    playHeadPos += 4096;

    // display the appropriate mouth shape, based on amplitude
    if (scale < 0.04) {
        mouth.gotoAndStop(5);
    } else if (scale < 1) {
        mouth.gotoAndStop(1);
    } else if (scale < 10) {
        mouth.gotoAndStop(2);
    } else if (scale < 25) {
        mouth.gotoAndStop(3);
    } else if (scale < 50) {
        mouth.gotoAndStop(4);
    } else {
        mouth.gotoAndStop(5);
    }
}

function playSound():void {
    trace("playing sound from " + playHeadPos);
    playerObject.outputSnd.addEventListener(SampleDataEvent.SAMPLE_DATA, processSound);
    myChannel = playerObject.outputSnd.play();
}

function onSoundLoaded(e:Event):void {
    playerObject.sourceSnd = mySound;
    playerObject.outputSnd = new Sound();

    // (ideally we'd read the sample rate from the mp3 itself, but for now it's hard-coded)
    // there are 8 bytes per stereo sample (4 bytes left channel, 4 bytes right channel)
    // so if the audio is recorded at 44.1khz, samples per second is 44100*8
    // but we recorded at 11025hz in mono to keep the file size down, so samples per
    // second is actually 11025*4
    var sampleRate:Number = 11025;
    samplesPerSecond = sampleRate * 4;

    btnBlah.addEventListener(MouseEvent.MOUSE_DOWN, playBlah);
    btnBozo.addEventListener(MouseEvent.MOUSE_DOWN, playBozo);
    btnTrust.addEventListener(MouseEvent.MOUSE_DOWN, playTrust);
    btnNews.addEventListener(MouseEvent.MOUSE_DOWN, playNews);
}

// load the external mp3 file
function loadSound():void {
    mySound.load(new URLRequest("homer.mp3"));
}

// start/stop offsets for each phrase, taken from the Audacity label track
function playBlah(e:MouseEvent):void {
    playHeadPos = 0.123875 * samplesPerSecond;
    stopAudioAt = 1.907678 * samplesPerSecond;
    playSound();
}
function playBozo(e:MouseEvent):void {
    playHeadPos = 1.907678 * samplesPerSecond;
    stopAudioAt = 3.815355 * samplesPerSecond;
    playSound();
}
function playTrust(e:MouseEvent):void {
    playHeadPos = 3.815355 * samplesPerSecond;
    stopAudioAt = 9.765493 * samplesPerSecond;
    playSound();
}
function playNews(e:MouseEvent):void {
    playHeadPos = 9.765493 * samplesPerSecond;
    stopAudioAt = 12.781134 * samplesPerSecond;
    playSound();
}

mouth.gotoAndStop(5);
loadSound();
stop();
The AS3 above uses two Sound objects. One holds the audio data of our mp3 file, which is loaded from an external file at runtime. The other is the one that actually plays sounds. There are a few comments which explain broadly what's going on, but it's important to understand a couple of core concepts:
We use an event listener to populate the "outgoing" sound object with wave data, just before it gets played (sent to the sound card). It's also this data that we interrogate, just before sending it to the sound card, to decide which mouth shape to display.
To play a sound, we simply say from which point in the loaded sound data we want to start extracting the wave-data bytes. When we've reached the end of the sample section, we stop the audio from playing.
The trickiest part in all this is converting the label data from Audacity into "sample data offset" values, so the audio starts and stops at the correct place. The labels in Audacity are placed according to how many seconds have elapsed, but Flash works with "samples", not time-based data.
Knowing that Flash uses four bytes per channel for each sample (so eight bytes per stereo sample), we can work out that one second of audio data recorded at CD-quality 44.1kHz should contain 44100 * 8 = 352,800 bytes. So if we start extracting the sample data from our loaded mp3 file 352,800 bytes in, we effectively start playback one second into the audio. Similarly, to start playback from 5 seconds in, we need to start extracting the audio sample data from byte 44100 * 8 * 5 = 1,764,000.
To reduce our file sizes (and the overhead in manipulating them) we recorded our mp3 samples in mono, at 11,025Hz. This means that instead of 8 bytes per sample, we're only using 4 (since there's no left/right pair, just 4 bytes per sample for a single mono track). So we need to calculate our byte offset as:
startByte = 11025 * 4 * timeInSeconds
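So, for example, the phrase that starts 1.907678 seconds into our test mp3 begins at byte 11025 * 4 * 1.907678 ≈ 84,129 - which is exactly the playHeadPos value the playBozo() function above ends up with.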
Here's the result: http://www.nerdclub.co.uk/lipsync_demo.htm
Still to do: load the "cue points" from an external file (rather than embedding them in the AS3 code), but that should be a relatively trivial matter; we've demonstrated that the idea works in principle. And once we can load the "cue points" from an external file, as well as the actual mp3 audio, there's no reason why we can't include user-editable commentary in our fantasy football game. Now that would be exciting!
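Just to sketch out how that might work (this isn't code from the working demo - the file name labels.txt, the playCue() function and so on are made up for illustration), the exported label file could be pulled in with a URLLoader, each label's time converted into a sample offset, and the hard-coded playBlah()/playBozo()-type functions replaced with a single playCue() call:
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

var cuePoints:Object = new Object();
var labelLoader:URLLoader = new URLLoader();
labelLoader.addEventListener(Event.COMPLETE, onLabelsLoaded);
labelLoader.load(new URLRequest("labels.txt"));

function onLabelsLoaded(e:Event):void {
    // each line of the exported label track is "startTime <tab> endTime <tab> labelName"
    // (this assumes samplesPerSecond has already been set, as in onSoundLoaded above)
    var entries:Array = new Array();
    var lines:Array = String(labelLoader.data).split("\n");
    for each (var line:String in lines) {
        var parts:Array = line.replace("\r", "").split("\t");
        if (parts.length >= 3) {
            entries.push({name: parts[2], start: Number(parts[0]) * samplesPerSecond});
        }
    }
    // as described above, each phrase ends where the next label starts
    // (so the last label in the file is just an "end" marker and never gets played itself)
    for (var i:int = 0; i < entries.length - 1; i++) {
        cuePoints[entries[i].name] = {start: entries[i].start, stop: entries[i + 1].start};
    }
}

// one function replaces playBlah(), playBozo() and friends
function playCue(cueName:String):void {
    playHeadPos = cuePoints[cueName].start;
    stopAudioAt = cuePoints[cueName].stop;
    playSound();
}
A button handler then just becomes playCue("bozo") - and dropping in a different mp3 with its matching label file gives completely different commentary without touching the Flash code at all.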
Sunday, 16 November 2014
Connecting hardware to a smart device using a cheap ESP8266 wifi module
These ESP8266 wifi modules are great.
They're cheap, and simple to work with. For simple tasks.
Try to get too clever with them, and they crap out and lock up and need a full power-cycle to sort out. But keep your data to short little packets, and they're pretty solid, robust little devices.
After getting one working by sending AT commands over serial, we decided to take it a stage further: to build a self contained little module that you could just give power to, and it would hook up to a home router/network.
We've recently got re-acquainted with the old Nokia 5110 displays, so it made sense to build something around that for our unit. Originally we planned to have the wifi module list all available APs (access points) and allow the user to scroll through a list of them onscreen.
This idea worked fine at the BuildBrighton Hackspace, where only three or four wifi access points are accessible from inside a big tin shed that acts like a large Faraday cage! Trying the same approach at home, where there can be ten or more available wifi hotspots at any one time, the module couldn't cope.
We're assuming that the list of available wifi connections exceeded some string/character limit, and the wifi module simply filled up its buffers trying to query every one in range - whatever the reason, using the AT+CWLAP command sometimes causes the wifi module to lock up.
What we needed was some way of entering an SSID and password, using the Nokia screen to display progress. Our imagined device would have only a few buttons, and a phone-text-style method of data entry could get a bit clumsy. It didn't take Steve long to suggest a menu-based interface and a rotary encoder for selecting letters/numbers/symbols from a pre-determined list of characters - a bit like when you used to put your initials on the high score table on the original Space Invaders arcade machines.
At first, a menu-based interface seemed like overkill - Steve loves everything to be aesthetically pleasing; we just like stuff to work! But his persistence paid dividends: despite making the firmware much more complicated than it otherwise would have been, the end result is an interface which is simple and intuitive to use. And, more importantly, one that works!
(what's with the nasty jump in the video? Well, of course, I'm not going to publish my real wifi SSID and password on here - so I paused the video, entered the real username/password, and set the video recording again. Honestly, some people think you have to be an idiot to be in the Nerd Club!)
Surprisingly, the wifi modules retain their last connection details. We were anticipating having to store and retrieve SSID details to/from eeprom - but it turns out to be unnecessary. If you power down the wifi module, it will try to connect to the same source as last time, using the same SSID and password as before.
This makes our firmware quite a bit easier than we were expecting (once we got over Steve's extra complexity for the menu system). On boot-up, we simply query our IP address, using the AT+CIFSR command, every 5-8 seconds. Once a connection has been established, we'll see an IP address in response to this command. If we get "error" or an empty string four times in succession, we move on to the "enter SSID" screen and prompt the user for their SSID and password.
Once these have been entered using the rotary encoder, we try again, four times. Setting the connection details using the AT+CWJAP command always returns OK - even if the password is incorrect and the connection failed. So we simply set the SSID/password combination, then query for a valid IP address, just as if the device had only just been switched on. If, after four attempts, there is no connection, we assume the connection details are incorrect and prompt the user for their password again.
Once the wifi module has an IP address, we simply start it up as a server, using AT+CIPMUX=1 (to allow multiple clients to connect to it over the network) and then AT+CIPSERVER=1,port_number to start listening to incoming connections.
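For anyone wanting to replicate this, the whole exchange with the module boils down to a handful of AT commands over serial - the SSID, password and port number below are obviously just placeholders:
AT+CWJAP="MySSID","MyPassword"   (set the connection details - this reports OK even if the password is wrong)
AT+CIFSR                         (ask for our IP address - repeat every few seconds until one comes back)
AT+CIPMUX=1                      (allow multiple clients to connect at once)
AT+CIPSERVER=1,8080              (start listening for incoming connections on the chosen port number)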
All in all, we're quite pleased with how far this has come.
The last stage is to get it all into a nice little enclosure and look like a "proper" wifi module - not just a homebrew circuit on a bit of home-etched PCB!
Game application testing
A little while back we were trying out some ideas for a vector-based line-of-sight algorithm in Flash/AS3. We spent a few hours this weekend turning those scribbled ideas into a simple physics engine for calculating not just line-of-sight, but also the paths of bullets ricocheting/rebounding off the walls. The approach works like this:
- The basic idea is that we draw a line between two points.
- Then, loop through every line in the map, and see if the line we've just drawn intersects the line on the map.
- If it does, use Pythagoras' theorem to calculate the linear distance between the starting point and the point of contact.
- If this distance is shorter than the distance to any previous collision point (it's quite possible that one long line can intersect the map in several places) then we remember this point and continue testing the other lines in the map.
- At the end of all this testing, if there's been a collision, we should have stored the first point along the line that meets with a wall.
- We then calculate the linear distance between the point of collision and the destination point. This represents the amount of travel "still to complete" after we ricochet/rebound off the wall.
- The next thing to do is to calculate the potential destination point, after applying rebound at the point of contact.
- The line-of-sight function is called a second time, with the starting point now the earlier collision point, and the end point the new, calculated destination point (after applying the rebound/ricochet function to the original line). This new line is then tested against every line in the map, and any more collisions calculated in the same manner.
(we marked each collision point on the map during testing)
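To make the first couple of steps a bit more concrete, here's a stripped-down sketch of the kind of maths involved (illustrative only - the function names here aren't from our actual engine code):
import flash.geom.Point;

// returns the point at which the segment p1->p2 crosses the wall w1->w2,
// or null if the two segments don't intersect
function segmentIntersection(p1:Point, p2:Point, w1:Point, w2:Point):Point {
    var rX:Number = p2.x - p1.x;
    var rY:Number = p2.y - p1.y;
    var sX:Number = w2.x - w1.x;
    var sY:Number = w2.y - w1.y;
    var denom:Number = rX * sY - rY * sX;
    if (denom == 0) return null;                        // the lines are parallel, so they never cross
    var t:Number = ((w1.x - p1.x) * sY - (w1.y - p1.y) * sX) / denom;
    var u:Number = ((w1.x - p1.x) * rY - (w1.y - p1.y) * rX) / denom;
    if (t < 0 || t > 1 || u < 0 || u > 1) return null;  // they cross, but beyond the end of one of the segments
    return new Point(p1.x + t * rX, p1.y + t * rY);
}

// linear distance between two points, used to find the *first* collision along the line
function distanceBetween(a:Point, b:Point):Number {
    return Math.sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
}

// "perfect rebound" for our axis-aligned walls: a vertical wall reverses the x part
// of the direction of travel, a horizontal wall reverses the y part
function rebound(direction:Point, wallIsVertical:Boolean):Point {
    return wallIsVertical ? new Point(-direction.x, direction.y) : new Point(direction.x, -direction.y);
}
The main routine then simply runs segmentIntersection() against every wall in the map, keeps whichever collision gives the smallest distanceBetween() from the start point, and calls itself again from that point with the rebounded direction and whatever travel distance is left over.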
For further testing, we overwrote the "recalculate distance after collision" routine, and made it a (large) fixed amount. This meant that every time the line-of-sight hit a wall in the map, it would always ricochet by a large amount - the idea being to test how it would react over a large distance.
It was something of a relief that the ricochet/rebound algorithm correctly drew the expected path. We noted at this point that the angle of rebound appears similar at every collision point. This is actually to be expected - since we're only using walls that are either vertical or horizontal, and our algorithm calculates a "perfect rebound" (the angle of approach equals the angle at which the line leaves the collision point), it makes sense that we see the same angle throughout the whole screen.
Confident that the rebound function was working correctly, we started to add different functionality to different walls.
Using an online editor we made earlier, we can describe our map as a number of "lines with properties" in an XML document. As shown in the image above, one of the properties that we can apply to our lines/walls is "can shoot through". Different lines with different properties can be used to describe different types of walls in our game maps.
For example, a line which can be "seen through" but not "shot through" could be used to indicate bullet-proof glass or - in a sci-fi game - a force field. Similarly, a line which the player can "shoot through" but not "see through" could be used to indicate a bank of fog or smoke.
To further test our line-of-sight/bullet-path routines, we made one of the walls (specifically, in the image above, line 4) "passable" by setting "can shoot through" to "yes". The resulting test looked crazy - but after investigating a bit further, it actually appears to be correct too!
So we've tested our line-of-sight/bullet path routine with simple and complex paths, and proven that we can change the properties of different walls in the map, to alter the way the bullets ricochet and bounce around the map. It's starting to look like we've got the basis of a pretty impressive engine for our board game....
Finishing Space Marines the quick and easy way
After sploshing some paint on half a dozen space marines and coating them with Army Painter's Dark Tone Quickshade, we had to wait 24 hours for the solvent-based varnish to dry.
The miniatures actually looked pretty good (in the right lighting conditions) just as they were, albeit a bit shiny (ok, really, really shiny).
The dark tone provides really nice, deep contrasting shading, while the high points on the miniature are nice and bright. But the gloss finish is really unappealing (it doesn't show up too well in the photo, but it's really not nice at all!).
To tone down the gloss effect, we use Testor's Dullcote. It's a matt varnish, specially formulated to provide an ultra-matt surface. Some people even call it Deadcote - it does such a good job of "matting down" anything even vaguely shiny. (in comparison, the Army Painter Anti-shine - when you can get it to work - needs about two or three coats to completely remove the gloss shine created by the Quickshade). Dullcote takes just a couple of hours to dry fully, and provides a nice, matt surface which takes acrylic paints really well.
The character on the left has been painted with Dullcote, the character on the right is still waiting to be dulled down.
An interesting effect of applying dullcote is that it not only removes the shine from a model, but also does something to the contrast of the colours. The darkest recesses no longer appear really dark, and the raised parts no longer "stand out" because of their shiny appearance. It seems to not only dull the shine, but also the brilliance of the colours.
In the top half of the character on the left, you can see where the original Crystal Blue colour has been painted back in - particularly noticeable around the helmet and shoulder pads. Care was taken to allow a little of the "dulled" colours to show through (there'd be no point in completely painting over the lot of it!) and the result is a much "cleaner" looking model (compared to the model on the right).
Something that becomes noticeable after applying the Dullcote is how, around many of the recesses, the paint actually appears brighter around the edges. Take a look at the markings on the bottom of the legs near the feet: the recessed part appears to be outlined in black - an effect we'd like to keep - but immediately alongside, the darkened-down blue appears much brighter than the surface around it.
This is an effect we wanted to simulate, by applying a much brighter blue colour as "edge highlights". The transition from dulled, to cleaned, to highlighted model is shown below.
(although up close, you can see some of the brushmarks where the highlight colours have been applied, at a distance of a couple of feet - the distance you'd expect them to be when on a board game playing area - the finished miniatures look great!)
The miniature on the left is fresh from being Dullcoted. The model in the middle has had large areas of colour repainted (the white on the shoulders, a brighter gold applied on the rims, and the original Crystal Blue repainted over the larger exposed areas). The model on the right has been "edge highlighted" to pick out the edges and transitional parts: it's particularly noticeable on the elbow pads and around the helmet and feet.
At this point, we're going to call our miniatures done (the guns and backpacks obviously still need attaching, but we're done as far as painting goes). We could probably spend a number of hours on each one, picking out the tiny, minute details on the models, applying multiple layers of shading and highlighting and so on. But the whole point of this exercise was "speed painting" - to get some models painted up to a "good enough" standard to put them onto the tabletop.
And we reckon the guy on the right is painted to a "good enough" standard for what we need.
Total time taken so far: 4.5hrs