Tuesday 31 March 2015

Choosing a scale for our electronic board game project

With the recent changeover to Unity for programming the app for our electronic board game, we've come up against some exciting - and challenging - decisions. For a while we wrestled with the question of scale for our board game.

While each board section was created using an "industry standard" 8x6 1-inch-square grid, we did try out a number of scales for the boards.

We loved the idea of 15mm, because:

  1. more game on a physically smaller playing area
  2. miniatures are cheaper and quicker to paint
  3. laser-cut terrain is cheaper and easier to create
  4. small miniatures fit easily into our 1" squares.


But 15mm is not without its downsides:
a) there's a massive difference in sizes between manufacturers (some create real 15mm, some are as large as 18mm or more)
b) the miniatures can look a bit "lost" when placed with just one in each square, on a 1" grid.

Of course 28mm (and the more recent 32mm) is a great scale for tabletop skirmish games, because:
a) there's a massive range of miniatures already available
b) miniatures have lots of great-looking detail when painted
c) for small-to-medium sized games, the larger sizes look very impressive on the tabletop

But on the downside....

  1. individual miniatures are relatively expensive - building a large horde of zombies, for example, could get quite pricey
  2. painting all those lovely details takes time. Lots of time
  3. terrain looks great but can be quite bulky and take up a lot of space
  4. the board can look quite crowded when a number of playing pieces are placed closely together on a 1" grid.

Fortunately, we found a great halfway house: 20mm miniatures.
They have a lot of the benefits of both 15mm and 28mm/32mm scale miniatures:
  1. They have enough detail to stay interesting, but not so much that they take forever to paint
  2. They match nicely with other modern-day terrain (small cars, 1/76th-scale railway models, etc.)
  3. They're relatively cheap, even when buying in bulk
  4. They look very impressive when assembled in volume on the tabletop
  5. They are neither too large nor too small for a 1" grid

We'd love to stick with 20mm for everything, but it does suffer one massive drawback as a scale for tabletop gaming: there are very few decent suppliers out there with a large enough range to justify asking all the potential players of our game to invest in (yet another) scale.

Now we really have tried everything, from approaching 20mm suppliers to investigating casting our own range of miniatures. But that's not really something we want to get tied up with! So we've got to make a decision on which off-the-shelf scale to use for our game.

Whichever scale we decide on, we're going to have to re-design our board game sections for future versions - either scale it up for 28mm/32mm, or scale it down for "true" 15mm.

Given that scale creep only ever seems to go upwards (i.e. 28mm has become 32mm and is creeping towards 35mm, and 15mm is often more like 18mm), we've decided to simply go with the range that currently has the most miniatures available for it. Which means we're shifting over to 28mm/32mm.

In truth, we'd always had one eye on this as our preferred range anyway; it was creating the playing surfaces on a 1" grid that caused us to try other scales and sizes. The one thing we've stuck to, throughout all this, has been our 1" grid arrangement.

Unfortunately, it's this 1" grid arrangement that's held us back!
So we're losing the one thing we've stuck rigidly to since day one of this project, in order to claw back some of the benefits of using existing miniatures and playing pieces. Which leads to the next question - how big should our grid squares be?

We printed a couple of different-sized grids for comparison:


It's immediately obvious that, for 28mm/32mm miniatures, our 1" square grid is simply too small. The space marines in the background are touching and overlapping each other, and picking them up and moving them around is simply too fiddly to be practical. Our football players in the foreground are supposed to represent players in a rugby-style wrestling ruck (though where the ball is, who knows?!)



By increasing the grid size to 30mm, the space marines get a bit more breathing space - but with some oversized miniatures alongside (like the Mech robot, for example), or with scenery and walls also on the board, things could still get a bit crowded.

In an uncrowded, open board (such as a football field, for example) our two football players still look as though they are close enough to be engaged.



Lastly, we tried 35mm grid squares (actually, we did try 40mm but quickly decided that these were simply too big). At this size, there's plenty of room around our space marine characters, and our oversized miniatures (like the Mech robot, and characters like GW Termagants) fit inside the grid squares, even if three or four need to be placed in close proximity.

Our football players are starting to look like there's a bit of space between them now. If the grid squares were any larger they wouldn't look right representing two players locked in a physical tussle.

So there we have it. We've decided to "upscale" our playing board sections, to 35mm squares (instead of the current 1" or 25.4mm squares). In doing so, we'll leave all thoughts of smaller scale miniatures behind and focus on the most popular 28mm/32mm scale.

Sure, these individual miniatures can be relatively expensive when compared to their 15mm/20mm counterparts. But there's also such a massive range of miniatures from so many suppliers that the extra cost must surely be worth it, if only for the ability to source exactly the miniature you're looking for.

With all the recent focus on software development, it's been easy to forget about the physical game. Well, not so much "forget" as "put on the back-burner". At least with this decision made and put to bed, we can forget about crazy ideas of creating a range of custom miniatures, or trying to get miniatures made to match our Unity characters - after all, it'd be far cheaper to digitally sculpt an existing miniature character and include that in our game than to try to create an entire range of niche, obscurely-sized pewter miniatures just to match the artwork in our Unity-based app!

If it means changing our 1" grid in order to access the massive range of skirmish-sized miniatures already out on the market, then that's what we'll do.





Monday 30 March 2015

Integrating Unity with a web-based map editor

While I still feel a total noob when it comes to Unity, I've a fair bit of experience with web development and javascript. So I was in more familiar territory recently, making an online map editor, which allows users to select different tile and wall types, in a grid-like arrangement.

The editor itself is still pretty crude, but - like most of my own personal projects - it's functional enough to demonstrate how it would work.


Of course, with a bit of SteveMagic, it'll really look the biz, but for now it's simple enough. Select either the floor tile or wall type you want to place, and click into the HTML5 canvas area. Using some simple array sorting and some bitmap blitting, the map is redrawn each time a tile (or wall) is placed or removed.

Wherever you click on the canvas, a bit of javascript turns the mouse co-ordinates into a relative square number. If you're placing a tile, the currently selected tile is placed in (or removed from) the selected square.

If you're placing a wall, a bit more javascript looks at where within the square you clicked (top, bottom, left or right of centre) and places (or removes) a wall as necessary. When you're done, the whole array is written out as a string, ready to be delivered to our Unity app.
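
For the curious, the click-handling boils down to something like the sketch below. It's a rough illustration rather than the editor's actual source - the canvas id, square size and grid width are all stand-ins:

var canvas = document.getElementById("mapEditor"); //hypothetical id
var SQUARE_SIZE = 32; //pixels per grid square
var GRID_WIDTH = 8; //squares across one board section

canvas.addEventListener("click", function (e) {
     var rect = canvas.getBoundingClientRect();
     var x = e.clientX - rect.left;
     var y = e.clientY - rect.top;

     //turn the mouse co-ordinates into a relative square number
     var col = Math.floor(x / SQUARE_SIZE);
     var row = Math.floor(y / SQUARE_SIZE);
     var squareNum = (row * GRID_WIDTH) + col;

     //for walls: is the click nearest the top, bottom, left or right of centre?
     var dx = (x % SQUARE_SIZE) - (SQUARE_SIZE / 2);
     var dy = (y % SQUARE_SIZE) - (SQUARE_SIZE / 2);
     var edge = Math.abs(dx) > Math.abs(dy) ? (dx < 0 ? "left" : "right") : (dy < 0 ? "top" : "bottom");

     //update the tile (or wall) arrays for squareNum/edge, then redraw the map
});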



A bit of simple string splitting and a few Instantiate calls later, and our map is recreated from our flat HTML5 web page in our 3D game environment. All very exciting stuff. It'd be a bit boring and repetitive to give a full code listing here (and if you really must, you can always "view source" on the map editor screen) but at last it feels like we're making progress with this whole "let's-put-the-game-into-a-3d-environment-we've-never-used-before" idea!
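
On the Unity side, the rebuild might look something like this sketch (the comma-separated map format and the prefab variable are invented for illustration - walls would be handled the same way):

#pragma strict

var floorPrefab:GameObject;
var gridWidth:int = 8;

function buildMap(mapData:String){
     //one entry per square, delivered from the web-based editor
     var squares = mapData.Split(","[0]);
     for (var i:int = 0; i < squares.Length; i++){
          if (squares[i] != "0"){
               var col:int = i % gridWidth;
               var row:int = i / gridWidth;
               //drop a floor tile at the matching grid position on the x/z plane
               Instantiate(floorPrefab, Vector3(col, 0, row), Quaternion.identity);
          }
     }
}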



Guitar tutor project explained

In an attempt to get away from the computer screen for a few hours at a time, our new, musical hardware-based project is going to be a full-sized, playable guitar with light-up LED fretboard.

We're not entirely convinced we're going to make that good a job of the fretboard, so it'll likely sound nasty and out-of-tune all the time. To make it a playable instrument, then, we're going to use it like a game controller - getting feedback to indicate which string is being pressed against which fret, to determine which note(s) are being played.

Analogue-to-MIDI stuff is available. But it's mostly very expensive. And, if truth be told, not very good. Jason has an awesome MIDI bass that looks and plays just like a regular bass. But it uses frequency recognition to determine which note is being played (not which string is fretting which note) and that means

a) sometimes it's not very accurate and
b) there's a noticeable delay between plucking the string and hearing the note.
c) it requires dedicated one-signal-per-string pickups

We're going with the crazy idea of building a fingerboard from scratch, placed over a circuit board which contains

a) an array of 72 RGB LEDs (6 strings x 12 frets = 72 lights)
b) a resistor ladder, with a single resistor connecting each fret to the next one

Frets are placed into cutouts in the fingerboard, using "tangs" to hold them in place. Our idea is to solder a connector onto the underside of each tang to connect it to the next resistor in the resistor ladder.


So underneath the fingerboard, we'll have a single resistor ladder covering the first twelve frets of the guitar neck (after the twelfth fret, all patterns repeat).


Why only one resistor ladder when there are six strings? Well, here's how it's going to work:
At the end of the resistor ladder, we have a connection to an analogue input. We're going to use the string to short part of the ladder to ground, effectively changing the ratio between the resistors "above" and "below" the point where we take our analogue reading.

We're going to connect our guitar strings to ground, isolating each string from each other by using shrink-wrap where the string touches any metal parts on the bridge or where they pass through the guitar body.


Here we've shown each possible location where you can fret a string as a switch on the circuit diagram. Imagine a guitar with just one string. When no notes are fretted, the resistor ladder acts like a voltage divider with a relatively short resistance between the input pin and the power supply, and a much larger resistance between the input pin and ground (the result being an analogue input which is relatively close to the supply voltage).

Now let's fret a note about half-way along the fingerboard. The string presses against the wire fret, "shortening" the lower part of the resistor ladder, meaning the ratio of resistance between the power supply, the input pin and ground is altered. The closer to the input pin the player frets a note, the closer to ground the analogue input signal becomes.

All this sounds great for a single string. But a guitar has six of them - and each wire fret is going to be conductive no matter which string is pressing against it. If we were to fret, say, the second and fourth frets on the guitar, and query the analogue input pin, only the fret closest to the input pin would actually be making a difference. It's a bit like closing the switches above R1 and R3 in the diagram above - the switch at R3 shortens the length of the resistor ladder, so the fact that R1 is also closed makes no difference whatsoever.

What we need to be able to do is to poll the fret position of each string, individually, and to "disconnect" the strings we're not interested in while polling each one.

Sounds tricky? Not really. Thanks to "tri-stating" on our PIC microcontroller inputs, we can do exactly that. We simply make all pins connected to the strings tri-state (set to high-impedance inputs) then make just one string/pin a digital output and pull it to ground.

We then read the analogue input and convert the reading into a fret position. Because we know which string we "activated", we know which fret only that string is in contact with (if any). If another string is being fretted closer to the input pin, it doesn't matter - because that string is "disconnected" from the circuit (as it's connected to a high-impedance input pin).

To prove the concept, we built a simple test-rig


There's our twelve-stage resistor ladder, connected to a PIC microcontroller (a 16F1825, I think). The first thing to do is to test this, by shorting each resistor to ground and reading the analogue values recorded by the PIC.

With all equal 10K resistors:
Fret 10: 384
Fret 9: 6336
Fret 8: 11520
Fret 7: 15936
Fret 6: 19648
Fret 5: 23232
Fret 4: 26240
Fret 3: 28608
Fret 2: 31040
Fret 1: 33472
Fret 0: 35072
open: 35392

We hit a snag here, because our analogue read routines report varying values within quite a wide range (this is the nature of analogue reads - normally you take multiple readings and average the results to get a closest match). Between frets 4-10 there isn't really much of a problem. But further down, the difference between the values sometimes gets very close to the natural margin for error in our analogue read functions.

It's no good taking a reading from fret zero of 35k +/- 1000 when the reading for fret one might also be 33k +/- 1000. There's too much chance of an overlap between values and errors, so we looked to reduce this by changing our resistor ladder slightly.

Instead of using all the same resistor values, we made the resistor values larger, closer to the zero fret. So frets 0 and 1 had 23k resistors connecting them to the next rung in the ladder. Frets 2-4 were connected using 18k resistors, frets 5-7 were connected using 15k resistors and all the other frets were connected using 10k resistors.
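
To see why this helps, here's a quick javascript model of the ladder as a voltage divider. The 100k pull-up between the supply and the ADC pin is an assumed value (for illustration only), so the absolute numbers won't exactly match the tables here - it's the relative spacing between adjacent frets that this is meant to show:

//model: supply --[RTOP]-- ADC pin --[ladder]-- ground
//fretting grounds that point on the ladder, so only the resistors
//between the ADC pin and the fretted position stay in the bottom half
var RTOP = 100000; //assumed pull-up (ohms)
var ADC_MAX = 65472; //10-bit result, left-justified in a 16-bit word

function expectedReadings(ladder) {
     //ladder[0] is the resistor nearest the ADC pin; out[0] is the
     //reading for fretting right next to the pin (the highest fret),
     //and the last entry is the open string
     var out = [];
     for (var n = 0; n <= ladder.length; n++) {
          var below = 0;
          for (var i = 0; i < n; i++) { below += ladder[i]; }
          out.push(Math.round(ADC_MAX * below / (RTOP + below)));
     }
     return out;
}

//twelve equal 10k resistors vs the graduated ladder described above
//(values in k-ohms, reading from the ADC-pin end back towards the nut)
var equal = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10];
var graduated = [10, 10, 10, 10, 15, 15, 15, 18, 18, 18, 23, 23];

console.log(expectedReadings(equal.map(function (k) { return k * 1000; })));
console.log(expectedReadings(graduated.map(function (k) { return k * 1000; })));

Running this shows the equal ladder bunching its readings together towards the open-string end, while the graduated ladder keeps adjacent frets a more comfortable distance apart - which is exactly what the measurements below bear out.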

The results this time were much more encouraging:

Fret 12: 384
Fret 11: 6272
Fret 10: 11328
Fret 9: 15744
Fret 8: 19520
Fret 7: 24256
Fret 6: 28480
Fret 5: 31808
Fret 4: 35200
Fret 3: 38656
Fret 2: 40384
Fret 1: 42624
Fret 0: 46528
open: 47040

While it would be nice to have wider gaps between the values at the lower-valued frets, they're now a reasonable distance apart. With a working resistor ladder, the next thing to do was to prove that the whole thing worked with two or more notes fretted (we simulated this by leaving two wires connected at two different points on the resistor ladder, and "activating" only one wire at a time).


Define CONFIG1 = 0x0804
Define CONFIG2 = 0x1dff
Define CLOCK_FREQUENCY = 32

declarations:
     Symbol led_out = PORTC.0
     Symbol analogue_in = PORTA.4
     Symbol tx = PORTA.0
   
     Dim b As Byte
     Dim r As Byte
     Dim iw As Word

init:
   
     OSCCON = 11110000b '32Mhz internal
     APFCON0 = 01000000b 'alternative tx/rx for pickit2
     ANSELA = 00010000b 'make PORTA.4 an analogue input (all others digital)
     ANSELC = 0x00 'PORTC is all digital
   
     ConfigPin led_out = Output
     ConfigPin analogue_in = Input
     ConfigPin PORTC = Input
   
     Define ADC_CLOCK = 3
     Define ADC_SAMPLEUS = 50
   
     b = 0
     r = 0
   
loop:
     High led_out
     WaitMs 500

     'make all of PORTC inputs (tristated/disconnected)
     ConfigPin PORTC = Input
   
     Select Case r
           Case 0
           'do nothing: let's see what value we get
           '(57k on average, but sometimes as low as 53k)
         
           Case 1
           'make c4 an output and pull it low. This should
           'short the fifth resistor to ground and our reading
           'should be somewhere in the 49k-45k range
           ConfigPin PORTC.4 = Output
           Low PORTC.4
         
           Case 2
            'make c3 an output and pull it low. This should
            'short the sixth resistor to ground and our reading
            'should be somewhere in the 53k-50k range
           ConfigPin PORTC.3 = Output
           Low PORTC.3
     EndSelect
   
      'give the newly-set pin states a moment to settle
     WaitMs 2
   
     'read the analogue input
     'RA4 is AD channel 3
     Adcin 3, b
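      '(the 10-bit result is left-justified across ADRESH/ADRESL,
      'which is why all our readings come out as multiples of 64)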
     iw.HB = ADRESH
     iw.LB = ADRESL
   
     Serout tx, 9600, "input "
     Serout tx, 9600, #r
     Serout tx, 9600, ": "
     Serout tx, 9600, #iw, CrLf
         
   
     Low led_out
     WaitMs 500
         
     r = r + 1
     If r > 2 Then
           Serout tx, 9600, CrLf
           r = 0
     Endif
Goto loop
End


We have two different i/o lines connected (the white coloured wires at the bottom of the photo). In firmware we "activate" each digital i/o pin, and read the analogue input, while leaving the other wire(s) connected.

Results with two wires connected:
input 0: 48576
input 1: 15808
input 2: 31872

input 0: 50048
input 1: 15040
input 2: 31680

input 0: 48576
input 1: 15808
input 2: 31872

input 0: 48704
input 1: 15808
input 2: 31936

Looking up these values on our previous table of results, we can see that a value of around 15k corresponds to a note at the 9th fret being held, while a value of around 31k is typical of a note at the 5th fret being fretted.

Looking at the photo of the breadboard, we can see that these are the positions of the wires during testing, thus demonstrating that it doesn't matter if more than one string is connected to more than one fret - we can poll each string independently of the others to work out which note is being held on each string.

That's enough for tonight. It's always pleasing to end the weekend being able to demonstrate that an idea or theory does, in fact, work!


Sunday 29 March 2015

Guitar tutor project

It's been programming, programming, programming, code, code, code for a few weeks lately. And sometimes it's nice to create something that you can hold in your hand. So - as a break to get away from the computer screen for a little while - another "weekend project" (likely to take about five weeks in reality) was started.

This time, it's something that I personally hope to benefit from (rather than making cool stuff for other people, or to demonstrate something for someone else!).

Buoyed by our recent success with making a MIDI keyboard that doubles-up as a tutor, we're looking to make a guitar tutor that can double-up as a MIDI controller! This is likely to need a bit of explanation - so let's start here:

Papastache has some great lessons online (and has a great range of DVDs going much more in-depth) explaining how to target chord tones, and how to do more than just run up and down pentatonic scales. He makes passing reference to the idea of using major and minor sounds together, as advocated by tutors like Griff Hamlin - but it's not enough to just flip-flop between them: you need to know when, and you need to be aware of the chords playing underneath your guitar leads and licks. Brett Papa also mentions playing different chord shapes, not just the regular "open chords" - which sounds a lot like Steve Stine's CAGED theory approach.


(nope, we don't see it either. I wonder where the name PapaStache comes from?)

Whichever tutor teaches whichever approach, they all rely on the same core skill - being able to "see" chords all over the fretboard, and being able to see chord shapes in many different keys.

This sounds very similar to being able to - literally - see our chords and scales on our MIDI keyboard, but re-purposed for a guitar fretboard layout, rather than for a piano-style keyboard.


Above is an example of how to play a C-major chord in lots of different places all over the neck. What Papastache teaches are lead patterns that you can "hook onto" these different chord shapes (he works mostly with E-shape, A-shape and D-shape chords, but occasionally wanders into a C-shape chord too).

Now we could just wire up a bunch of (surface mounted) LEDs into a fretboard shaped layout and make them light up using the same technique as before, perhaps a couple of MAX7219 drivers and some clever PIC-based firmware. But we're looking to go one stage further this time!

Because there's so much overlap between chords, scales and so on (simply lighting up the C-shape chord and the A-shape chord in the example above, for instance, would just look like a bunch of random frets highlighted, rather than distinct chord patterns) we're going to use RGB LEDs.

The idea is that we can program in some way of deciding which LEDs to light up (depending on which chord/scale/position has been selected) and, using some WS2812B RGB LEDs, have, say, the C-shape chord light up in one colour (green, perhaps), the A-shape chord in another (say, red), and so on. We can then show all the different chord shapes across the whole neck at the same time and - by using different colours for each, and perhaps a pure white shade for the root note(s) - keep the different patterns distinct from each other.
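
In javascript-ish pseudo-code, the mapping might look something like this. Everything here is illustrative - the string-by-string LED layout is a guess at the final wiring, the shape data is made up, and setLed() is a stand-in for whatever ends up clocking colour data out to the WS2812Bs:

var NUM_FRETS = 12; //patterns repeat after the twelfth fret

function setLed(index, r, g, b) {
     //stand-in: push one RGB value out to the LED chain (hardware-specific)
}

//assuming the LEDs run string-by-string, low E string first
function ledIndex(stringNo, fret) {
     return (stringNo * NUM_FRETS) + fret;
}

//hypothetical shape data: fretted notes, plus a flag for root notes
var cShape = [
     { string: 1, fret: 3, isRoot: true },  //C on the A string
     { string: 2, fret: 2, isRoot: false }, //E on the D string
     { string: 4, fret: 1, isRoot: true }   //C on the B string
];

function lightShape(shape, r, g, b) {
     for (var i = 0; i < shape.length; i++) {
          var note = shape[i];
          if (note.isRoot) {
               setLed(ledIndex(note.string, note.fret), 255, 255, 255); //roots in white
          } else {
               setLed(ledIndex(note.string, note.fret), r, g, b);
          }
     }
}

lightShape(cShape, 0, 255, 0); //C-shape in green; the A-shape might go in red, and so on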

All this is probably very similar to what we've already done, with our MIDI keyboard. At least, converting musical notes and music theory into digital binary is anyway.

Now it's highly unlikely that we're going to be able to make anything like a playable guitar, with a light-up tutorial fingerboard. That would be a massive task, and require lots of intricate measuring and cutting. But that's no reason not to give it a go! Wouldn't it be great if, as well as showing you where to put your fingers, the thing could actually be used to play a tune (just like we turned our light-up keyboard tutor into a fully working MIDI instrument)?


Of course, there's no reason why we can't make a working, replacement fingerboard. People do it all the time. But it takes skill and patience, and super-fine tools and equipment, and a steady hand, and a passion for guitar-making, and.... well, a whole heap of other stuff that we just don't have! If just one fret is in the wrong place, or the nut is slightly out, or the bridge is in the wrong place, the entire instrument will always sound out-of-tune - and no amount of fiddling will fix it: a bad guitar will always just be a bad guitar!

But what about making a MIDI-based feedback system instead?
What about having a controller which accepts midi-like data, lights up some dots to show you which frets to play, and then responds to whichever frets you're holding the string(s) against? What about a Guitar Hero type learning system, but with real guitar feedback (instead of a silly five-button controller that just-so-happens to be shaped like a miniature guitar)?

That would be really cool.
And it wouldn't matter if the frets were a little bit off - so long as they're placed closely enough to where they should be that you don't have to re-learn the guitar all over again, it should be good enough.

So how to detect which fret the string is being held against?
This is a big topic - and one that will probably need another post, as this one is getting on a bit already....

Wednesday 18 March 2015

Unity and quality settings

We've been pretty bogged down in learning the Unity IDE and picking our way through the differences between JavaScript and UnityScript (yep, that really is a thing - and it's ever-so-slightly different to "regular" javascript). But this evening we took an hour out to play with the quality settings. And we got some interesting results.

At the minute we're developing on a PC - it's a quad-core 3GHz something-or-other with 6GB of RAM and a whopping 1GB graphics card (these numbers actually don't mean very much to me, personally, but the guy I got the machine off seemed to think they sounded impressive). It's the most powerful PC I've ever used. But then again, for the last ten years or so, most of my coding has been done in NotePad (well, EditPlus2 if I'm honest) or Visual Studio!

Anyway, it's a half-decent machine (as far as I'm concerned anyway) and it runs 3D graphics fairly well. So during a bit of "downtime" we had a play with the quality settings.

I didn't even know this kind of thing existed - it was only after asking Steve about how he prepares his software for "real-world applications" that he suggested using the same code-base and simply dropping the graphics quality for lower-end devices. It seemed like a good idea, so we had a play to see what the different settings did:

On our machine, there was actually very little difference between the first three settings: "fastest", "fast" and "simple". Maybe we didn't have enough lights and effects to tell them apart; in all of these settings, there were few or no shadows on any of the objects.


Noticing the quality level change slightly as we went to "good" quality, we actually turned off shadows, as these were a little distracting. At this stage, we were more concerned with how our shapes were rendered, rather than the "sparkle" that particle systems and lighting occlusion (is that even a thing?) added to a scene.


Compared to "simple" mode, where the edges of all the shapes on screen had very definite "jaggies" along the edge, the "good" mode did a decent job of smoothing all the edges. So we were expecting great things from "beautiful" mode...


Beautiful mode sharpened up a lot of things in the scene; it was only after comparing screenshots between "good" and "beautiful" that we noticed what the actual difference was. The bitmaps on the floor tiles are much sharper (we hadn't really noticed that the deforms on the floors in "good" mode actually made them look quite blurry, on second glance).

But in sharpening up some of the bitmaps, something else happened too. Our animated (soldier) character started to display little white dots along the seams of the arms. They only appeared every now and again, and only for a single frame of animation. But they were noticeable enough to be distracting.

If you looked at the surroundings (as you might for a first-person shoot'em up) beautiful was definitely an improvement over "good". But if you looked at animated characters in the scene (as you might with a third-person shooter, for example) "good" actually gave better results than "beautiful" - the characters were certainly better animated, and the edges were not so jaggy (though the perspective distortion on the bitmaps did make them appear a bit more blurry).

Strangely, things didn't improve with the ultimate "fantastic" setting.



Once again, the scenery got that little bit sharper, but the animated character still had those annoying flashes of white every now and again. Not there all the time, but noticeable enough if you watched the animation - a little like the way you might spot a rabbit's tail as it runs away from you in an empty field. If you look for it, it's hard to notice - but just gaze at the scene and every now and again you see a flash of white.


While "good" does create distorted floor (and ceiling) tiles, we're actually thinking of sticking with "good" as the default quality setting, if only because those flashes of white on the animated character are so distracting. The jagged edges of the walls and floors (and, for that matter, around the character) in "beautiful" mode are pretty off-putting too.

Just when we thought we'd found the best graphics settings for us, we then discovered a setting you can change with a single line of script, which confirmed our choice: QualitySettings.antiAliasing.

This can be set to zero, two, four or eight.
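
Setting it from script is a one-liner. Here's a minimal sketch (dropped onto any object in the scene, and assuming the default six quality levels, where "good" is index 3):

#pragma strict

function Start () {
     //jump to the "good" quality level, applying expensive changes too
     QualitySettings.SetQualityLevel(3, true);
     //then force anti-aliasing up to its maximum
     QualitySettings.antiAliasing = 8;
}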


The differences in quality are most noticeable with antiAliasing set to eight.
The screenshot above shows the same scene, at "good" setting, with antiAliasing set to zero (left) and eight (right). The scene on the right is much easier on the eye!

The decision was sealed when we flipped back to "beautiful" settings and - even with antiAliasing set to maximum - still got jaggies around our animated character. At "good" quality, the character sits "inside" the scene - he looks like he's part of the spaceship interior. At "beautiful" quality or above, the jaggies around the edges - and in particular along the gun barrels - make it look like an animated overlay, plopped on top of a 3D render.

So there we have it. It may be peculiar to our machine, and work perfectly fine on other devices. But for now, we're sticking with max antialiasing, and limiting our graphics quality to "good". We'll just have to learn to live with the slightly blurry tiles (or perhaps whizz the camera around quickly, and call it motion blur!)


Monday 16 March 2015

Creating animations with Unity - complete noobs only!

While we're still finding our way around Unity, we're stumbling about, falling into all kinds of little gotchas and not quite understanding how it all works. But today we managed to create our own custom animation, entirely from scratch - something which has been a real headache for days now, as screens appeared and disappeared, previously-seen windows were suddenly no longer available, and so on.

Here's how we made some sci-fi sliding doors for our Starship Raiders board game. It may not be the best way to do it. It may not even be the right way to do it! But this is how we did it, and it works, so better get it written down before we forget!

This is the animation we created:


It's simply a door frame with two door sections "inside" it. When a function call is made, the doors slide apart. Another function call brings the two back together. A simple spot effect is played as the doors open/close.

To begin with we're using the Top Down Sci Fi (mobile) environment from Manufactura K4. We just placed a couple of their source models into our scene (to keep things simple).


The doors are originally designed to slide horizontally. But in our game, we're going to be putting a floor and ceiling in place, and sliding them vertically. If we stuck with horizontal, then we'd have to keep at least one blank panel alongside each edge of the door, so that the doors aren't visible after we've pushed them open. By turning them to operate vertically, it doesn't matter if they protrude above the ceiling, or below the floor, as they will be hidden by the floor and ceiling tiles anyway.

After rotating the doors, we placed them inside the frame - lining them up "by eye". We also made sure that each door was made a child of the doorframe. This is important, should we want to make copies of the sliding door to use again in the game. By simply cloning the parent (frame) we don't have to mess about setting up the doors again in future.


To keep things nice and tidy (it's not really necessary, but after using Unity half a dozen times or so, you quickly learn it really does make things easier in the long run!) we created a folder for our animations, selected one of the doors and selected "Animation" (not animator) from the menu.


At first we really struggled with even this basic premise. We tried creating an empty animation file, then tried to find some way of telling it which object to apply the animation(s) to. It may be possible to do it this way, but we just got confused. So this is our method - select the item you want to move, then bring up the animation window.

When you click "add property" you'll be prompted to save your animation to a file. Enter a suitable filename here. We called our first animation "door1_open".


Gotcha number two - if you don't see the "add property" button, it's probably because you've nothing selected that can be animated. It took us ages to work this one out. By selecting an object before bringing up the animation window, you should always have an "add property" button, because Unity already knows which object you want to animate.


There are a few things to note here:
Firstly, we selected the "position" property of our door, and the animator immediately displayed two sets of keyframes. Since we want our door to start off in the closed position, we left everything alone and moved the playhead (indicated by the arrow) to the second set of keyframes.

At this point, the record button, playbar at the top, and the x/y/z properties were all lit up in red. This tells us that we're in "record mode". Anything that we move around here will be recorded in the animation.

Making sure that the playhead was in the last set of keyframes, we lifted the door upwards, to its final open position (that it's popping out of the frame doesn't really matter - when the ceiling tiles are in place, it'll just look like it has slid inside the frame). Click off the record button and the changes are committed to the animation (the door drops back to its original position in the scene as well).

Here's gotcha number three: we want to create a second animation, moving the door from the open to the closed position. It's quite simple, once you know what you're doing, but it took us ages to work this one out!


Animations are applied to objects - so you must have an object selected in order to animate it. We spent ages creating second, empty animation clips, then wondering how to get back to this bit, where we could add keyframes and move things around. The answer is: with the object selected, click the drop-down at the top of the animation window and create a new clip.

This will throw up the save dialogue window, and allow you to create your second, separate animation file. Because the door object is selected, it already knows which object to animate, and so, once again, you're presented with the "add property" button.

As before, we selected transform - position, and this time, on the first frame, moved the door to the same Y co-ordinates as at the end of the previous animation (you can just type into the co-ordinate boxes in the inspector panel). Because we're closing the door - moving it from up in the air back to its resting place - we left the last set of keyframe values as they were; you can always hit play to preview the animation.

With our "open" and "close" animations for door 1 complete, we repeated the process for door 2, until we had a total of four animations

[edit: it has been pointed out that, had we selected the door frame and started our animations from there, we could have set the y-co-ordinates of both door1 and door2 in a single animation, since both are child objects of the door frame. This would have meant having just two animations - one that animated both door1 and door2 to the open position at the same time, and a second which brought them both to the closed position. In future we'll use this method, but for now we're leaving this instruction post as-is, because this is the method that worked at the time!]


With our four animations in place, it's time to create an animation controller and bung some script in, to make the door open and close!

Selecting each of the doors, we created an animation controller for them (we'll put the animations in place in a moment); then, selecting the door frame (not the individual doors), we created a script and dropped this onto the (parent) door frame too.


Inside each door controller we repeated the same process. Select a door and then "Animator" (not animation) from the menu. Create a boolean parameter and call it isOpen. Create a blank, empty state, and make this the default.

Next drag the appropriate open/closed animations into the animator window.
So if you've got door one selected in the scene, drop the door1_open and door1_close animations into the window. If it's door two you have selected, drop door2_open and door2_close in there.


Now our default state is "door is closed". So we want a transition from the default state to door1_open, when the boolean value isOpen is set to true. Click the default state (to select it) then right click, and select "make transition" before drawing a line to the door1_open state.

Click on the white arrow that appears between the two states, and from the properties panel, add a new condition - isOpen is true. This tells Unity that at any time we're in the default state, we can play the "door opening" animation whenever the boolean value is set to true (we'll do that later with a bit of scripting).


Now we need to create a transition from door1_open to door1_close. The condition for this is isOpen = false. This is because after we've set the isOpen value to true, the open door animation will play, and the "state machine" in the animator will remain in the "open" state. So when we've opened the door, Unity will keep monitoring the isOpen property and if it ever goes false (while the door is open) it will then play the door1_close animation.


Lastly, we make a transition from the _close back to the _open animation, any time the isOpen property ever goes true again. Once all this is in place, we repeat the whole lot all over again, for door2.

If you hit play at this point, nothing particularly exciting happens. In fact, nothing should happen at all. If your doors are opening and closing at this point, something has gone wrong (and you're probably feeling like we did for two days!) Let's write some code to make these things move!
In the door frame we placed a controller script. This needs editing now...

#pragma strict
var door1:GameObject;
var door2:GameObject;
var door1Anim:Animator;
var door2Anim:Animator;
var doorState:boolean;

function Start () {
   
     // loop through the children of this object (rather than just
     // use object.find which could return any matching name on the map!)
     // and get the two door components for this frame object.
     var allChildren = gameObject.GetComponentsInChildren(Transform);
     for (var child in allChildren) {
          var s:String=child.name;
          if(s=="Doors_02A"){ door1=child.gameObject; }
          if(s=="Doors_02B"){ door2=child.gameObject; }
     }

     doorState=false;   
     door1Anim = door1.GetComponent(Animator);
     door2Anim = door2.GetComponent(Animator);
}

function Update () {

}

function openDoor(b:boolean){
     door1Anim.SetBool("isOpen",b);
     door2Anim.SetBool("isOpen",b);   
     doorState=b;
}

This little script runs as soon as the door frame object is created at "runtime".
It basically gets a reference (pointer) to the child door objects, and the animator objects that control their animations.

The openDoor function is a publicly accessible function - it's going to be called by our main controller in a minute - and can accept either true or false; whichever value is sent into this function is passed to the two door controller objects. If the door is in either its default position or the closed position, we created a transition to play the open animation whenever the isOpen parameter goes true.

Similarly, if the door has played the open animation, it plays the closed animation whenever the isOpen parameter goes false. Any other combination of true/false is ignored (so if the door is open and the function openDoor(true) is called, nothing happens - you've tried to set the door to open, and it's already open, so it is correct to ignore this request).

So now all we really need to do is to create a script to allow us to call the openDoor function on the doorframe...


There are probably a hundred ways you can do this. We like to create a new, blank gameObject and call it "mainController" and add a script to this. It just makes it easier to keep everything in the same sort of place, once the project gets a little larger (and a bit more unwieldy).

In our main controller script, we just place a couple of buttons on the screen so we can call the openDoor function. In reality, our game logic will be making all kinds of decisions and deciding which doors need to open and close. But for testing our animations, this will do for now.

#pragma strict

function Start () {

}

function Update () {

}

function OnGUI(){
     if (GUI.Button (Rect (10,10,150,30), "doors open")){
          var d:GameObject=GameObject.Find("Gate_02");
          d.GetComponent.<door_controller>().openDoor(true);
     }
   
     if (GUI.Button (Rect (10,50,150,30), "doors close")){
          var e:GameObject=GameObject.Find("Gate_02");
          e.GetComponent.<door_controller>().openDoor(false);
     }
}

And that's it!
Marvel at your amazing sliding doors


One last little gotcha - if your doors open then flip shut, then start opening again, make sure you haven't got the "loop" option ticked in the door1_open, door1_close, door2_open, door2_close animations.


For added authenticity, you can add in a "hydraulic swoosh" sound, as the doors open and close. But that's probably a bit much for one night. For now we're just thrilled that we managed to understand enough about the crazy visually-based Unity editor to get some doors to open and close!

Good luck.......


Animating characters in Unity - not all models are the same

There are some brilliant models on the Unity Asset Store website. We've already invested quite heavily in Unity (over just the last few weeks - it's quite alarming how quickly $10 here and $30 there soon adds up to a few hundred dollars!) and in doing so have found a wide range of quality in the Unity models.

There are plenty of 3d models available online, not necessarily designed for Unity, but with animations and poses that can be (relatively) easily imported into Unity. Then there are some which are an absolute nightmare to get working!

Originally we were really impressed with the Mixamo website.
It offers loads of character models, and some really cool animations (albeit at $5 per animation, which could quickly end up being quite a pricey way to put together a simple game!). Their online rigging tool is particularly impressive.



Simply upload a mesh (even without a skeleton or any complicated rigging or bones) and give the system a few cues - where to find major joints like elbows, knees and wrists - and let it run for a few minutes. The resulting rigged character is surprisingly easy to animate; just select one from hundreds of different actions, and apply it to the rig. It's as easy as that!

Mixamo looks like a great way of quickly producing characters for your games. Except, it doesn't always play nice with Unity.

Now, we're only new to Unity, but already we know what a decent model looks like. You import it, drag-n-drop a few controllers and, hey presto! you get a working model. The StarDude characters are great examples of this.

Mixamo claim to have worked with Unity for a number of years, so we were quite looking forward to quickly and easily assembling a zombie horde for another of our game ideas (we've tried importing models into Blender and 3DS Max, and applying some pre-built mo-cap animations, but it's a lot of work, and a bit hit-and-miss as to whether it'll ultimately be successful or not).

But the Unity/Mixamo integration isn't playing nice - with either Unity4 or Unity5, we get the same results. Now it might just be that we're doing something wrong - but we're following the same process that has successfully got a number of animated characters from other suppliers working, so perhaps there's just something we're not quite getting.

Here's how we tried animating our Mixamo free character (screenshots are from Unity4, but we get the same results using Unity5):

After installing the Mixamo plug-in for Unity, a screen very much like the Asset Store appears in the Window menu. We simply downloaded the (free) zombie model from Mixamo and dropped it onto the screen.


The Mixamo plug-in allows you to try out their animations in your project window. Simply select an animation then drag-n-drop your model onto it and click "preview". The animation is downloaded and a clone of your character acts out the animation in both the game and scene windows.


So far so good. In the screenshot above, we can see the original zombie character, dropped into the scene, as well as the clone character, carrying out the walk cycle animation. But this is where things start going a bit weird.


We downloaded the Mixamo (free) zombie walk cycle (by "buying" it at $0.00) and imported it into our project. Just like we have done with so many other models, we then created an animation controller and dropped it onto the model.


We then opened the Animator window and dropped the (newly downloaded) animation into it, setting "walk" as our default animation - just as we have done so many times, with so many other models from so many other providers.


Then simply set the game playing, to marvel at our shambling zombie walking animation. Ta-da!


Oh.
And this is where we got stuck.
Well and truly stuck.
Stuck like there's no answer to this! We tried all the online tutorials and followed them to the letter. Then we tried the forums and made sure that our model was rigged as "humanoid" (it was) and the animation was set to "humanoid" (not legacy). We tried running the animation as "legacy" and even tried dropping the animation straight onto the model (instead of using an animation controller).

It got so bad that we even entered Unity in debug mode and changed the animation type from 1 to 2 (as suggested in one of the online forums). Nothing worked.

No amount of deleting, restarting, reinstalling, tampering, tinkering and hacking got us any further than this weird, slightly cramped pose. The Mixamo preview animation was exactly as we wanted it, but we can't figure out

a) what we're doing wrong to make the zombie behave like this, and
b) what we need to do to make it animate properly.

It's really frustrating - because Mixamo have a massive library of characters and animations which they say are designed to simply drag and drop into Unity, to allow you to get on with the fun stuff of writing your games.

Which is exactly what we want to do.
If anyone can shed any light on why this doesn't work, please leave a comment below!


Friday 13 March 2015

Using Stardudes rigged characters in Unity

So we hit upon this genius idea about rewriting our entire board game app, moving away from Flash and trying to build the whole thing in Unity.

Firstly, Unity compiles down to a multitude of platforms. Flash is great for iPhone/iOS development, and does a passable job of creating Windows executables. But Unity does all this and a whole load more! It can compile for Linux and Mac, as well as iOS, Android and Windows - and even consoles like Xbox. And it works with native .NET code, as well as Unity-targeted sort-of-javascript.

All this meant we had to give it a try.
First up, we hit the Asset Store and bought an entire mobile-friendly sci-fi environment called Top Down Sci-Fi from Manufactura K4.


While we're still not sure how to optimise for mobile (it involves creating single sheet textures, fake lighting and low-poly count objects, apparently) this kit looks great not just on mobiles, but even on our large dual-screen setup.


Using the environment was as simple as throwing some money at the screen, following the installation instructions and hitting play! It worked straight "out-of-the-box" with no coding or setup required. We're going to have a play about with that at some point in the future, but now we needed some characters to populate our sci-fi environment.

Now there are plenty of online tutorials about creating meshes, setting up skeletons and rigs and animating characters using software like 3DS Max and Blender. But this is a whole world of development that we just don't have time to invest in! Far easier to exchange some cash for some pre-made assets from the Unity Asset Store....

Over the last couple of weeks - partly out of eagerness, and partly because we've no idea what we're doing - we've done a lot of this kind of swapping cash-for-ones-and-zeros, and have a few different assets for Unity, all of varying quality.

Some of the better assets we invested in are the StarDudes by Allan McDonald.


These are not only great-looking characters, but are relatively low-poly (so ideal to put into our mobile-friendly environment). They also have a variety of head types, to easily create different character races, and different materials/textures to quickly change the look and feel of their space suits.


The characters also come with an assortment of great ready-to-go animations. These include two idling animations (where the character stands still and looks around, casually) as well as some walking, firing and drop-down-dead animations to boot.

The slightly toony looking characters don't look at all out of place onboard our 3d spaceship - which itself has a slightly cartoony feel about it, thanks to the solid black lines and bold use of colours.


There are a few different tutorials online describing different ways of creating animations in Unity. In recent releases, it uses a system called Mecanim, which uses a simple state machine to blend between animations - there's nothing to shatter the illusion of immersive game play like a character that snaps from one animation straight into another. The Mecanim system does away with this, creating "transition blends" from one (or multiple) poses into another (or others).

It has taken some time to get used to the mix of visual drag-n-drop and script/coding approaches that are required to make Unity work, but once you know what you're doing, animating a character can be quite straightforward (until you know what you're doing, it can be a horrible, confusing, frustrating experience as there's nothing immediately obvious to tell you why some characters will happily take up an animation sequence, while others stubbornly remain in their default T-pose).

Every character (that needs animating) needs an animator controller. This is a file that describes the state machine that controls the animations. "Inside" the animator controller live all the animations, and the relationships between them.

Because Unity still supports the "legacy" method of animating (by placing the animations directly onto the model, without the use of an animator controller) and also animation by scripting (where a script placed on a model manipulates the rotation and position of the rig bones directly) simply comparing two or more models to see how they work often leads to more confusion than explanation!

Here's how we animated our StarDudes characters:

First, place a character in the scene.



Create an animation controller. At this stage, it's little more than an empty file.



Drop the controller onto the model in the scene. In the model properties you should now see the controller linked to the model.


It's at this point that we need to add our animations.
With the model selected, open up the "animator" (not the animation) window from the menu.


Now find the animations you want to use with this model, and drag and drop them into the animator controller window. To preview an animation, expand the file containing the animation (a lot of animations "hide" inside .fbx files so click the little grey arrow to see all the contents of the file). A single click on the animation will display it in the preview window. Once you've found the animation you want simply drag and drop it into the Animator window.


The first animation placed in the window becomes the default animation. If you add in more than one animation here, you can choose which one should be the default. The animation in orange shows the default animation - all other animations appear in grey.

At this point, you can try out the game and see the (default) animation being applied to the model. If all has gone well, instead of the default T-pose, the character should be playing your animation:



Flip back to the Animator window, add some more animations, and right-click and drag the transitions between the animations. Click on each transition arrow to set the criteria that triggers the transition.


For example, we might decide that we have our character idling to begin with, so we right-click and make our idling animation the default. We might then decide that should the player's speed increase beyond zero, the character should transition from idling to walking.


Any transition defaults to blending from one animation into the other after the first has finished playing. Exit time controls how quickly one will fade into the other.

We do this by creating a "one-way" transition from idling to walking, and set the parameter "speed" to "greater than zero". This means that as soon as our player speed is positive, Unity will gently blend the idling animation into the walk cycle animation. There's no need to do anything other than create this relationship - Unity takes care of making one animation transition smoothly into the other, without any nasty jumping or flailing limbs.


But now our character just walks and walks and keeps on walking. We need a way of getting him to stop, when his speed reaches zero again. This means we need to create a second transition - only this time the "direction" goes from the walk cycle to the idling animation; and this time we set the criteria to "speed equals zero".

That's it.

Now, whenever the player speed is non-zero (and positive) and the character is displaying the idle animation, Unity will ease the character into the walk cycle. And whenever the character is running the walk animation, and the player speed drops to zero, Unity will ease the character into the standing-still-and-idling animation.

All with a bit of dragging and dropping, and not a single line of code!