In fact, this is exactly the approach we initially used. But it wasn't without its problems.
To begin with, the range sometimes didn't go down. It actually went up! After a bit of investigating, we think we found out why: place a hard surface in front of an ultrasonic sensor and you can get very accurate distance measurements.
Easy.
But put a big lump of meat dressed in a woolly jumper (a person) in front of the sensor and the sound is dampened. And reflected off at funny angles. And basically mangled, to the point where the return echo actually takes longer than it should to get back to the transducer. Now, instead of consistent, solid readings, you get a measurement greater than it should be - not because the object is far away, but because it's squishy and has absorbed/distorted the sound waves.
High-end ultrasonic sensors also include a degree of self-calibration, so the problem is exacerbated when the sensor is getting a load of differing values reflected back. Quite often, the sensor will "calibrate out" the squishy meat-sack, and it appears that nothing is there.
So when you first stand in between the sensor and a hard surface (like a wall), the range reading actually goes up, not down. This is OK - we can simply look for a difference in range and use that as a trigger. But as long as you stand in front of the sensor, dampening/absorbing the sound, the sensor will re-calibrate and return "nothing found" results (usually it just returns its maximum range value, but the result is the same).
Move about, and occasionally, in among the "maxed out" values, you'll see a shorter value, indicating that the sensor has seen something up close. But the values are only just good enough to use as a trigger - and certainly not for determining how close a person is. It's the difference between a "person sensor" and a "movement sensor".
We tried a couple of other ideas - PIR sensors work well, but only as movement sensors, not person-proximity sensors. We even tried laser range-finders. But dark clothing scatters the returning laser beam enough to make the readings inconsistent (they are designed, after all, to bounce off the walls of a house, not people).
So we came up with a bit of a weird idea: we want to detect when someone is standing in front of our device. Preferably, detect when they are also facing the device. People have faces. Why not use face detection?
- If they're in place, but facing the wrong way, we don't want to trigger.
- If they're not in place, but their arm gets in the way of a sensor, we don't want to trigger.
- If they're too far away - or too close - we don't want to trigger.
We basically want a trigger when someone is stood at the right distance, facing the device.
So why not stick a camera in the device, shove a Raspberry Pi inside it and use face detection? Most face detection routines can create a bounding box around a face - we could use these co-ordinates to determine how close (big) the face is and how far off-centre it is, to determine whether or not we should trigger.
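As a rough illustration of the "how close is the face" idea (this isn't our actual code), the bounding-box height maps onto distance via the pinhole camera model: a face that fills more pixels is closer. The focal length and real-world face height below are made-up example values, not measured ones:

```python
# Sketch: estimate distance from the height of a face bounding box,
# using the pinhole camera model. Both constants are illustrative guesses.
FOCAL_PX = 600.0        # assumed camera focal length, in pixels
FACE_HEIGHT_CM = 22.0   # assumed real-world height of an average face

def estimate_distance_cm(bbox_height_px):
    # Distance is inversely proportional to bounding-box height
    return (FACE_HEIGHT_CM * FOCAL_PX) / bbox_height_px

# With these numbers, a 120px-tall face is roughly a metre away
print(round(estimate_distance_cm(120)))  # prints 110
```

It's crude - it assumes every face is the same size - but it's plenty good enough to tell "standing at the device" from "walking past in the background".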
What a handsome guy! And not looking weird at all.
So for the first time in a long while, we set about putting a fresh install on our model B+ Raspberry Pi. Getting the operating system working was easy enough - there are plenty of tutorials on how to do that - copy NOOBS onto an 8GB SD card and let it do its thing.
Getting the PiCamera working was equally straightforward: Google PiCamera and follow the instructions (even the sudo apt-get install stuff is just a matter of copy-and-paste, letting the Linux/Debian operating system do its thing).
Installing SimpleCV wasn't quite so easy.
The instructions at https://github.com/OpenLabTools/OpenLabTools/wiki/Using-the-Raspberry-Pi-camera-module-with-SimpleCV looked simple enough. But simply didn't work!
Trying the sudo pip install command that a lot of websites link to caused a whole raft of error messages, with no real clue about what was going on, nor how to fix it (I'm sure the answer is in there somewhere but, for a Linux-noob using copy-n-paste to install stuff, it all looked like gobbledegook!).
Luckily, the answer was found at http://raspberrypi.stackexchange.com/questions/6806/simplecv-installation-on-raspberry-pi which explained that trying to "pip install" from a URL meant the package was downloaded and unzipped into memory - and that was just too much for our little Pi to cope with.
So we used sudo wget to download the zip file to disk, then ran sudo pip install with the file location rather than a URL. This time the install worked just fine, and it wasn't long before we were bashing out Python scripts to capture faces and highlight their locations:
Although our code is, by now, heavily modified, here's where we started from, capturing an image from the Raspberry Pi camera and processing it, to find a face:
http://www.open-electronics.org/raspberry-pi-and-the-camera-pi-module-face-recognition-tutorial/
#!/usr/bin/python2.7
# Haar features test program
import picamera
from SimpleCV import Image
import time

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_preview()
    time.sleep(10)
    camera.capture('foto.jpg')
    foto = Image("foto.jpg")
    print(foto.listHaarFeatures())
    trovati = foto.findHaarFeatures('face.xml')
    if trovati:
        for trovato in trovati:
            print "Found at coordinates: " + str(trovato.coordinates())
            trovato.draw()
    else:
        print "Not found"
    camera.stop_preview()
    foto.save('foto1.jpg')
    foto.show()
    time.sleep(10)
The last stage for our project is to have the camera continually running (it takes about 10 seconds to initialise, so we'll set it running, then leave it) and use the GPIO pins to make a request. We'll use an Arduino/PIC or other microcontroller to send a pin high - effectively asking the Raspberry Pi "can you see a face?". During the image processing, the Pi holds a different GPIO pin high to act as a "busy" line. When the controller detects a high-to-low transition on the busy line (as the Pi returns it low to say "finished processing"), the controller can then query the "face found" GPIO on the Pi. The Pi, in turn, will raise this pin high if a face has been found in the image, or pull it low if none has been found.
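That handshake can be sketched in plain Python. The pins here are simulated with a dict (on a real Pi these would be RPi.GPIO calls), and detect_face() is a placeholder for the capture-and-findHaarFeatures step:

```python
# Sketch of the request/busy/face-found handshake. Pin names and
# detect_face() are placeholders, not our real pin assignments.
pins = {"REQUEST": 0, "BUSY": 0, "FACE_FOUND": 0}

def detect_face():
    # Placeholder for the real SimpleCV face-detection step
    return True

def handle_request():
    if pins["REQUEST"]:                  # controller asks "can you see a face?"
        pins["BUSY"] = 1                 # Pi signals it is processing
        pins["FACE_FOUND"] = 1 if detect_face() else 0
        pins["BUSY"] = 0                 # high-to-low edge: result is ready

# Controller side: raise the request, wait for BUSY to fall, read the result
pins["REQUEST"] = 1
handle_request()
print(pins["FACE_FOUND"])  # prints 1 when a face was seen
```

The key point is that the controller only trusts FACE_FOUND after seeing the busy line go high and then low again - otherwise it might read a stale result from the previous request.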
The nice thing with the SimpleCV processing is that you can use something like:
faces = foto.findHaarFeatures('face.xml')
if faces:
    for face in faces:
        c = face.coordinates()
        x = c[0]
        y = c[1]
        dist = face.distanceFrom()
        h = face.height()
        w = face.width()
        print "Face found at coordinate : " + str(x) + "," + str(y)
        print "Distance from centre: " + str(dist)
        print "Size: " + str(h) + "h " + str(w) + "w"
and then compare the location and size of the face bounding box(es) to decide whether or not to include that face in your list of results.
So we can use only the largest (closest) object, or compare its x/y co-ordinates to determine how "off-centre" it is. Or perhaps ignore only faces that have a specific height/width (as a crude distance calculator). The possibilities are endless!
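Pulling those checks together, the trigger decision might look something like this - the thresholds are illustrative guesses, not tuned values:

```python
# Decide whether a detected face should trigger the device.
# All of these thresholds are illustrative guesses, not tuned values.
IMG_W, IMG_H = 640, 480
MIN_H, MAX_H = 80, 300   # face too small = too far away; too big = too close
MAX_OFFSET = 100         # max horizontal distance from image centre, in pixels

def should_trigger(x, y, w, h):
    # (x, y) is the face centre, as SimpleCV's coordinates() reports it
    if not (MIN_H <= h <= MAX_H):
        return False     # wrong distance
    if abs(x - IMG_W / 2.0) > MAX_OFFSET:
        return False     # too far off-centre
    return True

print(should_trigger(320, 240, 100, 150))  # centred, right size: prints True
print(should_trigger(100, 240, 100, 150))  # off to one side: prints False
```

Because Haar face detection only fires on a roughly front-on face in the first place, passing these checks means someone is stood at the right distance *and* facing the device - exactly the trigger we wanted.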