SONBI ROBOT HUMAN DETECTION USING KINECT AND RASPBERRY PI

A. OBJECTIVE

To build the software system on the Raspberry Pi inside Sonbi, to
integrate the Microsoft Kinect with the Raspberry Pi, and to make the
system interactive: when a person stands in front of the Kinect, the
Sonbi robot waves its arms at them.


B. HARDWARE SYSTEM


Sonbi has the following basic hardware items mounted inside of his
chest chassis:
• Raspberry Pi w/8GB Flash
• Pololu Maestro 24
• Microsoft Kinect
• ATX 500 Watt PS
• Misc parts (proto boards, wire, mechanical)


Raspberry Pi:

• 700 MHz ARM1176JZF-S core processor
• 512 MB SDRAM
• Powered via microUSB (5V)
• Ethernet, HDMI, and 2 USB ports for peripherals
• Raspbian OS
• Widely used, lots of documentation!

Pololu Maestro 24:


• 24 Channels
• Pulse rate up to 333 Hz
• Script size up to 8KB
• Up to 1.5 amps per channel
• 2 Power Options USB/power header
• Scripting or native API support

Raspberry Pi and Pololu interface:


• Simple wiring
– Power, Gnd, Tx-Rx, and Rx-Tx
• TTL serial port
– By default the Pi uses the serial port for console input/output
– Edit /etc/inittab and /boot/cmdline.txt to change this default
and free the serial port for use
• Great tutorial at:
http://shahmirj.com/blog/raspberry-pi-and-pololu-servocontroller-using-c
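Once the serial port is freed, commands can be sent to the Maestro using Pololu's compact serial protocol. The sketch below builds a "set target" packet in Python; the device path /dev/ttyAMA0, the channel number, and the 1500 µs center pulse are assumptions for illustration, since the real values depend on Sonbi's wiring.

```python
def maestro_set_target(channel, target_us):
    """Build a Pololu Maestro 'set target' packet (compact protocol).

    The Maestro expects the target in quarter-microsecond units, split
    into two 7-bit data bytes after the 0x84 command byte and the
    channel number.
    """
    target = int(target_us * 4)  # microseconds -> quarter-microseconds
    return bytes([0x84, channel, target & 0x7F, (target >> 7) & 0x7F])

# Center a servo on channel 0 with a 1500 us pulse. On the Pi, the
# freed serial port is typically /dev/ttyAMA0, so the real write would
# look like:
#   with open("/dev/ttyAMA0", "wb") as port:
#       port.write(maestro_set_target(0, 1500))
packet = maestro_set_target(0, 1500)
```

The same packet layout works over the Maestro's USB virtual serial port, so the code can be tested on a desktop before moving it onto the Pi.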

Microsoft Kinect:
• An RGB camera that stores three channel data in a 1280×960
resolution. This makes capturing a color image possible.
• An infrared (IR) emitter and an IR depth sensor. The emitter
emits infrared light beams and the depth sensor reads the IR
beams reflected back to the sensor. The reflected beams are
converted into depth information measuring the distance
between an object and the sensor. This makes capturing a
depth image possible.
• A multi-array microphone, which contains four microphones
for capturing sound. Because there are four microphones, it is
possible to record audio as well as find the location of the
sound source and the direction of the audio wave.
• A 3-axis accelerometer configured for a 2G range, where G is
the acceleration due to gravity. It is possible to use the
accelerometer to determine the current orientation of the
Kinect.

• Vertical tilt angle: 27 deg
• Frame rate: 30 fps
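The depth sensor reports raw 11-bit values rather than metric distances. The sketch below converts a raw reading into an approximate distance in meters using a commonly cited empirical fit from published Kinect calibration experiments; the constants and cutoff are assumptions, not part of the original project.

```python
import math

def raw_depth_to_meters(raw):
    """Approximate metric distance from an 11-bit Kinect raw depth value.

    Uses a widely quoted empirical tangent fit. Raw values near the
    upper end of the range mean the sensor got no valid depth reading.
    """
    if raw >= 1090:  # sensor reports ~2047 when no depth was measured
        return float("nan")
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)
```

For example, a raw value of 500 maps to roughly 0.6 m, and larger raw values map to larger distances, which matches the sensor's monotonic encoding.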

C. INTEGRATING THE KINECT WITH RASPBERRY PI


The following are the steps taken to integrate the Kinect with
Raspberry Pi.


Connecting the Microsoft Kinect and installing its sensor drivers on the Raspberry Pi:


This is one of the more tedious parts of the project: the Kinect is designed for Windows, so making it work on a Unix-based operating system means manually installing all of the associated libraries and drivers, which is difficult and takes many man-hours of resolving issues. The steps taken, along with the libraries and packages installed, are given in the section Building Software System.

Using Kinect’s full capability:


To use all the features of the Kinect, such as the depth sensor, IR sensor, microphones, and the motor that tilts the camera, we need libraries that support them. The RPi has OpenCV and OpenGL/GLES mounted by default, but these do not yet support the depth sensor or motors (support is expected soon), so either the OpenNI or the libfreenect package must be installed. Either one is enough, but I decided to install both. To test and understand them, you can run the sample programs available in the OpenNI and libfreenect folders. I have already compiled and built the binaries; to run a sample, go to the "bin" folder and launch it with:
./<sample program>


D. BUILDING SOFTWARE SYSTEM

Libfreenect:
Libfreenect is a userspace driver for the Microsoft Kinect. It runs on
Linux and supports:
• RGB and Depth Images
• Motors
• Accelerometer
• LED
Audio support is a work in progress.
To build libfreenect, you’ll need
• libusb >= 1.0.13
• CMake >= 2.6
• python == 2.* (only if BUILD_AUDIO or BUILD_PYTHON)
For the examples, you’ll need
• OpenGL (included with OSX)
• glut (included with OSX)
• pthreads-win32 (Windows)
git clone https://github.com/OpenKinect/libfreenect
cd libfreenect
mkdir build
cd build
cmake -L ..
make
# if you don't have make or don't want color output:
# cmake --build .

sudo apt-get install git-core cmake pkg-config build-essential libusb-1.0-0-dev
sudo adduser $USER video
sudo adduser $USER plugdev # necessary?
# only if you are building the examples:
sudo apt-get install libglut3-dev libxmu-dev libxi-dev

Wrappers:


Interfaces to various languages are provided in wrappers/. Wrappers
are not guaranteed to be API stable or up to date.
• C (using a synchronous API)
• C++
• C#
• python
• ruby
• actionscript
• Java (JNA)


OpenNI:


Requirements:
1) GCC 4.x
From: http://gcc.gnu.org/releases.html
Or via apt: sudo apt-get install g++
2) Python 2.6+/3.x
From: http://www.python.org/download/
Or via apt: sudo apt-get install python
3) LibUSB 1.0.x
From: http://sourceforge.net/projects/libusb/files/libusb-1.0/
Or via apt: sudo apt-get install libusb-1.0-0-dev
4) FreeGLUT3
From: http://freeglut.sourceforge.net/index.php#download
Or via apt: sudo apt-get install freeglut3-dev
5) JDK 6.0
From:
http://www.oracle.com/technetwork/java/javase/downloads/jdk6u32-downloads-1594644.html
Or via apt: sudo add-apt-repository “deb

Optional Requirements (To build the documentation):
1) Doxygen
From:
http://www.stack.nl/~dimitri/doxygen/download.html#latestsrc
Or via apt: sudo apt-get install doxygen
2) GraphViz
From: http://www.graphviz.org/Download_linux_ubuntu.php
Or via apt: sudo apt-get install graphviz


Building OpenNI:


1) Go into the directory: “Platform/Linux/CreateRedist”.
Run the script: “./RedistMaker”.
This will compile everything and create a redist package in the
“Platform/Linux/Redist” directory. It will also create a distribution in
the “Platform/Linux/CreateRedist/Final” directory.
2) Go into the directory: “Platform/Linux/Redist”.
Run the script: “sudo ./install.sh” (needs to run as root)
The install script copies key files to the following location:
Libs into: /usr/lib
Bins into: /usr/bin
Includes into: /usr/include/ni
Config files into: /var/lib/ni
If you wish to build the Mono wrappers, also run “make
mono_wrapper” and “make mono_samples”
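As a quick sanity check after running install.sh, the hypothetical helper below verifies that the install locations listed above actually exist on the filesystem; the function name and structure are my own, not part of OpenNI.

```python
import os

# Install locations copied by OpenNI's install.sh, per the list above.
EXPECTED = {
    "libraries": "/usr/lib",
    "binaries": "/usr/bin",
    "headers": "/usr/include/ni",
    "config": "/var/lib/ni",
}

def missing_paths(paths):
    """Return the names of any expected directories that do not exist."""
    return [name for name, path in paths.items() if not os.path.isdir(path)]

# After a successful install, missing_paths(EXPECTED) should be empty.
```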


E. PERSON DETECTION AND SONBI’S ACTION:


The Raspberry Pi runs a program, bootscript_sonbi.sh, which runs the
command "python facedetect.py --cascade=face.xml 0".
You will need to download the trained face cascade file:
http://stevenhickson-code.googlecode.com/svn/trunk/AUI/Imaging/face.xml
facedetect.py runs the face detection algorithm and triggers the
"Sonbi" executable. The Sonbi binary is responsible for bringing the
servo motors into action. The flowchart of the process is below.
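One detail such a detection loop has to handle is retriggering: the camera sees the same person in every frame, so without a cooldown the servo routine would be launched over and over. The class below is a hypothetical sketch of that trigger logic; the project's actual facedetect.py may handle it differently.

```python
class WaveTrigger:
    """Decide when to launch the wave action.

    Fires when at least one face is seen, then stays quiet for
    `cooldown_s` seconds so the servos are not retriggered on every
    frame. (Hypothetical debounce logic, not from the original code.)
    """

    def __init__(self, cooldown_s=5.0):
        self.cooldown_s = cooldown_s
        self._last_fire = None

    def should_fire(self, num_faces, now):
        if num_faces < 1:
            return False
        if self._last_fire is not None and now - self._last_fire < self.cooldown_s:
            return False  # still cooling down from the last wave
        self._last_fire = now
        return True

# In the real loop, a True result would launch the servo binary, e.g.:
#   if trigger.should_fire(len(faces), time.monotonic()):
#       subprocess.call("./Sonbi")
```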
