EMAR Mini – Emergency Assistance Robot

EMAR Mini is a miniature version of EMAR, an open-source Emergency Assistance Robot designed to assist doctors during the COVID-19 pandemic.

The following guide will take you through setting up and installing EMAR Mini Emergency Assistance Robot.

The Raspberry Pi 4 houses the EMAR Mini software and powers the Intel hardware.

DISCLAIMER

You should always be very careful when working with electronics! We will not accept responsibility for any damages done to hardware or yourself through full or partial use of this tutorial. Use this tutorial at your own risk, and take measures to ensure your own safety.

V1 Required Hardware

  • 1 x Raspberry Pi 4
  • 1 x Intel® RealSense™ D415
  • 1 x Intel® Neural Compute Stick 2
  • 1 x Breadboard
  • 4 x Tower Pro SG90 Servos
  • Jumper wires

Prerequisites

HIAS Server

This system requires a fully functioning HIAS server. Follow the HIAS server installation guide to set up your HIAS server before continuing with this tutorial.

STLs For 3D Printing

For this tutorial you will need to have already printed your EMAR Mini. Follow the STLs For 3D Printing guide to complete the 3D printing part of this project.

Raspberry Pi OS Lite

In this tutorial we will use Raspberry Pi OS Lite (Buster). First, download the image from the Raspberry Pi OS download page, extract the image file, and write it to an SD card. In our project we used a 64GB SD card.

Once you have done this, insert the card into your Raspberry Pi 4 and log in. Use the following commands to update your device and then open the Raspberry Pi configuration application. You need to expand your filesystem, set up your keyboard preferences and connect your RPI4 to your network.

sudo apt-get update && sudo apt-get upgrade
sudo raspi-config

Installation

Now you need to install the EMAR Mini hardware, software and dependencies.

Device Security

First you will harden your device security.

Remote User

You will create a new user for accessing your server remotely. Use the following commands to set up a new user for your machine. Follow the instructions provided and make sure you use a secure password.

sudo adduser YourUsername

Now grant sudo privileges to the user:

usermod -aG sudo YourUsername
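As a quick sanity check (a sketch; `YourUsername` is a placeholder for the account you just created), you can list the new user's groups and confirm sudo is among them:

```shell
# List the groups for the new user and confirm "sudo" appears.
id -Gn YourUsername | grep -q sudo && echo "sudo access granted"
```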

Now open a new terminal and log in to your server using the new credentials you set up.

ssh YourNewUser@YourServerIP

SSH Access

Now let's beef up server security. Use the following command to set up your public and private keys. Make sure you carry out this step on your development machine, not on your server.

Tips

  • Hit enter to confirm the default file.
  • Hit enter twice to skip the password (optional; you can use a password if you like).

ssh-keygen

You should end up with a screen like this:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/genisys/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/genisys/.ssh/id_rsa.
Your public key has been saved in /home/genisys/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:5BYJMomxATmanduT3/d1CPKaFm+pGEIqpJJ5Z3zXCPM genisys@genisyslprt
The key's randomart image is:
+---[RSA 2048]----+
|.oooo.. |
|o .o.o . . |
|.+.. + |
|o o o . |
| .o .+ S . . |
| =..+o = o.o . . |
|= o =oo.E .o..o .|
|.. + ..o.ooo+. . |
| .o++. |
+----[SHA256]-----+

Now you are going to copy your key to the server:

ssh-copy-id YourNewUser@YourServerIP

Once you enter your password for the new user account, your key will be saved on the server. Now try logging in to the server again in a new terminal; you should log straight in without having to enter a password.

ssh YourNewUser@YourServerIP

Finally you will turn off password authentication for login. Use the following command to edit the ssh configuration.

sudo nano /etc/ssh/sshd_config

Change the following:

#PasswordAuthentication yes

To:

PasswordAuthentication no

Then restart ssh:

sudo systemctl restart ssh
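If you prefer a non-interactive route, the manual edit above can be sketched as a sed one-liner (review the file afterwards, since any other PasswordAuthentication line would also be rewritten):

```shell
# Force PasswordAuthentication to "no", whether or not the line is commented out.
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```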

If you are using ssh to carry out the above steps, keep your current terminal connected. Open a new terminal and attempt to log in to your server. If you can log in, the above steps were successful.

The remainder of this tutorial assumes you are logged into your device. From your development machine, connect to your device using ssh or open your local terminal if working directly on the machine.

ssh YourUser@YourServerIP

UFW Firewall

Now you will set up your firewall. Enable UFW to initialize it, then disable it again while you configure your rules:

sudo ufw enable
sudo ufw disable

Now open the required ports. These ports will be open on your server, but are not open to the outside world:

sudo ufw allow 22
sudo ufw allow OpenSSH

Finally start and check the status:

sudo ufw enable
sudo ufw status

You should see the following:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
22                         ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)

Fail2Ban

Fail2Ban adds an additional layer of security by scanning server logs and looking for unusual activity. Fail2Ban is configured to work with IPTables by default, so we will do some reconfiguration to make it work with our firewall, UFW.

sudo apt install fail2ban
sudo mv /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo rm /etc/fail2ban/action.d/ufw.conf
sudo touch /etc/fail2ban/action.d/ufw.conf
echo "[Definition]" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " enabled = true" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " actionstart =" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " actionstop =" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " actioncheck =" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " actionban = ufw insert 1 deny from <ip> to any" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
echo " actionunban = ufw delete deny from <ip> to any" | sudo tee -a /etc/fail2ban/action.d/ufw.conf
sudo nano /etc/fail2ban/action.d/ufw.conf
sudo sed -i -- "s#banaction = iptables-multiport#banaction = ufw#g" /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local
sudo fail2ban-client restart
sudo fail2ban-client status

You should see the following:

Shutdown successful
Server ready
Status
|- Number of jail: 1
`- Jail list: sshd
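For reference, a minimal `[sshd]` jail override in /etc/fail2ban/jail.local might look like the sketch below. The retry and ban values are assumptions; tune them for your environment:

```ini
[sshd]
enabled  = true
port     = 22
maxretry = 5
bantime  = 600
```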

Python Dependencies

sudo apt install python3-pip
sudo pip3 install geolocation
sudo pip3 install paho-mqtt
sudo pip3 install psutil
sudo pip3 install numpy
sudo pip3 install requests
sudo pip3 install zmq
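You can quickly confirm the dependencies import cleanly. This is a sketch covering the packages whose import names I am confident of (geolocation is omitted as its import name differs from the pip package name); it prints OK or MISSING for each module:

```shell
# Try importing each dependency; report OK or MISSING per module.
for mod in paho.mqtt.client psutil numpy requests zmq; do
    python3 -c "import $mod" 2>/dev/null && echo "OK $mod" || echo "MISSING $mod"
done
```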

Create EMAR Device In HIAS

Head to your HIAS Server and navigate to Robotics->EMAR->Create. In the Device settings, select your desired iotJumpWay Location and Zone, then enter a name for your EMAR device and the IP and MAC address of your Raspberry Pi. The Real-Time Object Detection & Depth settings can be left at their defaults. If you modify the ports or the directory name, you need to change these when updating the HIAS Server Proxy settings below.

HIAS Server Proxy

You need to update the HIAS Server Proxy settings so that the proxy_pass can correctly redirect traffic to your Raspberry Pi.

To do this you need to edit the NGINX configuration. Use the following command on your HIAS server to edit the file with Nano:

sudo nano /etc/nginx/sites-available/default

Towards the top of the file you will find the settings that control the proxy for EMAR/EMAR Mini. You need to change ###.###.#.## to the IP address of your Raspberry Pi.

If you changed the Stream Port, Stream Directory or Socket Port settings in the HIAS EMAR UI, you need to update these here also.

location ~* ^/Robotics/EMAR/Live/(.*)$ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/tass/htpasswd;
    proxy_pass http://###.###.#.##:8282/$1;
}

Once you have saved and exited the configuration, you need to reload the NGINX server:

sudo systemctl reload nginx

Update Device Settings

Now you need to update the device settings using the credentials provided in the HIAS UI. If you changed the Stream Port and Socket Port settings you should also update them in this configuration file.

sudo nano confs.json

{
    "iotJumpWay": {
        "host": "",
        "port": 8883,
        "ip": "localhost",
        "lid": 0,
        "zid": 0,
        "did": 0,
        "dn": "",
        "un": "",
        "pw": ""
    },
    "EMAR": {
        "ip": ""
    },
    "Realsense": {
        "server": {
            "port": 8282
        },
        "socket": {
            "port": 8383
        }
    },
    "MobileNetSSD": {
        "bin": "Model/MobileNetSSD_deploy.bin",
        "classes": [
            "background",
            "aeroplane",
            "bicycle",
            "bird",
            "boat",
            "bottle",
            "bus",
            "car",
            "cat",
            "chair",
            "cow",
            "diningtable",
            "dog",
            "horse",
            "motorbike",
            "person",
            "pottedplant",
            "sheep",
            "sofa",
            "train",
            "tvmonitor"
        ],
        "inScaleFactor": 0.007843,
        "meanVal": 127.53,
        "size": 300,
        "threshold": 0.6,
        "xml": "Model/MobileNetSSD_deploy.xml"
    }
}
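After editing, you can confirm the file is still well-formed JSON; json.tool exits non-zero and reports the position of any syntax error:

```shell
# Parse confs.json; print a confirmation only if it is valid JSON.
python3 -m json.tool confs.json > /dev/null && echo "confs.json is valid JSON"
```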

Intel® RealSense™ D415

Now we will install the software for the Intel® RealSense™ D415.

MAKE SURE YOUR REALSENSE IS NOT PLUGGED IN

After unsuccessfully following a number of Intel's tutorials to install RealSense on a Raspberry Pi 3 and Raspberry Pi 4 across multiple operating systems, I was finally pointed in the direction of the LibUVC-backend installation. To make this work for our project, you need to modify the downloaded libuvc_installation.sh file and carry out an extra step.

As per the guide, download the file first with:

wget https://github.com/IntelRealSense/librealsense/raw/master/scripts/libuvc_installation.sh

Then modify:

cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release

To:

cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release -DBUILD_PYTHON_BINDINGS=bool:true

This will install the Python bindings we need to run PyRealsense. Now continue with:

chmod +x ./libuvc_installation.sh
./libuvc_installation.sh

And finally open your bashrc file

sudo nano ~/.bashrc

And add the following to the end of the file before saving and closing.

export PYTHONPATH=$PYTHONPATH:/usr/local/lib
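To confirm the export is picked up (entries in PYTHONPATH are prepended to Python's sys.path), reload your shell configuration and check:

```shell
# Reload the updated bashrc, then check that /usr/local/lib is on sys.path.
source ~/.bashrc
python3 -c "import sys; print('/usr/local/lib' in sys.path)"
```

This should print True once the export is in place.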

You can now plug in your Realsense to the Raspberry Pi and test by using the following to check if your device is recognized and opens successfully:

rs-enumerate-devices

And finally the following to test that PyRealsense is working:

python3
import pyrealsense2
exit()

If you don't get any errors from import pyrealsense2, everything is set up correctly for your RealSense.

Intel® Distribution of OpenVINO™ Toolkit

Again the official Intel tutorials failed in one way or another, but I finally came across a very good tutorial on PyImageSearch. The following guide uses the parts relevant to our project and allows you to quickly set up OpenVINO on your Raspberry Pi 4.

sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libcanberra-gtk*
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
cd
wget https://download.01.org/opencv/2020/openvinotoolkit/2020.1/l_openvino_toolkit_runtime_raspbian_p_2020.1.023.tgz
tar -xf l_openvino_toolkit_runtime_raspbian_p_2020.1.023.tgz
mv l_openvino_toolkit_runtime_raspbian_p_2020.1.023 openvino
nano ~/.bashrc

Now add the following line to the bottom of your bashrc file before saving and closing.

source ~/openvino/bin/setupvars.sh

Intel® Neural Compute Stick 2

Again we use instructions provided in the PyImageSearch tutorial to install NCS2 on the Raspberry Pi.

sudo usermod -a -G users "$(whoami)"
cd
sh openvino/install_dependencies/install_NCS_udev_rules.sh
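With the stick plugged in, you can check it is visible on the USB bus. The Myriad X typically enumerates with Intel vendor ID 03e7 (an assumption based on common lsusb listings; verify against your own output):

```shell
# Look for the Movidius Myriad X vendor ID on the USB bus.
lsusb | grep -i "03e7" && echo "NCS2 detected"
```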

Connect the neck

First of all, push your final servo through the top of Body-Middle.stl and screw it in place. Next screw the servo arm to the bottom of the neck and attach it to the servo. You may need some glue to keep this part secure.

Source: EMAR Mini – Emergency Assistance Robot


About The Author

Muhammad Bilal

I am a highly skilled and motivated individual with a Master's degree in Computer Science. I have extensive experience in technical writing and a deep understanding of SEO practices.
