Autonomous Mask Detection Robot

A prototype that minimizes the number of staff who must interact with people to remind them to wear masks, live streaming while it operates.

During these unprecedented times – the COVID-19 pandemic – one of the most crucial precautions to mitigate the detrimental effects of the coronavirus is to curb its spread worldwide as much as possible, especially since we know that it spreads even faster than seasonal influenza viruses: “With the worldwide spread of the novel Coronavirus Disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, we are experiencing a declared pandemic. One of the largest preoccupations about this new virus is its notable ability to spread given the absence of any effective treatment, vaccine, and immunity in human populations. Epidemiologists quantify the ability of infectious agents to spread by estimating the basic reproduction number (R0) statistic (Delamater et al., 2019), which measures the average number of people each contagious person infects. The new coronavirus is transmitting at an average R0 between 2.7 and 3.2 (Billah, Miah & Khan, 2020; Liu et al., 2020), which is greater than seasonal influenza viruses that spread every year around the planet (median R0 of 1.28, Biggerstaff et al., 2014).(1)”

The spread of infectious diseases, including COVID-19, depends on human interactions. However, in an environment where behavioral patterns and physical contacts are evolving due to mass transportation and globalization, measuring human interactions in order to apply the necessary regulations and stipulations remains a major challenge. And since the long incubation period makes it difficult to exclude the possibility of asymptomatic cases, regulations such as “Face mask use indoors remains very important in settings with poor ventilation and where there are lots of people nearby” become even more critical. But even strict regulations and precautions alone are not enough to avert the spread of the coronavirus; we also need to reduce human interactions as much as possible.

According to the following research, we know that the coronavirus keeps spreading and affecting businesses even under strict regulations, as long as human interactions continue: “The aviation sector has been experiencing an unprecedented crisis since March 2020. Indeed, almost all airports have been paralyzed following the outbreak of the Covid-19 pandemic. Eurocontrol had announced a significant 88% reduction in the number of flights by May 1, 2020 (Eurocontrol, 2020a, 2020b). The flow of international traffic contributed significantly to the spread of the virus worldwide (Kraemer et al., 2020). In Europe, for example, it seems that the areas least affected by the virus are those where no international airport is located. One of the main characteristics of COVID-19 is its long incubation period, which currently averages 5.2 days (Guan et al., 2020). Contagiousness during the incubation period is one of the reasons why COVID-19 spreads so widely compared to other viruses, making it extremely difficult to exclude the possibility of asymptomatic passengers passing through the airport (Postorino et al., 2020; Pullano et al., 2020).(2)”

After researching these topics, I wanted to contribute to a solution that applies regulations while reducing human contact and interactions as much as possible. Thus, I decided to create this prototype, which minimizes the number of staff who must interact with people to remind them to wear masks, while enforcing regulations by detecting people without a mask and fining them with an automatically printed penalty receipt.

To decrease interactions as much as possible, I designed this prototype as an all-in-one service with hardware and software. The device follows these protocols while operating:

  • Live streams while operating
  • Receives commands (direction and speed) as Python arguments from the PHP web application (Mask Detection Robot Dashboard)
  • Detects people without a mask automatically using object classification
  • Captures pictures of people without a mask upon detection
  • Sends the captured pictures to the web application as evidence
  • Prints a penalty receipt after detection, including a QR code linking to the payment page, which shows the payment options and the corroborating evidence of the issued fine – the captured picture
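The command step above can be sketched in Python. This is a minimal parser for the direction and speed values the dashboard would pass on the command line (via `sys.argv[1:]`); the argument names, accepted directions, and clamping range are my assumptions, not the author's exact interface:

```python
# Hypothetical command set; the actual values sent by the dashboard may differ.
VALID_DIRECTIONS = {"forward", "backward", "left", "right", "stop"}

def parse_command(argv):
    """Parse the [direction, speed] arguments passed by the PHP dashboard.

    Returns a (direction, speed) tuple, where speed is treated as a
    duty-cycle percentage and clamped to the 0-100 range.
    """
    if len(argv) < 2:
        raise ValueError("expected: <direction> <speed>")
    direction = argv[0].lower()
    if direction not in VALID_DIRECTIONS:
        raise ValueError(f"unknown direction: {direction}")
    speed = max(0, min(100, int(argv[1])))  # clamp to a safe PWM range
    return direction, speed

# The PHP side would invoke something like: python3 robot.py forward 75
```

Validating and clamping on the Python side keeps a malformed dashboard request from driving the motors with an out-of-range PWM value.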

As software, I developed a web application in PHP, named Mask Detection Robot Dashboard, which shows the live stream, executes the Python script with arguments to control the robot chassis, saves the captured images, and generates a payment page for each receipt.

I also developed a Python application, named Autonomous Mask Detection Robot, which detects people without a mask, captures their pictures, sends the captured pictures to the PHP web application as evidence, and prints penalty receipts with unique receipt numbers.
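As a sketch of how the unique receipt numbers and the payment-page link encoded into the QR code might be produced — the dashboard URL, script name, and record fields here are illustrative assumptions, since the real payment page lives in the author's PHP application:

```python
import uuid
from datetime import datetime, timezone

# Hypothetical dashboard address; substitute the actual PHP app URL.
DASHBOARD_URL = "http://raspberrypi.local/dashboard"

def generate_receipt(fine_amount):
    """Build a receipt record with a unique number and a payment-page URL."""
    receipt_no = uuid.uuid4().hex[:10].upper()  # short, effectively unique ID
    issued_at = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return {
        "receipt_no": receipt_no,
        "issued_at": issued_at,
        "fine": fine_amount,
        # This URL is what would be encoded into the printed QR code.
        "payment_url": f"{DASHBOARD_URL}/payment.php?receipt={receipt_no}",
    }
```

A UUID-derived number avoids keeping a counter on disk, at the cost of receipt numbers that are not sequential.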

As hardware, I used a DFRobot HUSKYLENS AI Camera to detect people without a mask, utilizing its integrated object classification mode. To make the device move on command, I used a Black Gladiator – Tracked Robot Chassis and an L298N Motor Driver Module. Then, I added a Tiny (Embedded) Thermal Printer to the device to print the receipt after a person without a mask is detected.
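The L298N drives each motor through a pair of input pins (IN1/IN2 for one motor, IN3/IN4 for the other), so the direction logic reduces to combinations of pin levels. A minimal sketch as a pure lookup, with pin-state assignments that are my guesses rather than the project's actual wiring:

```python
# Each tuple is (IN1, IN2, IN3, IN4): True = HIGH, False = LOW.
# Driving a motor's two inputs to opposite levels spins it; both LOW
# lets it coast. A tracked chassis turns by running the two tracks in
# opposite directions (skid steering).
PIN_STATES = {
    "forward":  (True, False, True, False),
    "backward": (False, True, False, True),
    "left":     (False, True, True, False),   # left track back, right forward
    "right":    (True, False, False, True),   # left track forward, right back
    "stop":     (False, False, False, False),
}

def motor_pins(direction):
    """Map a direction command to L298N input-pin levels."""
    try:
        return PIN_STATES[direction]
    except KeyError:
        raise ValueError(f"unknown direction: {direction}") from None
```

On the Pi, these booleans would be written to the GPIO pins wired to the L298N inputs, with speed applied via PWM on the driver's enable pins.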

Lastly, I used a USB webcam to live stream and to capture pictures of people detected without a mask. To power the robot chassis and the thermal printer, I used a 12V external battery with an MB102 Power Supply Module.

Huge thanks to DFRobot for sponsoring this project.

Sponsored products by DFRobot:

⭐ Gravity: HUSKYLENS with Silicone Case

⭐ Black Gladiator – Tracked Robot Chassis

⭐ Embedded Thermal Printer – USB – TTL Serial

Step 1: Detecting people without a mask with the object classification mode in HuskyLens

I chose the DFRobot HuskyLens AI camera for my project since it has an embedded screen showing the results of face mask detection. In that regard, I could display the results without complicating the device with extra screen connections. Also, HuskyLens includes built-in algorithms supporting six functions – face recognition, object tracking, object recognition, line tracking, color recognition, and tag recognition – controlled with an easy-to-use interface.

However, we need to activate the object classification mode to detect face masks, since the other embedded functions are not capable of face mask detection. The object classification function of HuskyLens can learn from multiple photos of different objects through its built-in machine learning algorithms. Once the learning is complete and HuskyLens detects a learned object, it recognizes it and displays its object ID number. The more it learns, the more accurate the recognition becomes.
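After training, HuskyLens reports only the numeric ID of the recognized class, so the Python side has to map IDs back to meanings. A minimal sketch, assuming the first learned class (ID 1) was people wearing masks and the second (ID 2) people without masks – the actual IDs depend on the order in which the classes were taught:

```python
# Assumed training order: ID 1 = learned as "mask", ID 2 = "no mask".
# ID 0 means the object on screen has not been learned yet.
CLASS_LABELS = {0: "unlearned", 1: "mask", 2: "no_mask"}

def classify(object_id):
    """Translate a HuskyLens object-classification ID into a label."""
    return CLASS_LABELS.get(object_id, "unknown")

def should_fine(object_id):
    """Trigger the capture-and-receipt pipeline only for unmasked people."""
    return classify(object_id) == "no_mask"
```

Keeping the ID-to-label mapping in one dictionary means retraining the classes in a different order only requires editing one line.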

You can get more information about HuskyLens features and functions here.

We need to upgrade the firmware to be able to use the object classification function. Follow the instructions below to upgrade the HuskyLens firmware:

⭐ Click General Settings to view the current version number.

⭐ Download the HuskyLens Uploader for Windows here or in the Downloads below. If requested, you may need to install the CP2102N chip driver here.

⭐ Download the latest firmware (V0.5.1Norm) file here or in the Downloads below.

⭐ Run the HuskyLens Uploader. A small black cmd window will pop up first; after a while, the interface window appears. Then click the Select File button to load the firmware file.

⭐ Click the Upload button and wait about 5 minutes for the upload to complete.

⭐ You should then see the upgraded version number in the settings menu.

After upgrading the firmware version, we need to train the object classification algorithms to learn people with and without masks as different classes. To train and test HuskyLens algorithms, I used the pictures in this dataset provided by Prajna Bhandary.

You can download the pictures I used in the Downloads below.


About The Author

Muhammad Bilal

I am a highly skilled and motivated individual with a Master's degree in Computer Science. I have extensive experience in technical writing and a deep understanding of SEO practices.
