Use a Raspberry Pi to make your “dumb” car smarter!
Story
INSPIRATION
I think smart cars are great, but buying a brand-new car just for a few extra features is hard to justify. For now I’m stuck with my “dumb” car, but that doesn’t stop me from trying to make it smarter on my own!
The term “Smart Car” can have dozens of different meanings depending on who you ask. So let’s start with my definition of a smart car:
- Touchscreen interface
- Backup camera that lets you know if an object is too close
- Basic information about the car, such as fuel efficiency
- Maybe Bluetooth connectivity?
I’m not sure which, if any, of those items I’ll have any success with, but I guess we’ll find out.
THE BACKUP CAMERA
The first obvious addition to our smart-car wannabe is a backup camera. There are plenty of kits out there that make adding a backup camera pretty simple, but most of them require modifying the car itself, and since I just want to test a proof of concept, I don’t really want to start unscrewing and drilling into my car.
Regardless, I went ahead and ordered a cheap $13 backup camera and a USB powered LCD screen.
One thing to remember about cameras like this is that they need their own power supply. Normally you’d solder them to a reverse light so they power on whenever the car is put in reverse. Since I don’t want to make any changes to my car right now, I’ll simply run it off a few batteries instead and attach it to the license plate with trusty duct tape.
I connected my 5″ LCD to the camera by running an RCA cable up to the dashboard. This particular LCD screen can be powered over USB, so I plugged it into a USB car charger in the 12V cigarette-lighter socket (the kind still common in older vehicles).
Once I turned on the car, the screen lit up right away and displayed the camera’s image. It works as promised! This could be a great option for anyone who simply wants a backup camera in their vehicle with no extra features. But I think I can do better.
Enter the Raspberry Pi. The Pi is an ideal fit for a smart car because it’s a compact computer with plenty of input and output options. For the camera, you can use either a standard USB webcam or the Pi Camera; neither needs an external power source. Just make sure you have enough cable to reach the rear of the vehicle.
Starting the car powered on both the Pi and the screen. One clear downside is how long the Pi takes to boot…something I’ll need to address later. To view the Pi camera feed, I opened the terminal and ran a simple command (which can be configured to run automatically at boot later):
raspivid -t 0
or
raspivid -t 0 --mode 7
After hitting enter, a live feed from the camera popped up! The nice thing about having video on the Pi is that you can analyze it, and maybe even set up an alert system if an object gets too close! So let’s work on that next!
OBJECT DETECTION
METHOD 1
Commercial backup cameras generally come in two flavors. The first uses a static overlay with color gradients to help you visually gauge how close an object is. The second pairs the camera with a sensor that measures how close an object is to the vehicle and notifies you if there isn’t enough distance between them.
Since the first method seems easier, let’s try that one first. Essentially, it’s just an image overlaid on top of a video stream, so let’s see if recreating it is as easy as it sounds. The first thing we’ll need is a transparent image overlay. Here’s the one I used (also found in my GitHub repository):
The image is exactly 640×480, which is the resolution my camera will stream at. That was intentional, but feel free to resize the image if you’re streaming at a different resolution.
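If you do need a different size, a quick Pillow snippet (my own one-off; the 320×240 target here is just an example) can resize the overlay to match:

# Resize the overlay once to match a different stream resolution
from PIL import Image

img = Image.open("bg_overlay.png")
img = img.resize((320, 240))
img.save("bg_overlay_320x240.png")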
Next we’ll create a Python script that uses the PIL (Pillow) imaging library and the picamera library (if you’re not using a Pi Camera, adjust the code for your video input). I just named the file image_overlay.py:
import picamera
from PIL import Image
from time import sleep

# Start the Pi camera preview
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()

    # load the transparent overlay and place it on top of the preview
    img = Image.open('bg_overlay.png')
    img_overlay = camera.add_overlay(img.tobytes(), size=img.size)
    img_overlay.alpha = 128
    img_overlay.layer = 3

    # keep the preview running
    while True:
        sleep(1)
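A side note: if you’re using a USB webcam instead of the Pi Camera, the picamera preview and overlay calls above won’t apply. Here’s a rough equivalent of my own (an assumption, not part of the original script) that uses OpenCV, which we’ll be installing later in this project anyway, to blend the same transparent PNG onto each webcam frame:

import cv2
import numpy as np

# load the overlay with its alpha channel (assumes a transparent PNG)
overlay = cv2.imread("bg_overlay.png", cv2.IMREAD_UNCHANGED)
alpha = overlay[:, :, 3:] / 255.0          # (H, W, 1) transparency mask
overlay_rgb = overlay[:, :, :3]

cap = cv2.VideoCapture(0)                  # first USB webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # match the frame to the overlay size, then alpha-blend the two
    frame = cv2.resize(frame, (overlay.shape[1], overlay.shape[0]))
    blended = (frame * (1 - alpha) + overlay_rgb * alpha).astype(np.uint8)
    cv2.imshow("Backup camera", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()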
Saving it and running it with “python image_overlay.py”, I tested it out on a small scale using a toy car to see how it worked. It worked like a charm, and there was practically no lag!
Putting it in the car and starting the program, the result was just as delightful! One important note: calibrate your camera carefully so that the bottom edge of the video frame lines up closely with your car’s bumper. In the images below, the camera angle was too high, which made the test object look farther away on screen than it actually was.
METHOD 2
The test for Method 1 was highly successful, but it was also extremely simple. It would be better to have a system that measures the distance between an object and the car and alerts you if it gets too close. As I mentioned earlier, most cars with this capability rely on external sensors to detect objects. Since I’m hesitant to add more external hardware to my car, I’ll try using computer vision to detect objects instead.
I’ll use OpenCV as my computer vision library with Python. It lets me analyze the contents of each frame and set parameters for anything it finds. The idea is to draw a boundary near the car’s bumper on the video frame to mark the “alarm zone”, detect any significant objects in the frame, and send an alert if the bottom of an object crosses into that zone.
To use a piezo buzzer as the alert sound, I’ll connect it to the Raspberry Pi with the positive leg on pin 22 (BCM numbering, matching the code below) and the negative leg on a ground pin.
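Before wiring it into the main script, it’s worth a quick standalone beep test (my own snippet, using the same BCM pin 22 as the code below):

# Simple buzzer test: beep three times on BCM pin 22
import time
import RPi.GPIO as GPIO

BUZZER = 22
GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER, GPIO.OUT)

try:
    for _ in range(3):
        GPIO.output(BUZZER, True)
        time.sleep(0.2)
        GPIO.output(BUZZER, False)
        time.sleep(0.2)
finally:
    GPIO.cleanup()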
Before starting on the code, we have to install OpenCV on the Pi. Luckily, the Pi can do this through the Python “pip” command:
pip3 install opencv-python
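To make sure the install worked (just a quick sanity check of my own, not a step from the original guide), you can print the OpenCV version from Python:

# Confirm OpenCV imports correctly (run inside python3)
import cv2
print(cv2.__version__)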
Once OpenCV is installed, we can create a new Python file and start on the code. For the fully documented code, you can visit my github repository. I just named my file car_detector.py
import time
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO

# set up the piezo buzzer on BCM pin 22
buzzer = 22
GPIO.setmode(GPIO.BCM)
GPIO.setup(buzzer, GPIO.OUT)

# set up the Pi camera
camera = PiCamera()
camera.resolution = (320, 240)  # a smaller resolution means faster processing
camera.framerate = 24
rawCapture = PiRGBArray(camera, size=(320, 240))

kernel = np.ones((2, 2), np.uint8)

time.sleep(0.1)

for still in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    GPIO.output(buzzer, False)
    image = still.array

    # create a detection area
    widthAlert = np.size(image, 1)      # get width of image
    heightAlert = np.size(image, 0)     # get height of image
    yAlert = (heightAlert // 2) + 100   # determine y coordinate for the alert area
    cv2.line(image, (0, yAlert), (widthAlert, yAlert), (0, 0, 255), 2)  # draw a line to show the area

    # BGR color range to look for
    lower = np.array([1, 0, 20], dtype="uint8")
    upper = np.array([60, 40, 200], dtype="uint8")

    # use the color range to create a mask for the image and apply it to the image
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask=mask)

    # clean up the mask, then find edges and contours
    dilation = cv2.dilate(mask, kernel, iterations=3)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_GRADIENT, kernel)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel)
    edge = cv2.Canny(closing, 175, 175)
    contours, hierarchy = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    threshold_area = 400
    centres = []

    if len(contours) != 0:
        for contour in contours:
            # find the area of each contour
            area = cv2.contourArea(contour)
            # find the center of each contour
            moments = cv2.moments(contour)
            # weed out the contours that are smaller than our threshold
            if area > threshold_area:
                (x, y, w, h) = cv2.boundingRect(contour)
                centerX = int((x + x + w) / 2)
                centerY = int((y + y + h) / 2)
                cv2.circle(image, (centerX, centerY), 7, (255, 255, 255), -1)
                # if the bottom of the object crosses into the alert zone, warn the driver
                if (y + h) > yAlert:
                    cv2.putText(image, "ALERT!", (centerX - 20, centerY - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
                    GPIO.output(buzzer, True)

    cv2.imshow("Display", image)
    rawCapture.truncate(0)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        GPIO.output(buzzer, False)
        break
Alright, saving it and testing it out on a small scale, it did pretty well. It did pick up a lot of irrelevant objects, though, and I noticed it would sometimes detect shadows as objects.
Loading it up in the car and testing it in a real-world scenario, the results were surprisingly accurate! Conditions were near perfect, however; I don’t know how the results would have varied at night.
Overall, I was happily surprised by the results, but I wouldn’t rely on it in subpar conditions. That doesn’t mean it couldn’t work; it just means the code is still basic and would benefit from further testing and debugging (preferably by a reader!).
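For anyone who wants to take a crack at the shadow problem, one idea (a sketch of my own, untested in the car) is to swap the fixed color mask for OpenCV’s MOG2 background subtractor, which can label shadow pixels separately so they’re easier to throw away:

import cv2

# MOG2 keeps a running model of the background; detectShadows=True marks
# shadow pixels as 127 so they can be filtered out of the foreground mask
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

def foreground_mask(frame):
    mask = subtractor.apply(frame)
    # keep only confident foreground (255); drop shadows (127) and noise
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    return mask

The mask it returns could drop straight into the same dilate/contour pipeline above in place of the cv2.inRange result.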
Method 2 was quite interesting, but Method 1 is more dependable across a wider range of conditions. If you were to do this on your own car, I recommend Method 1.
Next, I’ll attempt to connect to the car’s OBDII port and find out what information I can retrieve!
CONNECTING TO ON BOARD DIAGNOSTICS (OBDII)
In the US, cars have been required to have an On Board Diagnostics port (OBDII) since 1996. Other countries adopted the same standard a bit later.
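As a taste of the kind of data the port exposes, here’s a sketch based on my assumptions (an ELM327-style adapter plus the python-OBD library, installed with pip3 install obd), not necessarily the setup I’ll end up using:

# Query a couple of values over OBDII with the python-OBD library
import obd

connection = obd.OBD()  # auto-detects the adapter's serial port

speed = connection.query(obd.commands.SPEED)  # vehicle speed
rpm = connection.query(obd.commands.RPM)      # engine RPM
print(speed.value, rpm.value)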