Webcam Streaming Video On Raspberry Pi via Browser

Compiling FFMpeg For Webcam Streaming 

First thing we need to do is to get a version of ffmpeg that can stream. If you haven't installed Git on your Raspberry Pi, do that first:

sudo apt-get install git

Once git was installed, I went into /usr/src to download the source for ffmpeg.

cd /usr/src
git clone git://

Git retrieves the source code, and we then need to build ffmpeg from scratch. If you need sound for ffmpeg, you will also need to install the libasound2-dev package, which enables ALSA.

cd ffmpeg
./configure
make && sudo make install
Compiling ffmpeg on the Pi will take a while, so it is probably a good idea to leave it overnight. After it's done, do the following.

1. Add the following lines into /etc/apt/sources.list:

deb-src sid main
deb wheezy main non-free

2. sudo apt-get update

3. sudo apt-get install deb-multimedia-keyring

4. Remove the second line from /etc/apt/sources.list:

deb wheezy main non-free

5. sudo apt-get source ffmpeg-dmo

You should now have a folder called ffmpeg-dmo-0.11 (the version number might change as time goes by).

Change the directory to the folder containing the source, e.g.

cd ffmpeg-dmo-0.11

Set up the source:

./configure

Then compile and install ffmpeg:

make && sudo make install

Configuring ffmpeg

Once ffmpeg is installed, we need to create a configuration file that enables ffmpeg to stream to ffserver. ffserver is what will host the stream.

We need to create a configuration file for ffserver; we will place it in /etc/ and call it ffserver.conf.

Port 80

MaxClients 10
MaxBandwidth 50000

<Feed webcam.ffm>
File /tmp/webcam.ffm
FileMaxSize 10M
</Feed>

<Stream webcam.mjpeg>
Feed webcam.ffm
Format mjpeg
VideoSize 640x480
VideoFrameRate 10
VideoBitRate 2000
VideoQMin 1
VideoQMax 10
</Stream>

The last stanza defines the size and bitrate of the stream. If the parameters don't suit each other, the stream will not be smooth.
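To get a feel for whether the size, frame rate, and bitrate suit each other, here is a quick back-of-the-envelope check (my own illustration, not from the original post; it assumes raw YUYV video at 16 bits per pixel):

```python
# Compare the raw pixel rate against the configured VideoBitRate to
# see how much compression the encoder must achieve. A very high
# ratio at a low VideoBitRate is a hint the stream will stutter.

def compression_ratio(width, height, fps, bitrate_kbps, bits_per_pixel=16):
    """Raw video bit rate divided by the target stream bit rate."""
    raw_bps = width * height * bits_per_pixel * fps
    return raw_bps / (bitrate_kbps * 1000.0)

# The stanza above: 640x480 at 10 fps with VideoBitRate 2000 (kbit/s)
ratio = compression_ratio(640, 480, 10, 2000)  # about 24.6x compression
```

Dropping to 160x120 at the same bitrate needs only about 1.5x compression, which is one reason lower resolutions stream more smoothly.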

Next, the following command needs to be put into a .sh file. This will allow you to start streaming by just running the .sh file. Let's put it in /usr/sbin, as the file needs to be run as root.

ffserver -f /etc/ffserver.conf & ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost/webcam.ffm

Once the .sh file has been created and the above code has been placed into it, you need to make the file executable by running chmod +x on it.
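The same launch sequence can also be sketched in Python (my own sketch, not part of the original post; it assumes the paths and parameters used in the shell command above):

```python
import subprocess

# Start ffserver first, then feed it from the webcam with ffmpeg.
# Both processes run concurrently, mirroring the "ffserver ... & ffmpeg ..."
# shell command.

FFMPEG_CMD = ["ffmpeg", "-v", "verbose", "-r", "5", "-s", "640x480",
              "-f", "video4linux2", "-i", "/dev/video0",
              "http://localhost/webcam.ffm"]

def start_streaming(run=subprocess.Popen):
    """Launch ffserver and ffmpeg; `run` is injectable for testing."""
    server = run(["ffserver", "-f", "/etc/ffserver.conf"])
    encoder = run(FFMPEG_CMD)
    return server, encoder
```

Passing a different `run` callable makes the launch order testable without actually starting the daemons.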

Start Streaming Video Via Browser

Once the shell script and configuration file have been created, you can start streaming by running the script. When you run it, you should get output like this:

*** 1 dup!
fps= 5 q=2.6 size= 51136kB time=00:06:56.40 bitrate=1006.0kbits/s dup=359 drop=0

ffmpeg is now streaming, and you should be able to access the video stream from a web browser pointed at the Pi.
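The fields in that progress line can be pulled apart programmatically; here is a small illustrative parser (my own sketch, not from the original post):

```python
import re

# Parse an ffmpeg progress line such as
# "fps= 5 q=2.6 size= 51136kB time=00:06:56.40 bitrate=1006.0kbits/s dup=359 drop=0"
# into a dict of key -> raw string value.

def parse_progress(line):
    fields = {}
    for key, value in re.findall(r"(\w+)=\s*([\d:.]+\w*)", line):
        fields[key] = value
    return fields

line = "fps= 5 q=2.6 size= 51136kB time=00:06:56.40 bitrate=1006.0kbits/s dup=359 drop=0"
stats = parse_progress(line)
```

The dup/drop counters are the ones to watch: a steadily climbing dup count means the camera is not delivering frames as fast as the requested rate.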
When I tried streaming, I found out that the Logitech webcam only supports 30 fps and 15 fps, so the driver automatically forces it to 15 fps.
Streaming from raw video to mjpeg isn't very CPU-intensive for the Pi, but it can't keep up streaming at a good fps on the Logitech webcam. I tried it at the lowest resolution possible, 176x144, and I still had about a 2 second delay.

I tried some of my other, cheaper webcams, which don't have an mjpeg format, and they performed a bit better than the Logitech one at raw streaming, maybe because the Logitech forced 15 fps. Almost doubling the resolution to 320x240 resulted in barely any video on the Logitech, with ffmpeg stalling altogether and not streaming to ffserver at all, and the same happened with the cheap webcam.

Problems with Webcam Streaming – Webcam Support

Some people were having issues where video streaming would either not start or would stop very quickly. I had a look at the settings that I was using to stream from my Raspberry Pi: 320x240 at 10 fps. I increased the resolution to 640x480 and started having a similar issue, getting errors like the one below:

[video4linux2,v4l2 @ 0x1d4e520] The v4l2 frame is 40588 bytes, but 614400 bytes are expected

After some googling, it seems there are webcams that can't stream mjpeg video directly; the frame size ends up smaller than expected and therefore does not work. Lowering the resolution to 160x120 helps, but it's not what we want.

I tried a second spare webcam, and it worked perfectly at 640x480 without any tweaking. It seems that some webcams just can't do what we want them to do!
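The byte counts in that v4l2 error can be reproduced from the frame geometry. The following arithmetic is my own illustration (not from the original post): raw YUYV (YUV 4:2:2) video uses 2 bytes per pixel, so ffmpeg expects width * height * 2 bytes per frame, and the much smaller frame actually delivered is presumably a compressed (mjpeg) frame instead of raw video.

```python
# Expected raw frame size for a YUYV (2 bytes/pixel) video frame.
def expected_frame_bytes(width, height, bytes_per_pixel=2):
    return width * height * bytes_per_pixel

full = expected_frame_bytes(640, 480)  # 614400, matching the error message
low = expected_frame_bytes(160, 120)   # 38400, why 160x120 squeaks by
```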

Raspberry Pi Face Recognition Using OpenCV

About a year ago, I created a Wall-E robot that does object and face recognition. It uses an Arduino as the controller and needs to communicate with a computer that runs the face detection program to track the target. Raspberry Pi face recognition has become very popular recently. With the powerful processor on the Raspberry Pi, I can connect it to the Arduino on the robot using i2c and run the object recognition program on-board. It could become a truly independent, intelligent Wall-E robot!

However, building such a robot will be a project for the near future. In this article, I will be showing you how to do basic object recognition on the Raspberry Pi using Python and OpenCV. I will also show you a simple open-loop face tracking application that uses pan-tilt servos to turn the camera around.

Note: Please be careful about the indentation in the Python code; sometimes my blog decides to mess this up randomly.

I wrote an article on how to use SSH and VNC to control and monitor the Raspberry Pi, and that's what I will be using in this project.

Installing OpenCV For Python

To install OpenCV for Python, all you have to do is use apt-get like below:

sudo apt-get install python-opencv

To test the installation of OpenCV, run this Python script; if it is working, it will switch on your camera for video streaming.
import cv

cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)

def repeat():
    global capture  # declare as globals since we are assigning to them now
    global camera_index
    frame = cv.QueryFrame(capture)
    cv.ShowImage("w1", frame)
    c = cv.WaitKey(10)
    if c == ord("n"):  # if the "n" key is pressed while the popup window is in focus
        camera_index += 1  # try the next camera index
        capture = cv.CaptureFromCAM(camera_index)
        if not capture:  # if the next camera index didn't work, reset to 0
            camera_index = 0
            capture = cv.CaptureFromCAM(camera_index)

while True:
    repeat()

Simple Example of Raspberry Pi Face Recognition

This example is a demonstration of Raspberry Pi face recognition using haar-like features. It finds faces in the camera image and puts a red square around each of them. I am surprised how fast the detection is given the limited capacity of the Raspberry Pi (about 3 to 4 fps). Although it's still much slower than a laptop, it would still be useful in some robotics applications.

You will need to download this trained face file:


"""
The program finds faces in a camera image or video stream and displays a red box around them.
"""

import sys
import cv
from optparse import OptionParser

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width, img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)
    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "time taken for detection = %gms" % (t / (cv.GetTickFrequency() * 1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("video", img)

if __name__ == '__main__':

    parser = OptionParser(usage="usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str",
                      help="Haar cascade file, default %default",
                      default="../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("video", 1)

    # size of the video
    width = 160
    height = 120

    if width is None:
        width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, width)

    if height is None:
        height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, height)

    if capture:
        frame_copy = None
        while True:
            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width, frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)

            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)

    cv.DestroyWindow("video")

To run this program, type this command in your VNC Viewer's terminal:

python --cascade=face.xml 0

The number at the end is the index of your video device.
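The coordinate scaling inside detect_and_draw is worth a worked example (my own sketch): detection runs on an image shrunk by image_scale, so each detected box must be scaled back up before it is drawn on the full-size frame.

```python
# Rescale a (x, y, w, h) box detected on the downscaled image back to
# full-frame coordinates, as the pt1/pt2 computation in detect_and_draw does.

image_scale = 2

def scale_box(box, scale):
    x, y, w, h = box
    pt1 = (int(x * scale), int(y * scale))
    pt2 = (int((x + w) * scale), int((y + h) * scale))
    return pt1, pt2

# A face found at (50, 40) with size 30x30 on the half-size image maps
# to the rectangle (100, 80)-(160, 140) on the full frame.
pt1, pt2 = scale_box((50, 40, 30, 30), image_scale)
```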

Face Tracking in Raspberry Pi with Pan-Tilt Servos

In this example I will be using the Wall-E Robot's camera and pan-tilt servo head. The idea is simple: the Raspberry Pi detects the position of the face and sends a command to the Arduino, which converts the command into a servo position and turns the camera. I am using i2c to connect the Raspberry Pi and the Arduino.

Note: I am still trying to optimize the code for this example, so the result is still not great, but it gives you the idea of how it works. I will come back and update this post as soon as I am happy with the result.
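One open-loop tracking step can be sketched like this. This is my own illustration, not the robot's actual code: the i2c address 0x04, the one-byte command, the deadband, and the step size are all assumptions.

```python
# Compute a pan adjustment from the detected face position: if the face
# is left or right of the frame centre by more than a small deadband,
# nudge the camera a fixed step in that direction.

FRAME_WIDTH = 160  # matches the capture width used in the detection script
DEADBAND = 10      # pixels around centre where we don't move
STEP = 2           # degrees to pan per update (assumed value)

def pan_step(face_center_x, frame_width=FRAME_WIDTH):
    """Return the pan adjustment, in degrees, for one update."""
    error = face_center_x - frame_width // 2
    if abs(error) <= DEADBAND:
        return 0
    return -STEP if error > 0 else STEP

# On the Pi this value would then be sent to the Arduino roughly like:
#   import smbus
#   bus = smbus.SMBus(1)               # i2c bus 1 on later Pi models
#   bus.write_byte(0x04, pan_step(cx) & 0xFF)  # 0x04 is an assumed address
```

The sign convention (whether positive error means panning left or right) depends on how the servos are mounted, which is part of why tuning this open loop takes iteration.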

Source: Webcam Streaming Video On Raspberry Pi via Browser
