Monday, April 26, 2010

Making the BeagleCam - frame processing for objects

After a bit of fiddling with opkg, the OpenCV samples were installed. The plan is to make an office behaviour analysis system with a hallway camera. The school already has several surveillance cameras, but I can't really get at them; I did, however, put together a JavaScript-based frame-by-frame loop for this system.

After testing the OpenCV samples for a bit I realized that a filtered image stream can save a lot of processing and storage. The idea is to use GStreamer chaining to slot in an OpenCV facedetect module and save only the frames where a face is detected.

I can currently capture timestamped frames using gstreamer:

gst-launch v4l2src num-buffers=1 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! jpegenc ! filesink location=$(date +"%s").jpg

Then I can put OpenCV peopledetect and facedetect in the chain, queue up frames, and start saving once the detection stage begins producing output frames. This will greatly reduce the number of boring hallway frames, and the board can concentrate on processing interesting events, i.e. people approaching the camera. The results can then be used for interesting data mining such as time spent in the office, walking speed, hallway meetings between colleagues and the number of days without changing shirts.
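As a rough sketch of how the detection stage could slot into the capture pipeline: the gst-opencv plugin provides a facedetect element which, as far as I can tell from its sources, takes RGB video and a profile property pointing at a Haar cascade. The element name, the property and the cascade path below are assumptions I still have to verify on the board, and plain gst-launch can't gate the filesink on detections; that part needs a small application listening for the element's bus messages.

# assumed facedetect element and cascade path from the gst-opencv plugin - untested on the board so far
gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! facedetect profile=/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml ! ffmpegcolorspace ! jpegenc ! multifilesink location=face-%05d.jpg

multifilesink simply writes every frame here; once the bus-message handling is in place it would be replaced by the detection-gated saving described above.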

Time to make sure the gstreamer-opencv module (https://launchpad.net/gst-opencv) works properly on the beagleboard.
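A quick first check, assuming the plugin registers its elements under the expected name, is to ask gst-inspect whether facedetect shows up at all:

gst-inspect facedetect

If that lists the element's pads and properties, the pipeline sketched above should at least link; if not, the plugin isn't installed or isn't in the registry path on the board.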

2 comments:

jpiat said...

How do you plan to slot an OpenCV filter into the GStreamer pipe? I'm currently working on a simpler example (ball detection) and I would like to stream the result from the beagle for easy debugging. Do you know how to insert an OpenCV filter into a GStreamer chain?

Tisham Dhar said...

I am planning to use and work on the nascent gstreamer-opencv plugin. There is a link to the project on Launchpad in the main body of the article. Here it is again:
https://launchpad.net/gst-opencv