The FIRST Robotics Team that I work with decided to install two cameras on the robot, but it took a while for us to figure out the best way to actually stream the camera data. In previous years, we had used Axis IP cameras — but this year we had USB cameras plugged into the control system. Initially we used some streaming code that came from WPILib, but it wasn’t particularly high performance. Then we heard of someone who was using mjpg-streamer, which sounded exactly like what we wanted to use!
Of course, we needed to connect to the stream from Python 3. I looked around, and while there were some examples, they didn’t perform quite as well as I would have liked. I believe if you compile OpenCV with ffmpeg, it has mjpg support built in, but it was quite laggy for me in the past. So, I wrote a reasonably efficient Python mjpg-streamer client — in particular, I partially parse the HTTP stream, and reuse the image buffers when reading in the data, instead of making a bunch of copies. It works pretty well for us; maybe you’ll find it useful the next time you need to read an mjpg-streamer stream from your Raspberry Pi or on your FRC robot!
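The buffer-reuse trick is independent of the network layer: you preallocate one bytearray and read each frame into a slice of it via a memoryview, so no new buffer is allocated per frame. Here’s a minimal sketch of just that idea, using an in-memory io.BytesIO as a stand-in for the HTTP response object (both support the same readinto interface):

```python
import io

buf = bytearray(16)        # preallocated once, reused for every read
view = memoryview(buf)     # lets us read into a slice without copying

stream = io.BytesIO(b'hello world')  # stand-in for the urlopen() response
n = stream.readinto(view[:5])        # fills buf[0:5] in place, no allocation
print(bytes(buf[:n]))                # b'hello'
```

The memoryview slice is the important part — slicing the bytearray directly would create a copy, defeating the purpose.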
I’m not going to explain how to compile/install mjpg-streamer, there’s plenty of docs on the web for that (but, if you want precompiled binaries for the roboRIO, go to this CD post). Here’s the code for the python client (note: this was tested using OpenCV 3.0.0-beta and Python 3):
import re
from urllib.request import urlopen

import cv2
import numpy as np

# mjpg-streamer URL
url = 'http://10.14.18.2:8080/?action=stream'
stream = urlopen(url)

# Read the boundary message and discard
stream.readline()

sz = 0
rdbuffer = None

clen_re = re.compile(rb'Content-Length: (\d+)\r\n')

# Read each frame
# TODO: This is hardcoded to mjpg-streamer's behavior
while True:

    stream.readline()  # content type

    try:
        # content length
        m = clen_re.match(stream.readline())
        clen = int(m.group(1))
    except AttributeError:
        # no match: stream ended or sent something unexpected
        break

    stream.readline()  # timestamp
    stream.readline()  # empty line

    # Reallocate buffer if necessary
    if clen > sz:
        sz = clen * 2
        rdbuffer = bytearray(sz)
        rdview = memoryview(rdbuffer)

    # Read frame into the preallocated buffer
    stream.readinto(rdview[:clen])

    stream.readline()  # endline
    stream.readline()  # boundary

    # This line will need to be different when using OpenCV 2.x
    img = cv2.imdecode(np.frombuffer(rdbuffer, count=clen, dtype=np.uint8),
                       flags=cv2.IMREAD_COLOR)

    # do something with img?
    cv2.imshow('Image', img)
    cv2.waitKey(1)
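If you want to reuse the read loop elsewhere, the same parsing can be wrapped in a generator that yields one raw JPEG frame at a time (decoding is left to the caller). This is my own refactoring, not code from the post; the function name and the boundary string in the fake test stream are made up, and because the buffer is reused, each yielded memoryview is only valid until the next iteration:

```python
import io
import re

CLEN_RE = re.compile(rb'Content-Length: (\d+)\r\n')

def read_mjpg_frames(stream):
    """Yield raw JPEG frames from an mjpg-streamer-style multipart stream.

    The internal buffer is reused, so each yielded memoryview is only
    valid until the next iteration — copy it if you need to keep it.
    """
    stream.readline()                 # initial boundary line
    sz = 0
    rdbuffer = None
    while True:
        stream.readline()             # Content-Type header
        m = CLEN_RE.match(stream.readline())
        if m is None:                 # EOF or malformed header
            return
        clen = int(m.group(1))
        stream.readline()             # timestamp header
        stream.readline()             # blank line ending the headers
        if clen > sz:                 # grow the reusable buffer if needed
            sz = clen * 2
            rdbuffer = bytearray(sz)
        rdview = memoryview(rdbuffer)[:clen]
        stream.readinto(rdview)       # read JPEG data in place
        stream.readline()             # trailing CRLF after the JPEG data
        stream.readline()             # boundary line
        yield rdview

# Exercise it against a fake two-frame stream; 'AAAA' and 'BBBBB' stand in
# for real JPEG bytes, and the boundary string is arbitrary.
fake = io.BytesIO(
    b'--boundarydonotcross\r\n'
    b'Content-Type: image/jpeg\r\n'
    b'Content-Length: 4\r\n'
    b'X-Timestamp: 0.0\r\n'
    b'\r\n'
    b'AAAA\r\n'
    b'--boundarydonotcross\r\n'
    b'Content-Type: image/jpeg\r\n'
    b'Content-Length: 5\r\n'
    b'X-Timestamp: 0.1\r\n'
    b'\r\n'
    b'BBBBB\r\n'
    b'--boundarydonotcross\r\n'
)
for frame in read_mjpg_frames(fake):
    print(bytes(frame))  # b'AAAA', then b'BBBBB'
```

In the real client you would pass the object returned by urlopen() instead of the BytesIO, and hand each frame to cv2.imdecode.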
Hi,
I’m interested in your project, so I’d like to ask a few questions. I would be pleased if you could reply to them.
– what is the latency? (my problem is a huge latency, approx. 4 sec sometimes)
Best Regards,
Szabolcs Kovacs