Explorer/RaspberryPi/Visualprocessing/Motionvector/Run

From PaparazziUAV
The notes below are taken from Adam Heinrich's thesis, ''An Optical Flow Odometry Sensor Based on the Raspberry Pi Computer'':
The VideoCore uses two hardware motion estimation blocks for video encoding: a coarse motion estimation (CME) block estimates displacement in pixel resolution, and a subsequent fine motion estimation (FME) block estimates displacement in sub-pixel resolution.
The motion estimation block uses a block matching method to estimate the displacement (∆u, ∆v): for each macroblock in the current frame, the closest match is found in the previous frame within a given search range. Vectors from the CME block can be obtained directly from the encoder, while vectors from the FME block are encoded in the final H.264 bitstream.
For each P-frame, the encoder provides a buffer which contains a single 32-bit value for each 16 × 16 px macroblock [Upt14, Hol14]. The most significant 16 bits represent a Sum of Absolute Differences (SAD) value. The SAD value is a measure of the estimated motion's quality: the lower the SAD, the better the match that has been found. The other 16 bits represent motion in the horizontal and vertical directions (one 8-bit signed integer per direction).
The number of macroblocks provided by the CME is constant for each frame.
Moreover, the analysis shows that the CME, in fact, estimates motion in two-pixel resolution (i.e. only even values are present).
A video_splitter component splits the video stream to multiple outputs. The video_splitter also performs format conversion to grayscale, so it is not necessary to configure a grayscale format at the camera's output (the camera's output format is left optimized for the most efficient encoding).
Both encoder_buffer_callback() and splitter_buffer_callback() contain a single line of code which passes buffers to the main application for further processing.
Currently, cv.cpp is limited to a 640×480 grayscale image. This can easily be modified (see the function cv_init()).

Revision as of 01:00, 29 June 2020

server<br />
rm /tmp/camera*
/home/pi/RaspiCV/build/raspicv -v -w 640 -h 480 -fps 30 -t 0 -o /dev/null -x /dev/null -r /dev/null -rf gray
gst-rtsp-server-1.14.4/examples/test-launch  "shmsrc socket-path=/tmp/camera3 do-timestamp=true ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 ! omxh264enc ! video/x-h264,profile=high  ! rtph264pay name=pay0 pt=96 config-interval=1" ""

client<br />
gst-launch-1.0 rtspsrc location=rtsp://RASPBERRYPI_IP:8554/test ! rtph264depay ! avdec_h264 ! xvimagesink sync=false