Tuesday, April 21, 2020

Bird's Eye View on NVIDIA Jetson Boards


A Bird's Eye View (BEV) is an elevated view of an object from above, with a perspective as though the observer were a bird. It is often used in blueprints, floor plans, maps and car parking systems. The view is synthesized from several lateral perspectives of the scene, captured by multiple cameras, usually at least 4, to cover 360 degrees around the object.
The project brings BEV generation to desktop PCs and NVIDIA Jetson platforms.
To cover a wide field of view with the minimum number of cameras, for example 180 degrees per camera, the input images are captured with a fisheye perspective. RidgeRun's BEV engine removes the input distortion and, using a preconfigured settings file, generates the output image. This can be appreciated in Figure 1.
Figure 1. Example of a BEV image generated from 4 lateral fisheye cameras.
The process is composed of six smaller steps. First, the buffers are obtained from the camera. Second, the fisheye distortion is removed. The third and fourth steps generate the elevated perspective with the technique known as Inverse Perspective Mapping (IPM). The fifth and sixth steps adjust the image to obtain the desired output section for each camera: an enlargement matches the perspective with the other cameras, and a crop keeps the relevant section of the image. A summary of the process is depicted in Figure 2, and a rough code sketch follows below.
Figure 2. Simplified process to generate the BEV
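As an illustration of steps two through six, the per-camera processing can be sketched with OpenCV in C++. This is a minimal sketch rather than the library's actual API: the camera matrix K, fisheye coefficients D, IPM homography H, scale factor and crop rectangle are placeholders that would normally come from calibration and the preconfigured settings file.

    // Minimal per-camera BEV sketch (C++/OpenCV). K, D, H, scale and roi
    // are hypothetical placeholders, normally loaded from the settings file.
    #include <opencv2/opencv.hpp>

    cv::Mat birdsEyeFromFisheye(const cv::Mat &frame,
                                const cv::Mat &K, const cv::Mat &D,
                                const cv::Mat &H,
                                double scale, const cv::Rect &roi) {
        cv::Mat rectified, topView;
        // Step 2: remove the fisheye distortion.
        cv::fisheye::undistortImage(frame, rectified, K, D, K);
        // Steps 3-4: Inverse Perspective Mapping to the elevated view.
        cv::warpPerspective(rectified, topView, H, rectified.size());
        // Step 5: enlarge to match the perspective of the other cameras.
        cv::resize(topView, topView, cv::Size(), scale, scale);
        // Step 6: crop the relevant section for this camera.
        return topView(roi).clone();
    }

The four per-camera outputs would then be composited into the final 360-degree image, one per side of the object.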
The C++ library was developed in a modular way, open to new image processing frameworks, and was initially implemented and tested with OpenCV.
Version 0.2.0 includes:
  • Fisheye and rectilinear test data sets (synthetic and real)
  • Jupyter notebooks with detailed prototypes
  • Fisheye correction and BEV computation
  • C++ library with OpenCV based implementation
  • CUDA acceleration
  • Extensive set of examples
Some areas under development:
  • Image Stitching integration
  • GStreamer support
Learn more in our developers wiki or check out the API reference.
Contact us to be part of this research!

Sunday, December 8, 2019

NVIDIA Jetson TX2 Camera Image Capture Latency Measurement Techniques

For a better understanding of this post, please first read the RidgeRun blog Diving deep into NVIDIA Jetson TX2 - Video Input system and camera image capture latency.

This post outlines the techniques used to measure the Jetson TX2 image capture latency.

Two timestamps are used to measure the latency:

  • SOF (t0): taken when the pixels of a frame start to arrive at the VI; this is the start-of-frame.
  • t1: usually taken when the frame becomes available to a userspace application.

The techniques can be divided into two main categories: those that use the CHANSEL_PXL_SOF timestamp (referred to as SOF) and those that do not.

Outline of both methods:

Techniques that use the SOF timestamp

CHANSEL_PXL_SOF is used as t0, and t1 is the timestamp obtained at the instant the frame arrives in userspace, ideally taken with the same clock used to generate CHANSEL_PXL_SOF. The latency is then computed as follows:
Latency = t1 - t0
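As a minimal userspace sketch, assuming the capture driver stamps each dequeued V4L2 buffer with the SOF time on the same CLOCK_MONOTONIC clock (buffer setup and error handling omitted):

    // Hedged sketch: t0 is read from the dequeued buffer's timestamp,
    // assumed to carry the SOF time on the monotonic clock; t1 is taken
    // the moment the buffer reaches userspace.
    #include <linux/videodev2.h>
    #include <sys/ioctl.h>
    #include <ctime>

    double captureLatencyMs(int fd) {
        v4l2_buffer buf = {};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_DQBUF, &buf);             // frame arrives in userspace

        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);      // t1

        double t0 = buf.timestamp.tv_sec * 1e3 + buf.timestamp.tv_usec / 1e3;
        double t1 = now.tv_sec * 1e3 + now.tv_nsec / 1e6;
        return t1 - t0;                            // Latency = t1 - t0
    }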

Techniques that do not use the SOF timestamp

LED test
This test involves an LED connected to a TX2 GPIO pin.
The camera must start capturing while the LED is still off. A kernel module then turns the LED on and records the current CLOCK_MONOTONIC timestamp as t0. Each image that arrives in userspace is also timestamped with CLOCK_MONOTONIC at the instant it becomes available; this is t1. The latency is the t1 of the first frame in which the LED appears lit, minus t0.
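The userspace half of this test could look like the following hypothetical sketch, where t0Ms comes from the kernel module and ledRoi is a hand-picked image region where the LED is visible; both names are illustrative assumptions.

    // Hypothetical LED-test consumer: report the latency from the first
    // frame in which the LED region appears lit.
    #include <opencv2/opencv.hpp>
    #include <ctime>

    double ledLatencyMs(cv::VideoCapture &cap, const cv::Rect &ledRoi,
                        double t0Ms) {
        cv::Mat frame;
        while (cap.read(frame)) {
            timespec now;                          // t1 for this frame
            clock_gettime(CLOCK_MONOTONIC, &now);
            double t1Ms = now.tv_sec * 1e3 + now.tv_nsec / 1e6;

            // The first frame whose LED region is bright marks the capture.
            if (cv::mean(frame(ledRoi))[0] > 128.0)
                return t1Ms - t0Ms;                // Latency = t1 - t0
        }
        return -1.0;                               // LED never observed
    }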



More technical details about these techniques, working with the TSC (Time Stamping System Clock) for latency measurements, rtcpu, and V4L2 tracing are explained in the RidgeRun developer wiki page: NVIDIA Jetson TX2 - VI Latency Measurement Techniques

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and Professional Services.
Please email support@ridgerun.com for technical questions and for an evaluation version (if available).
Contact details for sponsoring the RidgeRun GStreamer projects are available at Sponsor Projects page.

Thursday, November 14, 2019

NVIDIA Jetson Xavier Multi-Camera Artificial Intelligence Demo from RidgeRun


This demo from RidgeRun shows the capabilities of the Jetson Xavier by performing:

  • Multi-camera capture through FPD-LINK III with Virtual Channels support,
  • Display of each individual camera stream on a grid, 
  • Application of CUDA video processing filters, classification and detection inference, 
  • Video stabilization processing and video streaming through the network.

Please watch the Jetson Xavier Multicamera + AI + Video Stabilization + CUDA Video Processing Filters demo from RidgeRun:

Demo components:

D3 Engineering NVIDIA Xavier FPD-Link III interface card
D3 Engineering D3RCM-OV10640-953 Rugged Camera Module

The 8 camera streams are downscaled to 480x480 resolution and displayed on a grid. The following extra processing is applied to the different camera streams:

Camera_1: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the streams with CUDA video processing filters.

Camera_2: Sobel in X-axis CUDA video filter applied with GstCUDA plugin.

Camera_3: Border Enhancement CUDA video filter applied with GstCUDA plugin.

Camera_4: Grayscale CUDA video filter applied with GstCUDA plugin.

Camera_5: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the stream with video stabilization processing.

Camera_6: Video stabilization processing applied with GstNvStabilize plugin. 

Camera_7: InceptionV1 Classification Inference applied with GstInference plugin using GPU accelerated TensorFlow.

Camera_8: TinyYoloV2 Detection Inference applied with GstInference plugin using GPU accelerated TensorFlow.

One individual camera stream, selected by the user from the demo menu, is streamed over the network using the GstWebRTC plugin and an OpenWebRTC application.
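The grid display idea can be sketched with GStreamer's stock compositor element. This is an illustrative stand-in rather than the demo's actual pipeline: videotestsrc branches replace the FPD-Link III cameras, and only two of the eight 480x480 tiles are shown.

    // Hedged sketch of a 480x480 tile grid using the standard "compositor"
    // element; the real demo captures from eight FPD-Link III cameras.
    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        GError *error = NULL;
        GstElement *pipeline = gst_parse_launch(
            "compositor name=grid sink_0::xpos=0 sink_1::xpos=480 "
            "! videoconvert ! autovideosink "
            "videotestsrc ! video/x-raw,width=480,height=480 ! grid. "
            "videotestsrc pattern=ball ! video/x-raw,width=480,height=480 ! grid.",
            &error);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", error->message);
            g_clear_error(&error);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));   // run until killed
        return 0;
    }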

The demo setup, detailed demo features, demo code, and performance profiling information are explained in the RidgeRun & D3 Engineering - NVIDIA Partner Showcase: Jetson Xavier Multi-Camera AI Demo page on the RidgeRun Developer Wiki.

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, October 17, 2019

RidgeRun's V4L2 Interface for PCIe-Connected FPGAs: FPGA V4L2 PCIe Driver

HW acceleration is an essential component of modern embedded systems. With ever-increasing real-time demands and low-power requirements, gone are the days when single-CPU systems could fulfill today's market expectations. Among the available accelerators, FPGAs typically excel in flexibility and performance, at the cost of integration complexity. It's common to see every FPGA integrated differently in every product, with different interfaces and home-made APIs. One common observation, however, is that they are typically connected via high-bandwidth PCIe.
RidgeRun is developing a single, standard V4L2 interface for PCIe connected FPGAs for a variety of vendors and models. No matter your HW setup, the FPGA is exposed as a combination of camera and display devices. This allows out-of-the-box usage with OpenCV, GStreamer, Libav, browsers and any other standard software that communicates via V4L2 calls.
The V4L2-FPGA driver acts as a standard alternative for FPGA-SoC communication: it allows communicating with an external FPGA through the V4L2 API without sacrificing communication performance, letting you concentrate on the FPGA hardware description.
Figure 1. Software stack description using RidgeRun's V4L2 FPGA.
This project consists of three subsystems that together allow accelerating algorithms on custom hardware, as shown in the following image:
Figure 2: V4L2 Data Flow
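Because the FPGA ends up exposed as a standard V4L2 capture device, any V4L2-aware application can read from it without modification. Below is a minimal OpenCV sketch; the device node is a placeholder for whatever node the driver registers.

    // Minimal consumer sketch: the FPGA-backed node behaves like any camera.
    // /dev/video1 is a placeholder device path.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap("/dev/video1", cv::CAP_V4L2);
        if (!cap.isOpened())
            return 1;

        cv::Mat frame;
        while (cap.read(frame)) {
            // 'frame' holds data produced or processed by the FPGA.
            cv::imshow("fpga", frame);
            if (cv::waitKey(1) == 27)              // ESC quits
                break;
        }
        return 0;
    }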
Current ongoing development targets the PicoEVB Xilinx module on the NVIDIA Xavier. Contact us if you are interested in sponsoring the port to your hardware configuration.
Contact Us
Please visit our main website https://www.ridgerun.com for the RidgeRun online store or https://www.ridgerun.com/contact for pricing information on engineering support, products, and services. You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Wednesday, September 18, 2019

GStreamer Video Stabilizer for NVIDIA Jetson Boards



Many applications require the removal of undesired camera movement. Professional video stitching, medical imaging such as colonoscopy or endoscopy, and localization of unmanned vehicles are a few examples of use cases that benefit from video stabilization. Unfortunately, this is a very resource-consuming technique that may be unfeasible for real-time operation on resource-constrained systems such as embedded platforms.
The following video provides a hands-on overview of GstNvStabilize, currently in the works!



GstNvStabilize is a GStreamer-based video stabilizer for NVIDIA Jetson boards. It builds on VisionWorks and OpenVX hardware processing units to accelerate stabilization for real-time applications.

The latest v0.4.0 release includes:
  • Region-of-interest configuration via GStreamer caps
  • Smoothing level configuration via GStreamer property
  • Smart compensation limit to avoid black borders
  • GPU acceleration
  • Supported platforms: NVIDIA Jetson Xavier, NVIDIA Jetson TX1/TX2, NVIDIA Jetson Nano
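As a hedged usage sketch, the stabilizer can be dropped into a pipeline like any other GStreamer element. The element name nvstabilize and the smoothing-level property below are assumptions inferred from the plugin name and the feature list above; check the developer's wiki for the actual names.

    // Hypothetical GstNvStabilize pipeline description; element and property
    // names are assumptions. gst_init() must be called beforehand.
    #include <gst/gst.h>

    GstElement *buildStabilizerPipeline(GError **error) {
        return gst_parse_launch(
            "v4l2src ! videoconvert ! "
            "nvstabilize smoothing-level=30 ! "    // assumed names
            "videoconvert ! autovideosink", error);
    }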

Learn more in our developer's wiki:
https://developer.ridgerun.com/wiki/index.php?title=GStreamer_Video_Stabilizer_for_NVIDIA_Jetson_Boards

Purchase directly from our website:
https://shop.ridgerun.com/products/gstnvstabilize

Contact Us
Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, September 5, 2019

NVIDIA Jetson Xavier Multi-Camera Artificial Intelligence Demo Showcase by RidgeRun

This demo from RidgeRun shows the capabilities of the Jetson Xavier by performing:
  • Multi-camera capture through FPD-LINK III with Virtual Channels support, 
  • Display of each individual camera stream on a grid, 
  • Application of CUDA video processing filters, classification and detection inference, 
  • Video stabilization processing and video streaming through the network.

RidgeRun demo screen:
RidgeRun & D3 Engineering Nvidia Partner Showcase Jetson Xavier Multi-Camera AI Demo.

Demo components:
D3 Engineering NVIDIA Xavier FPD-Link III interface card
D3 Engineering D3RCM-OV10640-953 Rugged Camera Module

The 8 camera streams are downscaled to 480x480 resolution and displayed on a grid. The following extra processing is applied to the different camera streams:

Camera_1: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the streams with CUDA video processing filters.

Camera_2: Sobel in X-axis CUDA video filter applied with GstCUDA plugin.

Camera_3: Border Enhancement CUDA video filter applied with GstCUDA plugin.

Camera_4: Grayscale CUDA video filter applied with GstCUDA plugin.

Camera_5: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the stream with video stabilization processing.

Camera_6: Video stabilization processing applied with GstNvStabilize plugin. 

Camera_7: InceptionV1 Classification Inference applied with GstInference plugin using GPU accelerated TensorFlow.

Camera_8: TinyYoloV2 Detection Inference applied with GstInference plugin using GPU accelerated TensorFlow.

One individual camera stream, selected by the user from the demo menu, is streamed over the network using the GstWebRTC plugin and an OpenWebRTC application.

The demo setup, detailed demo features, demo code, and performance profiling information are explained in the RidgeRun & D3 Engineering - NVIDIA Partner Showcase: Jetson Xavier Multi-Camera AI Demo page on the RidgeRun Developer Wiki.

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, May 30, 2019

GstCUDA: RidgeRun presentation at NVIDIA GTC 2019 on GStreamer and CUDA integration.

At NVIDIA GTC 2019, RidgeRun engineers presented GstCUDA, a framework developed by RidgeRun that provides an easy, flexible, and powerful integration between the GStreamer audio/video streaming infrastructure and CUDA hardware-accelerated video processing.
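As a hedged sketch of what that integration looks like in practice, assuming GstCUDA's cudafilter element, which loads a CUDA algorithm from a shared library via its location property (names per RidgeRun's public material; verify before use), a pipeline could be assembled as below. The library path is a placeholder.

    // Hedged GstCUDA sketch: "cudafilter" is assumed to load a shared
    // library compiled from a CUDA kernel and apply it to every buffer.
    // The .so path is a placeholder; gst_init() must be called beforehand.
    #include <gst/gst.h>

    GstElement *buildCudaPipeline(GError **error) {
        return gst_parse_launch(
            "v4l2src ! cudafilter location=/path/to/algorithm.so ! "
            "videoconvert ! autovideosink", error);
    }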

GstCUDA: Easy GStreamer and CUDA Integration




Please Watch the Video



For more information, please contact us at support@ridgerun.com, or for purchase-related questions, post your inquiry on our Contact Us page.