Sunday, December 8, 2019

NVIDIA Jetson TX2 - Camera Image Capture Latency Measurement Techniques

For a better understanding of this post, please first read the RidgeRun blog post Diving deep into NVIDIA Jetson TX2 - Video Input system and camera image capture latency.

This post outlines the techniques used to measure the Jetson TX2 image capture latency.

Two timestamps are used to measure the latency:

  • SOF: the first timestamp (t0) is taken when the pixels of a frame start to arrive at the VI; this is the start-of-frame (SOF).
  • The second timestamp (t1) is usually obtained at the instant the frame is available to a userspace application.

The techniques can be divided into two main categories: those that use the CHANSEL_PXL_SOF event timestamp (referred to as SOF) and those that do not.

Outline of both methods:

Techniques that use the SOF timestamp

The CHANSEL_PXL_SOF timestamp can be used as t0, and t1 can be the timestamp obtained at the instant the frame arrives in userspace, ideally using the same clock that generated CHANSEL_PXL_SOF. The latency can then be computed as follows:
Latency = t1 - t0
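
As a rough illustration, here is a userspace sketch of this computation, assuming the VI driver stamps each dequeued V4L2 buffer with the SOF time on CLOCK_MONOTONIC (check V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC to confirm); device setup and buffer queueing are omitted:

    /* Sketch: per-frame capture latency = t1 (userspace arrival) - t0 (SOF).
     * Assumes the driver stamps buffers with the SOF time on CLOCK_MONOTONIC. */
    #include <string.h>
    #include <time.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    double capture_latency_ms(int fd)
    {
        struct v4l2_buffer buf;
        struct timespec now;
        double t0, t1;

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;

        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)  /* blocks until a frame arrives */
            return -1.0;
        clock_gettime(CLOCK_MONOTONIC, &now);   /* t1: frame visible in userspace */

        t0 = buf.timestamp.tv_sec * 1e3 + buf.timestamp.tv_usec / 1e3; /* SOF */
        t1 = now.tv_sec * 1e3 + now.tv_nsec / 1e6;

        ioctl(fd, VIDIOC_QBUF, &buf);           /* hand the buffer back */
        return t1 - t0;                         /* latency in milliseconds */
    }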

Techniques that do not use the SOF timestamp

LED test
This test uses an LED connected to a TX2 GPIO.
The camera must start capturing while the LED is still off; a kernel module then turns the LED on and records the current CLOCK_MONOTONIC timestamp, which is used as t0. Each image that arrives in userspace is also timestamped with CLOCK_MONOTONIC at the instant it becomes available; this is t1. The latency is then t1 - t0 for the first frame in which the LED appears lit.
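
A minimal sketch of the kernel-module side, assuming the LED sits on a free GPIO (the GPIO number below is hypothetical); the logged t0 can then be compared against the CLOCK_MONOTONIC timestamps recorded for arriving frames in userspace:

    /* led_t0.c: sketch of a module that turns the LED on and logs t0.
     * LED_GPIO is hypothetical; use the GPIO your LED is actually wired to. */
    #include <linux/module.h>
    #include <linux/gpio.h>
    #include <linux/ktime.h>

    #define LED_GPIO 388 /* hypothetical TX2 GPIO number */

    static int __init led_t0_init(void)
    {
        ktime_t t0;

        if (gpio_request(LED_GPIO, "latency-led"))
            return -EBUSY;
        gpio_direction_output(LED_GPIO, 0);

        gpio_set_value(LED_GPIO, 1); /* LED on... */
        t0 = ktime_get();            /* ...t0, same clock as CLOCK_MONOTONIC */

        pr_info("led_t0: LED on at t0 = %lld ns\n", ktime_to_ns(t0));
        return 0;
    }

    static void __exit led_t0_exit(void)
    {
        gpio_set_value(LED_GPIO, 0);
        gpio_free(LED_GPIO);
    }

    module_init(led_t0_init);
    module_exit(led_t0_exit);
    MODULE_LICENSE("GPL");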



More technical details about these techniques, including working with the TSC (Time Stamping System Clock) for the latency measurements, the rtcpu, and V4L2 tracing, are explained on the RidgeRun developer wiki page: NVIDIA Jetson TX2 - VI Latency Measurement Techniques

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and professional services.
Please email support@ridgerun.com for technical questions or for an evaluation version (if available).
Contact details for sponsoring the RidgeRun GStreamer projects are available at the Sponsor Projects page.

Thursday, November 14, 2019

NVIDIA Jetson Xavier Multi-Camera Artificial Intelligence Demo from RidgeRun


This demo from RidgeRun shows the capabilities of the Jetson Xavier by performing:

  • Multi-camera capture through FPD-LINK III with Virtual Channels support,
  • Display of each individual camera stream on a grid, 
  • Application of CUDA video processing filters, classification and detection inference, 
  • Video stabilization processing and video streaming through the network.

Please watch the Jetson Xavier Multicamera + AI + Video Stabilization + CUDA Video Processing Filters demo from RidgeRun:

Demo components:

D3 Engineering NVIDIA Xavier FPD-Link III interface card

D3 Engineering D3RCM-OV10640-953 Rugged Camera Module

The 8 camera streams are downscaled to 480x480 resolution and displayed on a grid. The following extra processing is applied to the different camera streams:

Camera_1: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the streams with CUDA video processing filters.

Camera_2: Sobel in X-axis CUDA video filter applied with GstCUDA plugin.

Camera_3: Border Enhancement CUDA video filter applied with GstCUDA plugin.

Camera_4: Grayscale CUDA video filter applied with GstCUDA plugin.

Camera_5: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the stream with video stabilization processing.

Camera_6: Video stabilization processing applied with GstNvStabilize plugin. 

Camera_7: InceptionV1 Classification Inference applied with GstInference plugin using GPU accelerated TensorFlow.

Camera_8: TinyYoloV2 Detection Inference applied with GstInference plugin using GPU accelerated TensorFlow.
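
As an illustration of how one of the filtered streams could be put together, here is a hedged single-camera sketch; the capture element, caps, and the compiled CUDA algorithm path are placeholders, and GstCUDA property names may vary between releases:

    # Capture one camera, apply a CUDA filter with GstCUDA, and display it.
    gst-launch-1.0 nvarguscamerasrc ! \
        'video/x-raw(memory:NVMM),width=480,height=480' ! \
        cudafilter location=sobel-x.so ! \
        nvoverlaysink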

One individual camera stream selected by the user from the demo menu is streamed to the network using the GstWebRTC plugin and an OpenWebRTC application.

The demo setup, the demo features in detail, the demo code, and performance profiling information are explained in the RidgeRun & D3 Engineering - NVIDIA Partner Showcase: Jetson Xavier Multi-Camera AI Demo page on the RidgeRun Developer Wiki.

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, October 17, 2019

RidgeRun's V4L2 interface for PCIe connected FPGAs : FPGA V4L2 PCIe Driver

HW acceleration is an essential component of modern embedded systems. With ever-increasing real-time demands and low-power requirements, long gone are the days when single-CPU systems could fulfill today's market expectations. Among the available accelerators, FPGAs typically excel in flexibility and performance, at the cost of integration complexity. It's common to see the FPGA integrated differently in every product, with different interfaces and home-made APIs. Despite this, one common pattern can be observed: they are typically connected via high-bandwidth PCIe.
RidgeRun is developing a single, standard V4L2 interface for PCIe-connected FPGAs from a variety of vendors and models. No matter your HW setup, the FPGA is exposed as a combination of camera and display devices. This allows out-of-the-box usage with OpenCV, GStreamer, Libav, browsers, and any other standard software that communicates via V4L2 calls.
The V4L2-FPGA driver acts as an alternative that solves FPGA-SoC communication in a more standard way without sacrificing communication performance, letting you concentrate on the FPGA hardware description. It allows communicating with an external FPGA using the V4L2 API.
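
For instance, once the driver is loaded, moving frames through the FPGA looks like any other V4L2 streaming. A minimal sketch, assuming the FPGA endpoints show up as /dev/video1 and /dev/video2 (the device numbers are hypothetical):

    # Push frames into the FPGA, which is exposed as a V4L2 output device...
    gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video2

    # ...and read the processed frames back as if from a regular camera.
    gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! xvimagesink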
Figure 1. Software stack description using RidgeRun's V4L2 FPGA.
This project consists of three subsystems which allow for the acceleration of algorithms on custom hardware as shown in the following image:
Figure 2: V4L2 Data Flow
Current ongoing development targets the PicoEVB Xilinx module on the NVIDIA Xavier. Contact us if you are interested in sponsoring the port to your hardware configuration.
Contact Us
Please visit our main website https://www.ridgerun.com for the RidgeRun online store or https://www.ridgerun.com/contact for pricing information on engineering support, products, and services. You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Wednesday, September 18, 2019

GStreamer Video Stabilizer for NVIDIA Jetson Boards



Many applications require the removal of undesired camera movement. Professional video stitching, medical imaging such as colonoscopy or endoscopy, and localization of unmanned vehicles are a few examples of use cases that benefit from video stabilization. Unfortunately, this is a very resource-consuming technique that may be unfeasible for real-time operation on resource-constrained systems such as embedded systems.
The following video provides a hands-on overview of GstNvStabilize in the works!



GstNvStabilize is a GStreamer-based video stabilizer for NVIDIA Jetson boards. It is based on VisionWorks and OpenVX, using the hardware processing units to accelerate stabilization for real-time applications.

The latest v0.4.0 release includes:
- Region-of-interest configuration via GStreamer caps
- Smoothing level configuration via GStreamer property
- Smart compensation limit to avoid black borders
- GPU acceleration
- Supported platforms:
  - NVIDIA Jetson Xavier
  - NVIDIA Jetson TX1/TX2
  - NVIDIA Jetson Nano
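
A quick usage sketch follows; the element name, the smoothing property, and the source are assumptions for illustration here, so see the wiki for the exact pipelines per board:

    # Hypothetical pipeline: capture, stabilize, and display.
    gst-launch-1.0 v4l2src ! videoconvert ! \
        nvstabilize smoothing-level=10 ! \
        videoconvert ! xvimagesink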

Learn more in our developer's wiki:
https://developer.ridgerun.com/wiki/index.php?title=GStreamer_Video_Stabilizer_for_NVIDIA_Jetson_Boards

Purchase directly from our website:
https://shop.ridgerun.com/products/gstnvstabilize?_pos=1&_sid=0951b9cf7&_ss=r 

Contact Us
Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, September 5, 2019

NVIDIA Jetson Xavier Multi-Camera Artificial Intelligence Demo Showcase by RidgeRun

This demo from RidgeRun shows the capabilities of the Jetson Xavier by performing:
  • Multi-camera capture through FPD-LINK III with Virtual Channels support, 
  • Display of each individual camera stream on a grid, 
  • Application of CUDA video processing filters, classification and detection inference, 
  • Video stabilization processing and video streaming through the network.

RidgeRun demo screen:
RidgeRun & D3 Engineering Nvidia Partner Showcase Jetson Xavier Multi-Camera AI Demo.

Demo components:
D3 Engineering NVIDIA Xavier FPD-Link III interface card

D3 Engineering D3RCM-OV10640-953 Rugged Camera Module

The 8 camera streams are downscaled to 480x480 resolution and displayed on a grid. The following extra processing is applied to the different camera streams:

Camera_1: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the streams with CUDA video processing filters.

Camera_2: Sobel in X-axis CUDA video filter applied with GstCUDA plugin.

Camera_3: Border Enhancement CUDA video filter applied with GstCUDA plugin.

Camera_4: Grayscale CUDA video filter applied with GstCUDA plugin.

Camera_5: No extra processing, just normal camera stream. Intended to be used as a point of comparison against the stream with video stabilization processing.

Camera_6: Video stabilization processing applied with GstNvStabilize plugin. 

Camera_7: InceptionV1 Classification Inference applied with GstInference plugin using GPU accelerated TensorFlow.

Camera_8: TinyYoloV2 Detection Inference applied with GstInference plugin using GPU accelerated TensorFlow.

One individual camera stream selected by the user from the demo menu is streamed to the network using the GstWebRTC plugin and an OpenWebRTC application.

The demo setup, the demo features in detail, the demo code, and performance profiling information are explained in the RidgeRun & D3 Engineering - NVIDIA Partner Showcase: Jetson Xavier Multi-Camera AI Demo page on the RidgeRun Developer Wiki.

Contact Us

Please visit our Main Website for the RidgeRun online store or Contact Us for pricing information on engineering support, products, and services.
You can also send an email to support@ridgerun.com for technical support, more information about the features, an evaluation version (if available), or details about how to sponsor a new feature.

Thursday, May 30, 2019

GstCUDA: RidgeRun presentation at NVIDIA GTC 2019 on GStreamer and CUDA integration.

At NVIDIA GTC 2019, RidgeRun engineers presented GstCUDA, a framework developed by RidgeRun that provides an easy, flexible, and powerful integration between the GStreamer audio/video streaming infrastructure and CUDA hardware-accelerated video processing.

GstCUDA: Easy GStreamer and CUDA Integration




Please watch the video:



For more information please contact us at support@ridgerun.com or for purchase related questions post your inquiry at our Contact Us page.







Monday, March 18, 2019

RidgeRun at GTC 2019 as a NVIDIA Jetson partner

RidgeRun is excited about the new things coming for NVIDIA Jetson partners, and our team will be at GTC 2019!




RidgeRun Engineering Manager with Jensen Huang, CEO of NVIDIA

For more information please contact us at support@ridgerun.com or for purchase related questions post your inquiry at our Contact Us page.

Sunday, March 17, 2019

RidgeRun supports the "Armstrong" robot at the 2019 FIRST® Robotics Competition by providing NVIDIA Jetson support


FIRST Robotics Competition [1] Team 102, The Gearheads, from Somerville High School in NJ, demonstrated "Armstrong", their robot for the 2019 season. They hope that the combination of the NVIDIA TX1, the Auvidea J90-LC, and drivers from RidgeRun will provide a cost-effective, state-of-the-art computer vision solution. Go Gearheads!

[1] https://www.firstinspires.org/robotics/frc

RidgeRun helped get the kernel built with the IMX219 V4L2 driver for the TX1 on the Auvidea J90-LC, enabling a state-of-the-art computer vision solution for the 2019 FIRST® season.

Pictures from the FIRST Robotics Competition at Bridgewater, NJ on March 16, 2019.
"Armstrong" - The robot.

Please note the RidgeRun graphic on "Armstrong" - The robot.

"Armstrong" - The robot in action.

A close-up of the RidgeRun graphic on "Armstrong" - The robot.


Team 102, The Gearheads, qualified for the finals out of 35 teams, went on to tie in the semifinals, and ultimately finished very favorably.


The next competition is the Greater Pittsburgh Regional in PA on March 20-23.


Stay tuned! We will keep posting updates as the competition progresses.

For more information please contact us at support@ridgerun.com or for purchase related questions post your inquiry at our Contact Us page. 

Wednesday, February 6, 2019

GstInference - Bringing AI to GStreamer

AI is everywhere. Well, maybe this is not entirely true yet, but it will be in a few years from now. I heard a prediction that 1,000,000,000 video cameras will be installed by 2020. At RidgeRun we've been working hard to join this revolution, and we want to share some of the technology we're developing with the community.

The truth is that the initial face-off with AI is never easy. A quick Google search reveals a handful of frameworks available to implement a neural network. For the sake of this example, suppose you choose TensorFlow. You grasp the whole graph-oriented processing paradigm, and finally understand and successfully run the image classification example you were trying out. Cool! This is exactly what your product needs to do. Almost.

Now to fit all the pieces together. Here's a small list of the most immediate tasks for your project:

  • Modify the example to receive images from the camera, rather than PNG images.
  • Scale down the 1920x1080 camera images to 224x224 needed by the neural network.
  • Render a label in the image according to the network prediction.
  • Save the resulting images in an MP4 file.

If you are familiar with GStreamer, you immediately envisioned a nice, simple pipeline. Something like the following:
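
A sketch of the idea, with purely illustrative element names for the pieces that don't exist yet:

    # Conceptual sketch only; the inference piece is the missing part:
    v4l2src ! videoscale ! <run the network> ! <draw the label> ! x264enc ! mp4mux ! filesink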


Putting it this way doesn't feel that overwhelming after all.

That's exactly where GstInference fits into this whole scenario. It's a set of GStreamer elements that execute AI models on a video stream. We've taken care of all the complexities and boilerplate code so that the framework details are hidden, and you just link in the inference element like any other. And not only for TensorFlow, but for Caffe, NCSDK, TensorRT, and others as well (yet to come).

Here's how the whole pipeline looks altogether:
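
A hedged sketch along those lines; the inceptionv1 and classificationoverlay element names follow the GstInference wiki, but the exact pads and properties may differ between releases, and the model file is a placeholder:

    gst-launch-1.0 v4l2src ! videoconvert ! videoscale ! \
        inceptionv1 name=net model-location=graph_inceptionv1.pb backend=tensorflow ! \
        classificationoverlay ! videoconvert ! x264enc ! mp4mux ! filesink location=out.mp4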

Note how the modular architecture of GStreamer allows you to make the most of the available hardware accelerators in your platform. The scaling could be done using the Image Signal Processor, the encoding using the Video Processing Unit and the inference using the GPU. Let's take it one step further: using the exact same pipeline you can migrate the inference processing from the GPU to the Tensor Processing Unit by simply changing the backend property in GstInference from TensorFlow to TensorRT, for example.

RidgeRun's main design goal is simplicity for the user.

We have made an early release available. Feel free to give it a try and give us your feedback. We'd love to hear from you!

https://developer.ridgerun.com/wiki/index.php?title=GstInference
https://github.com/RidgeRun/gst-inference/