Thursday, December 27, 2018

RidgeRun's Sony IMX219 CMOS Image Sensor Linux Driver for NVIDIA Jetson Xavier and Jetson TX1/TX2

This blog highlights RidgeRun's work developing a CMOS image sensor Linux driver for the Sony IMX219 on the NVIDIA Jetson Xavier and Jetson TX1/TX2 platforms.

Driver Features:

  • L4T 31.1 and JetPack 4.1
  • V4L2 media controller driver
  • Single-camera capture (expansion to 6 cameras planned)
  • Tested resolution: 3280x2464 @ 15 fps
  • Tested resolution: 1280x720 @ 78 fps
  • Tested resolution: 1640x1232 @ 30 fps
  • Tested resolution: 820x616 @ 30 fps
  • Tested with the Auvidea J20 board
  • Capture with v4l2src, and with nvarguscamerasrc using the ISP
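As an illustration of the two capture paths above, pipelines along these lines can be used (the device node, sensor ID, and caps are assumptions that depend on your board and the sensor mode you select):

```shell
# ISP path: debayered video through nvarguscamerasrc (sensor-id and caps are
# assumptions; match them to one of the tested sensor modes).
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM),width=1640,height=1232,framerate=30/1' ! \
  nvvidconv ! xvimagesink

# Direct path: raw frames straight from the V4L2 driver, bypassing the ISP.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=100 ! fakesink
```

The ISP path delivers processed video ready for display or encoding, while the v4l2src path is useful for validating the driver itself.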
The images attached here were taken during the development of the driver for the Sony IMX219 image sensor while testing various image capture and display options and measuring performance and latency in our R&D lab located in Costa Rica. The tests were carried out using GStreamer pipelines.

Details on enabling and building the driver with the Auvidea J20 expansion board, example GStreamer pipelines, performance, and latency measurements are shared in the RidgeRun developer wikis listed at the end of this blog.

RidgeRun's Sony IMX219 Linux driver for Jetson Xavier

Output image from the IMX219 camera sensor captured at 1640x1232 resolution with the nvcamerasrc GStreamer element on the Jetson Xavier platform:




RidgeRun's Sony IMX219 Linux driver latency measurements on Jetson Xavier

Frames showing how the glass-to-glass latency measurement was set up by our team while testing image capture using the IMX219 camera mode at 3280x2464 @ 15 fps on the Xavier platform.


Glass-to-glass latency measured is 215 ms (07:772 minus 07:557). Time readings can be seen in the displays.

RidgeRun's Sony IMX219 Linux driver for Jetson TX1

The image below shows IMX219 capture at 1640x1232 resolution with nvcamerasrc on the Jetson TX1 platform. The camera is aimed at the computer monitor on the left, which reflects the wall and ceiling shown on the right:



RidgeRun's Sony IMX219 Linux driver latency measurements on Jetson TX1

The image below was captured while measuring the Jetson TX1 glass-to-glass latency for the 1080p 30 fps IMX219 camera mode:


Glass-to-glass latency measured is 130 ms (13.586 minus 13.456). Time readings can be seen in the displays.

You can find more information about the driver in these developer wikis from RidgeRun:


For technical information, please email us at support@ridgerun.com, or for purchase-related questions, post your inquiry on our Contact Us page.

Wednesday, December 26, 2018

RidgeRun's USB Video Class Gadget Library - LibGUVC v1.4.0 - NVIDIA Xavier and NXP i.MX Support

The USB Video Class Gadget Library, or libguvc for short, is a platform-agnostic library that simplifies the development of UVC-based gadget devices by encapsulating most of the UVC communication, leaving just the basic setup to the user. The USB video class gadget runs on top of the UVC function driver in user space and takes care of the communication between the user application and the Linux driver stack.

The libGUVC v1.4.0 release now supports the NVIDIA Jetson platform alongside the previously supported NXP i.MX6 family of processors, thanks to new bulk transfer support.

It has never been easier to implement a UVC application on your hardware. libGUVC makes it easy to interact with the UVC driver and exposes a variety of useful features, such as:

  •     USB 2.0 and USB 3.0 support
  •     Isochronous and bulk endpoint support
  •     YUY2, MJPEG, and H.264 video streaming support
  •     Extension Unit support
  •     MMAP and UserPtr support

Since the library is platform agnostic, you can run it on almost any platform with quality USB and UVC drivers, making it really simple to turn your hardware into a UVC-capable device.

libGUVC in action - Running libGUVC on NVIDIA Xavier

 


libGUVC in action - Running libGUVC on NXP i.MX6

 

You can find more information about the library in this developer wiki from RidgeRun Engineering: USB Video Class Gadget Library - libguvc

For more technical information, please contact us at support@ridgerun.com. For purchase-related questions, post your inquiry on our Contact Us page. Also, since libguvc is platform agnostic, you can request a custom demo image for any other platform using this link.





Wednesday, December 12, 2018

RidgeRun supports The 2019 FIRST® Robotics Competition by providing NVIDIA Jetson support

RidgeRun loves helping academic projects! RidgeRun is proud to help the teams in the FIRST Robotics Competition [1] with software for embedded systems to improve the acquisition, processing, and analysis of audio and video signals!

[1] https://www.firstinspires.org/robotics/frc

Michael, Amanda, and Emily (pictured below) of FIRST Robotics Competition Team 102, The Gearheads, from Somerville High School in NJ, are investigating NVIDIA Jetson technology from RidgeRun. They hope that the combination of an NVIDIA TX1, an Auvidea J90-LC, and drivers from RidgeRun will provide a cost-effective, state-of-the-art computer vision solution for the 2019 season. Go Gearheads!


RidgeRun helped get the kernel built with the IMX219 V4L2 driver for the TX1 on the Auvidea J90-LC, enabling a state-of-the-art computer vision solution for the 2019 FIRST® season.

Stay tuned! We will keep posting updates as the competition progresses.

For more information, please contact us at support@ridgerun.com, or for purchase-related questions, post your inquiry on our Contact Us page.


Monday, December 10, 2018

RidgeRun - NVIDIA Xavier - Deep Learning Tutorials using Jetson Inference

Jetson-inference is a training guide for inference on the TX1 and TX2 using the NVIDIA Deep Learning GPU Training System (DIGITS).

This blog summarizes the original jetson-inference training from NVIDIA, with a focus on the inference part.

You can learn about the following topics in this developer wiki from RidgeRun Engineering: NVIDIA Xavier - Deep Learning - Deep Learning Tutorials - Jetson Inference

Building jetson-inference.

Classifying images with ImageNet.

Locating object coordinates using DetectNet.

Image segmentation with SegNet.

Running a live demo.

With jetson-inference you can deploy deep learning examples on the NVIDIA Xavier in a matter of minutes. An example application is shown below.

The input is an image, and the output is the most likely class and the probability that the image belongs to that class, using the ImageNet classification network. ImageNet is a classification network trained on a database of 1000 object classes.


Fig1: imagenet-console output image

It classifies the image as 'Boston bull, Boston terrier' (ImageNet class ID 0195) with 96.305% classification confidence. Image recognition networks output class probabilities corresponding to the entire input image.
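A classification run like the one above can be launched from the command line. Image file names here are placeholder assumptions; the binary is produced by the jetson-inference build:

```shell
# Classify a single image with the default ImageNet network; the output image
# is annotated with the top class and its confidence.
cd jetson-inference/build/aarch64/bin
./imagenet-console dog.jpg dog_classified.jpg
```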

Detection networks, on the other hand, find where in the image those objects are located. DetectNet accepts an input image and outputs the class and coordinates of the detected bounding boxes.
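A detection run like the one in the figure below can be sketched as follows (file names are placeholders; coco-dog is one of the pretrained models shipped with jetson-inference):

```shell
# Detect dogs with the coco-dog pretrained model and draw the resulting
# bounding boxes on the output image.
cd jetson-inference/build/aarch64/bin
./detectnet-console dog.jpg dog_detected.jpg coco-dog
```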


                            Fig2: detectnet-console output image using coco-dog pretrained model


If you are new to the Xavier or planning on getting one, please visit our Jetson Xavier Wiki page.

Article related : 

Read our blog on deep reinforcement learning: Deep Reinforcement Learning on the Jetson Xavier

For more information, please contact us at support@ridgerun.com.


Thursday, November 15, 2018

Testing Deep Reinforcement Learning on the Jetson Xavier with PyTorch


Jetson-reinforcement is a training guide, provided by NVIDIA, for deep reinforcement learning on the TX1 and TX2 using PyTorch. The tutorial is not currently officially supported on the Jetson Xavier. We provide instructions to get the Deep Q Learning 'cartpole' demo running on the Xavier.

The objective of this example is to balance a pole attached by an un-actuated joint to a cart that moves along a frictionless track. Deep Q Learning solves the problem by generating actions based only on pictures of the environment and the received reward.
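The core idea can be sketched with a tiny tabular Q-learning update; in the cartpole demo, a neural network approximates this Q function from raw pixels instead of a lookup table. The states, actions, and values below are toy assumptions for illustration:

```python
# Minimal tabular Q-learning update: move Q(s, a) toward the Bellman target
# r + gamma * max_a' Q(s', a'). Deep Q Learning replaces the table with a
# neural network but performs the same kind of update.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Apply one Q-learning step and return the updated table."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q

# Toy setup: two states ("pole leaning left/right"), two actions ("push left/right").
q = [[0.0, 0.0], [0.0, 0.0]]
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0][1])  # 0.1: the estimate moved a step toward the received reward
```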



If you want to test this demo on your Xavier, please visit our jetson-reinforcement wiki page.
If you are new to the Xavier or are planning on getting one, please visit our Jetson Xavier wiki page.

Monday, November 12, 2018

Working with CUDA on the Jetson Xavier

A lot of CUDA samples are included; one of these samples is imageDenoising. This sample demonstrates two adaptive image denoising techniques, KNN and NLM, based on computing both the geometric and color distance between texels.
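The weighting idea behind both techniques can be sketched in a few lines: each sample is replaced by an average of its neighbours, weighted by how close they are both geometrically and in value. This toy 1-D version (parameter values are arbitrary assumptions) only illustrates the principle the CUDA sample applies to texels on the GPU:

```python
import math

def knn_denoise(signal, radius=2, h_geom=1.0, h_color=0.1):
    """Weighted-average denoise: weights decay with geometric and color distance."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            geom = (i - j) ** 2                 # geometric (spatial) distance
            color = (center - signal[j]) ** 2   # color (value) distance
            w = math.exp(-(geom / h_geom + color / h_color))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [0.5, 0.52, 0.9, 0.51, 0.49]   # one outlier spike
clean = knn_denoise(noisy)
print(clean[2] < 0.9)  # True: the spike is pulled toward its neighbours
```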



Check out the samples included with CUDA and what they do in CUDA Samples.

If you are new to the Xavier or are planning on getting one, please visit our Jetson Xavier wiki page.

Thursday, November 8, 2018

Tuning Jetson Xavier's Performance

The JetPack provides the tegrastats utility, which reports memory, processor, and GPU usage, as well as power consumption and temperature, for Tegra-based devices.
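For example, to print a report once per second (the interval is given in milliseconds):

```shell
# Report memory, CPU/GPU usage, power, and temperature every 1000 ms.
sudo tegrastats --interval 1000
```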




The JetPack also provides a command-line tool called nvpmodel, which can adjust performance for a given power budget. It provides power budgets of 10 W, 15 W, and 30 W, plus an unconstrained mode for maximum performance. It modifies the number of online CPUs; the maximum frequencies of the CPU, GPU, DLA, and PVA; and the number of online PVA cores. Values set by nvpmodel persist across power cycles.


Finally, the jetson_clocks.sh script provides the best performance for the current nvpmodel setting by raising the clock frequencies to their maximums and disabling dynamic frequency scaling.
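A typical tuning session might look like the following sketch. The mode IDs come from /etc/nvpmodel.conf, and the script's location varies by L4T release, so the path is an assumption:

```shell
# Query the active power mode.
sudo nvpmodel -q

# Select the unconstrained maximum-performance mode (mode 0, MAXN, on Xavier).
sudo nvpmodel -m 0

# Pin clocks to their maximums for the selected mode (path may vary).
sudo ./jetson_clocks.sh
```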


For examples of how to use these utilities, please visit our performance tuning wiki page.

If you are new to the Xavier or are planning on getting one, please visit our Jetson Xavier wiki page.

Tuesday, November 6, 2018

RidgeRun - GStreamer Deep Learning inference plugin: GstInference

GstInference is an open-source project from RidgeRun Engineering that provides a framework for integrating deep learning inference into GStreamer.

Check out the presentation from the RidgeRun Engineering team about our latest development on GstInference at the Edinburgh GStreamer Conference 2018.

GstInference: A GStreamer Deep Learning Framework : https://gstconf.ubicast.tv/videos/gstinference-a-gstreamer-deep-learning-framework/



For more information, please contact us at support@ridgerun.com.









Deploying Deep Learning on the Jetson Xavier using the Deep Learning Accelerator

jetson-inference is a training guide for inference and deep learning on Jetson platforms. It uses NVIDIA TensorRT to deploy neural networks efficiently. The "dev" branch of the repository is specifically oriented toward the Jetson Xavier, since it uses the Deep Learning Accelerator (DLA) integration with TensorRT 5.


With jetson-inference you can deploy deep learning examples on the Xavier in a matter of minutes. Some of the example applications are shown below.


ImageNet is a classification network trained on a database of 1000 object classes. The input is an image, and the output is the most likely class and the probability that the image belongs to that class.



Image recognition networks output class probabilities corresponding to the entire input image. Detection networks, on the other hand, find where in the image those objects are located. DetectNet accepts an input image and outputs the class and coordinates of the detected bounding boxes.



For more examples and a tutorial on how to get jetson-inference running on your Xavier, please visit our jetson-inference wiki page.

If you are new to the Xavier or are planning on getting one, please visit our Jetson Xavier wiki page.



Thursday, September 6, 2018

RidgeRun support for The 2019 FIRST® Robotics Competition


RidgeRun believes supporting education is the right thing to do, and because of this, RidgeRun is proud to help the teams in the FIRST Robotics Competition [1] with software for embedded systems to improve the acquisition, processing, and analysis of audio and video signals!


For more information, please contact us at support@ridgerun.com.

Thursday, April 12, 2018

Object tracking on Jetson TX1/TX2 using GstPTZR

RidgeRun's new GstPTZR element allows you to crop, zoom, and rotate a video stream, simulating the behavior of a pan/tilt/zoom/rotate (PTZR) video camera.

These features, paired with information obtained from a jetson-inference pre-built model, can be used to provide a video stream focused on the detected object.

Captured video (left) is provided to a jetson-inference model. The model detects a person and provides its location. GstPTZR is used to crop the area of interest as a separate stream.
Using the GstPTZR element in an existing GStreamer pipeline is easy and provides a simple way to focus on the important parts of the video stream.

Captured video (left) and the cropped version obtained with GstPTZR (right), which allows for detection of an object in a specific area of the video.
RidgeRun's GstPTZR is highly customizable and can be used for a wide variety of applications, both paired with detection models and in other specific use cases.
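As an illustration of how such an element might be dropped into a pipeline, consider the sketch below. The element name and its properties here are hypothetical assumptions for illustration only, not GstPTZR's documented interface; consult the product page for the actual usage:

```shell
# Hypothetical sketch only: element name and pan/tilt/zoom property names are
# assumptions, not the documented GstPTZR interface.
gst-launch-1.0 videotestsrc ! \
  ptzr zoom=2.0 pan=0.25 tilt=0.25 ! \
  videoconvert ! xvimagesink
```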

For more information, visit www.ridgerun.com/gstptzr, and contact us at support@ridgerun.com to request an evaluation version for your application.

The examples in the pictures above were created using models from the Jetson Inference guide.