Streaming data from 4 cameras with a small carrier board: rapid prototyping


Embedded vision components have always been popular and are used in numerous applications. What all these applications have in common is the need to integrate more and more functionality into a small space, and it is often advantageous to have these systems make decisions at the edge, especially in inspection, mobile robotics, transportation systems, and various types of unmanned vehicles. To support such systems, including rapid prototyping, Teledyne FLIR offers the Quartet™ embedded solution for TX2. This custom carrier board easily integrates up to four USB3 machine vision cameras at full bandwidth, and it includes the Nvidia Jetson deep learning hardware accelerator, pre-integrated with Teledyne FLIR’s Spinnaker® SDK.

Figure 1: Prototype setup for all four applications

In this hands-on article, to highlight what Quartet is capable of, we describe the steps to develop an intelligent transportation system (ITS)-inspired prototype running four applications simultaneously, three of which employ deep learning:

• Application 1: Recognizing License Plates Using Deep Learning
• Application 2: Vehicle Type Classification Using Deep Learning
• Application 3: Vehicle Color Classification Using Deep Learning
• Application 4: Looking through the windshield (through reflections and glare)


Shopping List: Hardware and Software Components

1) SOM for processing:

The new Teledyne FLIR Quartet carrier board for TX2 includes:

• Four TF38 connectors, each with a dedicated USB3 controller
• Nvidia Jetson TX2 module
• Teledyne FLIR’s powerful and easy-to-use Spinnaker SDK pre-installed, ensuring plug-and-play compatibility with Teledyne FLIR Blackfly S board-level cameras
• Nvidia Jetson deep learning hardware accelerator, enabling a complete decision-making system on a single compact board

Figure 2: Quartet embedded solution with TX2 for 4 Blackfly S cameras and 4 FPC cables.

2) Cameras and cables

• Three standard Teledyne FLIR Blackfly S USB3 board-level cameras, offering the same rich feature set and latest CMOS sensors as the cased versions, for seamless integration with Quartet
• One custom camera: a Blackfly S USB3 board-level camera with the Sony IMX250MZR polarization sensor
• Cables: TF38 FPC cables, which carry both power and data on a single cable, saving space

Figure 3: Blackfly S Board Level Camera with FPC Cable

3) Lighting: LED lights provide ample illumination to avoid motion blur on license plates.

Application 1: Recognizing License Plates Using Deep Learning

Development time: 2-3 weeks, mostly spent making it more robust and faster

Training images: Included with LPDNet

For license plate recognition, we deployed Nvidia’s off-the-shelf license plate detection (LPDNet) deep learning model to locate license plates. To recognize the letters and numbers, we used the open-source Tesseract OCR engine. The camera is a Blackfly S board-level 8.9-megapixel color camera (BFS-U3-88S6C-BD) with a Sony IMX267 sensor. We limit the detection area to speed up processing and use tracking to improve robustness. The output includes the bounding box of the license plate and the corresponding license plate characters.
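Raw OCR output from an engine like Tesseract typically needs cleanup before it can be reported as a plate number. The sketch below is a hypothetical post-processing step (the filtering rules and length limits are our assumptions, not the prototype's exact logic): it strips characters that cannot appear on a plate and rejects implausible reads.

```python
import re

# Keep only characters that can legally appear on a plate.
PLATE_CHARS = re.compile(r"[^A-Z0-9]")

def clean_plate_text(raw_ocr_text: str, min_len: int = 4, max_len: int = 8):
    """Normalize raw OCR text to a plausible plate string, or None."""
    text = PLATE_CHARS.sub("", raw_ocr_text.upper())
    if min_len <= len(text) <= max_len:
        return text
    return None  # reject implausible reads rather than report garbage

print(clean_plate_text(" ab-123 cd\n"))  # -> AB123CD
print(clean_plate_text("|"))             # -> None
```

Rejecting out-of-range reads, rather than reporting them, pairs naturally with the tracking mentioned above: a plate seen across several frames can simply be re-read on the next frame.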

Figure 4: Output showing license plate bounding boxes and the recognized license plate characters.

Application 2: Vehicle Type Classification Using Deep Learning

Development time: ~12 hours including image acquisition and annotation

Training images: ~300

For vehicle type classification, we used transfer learning to train our own deep learning object detection model on three toy cars (an SUV, a sedan, and a truck). We acquired about 300 training images of this setup, taken at various distances and angles. The camera is a Blackfly S board-level 5-megapixel color camera (BFS-U3-51S5C-BD) with a Sony IMX250 sensor. Annotating the bounding boxes of the toy cars took about 3 hours. Transfer learning to train our SSD MobileNet object detection model took about half a day on an Nvidia GTX 1080 Ti GPU. With its GPU hardware accelerator, the Jetson TX2 module efficiently performs deep learning inference, outputting the bounding box of each car along with the corresponding vehicle type.
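An SSD-style detector typically emits normalized boxes plus class ids and scores, which the application then filters and converts to pixel coordinates. The following is a minimal sketch of that decoding step; the label map, box ordering ([ymin, xmin, ymax, xmax]), and threshold are assumptions for illustration, not the prototype's exact values.

```python
# Assumed label map for the three toy-car classes.
CLASS_NAMES = {1: "SUV", 2: "sedan", 3: "truck"}

def decode_detections(boxes, class_ids, scores, width, height, threshold=0.5):
    """Filter raw detector output and convert boxes to pixel coordinates."""
    results = []
    for box, cid, score in zip(boxes, class_ids, scores):
        if score < threshold or cid not in CLASS_NAMES:
            continue
        ymin, xmin, ymax, xmax = box  # normalized [0, 1] coordinates
        results.append({
            "label": CLASS_NAMES[cid],
            "score": score,
            # scale normalized coordinates to integer pixel coordinates
            "bbox": (int(xmin * width), int(ymin * height),
                     int(xmax * width), int(ymax * height)),
        })
    return results

dets = decode_detections(
    boxes=[(0.1, 0.2, 0.5, 0.6), (0.0, 0.0, 0.1, 0.1)],
    class_ids=[2, 3],
    scores=[0.92, 0.31],
    width=1000, height=800,
)
print(dets)  # one confident "sedan" detection; the 0.31 truck is filtered out
```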

Figure 5: Output showing bounding boxes, predicted vehicle types, and confidence scores

Application 3: Vehicle Color Classification Using Deep Learning

Development time: reused the model from the vehicle type application, plus an additional 2 days for color classification, integration, and testing

Training images: the same ~300 images from the vehicle type application were reused

For vehicle color classification, we ran the same deep learning object detection model as above to detect cars, then performed image analysis within the bounding boxes to classify their colors. The output includes the bounding box of each car and the corresponding vehicle color. The camera is a Blackfly S board-level 3-megapixel color camera (BFS-U3-32S4C-BD) with a Sony IMX252 sensor.
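One simple way to implement "image analysis on the bounding box" is to average the pixel values inside the detected box and pick the nearest reference color. This is a minimal sketch under that assumption; the reference palette and nearest-color rule are illustrative, not the prototype's exact method.

```python
# Assumed reference palette of (R, G, B) values.
REFERENCE_COLORS = {
    "red":   (200, 40, 40),
    "blue":  (40, 60, 200),
    "white": (230, 230, 230),
    "black": (25, 25, 25),
}

def mean_rgb(image, bbox):
    """image: rows of (r, g, b) pixels; bbox: (x0, y0, x1, y1), exclusive."""
    x0, y0, x1, y1 = bbox
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify_color(image, bbox):
    """Return the reference color nearest (in RGB) to the box's mean color."""
    mean = mean_rgb(image, bbox)
    return min(REFERENCE_COLORS,
               key=lambda name: sum((v - w) ** 2 for v, w in
                                    zip(mean, REFERENCE_COLORS[name])))

# 4x4 synthetic "image" of a mostly red patch
img = [[(210, 35, 45)] * 4 for _ in range(4)]
print(classify_color(img, (0, 0, 4, 4)))  # -> red
```

In practice a hue-based comparison (e.g. in HSV space) is more robust to lighting than raw RGB distance, but the structure of the step is the same.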

Figure 6: Output showing bounding boxes, predicted vehicle colors, and confidence scores

Application 4: Looking through the windshield (through reflections and glare)

Glare reduction is critical for traffic-related applications, such as viewing HOV lanes through windshields, checking seat belt compliance, and even checking for cell phone use while driving. To do this, we custom-built a camera combining a Blackfly S USB3 board-level camera with the 5-megapixel Sony IMX250MZR polarization sensor. This board-level polarization camera is not a standard product, but Teledyne FLIR can easily swap in a different sensor to provide custom camera options. We simply stream the camera images through Teledyne FLIR’s SpinView GUI, which offers various polarization algorithm options, such as a four-channel mode and a glare reduction mode, demonstrating the glare reduction effect on a stationary toy car.

Figure 7: The Spinnaker SDK’s SpinView GUI provides various polarization algorithm options, such as four-channel mode and glare reduction mode, demonstrating the glare reduction effect on a stationary toy car. Four-channel mode displays four images corresponding to the four polarization angles.
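The IMX250MZR captures four polarization angles (0°, 45°, 90°, 135°) in each pixel block, and a standard way to quantify glare from those four channels is the degree of linear polarization (DoLP), computed from the Stokes parameters. The sketch below shows the per-pixel math; Spinnaker's built-in polarization algorithms perform this kind of processing across the whole image.

```python
import math

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP from four polarization-angle intensities at one pixel."""
    s0 = i0 + i90        # Stokes S0: total intensity
    s1 = i0 - i90        # Stokes S1: 0째/90째 difference
    s2 = i45 - i135      # Stokes S2: 45째/135째 difference
    if s0 == 0:
        return 0.0
    return math.sqrt(s1 * s1 + s2 * s2) / s0  # 0 = unpolarized, 1 = fully

# Strongly polarized light (e.g. windshield glare) gives a DoLP near 1:
print(degree_of_linear_polarization(100, 50, 0, 50))  # -> 1.0
# Unpolarized light gives a DoLP of 0:
print(degree_of_linear_polarization(50, 50, 50, 50))  # -> 0.0
```

High-DoLP pixels mark specular reflections, which is what makes masking or suppressing windshield glare possible.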

Overall system optimization


While the four prototypes work well independently, we noted that overall performance was rather poor when all the deep learning models ran simultaneously. Nvidia’s TensorRT SDK provides a deep learning inference optimizer and runtime for Nvidia hardware such as the Jetson TX2 module. We optimized our deep learning models with the TensorRT SDK, resulting in a roughly 10x performance improvement. On the hardware side, we attached a heatsink to the TX2 module to avoid overheating, as the module gets quite hot when all applications are running. Ultimately, we achieved good frame rates running all four applications together: 14 fps for vehicle type classification, 9 fps for vehicle color classification, 4 fps for license plate recognition, and 8 fps for the polarization camera.

We developed this prototype in a relatively short time thanks to the ease of use and reliability of the Quartet embedded solution and the Blackfly S board-level cameras. The Spinnaker SDK, pre-installed on the TX2 module, ensures plug-and-play compatibility with all Blackfly S board-level cameras, which deliver reliable transmission over the TF38 connector at full USB3 bandwidth. Nvidia provides several tools to facilitate development and optimization on the TX2 module. Quartet is now available online at flir.com and through our offices and worldwide network of resellers.
