For industrial applications using robots and automated guided vehicles, the interaction between infrared light and the corresponding sensors is critical. Manufacturers have several different options for capturing real-world information in a 3D system.
A giant leap is taking place in mobility, whether in the automotive sector, where autonomous driving solutions are being developed, or in industrial applications using robotics and automated guided vehicles. The various components in the overall system must cooperate with and complement each other. The main goal is to create a seamless 3D view of the vehicle's surroundings, use this image to calculate object distances, and initiate the vehicle's next move with the help of special algorithms. In practice, three sensor technologies are used at the same time: LiDAR, radar, and cameras. Each of these sensors has its own advantages depending on the specific application scenario. Combining these advantages with redundant data can greatly improve safety. The better these aspects are coordinated, the better a self-driving vehicle will be able to navigate its environment.
01 Direct Time of Flight (dToF)
In the time-of-flight approach, system manufacturers use the speed of light to generate depth information. In short, directed light pulses are fired into the environment; when a pulse hits an object, it is reflected and recorded by a detector near the light source. By measuring the time it takes for the beam to reach the object and return, the object distance can be determined; in the dToF method, this is done for each individual pixel. The received signals are then processed to trigger corresponding actions, such as vehicle evasion maneuvers to avoid collisions with pedestrians or obstacles. This method is called direct time-of-flight (dToF) because it measures the exact "time of flight" of the beam. LiDAR systems for autonomous vehicles are a typical example of dToF applications.
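To make the arithmetic concrete, below is a minimal sketch of the dToF range equation in Python; the 400 ns round-trip time is a hypothetical value chosen only for illustration, not taken from any specific sensor.

```python
# Minimal sketch of the dToF range equation: a measured round-trip
# pulse time is converted to a one-way distance via the speed of light.
# The 400 ns timing value below is illustrative only.

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(round_trip_time_s: float) -> float:
    """Return object distance in meters from a round-trip pulse time."""
    # The pulse travels to the object and back, so halve the path length.
    return C * round_trip_time_s / 2.0

# A reflection detected 400 ns after emission corresponds to ~60 m.
print(f"{dtof_distance(400e-9):.1f} m")  # -> 60.0 m
```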
02 Indirect Time of Flight (iToF)
The indirect time-of-flight (iToF) approach is similar, but with one notable difference. Illumination from a light source (usually an infrared VCSEL) is spread by a diffuser, and pulses (50% duty cycle) are emitted into a defined field of view.
In the downstream system, a stored reference signal gates the detector for a defined period; as long as the light encounters no obstacle, the received signal matches this reference. If an object interrupts the light, the system can determine the depth information for each defined pixel of the detector from the resulting phase shift and the time delay of the pulse train.
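A minimal sketch of this phase-based depth recovery is shown below, assuming the common four-bucket demodulation scheme and a hypothetical 20 MHz modulation frequency; neither value refers to a specific ams Osram sensor.

```python
import math

# Minimal sketch of iToF depth recovery from the phase shift between
# the emitted and received modulation signals. The four phase-stepped
# correlation samples (0/90/180/270 degrees) and the 20 MHz modulation
# frequency are common textbook choices, not vendor-specific values.

C = 299_792_458.0  # speed of light in m/s

def itof_depth(a0, a90, a180, a270, f_mod_hz):
    """Depth per pixel from four phase-stepped correlation samples."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    # The round trip covers 2*d, so d = c * phase / (4 * pi * f_mod).
    return C * phase / (4 * math.pi * f_mod_hz)

# Unambiguous range at 20 MHz is c / (2 * f_mod), i.e. about 7.5 m.
print(f"{itof_depth(0.2, 0.9, 0.8, 0.1, 20e6):.2f} m")
```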
03 Active Stereo Vision (ASV)
In the “active stereo vision” approach, an infrared light source (usually a VCSEL or IRED) illuminates the scene with a pattern, and two infrared cameras record the image in stereo.
By comparing the two images, downstream software can calculate the required depth information. The projected pattern supports the depth calculation even on objects with little texture, such as walls, floors, and tables. This approach is ideal for close-range, high-resolution 3D sensing on robots and automated guided vehicles (AGVs) for obstacle avoidance. It is also used in the optical inspection of production-line parts, security cameras, and surveillance. ams Osram has a portfolio of dot projectors that provide high-contrast dot-pattern illumination in the near infrared (NIR), making the system robust against ambient light. The relatively simple system design of active stereo vision has a positive impact on overall system cost. However, the two cameras must be separated by a baseline, so this method requires considerable installation space.
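The depth calculation itself reduces to classical stereo triangulation. The sketch below shows the disparity-to-depth conversion under a rectified pinhole camera model; the focal length and baseline values are hypothetical.

```python
# Minimal sketch of the triangulation step in active stereo vision:
# the per-pixel disparity between the two infrared cameras is converted
# to depth. The focal length and baseline below are illustrative.

def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth in meters from the pixel disparity of a matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    # Z = f * B / d: a larger disparity means the object is closer.
    return focal_px * baseline_m / disparity_px

# A 20-pixel disparity with an 800 px focal length and 8 cm baseline:
print(f"{stereo_depth(20, 800, 0.08):.2f} m")  # -> 3.20 m
```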
Driven by trends such as the increasing automation of industrial production and logistics, demand for the corresponding technologies is also rising. Depending on the technical approach and the end application, some light sources are more suitable than others.
For LiDAR, two main system types can be used to acquire 3D point clouds: flash LiDAR and scanning LiDAR. Scanning LiDAR systems use a focused pulsed laser beam that is directed into a small solid angle by a mechanically rotating mirror or a microelectromechanical-systems (MEMS) mirror.
Concentrating the high-power pulsed laser beam into a small solid angle enables a much greater range for the same optical power compared to 3D flash systems. Edge-emitting laser (EEL) products can be selected for these system architectures. They emit high optical power from a small emission area in a compact package, resulting in excellent power and range, which is why EELs are already used in many solutions. For more than 15 years, ams Osram has been a leading LiDAR laser manufacturer, with more than 10 million chips in the field and none failing. As a semiconductor specialist in this area, ams Osram identified 905 nm as the common wavelength for today's LiDAR systems. Compared to systems at other wavelengths, the 905 nm solution offers excellent system efficiency, outstanding reliability, and attractive system cost.
Recently, another technology has been mentioned more and more frequently in connection with LiDAR: the VCSEL (vertical-cavity surface-emitting laser). VCSELs combine the attributes of two lighting technologies: the high power density and simple packaging of infrared LEDs, and the narrow spectral width and speed of lasers. The advantages of this technology, including excellent beam quality, simple design, and advances in miniaturization, are contributing to the growth of the VCSEL market. In general, VCSELs may require more installation space than EEL emitters, but they have advantages in certain applications. For example, the radiation properties of VCSELs make them particularly suitable for flash-based LiDAR systems and industrial applications such as robotics and autonomous mobile robots. In 3D flash LiDAR, the pulsed laser beam illuminates the entire relevant solid angle at once. To obtain a point cloud of a certain resolution, an n×m array of photodetectors (a photodetector array or a CMOS ToF imager) is required.
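To illustrate how such an n×m dToF array yields a point cloud, here is a minimal sketch that back-projects per-pixel round-trip times through a pinhole camera model; the array size and camera intrinsics are hypothetical, and for simplicity the measured range is treated as planar depth.

```python
import numpy as np

# Minimal sketch of point-cloud generation from an n x m dToF detector
# array in a flash LiDAR: each pixel's round-trip time gives a distance,
# which is back-projected into 3D via a pinhole model. The 4 x 4 array
# and the intrinsics (fx, fy, cx, cy) are illustrative values only.

C = 299_792_458.0  # speed of light in m/s

def point_cloud(tof_s: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Return an (n*m, 3) array of XYZ points from a ToF image."""
    n, m = tof_s.shape
    # Simplification: treat the one-way range as planar depth Z.
    depth = C * tof_s / 2.0
    v, u = np.mgrid[0:n, 0:m]          # per-pixel row/column indices
    x = (u - cx) / fx * depth          # back-project into camera frame
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 4 x 4 array: all returns at 100 ns -> all points at ~15 m.
tof = np.full((4, 4), 100e-9)
print(point_cloud(tof, fx=500.0, fy=500.0, cx=2.0, cy=2.0)[:2])
```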
Regardless of which system approach customers prefer, ams Osram covers all common approaches with an extremely broad portfolio, including infrared LEDs, VCSELs, and EELs. ams Osram is a global leader in VCSELs and EELs for LiDAR, and both product families offer excellent optical performance and efficiency. In addition, customers can choose from a variety of edge-emitter package designs to suit their systems, whether a TO metal can, a plastic package, or an SMT package with a peak output power of up to 120 W. In the VCSEL field, ams Osram offers a wide range of wavelengths (680 to 940 nm), power levels (7 mW to more than 60 W), and multiple beam angles. In addition to their small size, these products feature excellent reliability and leading-edge VCSEL technology for a wide range of applications.
About the Author:
Matthias Hönig
Matthias Hönig is the Global Marketing Manager for the Visualization and Laser business of the Osram Opto Semiconductors product line at ams, responsible for LiDAR applications in the automotive and industrial sectors.