Image Sensor Platform Will Accelerate Deployment of Automotive Safety Features

As interest in autonomous vehicles continues to grow, the road to autonomous driving is clearly marked by milestones such as the increasing deployment of advanced driver assistance systems (ADAS). Image sensors are the core of these camera-based systems, serving as the eyes of the vehicle. Rear-view cameras and 360-degree surround-view systems let the driver see behind and around the vehicle, making driving safer, while front-view camera systems sense what is ahead of the vehicle and enable automatic features that help prevent collisions. Regulations introduced by government agencies and safety ratings such as the New Car Assessment Program (NCAP) have fueled the rapid adoption of automotive cameras, prompting automakers to introduce these systems across their vehicle platforms, from luxury to mass-market models. As a result, the number of automotive cameras produced globally each year more than doubled from 47 million in 2013 to 110 million in 2017, and is expected to exceed 200 million in 2024, according to Japanese market research firm Techno Systems Research.

A 360-degree surround-view system uses four cameras mounted around the vehicle to help the driver park and maneuver at low speed. These cameras can also support features such as lane departure warning and blind spot detection, alerting the driver before a lane change becomes hazardous. Cameras are even beginning to replace mirrors in the form of camera monitor systems (CMS), which show the driver traditional side-mirror views on in-cabin displays without blind spots and, where regulations permit mirrorless operation, without external mirrors at all, a significant advantage for both fuel economy and industrial design. The front-view camera system, meanwhile, is used not only to display video to the driver but also to detect what is happening ahead of the vehicle, enabling additional safety and convenience features: automatic emergency braking brings the car to a stop when a sudden obstacle appears in its path, and adaptive cruise control assists the driver on the highway and is especially useful in heavy traffic.

Sensor Considerations

Although all of these applications use an image sensor as the core of the camera, they typically place different demands on parameters such as image quality, resolution, and sensor size. For example, a camera used to display an image to the driver (such as a rear-view camera) may require different image quality than a front-view camera used for perception, whose algorithms provide ADAS functions such as automatic emergency braking. Likewise, the resolution needed for a particular display format differs from that needed for computer vision: detection and recognition algorithms require a minimum number of pixels on an object to work reliably, and this translates directly into a sensor resolution requirement.
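The pixels-on-target requirement can be made concrete with a little geometry. The sketch below estimates how many horizontal pixels an object subtends for a given field of view, resolution, and distance; all of the numbers are illustrative assumptions, not specifications from any particular camera.

```python
import math

# Hypothetical front-camera geometry (illustrative assumptions only).
h_fov_deg = 60        # horizontal field of view in degrees
h_res_px = 1920       # horizontal sensor resolution in pixels
obj_width_m = 0.5     # object width, e.g. a pedestrian's torso
distance_m = 50       # distance from camera to object

# Width of the scene covered by the field of view at that distance.
scene_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg / 2))

# Fraction of the scene the object occupies, converted to pixels.
pixels_on_target = obj_width_m / scene_width_m * h_res_px
print(f"{pixels_on_target:.1f} px on target")
```

If a detection algorithm needs, say, 20 pixels across an object to classify it reliably, this kind of calculation shows whether a given resolution meets the requirement at the intended detection distance.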

In a front-view camera system, computer vision algorithms use the image sensor's output to detect pedestrians, vehicles, and other objects and to make decisions. These algorithms must be trained on image sets gathered over thousands of hours of test driving, a very expensive exercise. The images need to be collected with the same system that will ship in production, and the image quality must be tuned before the dataset is collected. Tuning image quality requires engineers to work closely with the sensor and control it with appropriate auto-exposure and auto-white-balance techniques so that it automatically outputs optimal images as scene conditions change. Computer vision algorithms trained on these captured images cannot then run on cameras built from image sensors with different characteristics or tuning, because they have been adapted to the profile of that particular dataset. A scalable image sensor platform offers the same performance and similar characteristics at different resolutions, greatly reducing the effort and cost for manufacturers working across multiple vehicle platforms, since both the engineering development work and the training image dataset can be reused.

Imaging Challenge

For any automotive imaging system, a key parameter is the image sensor's ability to capture a wide range of scene content, that is, its dynamic range. Dynamic range measures how much scene contrast a sensor can capture, or simply how well it handles very bright areas and deep shadows in the same scene. This is a common challenge for automotive cameras; consider a car driving through an underpass in the early evening with the sun low on the horizon. If the sensor's dynamic range is too low, critical details in the scene may be lost, causing the driver or the computer vision algorithms to miss objects and creating an unsafe situation.

Another issue manufacturers now face is the growing use of light-emitting diodes (LEDs) in traffic signs as well as in automotive headlamps and taillights, which adds to the challenges of automotive imaging. LED lighting is usually controlled with pulse-width modulation (PWM): the LEDs switch on and off at varying duty cycles to control light intensity and save power, too fast for the human eye to perceive. But while our eyes cannot see this flicker, image sensors typically do capture the LEDs switching on and off, so the sensor output shows a flickering effect when displayed as video. This is undesirable and can create safety concerns, and to complicate matters further, there is no standard for the pulse frequency of LEDs used in vehicles or traffic signs. For front-view cameras running computer vision algorithms, the flicker means traffic signs can be misinterpreted or missed entirely, while for viewing applications such as CMS or rear-view cameras it can distract and confuse the driver.

With this in mind, imagine a scene containing a bright source, such as the sun low on the horizon, and some darker details, such as a pedestrian at the roadside in the shade of a nearby tree. To capture this high-dynamic-range scene, a conventional image sensor compensates for the bright parts by using short exposure times to avoid saturation. If a pulsed light source, such as another car's LED headlights, is present in the same scene, the short exposure used to capture the bright details may fall entirely within the interval when the headlights are off. Although the sensor then combines multiple exposures to output a high-dynamic-range image, in bright daylight some or all of those exposures will miss the LED's on-period, so the headlamps appear to flicker. Conversely, if the exposure time is extended to catch the LEDs while they are on, the brightly lit parts of the scene are overexposed, greatly reducing dynamic range, losing detail, and ultimately producing unacceptable image quality.
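A toy simulation makes the trade-off concrete. The model below assumes a 90 Hz PWM LED with a 10% duty cycle (illustrative values only; as noted above, real LED drive frequencies are unstandardized) and estimates how often a randomly phased exposure of a given length overlaps the LED's on-pulse.

```python
import random

# Toy model: an exposure window sampling a PWM-driven LED.
# 90 Hz and 10% duty cycle are assumed illustrative values.
def led_captured(exposure_ms, pwm_hz=90, duty=0.10, trials=10_000):
    """Fraction of randomly phased exposures that overlap the LED's on-pulse."""
    period = 1000 / pwm_hz      # PWM period in ms (~11.1 ms at 90 Hz)
    on_time = duty * period     # on-pulse sits at the start of each period
    hits = 0
    for _ in range(trials):
        phase = random.uniform(0, period)  # exposure start within one period
        # The exposure [phase, phase + exposure_ms) misses the LED only if it
        # fits entirely inside a single off-interval.
        if phase < on_time or phase + exposure_ms > period:
            hits += 1
    return hits / trials

print(f"0.1 ms exposure:  {led_captured(0.1):.0%} of frames see the LED")
print(f"11.2 ms exposure: {led_captured(11.2):.0%} of frames see the LED")
```

With a very short exposure, only about one frame in ten catches the pulse, producing visible flicker in video, while an exposure longer than one PWM period always captures it.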

High dynamic range and reduced LED flicker

The solution is an image sensor that can achieve greater dynamic range within a single frame, so that bright areas can be captured with exposure times long enough to span a pulsed light source's on-period without overexposing the scene. ON Semiconductor's Hayabusa™ automotive image sensor platform addresses this problem with an innovative pixel technology that enables super-exposure. Its design and fabrication process store more charge within the pixel, allowing exposure times five times longer than conventional automotive image sensors of the same size before saturation is reached. With this pixel technology, Hayabusa image sensors achieve high-dynamic-range imaging of over 120 dB while suppressing LED flicker.

Part of the solution lies in the construction of the Hayabusa super-exposure pixel. Through a new design and fabrication process, each backside-illuminated 3.0-micron pixel can store more than 100,000 electrons of charge from incident light, far exceeding the roughly 20,000 electrons of a conventional CMOS image sensor pixel of the same size. A single super-exposure therefore captures 95 dB of dynamic range, covering most scenes, and the Hayabusa sensor can add a second, very short exposure that extends the dynamic range to over 120 dB by capturing the brightest parts of the scene.
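The relationship between full-well capacity and dynamic range can be sketched as the ratio of the largest to the smallest detectable signal, expressed in decibels. The noise floor and exposure ratio below are illustrative assumptions chosen to reproduce the figures quoted in the text, not published Hayabusa specifications.

```python
import math

# Assumed read-noise floor in electrons RMS (illustrative, not a datasheet value).
READ_NOISE_E = 1.8

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range as 20*log10(full-well capacity / noise floor)."""
    return 20 * math.log10(full_well_e / noise_floor_e)

conventional = dynamic_range_db(20_000, READ_NOISE_E)     # typical 3.0 um pixel
super_exposure = dynamic_range_db(100_000, READ_NOISE_E)  # super-exposure pixel

# A second, much shorter exposure extends range at the bright end by the
# exposure ratio; an assumed ~18x ratio pushes the combined range past 120 dB.
exposure_ratio = 18
combined = super_exposure + 20 * math.log10(exposure_ratio)

print(f"conventional:   {conventional:.1f} dB")
print(f"super-exposure: {super_exposure:.1f} dB")
print(f"combined HDR:   {combined:.1f} dB")
```

Under these assumptions the single super-exposure lands at about 95 dB and the two-exposure combination at about 120 dB, matching the figures above.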

To suppress LED flicker while maintaining high-dynamic-range output, the super-exposure can be set just long enough to span one full period of the lowest-frequency pulsed LED expected in the scene: for 90 Hz, this corresponds to an exposure time of about 11 ms, during which the LED may be on for only a tenth of that time or less. The pixel's 100,000-electron capacity allows such a long exposure without losing detail in bright areas, while still capturing the LED's on-pulse. A short second exposure, combined with a proprietary on-chip algorithm, then extends the dynamic range while preserving the regions of the scene containing pulsed LEDs that the super-exposure captured, preventing them from being lost in the combination. As a result, the sensor captures more than 120 dB of scene dynamic range while correctly rendering pulsed LEDs that would appear to flicker with a conventional sensor. This real-world performance makes the Hayabusa platform well suited to automotive cameras that require both high dynamic range and LED flicker mitigation, and because all products in the platform share the same performance, manufacturers can switch between devices and reuse most of the engineering work and the algorithm training data developed on any one sensor.
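The 11 ms figure follows directly from the period of the slowest expected PWM frequency, since an exposure spanning one full period is guaranteed to integrate at least one on-pulse regardless of phase. The 90 Hz worst case is the assumption used in the text, not an industry standard.

```python
# Minimum exposure to guarantee capturing one on-pulse of the slowest
# expected PWM LED (90 Hz assumed, per the text; no standard exists).
lowest_pwm_hz = 90
min_exposure_ms = 1000 / lowest_pwm_hz
print(f"minimum flicker-free exposure: {min_exposure_ms:.1f} ms")
```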

Summary

The Hayabusa platform’s automotive-compliant image sensors cover resolutions from 1 to 5 megapixels, are scalable, and provide manufacturers with configuration options for a variety of different applications. The platform’s first device, the AR0233AT, is a 2.6-megapixel sensor with both high dynamic range and LED flicker suppression, producing 1080p video at 60 frames per second.

An image sensor platform thus addresses two of the main technical challenges in automotive imaging while also solving a practical problem for manufacturers. Because the platform delivers consistent performance and characteristics across a range of devices, developers can leverage thousands of hours of scene data to train algorithms once and then deploy similar ADAS functions on different vehicles, using the image sensor best suited to each application. This lets them bring high-end, high-resolution systems and cost-effective lower-resolution systems to different vehicle platforms with minimal effort. The capabilities of the Hayabusa image sensor platform will significantly help manufacturers advance their ADAS offerings, bringing more choice to consumers and, more importantly, equipping more drivers with systems that enhance driver and pedestrian safety on the road.
