This article originally appeared on insight.tech.
When driving a vehicle, a lot rides on the ability to see clearly. No matter the ambient conditions—harsh glare, low light, or rain—there’s no room for error when navigating around other vehicles, pedestrians, or obstacles on the road. Thanks to advances in technology, vision solutions embedded in advanced driver assistance systems (ADAS) help human drivers detect these objects.
The vision system in an ADAS-equipped vehicle includes a set of cameras that streams real-time video from the road and the inside of the vehicle. A computer captures the frames from this video stream and feeds them to the vision processor to analyze.
Challenges in Automotive Vision Systems
Despite the important role that vision systems play in ADAS, autonomous driving, and electric vehicles, the industry lacks a consistent set of standards to evaluate them. In addition, traditional vision systems struggle in difficult environmental conditions such as glare, rain, and low light. Given all that’s riding on effective vision in vehicles, automotive cameras need thorough testing in labs and in production before they’re packaged into ADAS solutions. Such testing is a job for frame grabbers, says Po Yuan, CEO and Founder of EyeCloud, a provider of image processing systems.
In a moving vehicle, a computer captures still frames from video to hand over to the vision processor. The frame grabber executes the same functionality in research and development settings and in preproduction testing of cameras. “The camera is a separate module that is being produced and eventually assembled onto a vehicle. But before this happens, it needs to be calibrated, tested, and quality-controlled,” Yuan says.
Frame Grabber Use Cases
A frame grabber evaluates camera functionality in the lab environment by helping test whether the camera can deliver clear images in low light or other edge conditions.
In the production stage, manufacturers also need to calibrate cameras, making sure they’re in focus and produce undistorted images. Here, too, frame grabbers help. “The manufacturers connect the camera to our frame grabber and assess the images so they can adjust the camera interactively,” Yuan says.
Burn-in testing in factories to see how cameras perform under continuous streaming over a long period of time is yet another use case for frame grabbers. Cameras run for up to 144 hours at a time, and frame grabbers verify that the cameras capture images reliably, without frame loss. “If there is frame loss, the camera will be disqualified,” Yuan says.
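The frame-loss check at the heart of burn-in testing can be illustrated with a small sketch. Many image sensors embed a rolling frame counter in each frame’s metadata; a gap in that sequence reveals dropped frames. The function name, counter source, and wrap value below are illustrative assumptions, not details of EyeCloud’s product.

```python
# Hypothetical sketch of burn-in frame-loss detection: a gap in the
# per-frame hardware counter means frames were dropped in between.

def count_dropped_frames(frame_counters, wrap=65536):
    """Count missing frames in a stream of per-frame counter values.

    frame_counters: counter values read from successive frames' metadata.
    wrap: value at which the hardware counter rolls over to zero.
    """
    dropped = 0
    for prev, curr in zip(frame_counters, frame_counters[1:]):
        gap = (curr - prev) % wrap
        # A healthy stream increments by exactly 1; anything larger
        # means (gap - 1) frames were lost in between.
        dropped += gap - 1
    return dropped

# Example: the counter jumps from 4 to 7, so frames 5 and 6 were lost.
print(count_dropped_frames([1, 2, 3, 4, 7, 8]))  # 2
```

Over a 144-hour run, a production station would accumulate this count continuously and disqualify any camera whose total is nonzero.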
The frame grabber can also help with real-world data collection for algorithm development. In this case, the cameras mount on a car and the frame grabber captures data such as road signs, pedestrians, and bicycles synchronously across cameras. “Road sign detection and pedestrian detection algorithms need huge amounts of data. In the AI world, data is key and our frame grabbers help with data collection,” Yuan says.
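Synchronous capture across cameras usually comes down to timestamp alignment: frames from different cameras are grouped when their capture times fall within a small tolerance window. The sketch below is a minimal illustration of that idea; the function name, data layout, and 5 ms tolerance are assumptions for demonstration.

```python
# Hypothetical sketch of timestamp-based frame matching between two
# camera streams. Each stream is a time-sorted list of
# (timestamp_ms, frame_id) tuples, as a frame grabber might record.

def match_frames(cam_a, cam_b, tolerance_ms=5.0):
    """Pair frames whose timestamps differ by at most tolerance_ms."""
    pairs = []
    i = j = 0
    while i < len(cam_a) and j < len(cam_b):
        ts_a, id_a = cam_a[i]
        ts_b, id_b = cam_b[j]
        if abs(ts_a - ts_b) <= tolerance_ms:
            pairs.append((id_a, id_b))
            i += 1
            j += 1
        elif ts_a < ts_b:
            i += 1  # this cam_a frame has no close partner; skip it
        else:
            j += 1
    return pairs

# Two ~30 fps streams with a slight offset: frames within 5 ms pair up.
a = [(0.0, "a0"), (33.3, "a1"), (66.6, "a2")]
b = [(2.1, "b0"), (35.0, "b1"), (80.0, "b2")]
print(match_frames(a, b))  # [('a0', 'b0'), ('a1', 'b1')]
```

Matched pairs like these are what make multi-camera training data usable: a pedestrian seen at the same instant from two viewpoints can be labeled consistently.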
Frame Grabber Real-World Requirements
While frame grabbers provide a lot of utility in production phases of automotive cameras, they must also tackle a few challenges.
For one thing, they need to be portable so they can be easily used for data collection and testing. Second, they need to synchronize with multiple cameras to simulate a real-world vehicle setting. Most ADAS solutions have a few cameras pointed at the road and inward, into the car.
Frame grabbers also need to keep up with advances in automotive camera technology. “Cameras are getting higher in resolution and higher frame rate and depth, all of which demand a higher bandwidth for the frame grabber to work with,” Yuan says.
The ECFG series from EyeCloud meets these requirements with modular circuits that can support 4-16 channels of video at a time. The series also handles higher data bandwidth requirements that are around the corner. “We make sure we understand where the industry is going and design our solution to meet those evolving requirements,” Yuan says.
Future Developments in Automotive Cameras
Part of that future-forward direction lies in other kinds of cameras, including infrared ones that can detect objects even in edge cases. SWIR (short-wave infrared) is also a contender. That is why multi-spectral vision systems are gaining ground, and why the ECFG series from EyeCloud is designed to accommodate them as well.
EyeCloud is also working to make the frame grabber more intelligent with edge AI. “Intel processors’ system-on-a-chip format facilitates edge AI applications because they combine image processing, the neural compute engine, as well as a CPU in one chip,” Yuan says. An AI-enabled frame grabber can be applied in robotics, surveillance, and a variety of other use cases. Instead of routing every frame from video streams to the vision processor, an intelligent frame grabber can selectively pick only the ones with relevant information. The vision processor doesn’t need endless images of the same paved road, for example, but a frame in which a moving pedestrian or animal appears is far more useful.
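A simple way to picture this selective capture is change detection: keep a frame only when it differs enough from the last frame that was kept. The sketch below uses flat lists of pixel intensities and an arbitrary threshold; a real edge AI frame grabber would run a detection model on sensor images, so everything here is an illustrative assumption.

```python
# Minimal sketch of selective frame capture via change detection:
# a frame is kept only if its mean absolute difference from the
# previously kept frame exceeds a threshold.

def select_frames(frames, threshold=10.0):
    """Return indices of frames that differ enough from the last
    kept frame. Each frame is a flat list of pixel intensities."""
    kept = [0]                 # always keep the first frame
    reference = frames[0]
    for idx, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(reference, frame)) / len(frame)
        if diff > threshold:
            kept.append(idx)
            reference = frame  # the scene changed; update the reference
    return kept

# Frames 0-2 show the same empty road; frame 3 contains a bright object.
road = [50] * 8
frames = [road, [51] * 8, [50] * 8, [50, 50, 200, 210, 205, 50, 50, 50]]
print(select_frames(frames))  # [0, 3]
```

Only frames 0 and 3 would be forwarded to the vision processor, which is the bandwidth and relevance win Yuan describes.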
“This is the roadmap that we’re going to take on to make the frame grabber intelligent and also make data collection more relevant with Intel edge AI technology,” Yuan says. “The flexibility and intelligence we can add to this frame grabber with Intel technology makes us really excited about future growth in the market.”
This article was written by Poornima Apte, and edited by Georganne Benesch, Editorial Director for insight.tech.
EyeCloud.AI, a Gold member of the Intel Partner Alliance, is a leading supplier of edge AI vision appliances and systems. We help tech companies overcome cost and time-to-market (TTM) challenges with our expertise in advanced hardware design, camera and machine vision systems, image sensor tuning, and IoT device management. Since our founding in 2017, we have successfully delivered mass-production machine vision solutions for global customers in autonomous driving, electric vehicles, mobility robots, and surveillance. EyeCloud also offers engineering services for customized, rapid, and cost-effective solutions.