Renesas Electronics Develops Camera Video Processing Circuit Block with Low Latency, High Performance, and Low Power Consumption for Automotive Computing SoCs in the Autonomous-Driving Era

Achieves 70 ms Low-Latency Vehicle Camera Video Processing and Industry-Leading Full-HD 12-Channel Video Processing Performance with Only 197 mW Power Consumption

TOKYO — (BUSINESS WIRE) — February 2, 2016 — Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today announced the development of a new video processing circuit block for use in automotive computing system-on-chips (SoCs) that will realize the autonomous vehicles of the future.

Automotive computing SoCs for autonomous vehicles are required to integrate the functionality of both in-vehicle infotainment systems and driving safety support systems, and to operate both in parallel. In particular, driving safety support systems must be able to process video data from vehicle cameras with low latency in order to notify the driver of appropriate information in a timely manner. One issue facing developers of in-vehicle infotainment systems and driving safety support systems is the need to process large amounts of video data while also performing autonomous vehicle control functions, without delays or instability.

The newly developed video processing circuit block handles vehicle camera video with low latency. It can perform real-time processing on large volumes of video data with low power consumption and without imposing any additional load on the CPU and graphics processing unit (GPU), which are responsible for autonomous vehicle control. Renesas has manufactured prototypes of the new video processing circuit block using a 16-nanometer (nm) FinFET process. In addition to 70 ms latency processing of vehicle camera video, it delivers industry-leading Full-HD 12-channel video processing with only 197 mW power consumption.

Recently, in-vehicle systems that foreshadow the future emergence of autonomous vehicles, such as car navigation systems and advanced driver assistance systems (ADAS), have made significant advances, bringing them closer to becoming automotive computing systems that integrate the functionality of both in-vehicle infotainment systems and driving safety support systems.

Driving safety support systems are expected to perform cognitive processing based on video transferred from vehicle cameras, such as identifying obstacles, monitoring the status of the driver, and anticipating and avoiding hazards. With the appearance of devices such as the Renesas R-Car T2 vehicle camera network SoC, it is anticipated that video data from vehicle cameras will be transferred as encoded video streams, which driving safety support systems must then decode. In addition, to perform cognitive processing correctly on images from wide-angle cameras, the video data must be corrected for lens distortion. This video processing must be accomplished with low latency so that the system can notify the driver of appropriate information in a timely manner.
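As background, distortion correction is commonly implemented as an inverse mapping: each pixel of the corrected output frame is traced back to a coordinate in the distorted camera image and sampled from there. The minimal C sketch below illustrates the idea with a hypothetical single-coefficient radial model; it is not Renesas code, and the function name undistort_frame, the coefficient k1, the grayscale layout, and the nearest-neighbor sampling are all illustrative assumptions.

```c
#include <stdint.h>
#include <math.h>

/* Minimal sketch of wide-angle (barrel) distortion correction by inverse
 * mapping: for each pixel of the corrected output frame, compute where it
 * came from in the distorted source frame and copy the nearest source pixel.
 * The single-coefficient radial model r_d = r_u * (1 + k1 * r_u^2), the
 * grayscale layout, and the parameter k1 are illustrative assumptions. */
void undistort_frame(const uint8_t *src, uint8_t *dst,
                     int width, int height, double k1)
{
    const double cx = width / 2.0, cy = height / 2.0;  /* assume centered optics */

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            /* Output pixel position, normalized to the image half-size. */
            double xu = (x - cx) / cx;
            double yu = (y - cy) / cy;
            double r2 = xu * xu + yu * yu;

            /* Apply the forward distortion model to find the source sample. */
            double xd = xu * (1.0 + k1 * r2);
            double yd = yu * (1.0 + k1 * r2);

            int sx = (int)lround(xd * cx + cx);
            int sy = (int)lround(yd * cy + cy);

            dst[y * width + x] =
                (sx >= 0 && sx < width && sy >= 0 && sy < height)
                    ? src[sy * width + sx]
                    : 0;  /* sample falls outside the source image */
        }
    }
}
```

In a dedicated hardware block, such a per-pixel mapping would typically be precomputed as a remap table and applied by fixed-function logic rather than calculated on the CPU as shown here.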

On the other hand, in-vehicle infotainment systems are capable of interoperating with a variety of devices and services, including smartphones and cloud-based services, and therefore data from a large number of external video sources are input to the system. At the same time, it is becoming more common for vehicles to be equipped with multiple interior displays including rear-seat monitors. This means the system must be able to handle simultaneous display of multiple video signals. In-vehicle infotainment systems must have sufficient performance to process and display large volumes of video data in real time.

The newly developed video processing circuit block can decode video streams transferred from vehicle cameras and apply distortion correction, with low latency. It performs the complex video processing required by automotive computing systems, delivering real-time performance and low power consumption, while imposing no additional load on the CPU and GPU responsible for cognitive processing tasks.

Key features of the newly developed technology:

(1) Synchronous operation among video processors, combined with pipeline operation, for video decoding and distortion correction with 70 ms latency

Video codec processing basically consists of parsing processing, whose execution time depends on the volume of encoded stream data, and image processing, whose execution time depends on the image resolution. The newly developed video processing circuit block implements video encoding and decoding by using a stream processor for parsing processing and a codec processor for image processing. Since the data size of the typical video streams handled by in-vehicle infotainment systems varies greatly from frame to frame, the processing time required by the stream processor varies substantially from frame to frame, while the processing time required by the codec processor is the same for every frame. Consequently, the stream processor and codec processor must operate asynchronously, and the resulting large delays can become an issue.
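A rough way to see why this matters is to model the two stages as a queue: when the parse time of an occasional large frame exceeds the camera's frame interval, every subsequent frame waits behind it and end-to-end latency grows. The short C sketch below simulates this with purely hypothetical timing numbers (frame interval, parse times, codec time); it only illustrates the queuing effect, not the behavior of the actual hardware.

```c
#include <stdio.h>

/* Toy model of a two-stage decoder: a stream-parsing stage whose time
 * depends on each frame's encoded size, feeding an image-processing stage
 * whose time is fixed by the resolution. Frames arrive at a fixed camera
 * interval. All timing numbers are hypothetical. */
int main(void)
{
    const double frame_interval = 33.3;              /* ms, ~30 fps camera   */
    const double codec_time     = 20.0;              /* ms, constant/frame   */
    const double parse_time[8]  = { 12, 15, 55, 48,  /* ms, varies per frame */
                                    14, 60, 13, 16 };

    double parse_free = 0.0;  /* time at which the parsing stage is next free */
    double codec_free = 0.0;  /* time at which the codec stage is next free   */

    for (int i = 0; i < 8; i++) {
        double arrival    = i * frame_interval;
        double parse_done = (arrival    > parse_free ? arrival    : parse_free)
                            + parse_time[i];
        double codec_done = (parse_done > codec_free ? parse_done : codec_free)
                            + codec_time;

        parse_free = parse_done;
        codec_free = codec_done;

        printf("frame %d: end-to-end latency %.1f ms\n", i, codec_done - arrival);
    }
    return 0;
}
```

With these made-up numbers, per-frame latency starts near 32 ms and climbs to roughly 97 ms once the long parse times hit, even though the codec stage itself is never the bottleneck.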

The newly developed video processing circuit block has a synchronous operation mode that utilizes a FIFO placed between the stream processor and codec processor and can handle video streams whose volume is roughly constant from frame to frame, as is expected to be the case in driving safety support systems. It also has a mechanism whereby the codec processor outputs an interrupt to the CPU each time another multiple of 16 lines has been processed during frame processing, allowing distortion correction in the subsequent stage to start without waiting for frame processing to finish completely. This combination of synchronous operation and pipelined processing of partially decoded frames achieves a low latency of only 70 ms (a reduction of 40 percent compared with existing Renesas devices using a 28 nm process) from the reception of the video stream to the completion of video decoding and distortion correction.
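In software terms, the 16-line progress interrupt enables a classic producer-consumer overlap: the correction stage consumes the frame in 16-line bands as soon as they are decoded instead of waiting for the whole frame. The C sketch below shows the idea under stated assumptions; the handler name codec_progress_isr, the busy-wait synchronization, and the padded frame height of 1088 lines are illustrative and do not reflect the actual driver interface.

```c
#include <stdatomic.h>

#define FRAME_LINES  1088   /* Full HD padded to a multiple of 16 lines */
#define BLOCK_LINES  16

/* Number of lines of the current frame the codec processor has finished
 * decoding; written from the (hypothetical) progress interrupt, read by
 * the distortion-correction stage. */
static atomic_int decoded_lines;

/* Hypothetical interrupt handler: the codec block signals each time another
 * multiple of 16 decoded lines has been written to memory. */
void codec_progress_isr(void)
{
    atomic_fetch_add(&decoded_lines, BLOCK_LINES);
}

/* Placeholder for distortion correction of one 16-line band of the frame
 * (for example, a banded version of the remap sketched earlier). */
static void correct_block(int first_line, int num_lines)
{
    (void)first_line;
    (void)num_lines;
}

/* Later pipeline stage: correct each 16-line band as soon as it has been
 * decoded, instead of waiting for the whole frame. */
void correct_frame_pipelined(void)
{
    for (int done = 0; done < FRAME_LINES; done += BLOCK_LINES) {
        /* Busy-wait sketch; a real driver would sleep on the interrupt. */
        while (atomic_load(&decoded_lines) < done + BLOCK_LINES)
            ;
        correct_block(done, BLOCK_LINES);
    }
    /* Reset for the next frame; a real driver would synchronize this with
     * the start of the next decode. */
    atomic_store(&decoded_lines, 0);
}
```

The latency saving comes from the overlap: with full-frame handoff the correction stage cannot start until the last line is decoded, whereas here it trails the decoder by at most one 16-line band.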
