The Most Comprehensive Portfolio of Computer Vision Neural Networks for In-Vehicle Scene Understanding AI

World’s Largest In-vehicle Dataset

Over the years, we have amassed a remarkable amount of facial, body, surface and object data from vehicle interiors of various sizes, in different environments, under different lighting conditions, and with different camera types and positions. This data represents different races, genders, ages, emotions, body sizes and orientations, actions and activities. Today, Eyeris holds the world’s largest in-vehicle dataset, which serves as ground truth for training our vision AI portfolio of In-Vehicle Scene Understanding algorithms.

Human Behavior Understanding

The Eyeris HBU portfolio of DNNs uses state-of-the-art modeling techniques to automatically interpret complex visual behavioral patterns of occupants inside autonomous and highly automated vehicles. These patterns are drawn from body tracking analytics, face analytics and emotion recognition from facial micro-expressions, along with action and activity recognition for all occupants inside the vehicle.

Body

Detects and tracks 10 body keypoints per occupant and estimates the corresponding coordinates of the shoulders, elbows, wrists, hips, shoulder base and center of the face, along with body height, width, size, posture, orientation, contour and position.
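For illustration only, a minimal sketch of how the per-occupant body output described above might be structured, assuming Python and hypothetical field names; the actual Eyeris API may differ.

    # Illustrative sketch only: structure and field names are assumptions,
    # not the actual Eyeris output format.
    from dataclasses import dataclass
    from typing import Dict, Tuple

    # The 10 keypoints named above: shoulders, elbows, wrists, hips,
    # shoulder base and center of face.
    BODY_KEYPOINTS = (
        "left_shoulder", "right_shoulder",
        "left_elbow", "right_elbow",
        "left_wrist", "right_wrist",
        "left_hip", "right_hip",
        "shoulder_base", "face_center",
    )

    @dataclass
    class BodyTrack:
        """Per-occupant body output, one record per frame (hypothetical)."""
        occupant_id: int
        keypoints: Dict[str, Tuple[float, float]]  # keypoint name -> (x, y) image coordinates
        height_px: float                           # apparent body height in pixels
        width_px: float                            # apparent body width in pixels
        posture: str                               # e.g. "upright", "leaning", "reclined"
        orientation_deg: float                     # torso yaw relative to the camera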

Face

Detects and tracks occupants’ faces, classifies 7 emotions from facial micro-expressions, predicts gender and estimates age and head pose. Additional behavioral analytics such as attention, distraction and drowsiness can be tailored, through behavioral modeling, to the target application and use case.
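A similar hypothetical sketch of a per-face record; the 7-emotion label set shown is the common convention and is an assumption, since the text above does not name the classes.

    # Illustrative sketch: a possible shape for per-face analytics output.
    from dataclasses import dataclass
    from typing import Dict, Tuple

    # Assumed label set; the text only states that 7 emotions are classified.
    EMOTIONS = ("neutral", "happiness", "surprise", "sadness",
                "anger", "disgust", "fear")

    @dataclass
    class FaceAnalytics:
        occupant_id: int
        bbox: Tuple[float, float, float, float]    # face box: x, y, width, height in pixels
        emotion_scores: Dict[str, float]           # probability per emotion label
        age_estimate: float
        gender: str
        head_pose_deg: Tuple[float, float, float]  # yaw, pitch, roll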

Activity Detection

Based on predefined in-vehicle environments and camera setups, we leverage temporal data from articulated upper-body motion, along with object understanding, to model human actions and predict activities of interest. These activities include eating and drinking, driving, sleeping, and using a phone or laptop.
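A minimal sketch of the idea described above: a sliding window of body keypoints, combined with the objects currently detected, feeds an activity prediction. All names are hypothetical, and the simple rules shown stand in for an actual temporal model.

    # Conceptual sketch only: not the Eyeris activity model.
    from collections import deque
    from typing import Deque, Dict, List, Tuple

    ACTIVITIES = ("eating_drinking", "driving", "sleeping",
                  "using_phone", "using_laptop")

    class ActivityRecognizer:
        def __init__(self, window_size: int = 30):
            # Ring buffer of the last N frames of keypoints (temporal context).
            self.window: Deque[Dict[str, Tuple[float, float]]] = deque(maxlen=window_size)

        def update(self,
                   keypoints: Dict[str, Tuple[float, float]],
                   detected_objects: List[str]) -> str:
            """Append the latest frame and return the most likely activity."""
            self.window.append(keypoints)
            # Object context constrains the activity; a temporal model
            # (e.g. RNN or temporal CNN) over self.window would refine it.
            if "phone" in detected_objects:
                return "using_phone"
            if "bottle" in detected_objects or "food" in detected_objects:
                return "eating_drinking"
            return "driving"  # placeholder fallback for the sketch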

Object Localization

Provides detection, classification, size, contour and position of objects inside the cabin. A robust solution for augmenting activity recognition, for cleanliness detection, and for identifying forgotten objects left behind such as phones, wallets, keys, laptops, bags, bottles and food items. A sketch of the combined output follows the Position item below.

Class

Detects and classifies objects from pre-trained classes, returning their corresponding labels and confidence scores.

Size

Provides each object’s size and contour along with the corresponding coordinates.

Position

Estimates each object’s position in the cabin along with its corresponding coordinates.
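As a sketch, the Class, Size and Position outputs above could be combined into a single record along these lines; all field names are assumptions for illustration.

    # Illustrative sketch of a combined object-localization record.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ObjectDetection:
        label: str                               # Class: e.g. "phone", "bottle", "laptop"
        confidence: float                        # Class: classification confidence, 0..1
        bbox: Tuple[float, float, float, float]  # Size: x, y, width, height in pixels
        contour: List[Tuple[float, float]]       # Size: polygon outlining the object
        cabin_position: str                      # Position: e.g. "driver_seat", "rear_footwell"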

Surface Classification

Classifies and positions all in-cabin surfaces, such as footwells, door panels and the center console, from a pixel map along with their shapes, relative to occupants and objects. A sketch of the combined output follows the Surface Class item below.

Pixel Map

Assigns a class label to every interior pixel based on differences in color and texture.

Shape

Provides each surface’s contour and region coordinates relative to adjacent in-cabin surfaces, occupants and objects.

Surface Class

Assigns a class label to each detected surface in the interior based on differences in color and texture.
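A hypothetical sketch of how the Pixel Map, Shape and Surface Class outputs might fit together; the class list and array layout are assumptions, not the actual Eyeris format.

    # Illustrative sketch of a surface-classification result.
    from dataclasses import dataclass
    from typing import List, Tuple
    import numpy as np

    # Assumed label set based on the surfaces named above.
    SURFACE_CLASSES = ("footwell", "door_panel", "center_console",
                       "seat", "dashboard", "headliner")

    @dataclass
    class SurfaceSegmentation:
        # Pixel Map: one class index per pixel (H x W), indexing into SURFACE_CLASSES.
        pixel_map: np.ndarray
        # Shape: per-surface contour polygons in image coordinates.
        contours: List[List[Tuple[float, float]]]
        # Surface Class: the label for each contour, aligned by index.
        labels: List[str]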

Interior Image Segmentation

To maximize safety and comfort, every inch of the interior space must be understood with the highest level of confidence. Our patent-pending methods for Interior Image Segmentation perceive the in-vehicle environment far more robustly than traditional detection and classification networks. Eyeris Interior Image Segmentation classifies every pixel in the image, providing comprehensive in-vehicle scene understanding.

Efficient Inference on AI Chip

The Eyeris portfolio of Deep Neural Networks (DNNs), along with our proprietary inference accelerator algorithms, enables efficient inference on automotive-grade AI chips from up to 6 camera streams simultaneously. The Eyeris software architecture delivers a throughput of up to 10 TOPS while keeping power consumption under 7 watts. Eyeris inference accelerator algorithms take advantage of modern AI chip hardware pipelines to achieve optimum performance, breaking down the EyerisNet inference workload through graph-based data-flow designs and implementations.
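For illustration, a simplified sketch of a graph-based data flow that schedules several camera streams concurrently; this is not the Eyeris engine, and all names and stage boundaries are hypothetical.

    # Conceptual sketch: splitting multi-camera inference into stage graphs.
    from concurrent.futures import ThreadPoolExecutor
    from typing import Callable, Dict, List

    # Each node of the graph is a stage: capture -> preprocess -> DNN -> postprocess.
    Stage = Callable[[object], object]

    def run_stream(frame_source, stages: List[Stage]):
        """Run one camera stream's frames through its stage graph."""
        results = []
        for frame in frame_source:
            data = frame
            for stage in stages:  # stages form one slice of the data-flow graph
                data = stage(data)
            results.append(data)
        return results

    def run_all_streams(streams: Dict[str, object], stages: List[Stage]):
        """Schedule up to 6 camera streams concurrently."""
        with ThreadPoolExecutor(max_workers=6) as pool:
            futures = {name: pool.submit(run_stream, src, stages)
                       for name, src in streams.items()}
            return {name: f.result() for name, f in futures.items()}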

Benefits of Inference on AI Chip


Highest Throughput

The Eyeris software architecture delivers a throughput of up to 10 TOPS while inferencing the entire EyerisNet portfolio of DNNs in real time from up to 6 camera streams inside the cabin.

Edge Computing

Designed for real-time inference on the edge at high frame rates, Eyeris AI chip inference accelerator software enables time-sensitive data output without reliance on network connections.

Low Power

Designed with efficiency in mind, the Eyeris proprietary inference accelerator engine keeps AI chip power consumption low regardless of the number of inferenced DNNs or camera streams.

Small Form Factor

The Eyeris portfolio of vision algorithms targets small-form-factor AI chip architectures, offering significantly reduced footprint and weight compared with leading GPUs.


Any 2D Camera

RGB-IR Sensor
2D Camera Agnostic
Low Res Requirements
Multi Camera Support

Flexible Environments

Camera Position Invariant
MIPI Interface or Other
Field of View Invariant
Entirely Customizable

Optimized Deep Learning Models

Each vehicle interior presents a unique environment, with a different type of interior space, number of cameras and camera placement. The EyerisNet portfolio of DNNs offers optimized vision AI models through exclusive data collection, model training and optimization processes at our R&D lab in San Jose, CA. Robust results, fast.