
The In-Cabin Sensing Technology You Deserve
Driven by excellence, inspired by you. At Eyeris, our technology was inspired by the late-night worker, the caring parent, the aspiring entrepreneur. Keeping every driver in mind, our innovative technology promises to push towards a safer and better road ahead.
THE VISION AI
In-cabin cameras are the most common sensor type used for Driver and Occupant Monitoring. Eyeris AI Software interprets the entire interior scene through these cameras.
THE SENSOR FUSION AI
Collects and combines data from different sensor types to interpret the scene with redundant signals for higher accuracy.

The Database
Over the years we’ve amassed remarkable quantities of data from a wide range of vehicle interior environments and conditions. The data is as diverse as the people who use our product, reflecting a multitude of races, genders, ages, body sizes, emotions, activities and unique qualities. Spanning a wide variety of objects and surfaces, our database underpins an accurate and expansive vision AI portfolio. Today, Eyeris holds the world’s largest in-vehicle dataset, a testament to the kind of innovation we strive for.
Upper Body Analytics
Using a single 2D image sensor, we provide both 2D and 3D upper body pose estimation and tracking for all visible vehicle occupants. Eyeris upper body keypoints typically include two shoulders, two elbows, two wrists, two hips, one face center, one shoulder base and one torso center.
The combination of these keypoints, especially in 3D space coordinates, further enables accurate assessment of vehicle occupants’ anthropometry*, ranging from the 5th percentile adult female to the 95th percentile adult male.
* The scientific study of the measurements and proportions of the human body.
Eyeris upper body pose estimation and tracking algorithms enable various safety, comfort and convenience use cases, such as dynamic airbag deployment and automatic adjustment of seats, active headrest restraints and the steering wheel (for as long as there is one).
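
To make the keypoint output concrete, here is a minimal sketch, not Eyeris code, of how the upper body keypoints listed above might be represented and used to derive a simple anthropometric measurement; the keypoint names, coordinate frame and shoulder-width calculation are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Tuple
import math

# Hypothetical (x, y, z) keypoint coordinates in a camera-centered frame, in meters.
Point3D = Tuple[float, float, float]


def _dist(a: Point3D, b: Point3D) -> float:
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


@dataclass
class UpperBodyPose:
    # Keys such as "left_shoulder" or "torso_center" mirror the keypoints named above.
    keypoints: Dict[str, Point3D]

    def shoulder_width(self) -> float:
        """Distance between the two shoulder keypoints, a simple anthropometric cue."""
        return _dist(self.keypoints["left_shoulder"], self.keypoints["right_shoulder"])


# Toy example: a seated occupant with roughly a 0.40 m shoulder width.
pose = UpperBodyPose(keypoints={
    "left_shoulder": (-0.20, 0.45, 1.10),
    "right_shoulder": (0.20, 0.45, 1.10),
})
print(f"shoulder width: {pose.shoulder_width():.2f} m")
```

A measurement like this could then be compared against anthropometric percentile ranges, such as the 5th percentile adult female to 95th percentile adult male span mentioned above.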


The Vision AI
Our vision-based neural networks provide the richest source of information. Using the latest image sensors, our pre-trained vision AI models understand the entire in-cabin space across the widest range of the lighting spectrum.
Human Behavior
Understanding
Human behavior is perhaps one of the most complex subjects of analysis and activity recognition. Eyeris uses state-of-the-art modeling techniques to automatically interpret the complex visual behavioral patterns of occupants inside autonomous and highly automated vehicles.

Face
Visual behavioral patterns are recognized through facial micro-expression analytics.

Body
Detects and tracks occupants’ 10 body keypoints and estimates their corresponding coordinates.

Activity
Human activity is detected and predicted from articulated upper body motion data.
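
The three streams above, face, body and activity, can be pictured as one per-frame record per occupant. The sketch below is a hypothetical data structure, not the Eyeris API; every field name and example value is an assumption made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class OccupantState:
    """Hypothetical per-frame record grouping the face, body and activity streams."""
    occupant_id: int
    expression: str                                        # e.g. "neutral", "surprised"
    keypoints: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)
    activity: str = "unknown"                              # e.g. "using_phone"


# Toy frame with two occupants; values are invented for illustration.
frame_states: List[OccupantState] = [
    OccupantState(occupant_id=0, expression="neutral", activity="hands_on_wheel"),
    OccupantState(occupant_id=1, expression="happy", activity="drinking"),
]
for state in frame_states:
    print(state.occupant_id, state.expression, state.activity)
```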
Object Localization
Whether it be a phone, bottle or key, our cutting-edge object detection and classification technology can identify a wide array of objects.

Size
Provides each object’s size and contour along with the corresponding coordinates.

Position
Estimates each object’s position in the cabin along with its corresponding coordinates.

Class
Detects and classifies objects with their corresponding labels.
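
The three outputs above (size, position and class) map naturally onto a single detection record. The following sketch is purely illustrative; the field names, units and example values are assumptions rather than the actual Eyeris output format.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ObjectDetection:
    """Illustrative detection record; not the actual Eyeris output format."""
    label: str                               # class, e.g. "phone", "bottle", "key"
    confidence: float                        # classifier score in [0, 1]
    position: Tuple[float, float, float]     # estimated cabin position (x, y, z), in meters
    contour: List[Tuple[int, int]]           # pixel outline describing the object's size and shape


# Toy detection of a phone on the passenger seat.
detections = [
    ObjectDetection(
        label="phone",
        confidence=0.94,
        position=(0.35, -0.10, 0.80),
        contour=[(120, 200), (180, 200), (180, 320), (120, 320)],
    ),
]
for det in detections:
    print(f"{det.label} ({det.confidence:.0%}) at {det.position}")
```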
The Image Sensor
Capturing the image is key before any processing can be done. We partner with leading automotive image sensor manufacturers to ensure our AI software can run analytics in challenging lighting conditions, including analyzing scenes in RGB and/or IR mode for nighttime capture. Whether the image sensor uses a rolling or global shutter, Eyeris in-cabin sensing AI accommodates both types.
CMOS-based RGB-IR
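
As a rough illustration of switching between RGB and IR modes for nighttime capture, the sketch below selects the IR channel when the scene is too dark; the luminance threshold, frame layout and function names are assumptions, not product behavior.

```python
import numpy as np


def select_channel(rgb_frame: np.ndarray, ir_frame: np.ndarray,
                   luminance_threshold: float = 40.0) -> np.ndarray:
    """Return the IR frame for dark scenes, otherwise the RGB frame.

    Assumes 8-bit frames; the threshold value is purely illustrative.
    """
    mean_luminance = float(rgb_frame.mean())
    return ir_frame if mean_luminance < luminance_threshold else rgb_frame


# Toy frames: a dark RGB image forces the IR path.
rgb = np.full((480, 640, 3), 10, dtype=np.uint8)
ir = np.full((480, 640), 128, dtype=np.uint8)
chosen = select_channel(rgb, ir)
print("using IR" if chosen is ir else "using RGB")
```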

The Inference Hardware
We integrate our software portfolio of deep neural networks with a wide range of hardware partners’ automotive-grade processors and sensors. This allows us to offer our in-cabin sensing software solutions on various combinations of processors and sensors, for added customer flexibility and a significantly faster time to market.
ASIC, CPU, FPGA, DSP and GPU
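
To illustrate the idea of one software portfolio targeting several processor types, here is a hypothetical backend-selection sketch; the backend names echo the list above, but the registry and dispatch logic are invented for illustration only.

```python
from typing import Callable, Dict

# Hypothetical registry mapping processor types to inference entry points.
_BACKENDS: Dict[str, Callable[[bytes], str]] = {}


def register_backend(name: str):
    """Register an inference function under a processor-type name."""
    def decorator(fn: Callable[[bytes], str]) -> Callable[[bytes], str]:
        _BACKENDS[name.lower()] = fn
        return fn
    return decorator


@register_backend("cpu")
def run_on_cpu(frame: bytes) -> str:
    return "cpu-result"    # placeholder for a CPU inference call


@register_backend("gpu")
def run_on_gpu(frame: bytes) -> str:
    return "gpu-result"    # placeholder for a GPU inference call


def infer(frame: bytes, processor: str) -> str:
    """Dispatch a frame to whichever registered processor type is available."""
    return _BACKENDS[processor.lower()](frame)


print(infer(b"\x00" * 16, "gpu"))
```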



The Thermal Sensor
Thermal imagers fill the gap left by image sensors, providing functional safety redundancy and high detection accuracy when distinguishing between humans, animals and inanimate objects.


The Radar Sensor
Radar sensors are ideal for occupant presence detection through the micro-movements of heart rate and respiration, complementing the in-cabin cameras with redundant signals.



The Sensor-Fusion AI
We complement in-cabin cameras with radar and thermal imagers for enhanced redundancy and increased accuracy while generating unbiased data signals. This improves the vehicle’s functional safety systems as well as its comfort and convenience response features.

Radar
Ideal for presence detection through micro-movements of heart rate and respiration.

Camera
Richest source of information about the in-cabin space.

Thermal
Fills the sensor gap; ideal for functional safety redundancy and high detection accuracy between humans, animals and inanimate objects.
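
One simple way to picture camera, radar and thermal redundancy is a confidence-weighted vote on occupant presence. The sketch below is an illustrative fusion rule, not the Eyeris sensor-fusion algorithm; the sensor names, weights and threshold are assumptions.

```python
from typing import Dict


def fuse_presence(scores: Dict[str, float], weights: Dict[str, float]) -> bool:
    """Weighted vote over per-sensor presence confidences in [0, 1].

    Sensor names, weights and the 0.5 decision threshold are illustrative assumptions.
    """
    total_weight = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total_weight
    return fused >= 0.5


# Toy case: the occupant is partly hidden from the camera, but radar and thermal agree.
sensor_scores = {"camera": 0.4, "radar": 0.9, "thermal": 0.8}
sensor_weights = {"camera": 0.5, "radar": 0.25, "thermal": 0.25}
print("occupant present:", fuse_presence(sensor_scores, sensor_weights))
```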
Face Analytics
Our face analytics models offer the industry’s richest driver and occupant face data, consistently raising the bar for state-of-the-art (SOTA) accuracy-versus-speed performance tradeoffs.
Eyeris in-cabin face analytics models include robust multi-face detection, 3D head pose estimation, 3D eye gaze tracking, PERCLOS*, drowsiness detection, face recognition, gender and age group classification, and emotion recognition, among others.
Eyeris in-cabin face analytics models are especially robust to multiple ethnicities, with varied facial attributes**, under a wide range of visible and active lighting conditions, from different camera types, locations and fields of view (FOV), and inside different vehicle geometries.
* The percentage of total time that the eyes are closed.
** Semantic features of face visual properties such as glasses, mustache, beard, hat, scarf, etc.
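
PERCLOS, as defined in the first footnote above, can be computed directly from per-frame eye-closure flags. The sketch below assumes a fixed observation window and one hypothetical boolean flag per video frame; it is an illustration of the metric, not the Eyeris drowsiness model.

```python
from typing import Sequence


def perclos(eye_closed_flags: Sequence[bool]) -> float:
    """Fraction of frames in which the eyes are closed (PERCLOS), in [0, 1].

    Assumes one boolean eye-closure flag per video frame over a fixed observation window.
    """
    if not eye_closed_flags:
        return 0.0
    return sum(eye_closed_flags) / len(eye_closed_flags)


# Toy window of 10 frames with the eyes closed in 3 of them -> PERCLOS = 30%.
flags = [False, False, True, True, True, False, False, False, False, False]
print(f"PERCLOS: {perclos(flags):.0%}")
```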

Object Analytics
Our object analytics models provide a 3-in-1 approach towards the identification of the objects that vehicle occupants interact with, as well as forgotten objects left behind. This approach enables synchronized and efficient inference of object(s) detection, classification and localization modules.
Eyeris object recognition models are pre-trained on the most common objects found inside the vehicle, such as a child seat (forward- and rear-facing, with and without a child), laptop, backpack, glasses, phone, cigarettes, bottle, etc.
Additional objects can always be trained according to customers’ needs and use cases. Our object detection models can also detect and localize in-cabin trash items and other hard-to-classify objects through image segmentation and foreground separation techniques, which robustly provide objects’ shapes, contours and their corresponding pixel maps.
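
As a rough illustration of the foreground separation idea mentioned above, the sketch below extracts a pixel map and bounding box from a binary foreground mask; the mask is a simplified stand-in for a learned segmentation output, and the function is not part of the Eyeris pipeline.

```python
import numpy as np


def foreground_pixel_map(mask: np.ndarray):
    """Return foreground pixel coordinates and a bounding box from a binary mask.

    The mask is a simplified stand-in for a learned segmentation output.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.empty((0, 2), dtype=int), None
    pixels = np.stack([ys, xs], axis=1)        # (row, col) of every foreground pixel
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))  # top, left, bottom, right
    return pixels, bbox


# Toy mask with a small blob standing in for a hard-to-classify item.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:6] = 1
pixels, bbox = foreground_pixel_map(mask)
print(f"{len(pixels)} foreground pixels, bbox={bbox}")
```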

Action Analytics
Using temporal data from articulated upper body motion and object understanding, our action recognition models can recognize pose-based or movement-based actions and gestures, human-object activities, human-human interactions, as well as human-surface actions and predictions.
Example actions and gestures include both driver and occupant related tasks such as hands-on-wheel driving, smoking, drinking, using phone, touching vehicle controls, etc.
Eyeris action recognition models can be highly sensitive to various vehicle environments…
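
A common way to recognize actions from articulated motion is to classify sliding windows of keypoint coordinates over time. The sketch below shows only that windowing step with a placeholder classifier; the window length, stride and label are assumptions for illustration, not the Eyeris action model.

```python
from typing import Iterator, List, Sequence


def sliding_windows(frames: Sequence[List[float]], window: int,
                    stride: int) -> Iterator[Sequence[List[float]]]:
    """Yield fixed-length windows of per-frame keypoint feature vectors."""
    for start in range(0, len(frames) - window + 1, stride):
        yield frames[start:start + window]


def classify_window(window_frames: Sequence[List[float]]) -> str:
    """Placeholder for a temporal action classifier returning a label like 'using_phone'."""
    return "using_phone"


# Toy sequence: 30 frames, each a flattened list of upper body keypoint coordinates.
frames: List[List[float]] = [[0.0] * 22 for _ in range(30)]
labels = [classify_window(w) for w in sliding_windows(frames, window=16, stride=8)]
print(labels)
```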

Flexible Camera Locations
Our technology allows for flexible camera placements, providing both Driver and Occupant Monitoring while maximizing camera functionality and coverage.
