
Pioneering In-Cabin Sensing with Breakthrough AI.

Our unwavering commitment to advancing AI technology for in-cabin safety and comfort is evident in every aspect of our work. Inspired by the needs of drivers, passengers, and the automotive industry, our state-of-the-art solutions are meticulously designed to help provide unparalleled safety and comfort.


Discover the latest in-cabin perception breakthrough with our monocular 3D sensing AI, for advanced depth-awareness and precision.


Unveil our pioneering in-cabin sensor fusion AI, integrating camera, radar, and thermal sensors for enhanced redundancy.


Explore tailored in-cabin sensing AI solutions integrated with diverse AI-enabled automotive processors for added flexibility and seamless deployment.

Eyeris DataBank

We've accumulated a vast and diverse in-cabin dataset over the years, encompassing a wide range of vehicle interiors, individuals of various demographics, body sizes, emotions, activities, and objects. Eyeris DataBank stands as the world's largest in-cabin dataset, underscoring our unwavering commitment to innovation.

Upper Body Tracking

Using a single 2D image sensor, our monocular 3D AI models accurately estimate the upper body pose and track all visible vehicle occupants in three-dimensional space. These models identify upper body 3D keypoints, enabling precise assessments of occupants' body dimensions across a wide range from the 5th percentile adult female to the 95th percentile adult male. Eyeris' monocular 3D body pose technology enables a range of in-cabin applications, including dynamic airbag deployments, automatic seat adjustments, active headrest restraints, steering wheel adjustments, and more.
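To picture the kind of output such a model produces, here is a minimal, hypothetical sketch in Python. The keypoint names, torso-height proxy, and size thresholds are illustrative assumptions for this page only, not Eyeris' actual API or anthropometric calibration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Keypoint3D:
    name: str   # e.g. "left_shoulder" (illustrative naming)
    x: float    # lateral position in metres, cabin frame
    y: float    # vertical position in metres
    z: float    # depth from the camera in metres

@dataclass
class Occupant:
    seat: str
    keypoints: List[Keypoint3D]

    def torso_height(self) -> float:
        """Vertical distance from mid-hip to mid-shoulder: a simple
        proxy for classifying occupant body size."""
        ys = {k.name: k.y for k in self.keypoints}
        shoulder = (ys["left_shoulder"] + ys["right_shoulder"]) / 2
        hip = (ys["left_hip"] + ys["right_hip"]) / 2
        return shoulder - hip

def size_class(torso_height_m: float) -> str:
    # Thresholds are made up for illustration; a real system is
    # calibrated against anthropometric data spanning the 5th
    # percentile adult female to the 95th percentile adult male.
    if torso_height_m < 0.46:
        return "small"
    elif torso_height_m < 0.54:
        return "medium"
    return "large"
```

A size class like this is the kind of signal that could feed downstream functions such as dynamic airbag deployment or automatic seat adjustment.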

Monocular 3D Sensing AI

Introducing a significant advancement in in-cabin perception with our monocular 3D sensing AI technology. Using a single 2D image sensor, our software technology enables depth-aware perception, enhancing safety and comfort standards. Powered by vision-based neural networks, our AI models regress three-dimensional data from standard 2D image sensors, including the latest RGB-IR sensors. They analyze the entire in-cabin environment under diverse lighting conditions and camera positions.

Human Behavior


Human behavior is among the most complex subjects for visual analysis and activity recognition. Eyeris uses state-of-the-art modeling techniques to automatically interpret the complex visual behavioral patterns of occupants inside autonomous and highly automated vehicles.


Visual behavioral patterns are recognized through facial micro-expression analytics.


Detects and tracks ten body keypoints per occupant and estimates their corresponding coordinates.


Human activity is detected and predicted from articulated upper body motion data.

Object Localization

Whether it is a phone, a bottle, or a key, our cutting-edge object detection and classification technology can identify a wide array of objects.


Provides each object's size contour along with its corresponding coordinates.


Estimates each object's position in the cabin along with its corresponding coordinates.


Detects and classifies objects with their corresponding labels.
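The three capabilities above (label, contour, and cabin position) can be pictured as one detection record per object. The sketch below is purely illustrative; the field names and the simple left-behind-item check are assumptions for this page, not Eyeris' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                        # classification, e.g. "phone"
    confidence: float                 # classifier score in [0, 1]
    contour: List[Tuple[int, int]]    # pixel outline of the object
    position_3d: Tuple[float, float, float]  # cabin-frame coordinates (m)

def forgotten_items(while_occupied: List[DetectedObject],
                    after_exit: List[DetectedObject]) -> List[str]:
    """Labels seen while occupants were present and still detected
    after they leave: a naive 'item left behind' check."""
    present = {o.label for o in while_occupied}
    return [o.label for o in after_exit if o.label in present]
```

For example, a phone detected on a seat both before and after the occupants exit would be flagged as a forgotten item.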

The Camera Sensor

Our AI models extract depth information from standard 2D sensors, including automotive-grade IR or RGB-IR image sensors. Additionally, our monocular 3D sensing technology accommodates varying resolutions and lens fields of view (FoV), ensuring accurate depth perception even in low-light conditions. Through partnerships with industry leaders like OmniVision, STMicroelectronics, and onsemi, Eyeris provides carmakers access to cutting-edge camera technology, delivering customized in-cabin monocular 3D sensing data to meet diverse customer needs.

Agnostic AI Compute

We integrate our entire in-cabin sensing AI software portfolio with a wide range of AI-enabled automotive processors, including ASICs, CPUs, FPGAs, DSPs, and GPUs. Our unique computer vision expertise, combined with our demonstrated proficiency in embedded vision capability, empowers us to deliver flexible in-cabin sensing solutions customized to our customers' requirements, facilitating rapid market deployment.

The Thermal Sensor

Thermal imagers serve to bridge the gap between image and radar sensors, detecting subtle temperature variations through long wave infrared (LWIR) technology. This capability offers an additional layer of insight, enabling precise differentiation of objects, pets, and occupants, and facilitating various temperature-critical use cases. Drawing on our expertise with FLIR and Seek Thermal sensors, we leverage thermal sensor flexibility to enhance functional safety redundancy and security.

The Radar Sensor

While optional for in-cabin sensing, radar can add significant value by providing an additional layer of insight beyond what cameras can offer. Operating mainly at 60 GHz, radar sensors can detect and confirm occupants' presence and monitor their vital signs, enabling safety redundancy and proactive monitoring for emergency response. When integrated into sensor fusion solutions with cameras and thermal imagers, radar sensors enhance the overall effectiveness and reliability of in-cabin monitoring systems.

SensoFusion AI



We lead the in-cabin monitoring industry with camera-based solutions. However, Eyeris also offers the world's first in-cabin Sensor Fusion AI solutions. Optional yet impactful, our fusion AI integrates in-cabin cameras with interior radar and thermal imagers for advanced multi-modal in-cabin monitoring.

Electromagnetic waves at 60 GHz detect and confirm occupants' presence and vital signs through the micro-movements of heart rate and respiration.

Comprehensive visual insights through visible light and infrared, offering the richest information about the in-cabin environment.

Long wave infrared (LWIR) detects subtle temperature variations enabling accurate differentiation of human occupants, pets and objects.
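One simple way to picture the redundancy these three modalities provide is a majority vote over per-sensor presence scores: occupancy is confirmed only when at least two sensors agree. This is a hypothetical sketch under assumed score ranges, not the actual fusion algorithm.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    camera_presence: float   # visual occupant score in [0, 1] (assumed range)
    radar_presence: float    # 60 GHz vital-sign score in [0, 1]
    thermal_presence: float  # LWIR warm-body score in [0, 1]

def fused_presence(r: SensorReading, threshold: float = 0.5) -> bool:
    """Redundant 2-of-3 majority vote across the modalities:
    presence is confirmed if at least two sensors agree."""
    votes = sum(score >= threshold for score in
                (r.camera_presence, r.radar_presence, r.thermal_presence))
    return votes >= 2
```

A vote like this tolerates one degraded modality, e.g. a camera blinded by glare, which is the redundancy argument behind multi-modal fusion.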

Face Analysis


Our face analytics models lead the industry in providing comprehensive driver and occupants' face data, continually advancing accuracy while balancing speed performance. These models incorporate a wide range of robust features, including multi-face 3D landmarks, 3D head pose estimation, eye gaze tracking, drowsiness detection, face recognition, gender and age group classification, and emotion recognition. They demonstrate remarkable resilience to diverse ethnicities and facial attributes, performing effectively under various lighting conditions, across different camera types and locations, and within different vehicle geometries.

Object Recognition

Our object analytics models offer a comprehensive 3-in-1 approach to identify objects that vehicle occupants interact with and detect forgotten items left behind. This method facilitates synchronized and efficient inference of object detection, classification, and localization modules. Pre-trained on the most common objects found inside vehicles, including child seats, laptops, backpacks, glasses, phones, cigarettes, bottles, and more, our object recognition models are adaptable to various scenarios.

Behavior Modeling


We leverage temporal data from articulated upper body motion, face analysis, and object recognition to identify various pose-based or movement-based behaviors, including static and temporal actions and gestures, human-object interactions, human-human interactions, and human-surface interactions. These behaviors encompass essential tasks such as hands-on-wheel driving, smoking, drinking, using a phone, and interacting with vehicle controls. Our models seamlessly adapt to various vehicle environments, enabling insightful behavior interpretation across different cabin settings.
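As a toy example of one such human-object interaction cue: if a tracked hand keypoint comes within reach of a detected object, flag an interaction. The function name, coordinate convention, and reach threshold below are illustrative assumptions, not the production behavior model.

```python
import math
from typing import Optional, Tuple

Point3 = Tuple[float, float, float]  # cabin-frame coordinates in metres

def hand_object_interaction(hand: Point3, obj: Point3, label: str,
                            reach_m: float = 0.15) -> Optional[str]:
    """Report a simple human-object interaction when a hand keypoint
    is within `reach_m` of a detected object's 3D position."""
    if math.dist(hand, obj) <= reach_m:
        return f"interacting_with_{label}"
    return None
```

In practice a cue like this would be combined with temporal context (how long the hand stays near the object, what the face and gaze are doing) before labeling a behavior such as "using a phone".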

Flexible Camera Locations


Our monocular 3D sensing AI empowers automakers with customized camera placements to meet their specific requirements, facilitating seamless integration with diverse in-cabin interior designs. Whether for driver or occupant monitoring, this tailored approach ensures heightened safety and convenience for all occupants, creating a seamlessly integrated and user-centric in-cabin experience.
