Vision AI For Interior
Scene Analysis

Inside Highly Automated Vehicles

EyerisNet Deliverables

The EyerisNet product portfolio is trained on the world’s largest in-cabin dataset, comprising more than 10 million images, to enable real-time data analytics for In-vehicle Scene Understanding. Spanning body, face, activity and object analytics, Eyeris’ vision AI algorithms provide the most accurate understanding of the automotive interior cabin space.

Optimize Safety and Comfort

Synchronized interior and exterior vision is essential for optimized safety and comfort. Automotive OEMs and Mobility Fleet Operators can now harmonize Eyeris interior vision data with exterior vision data to deliver the best in-cabin performance.

Enabling Real-Time Intuitive Safety and Comfort Controls

Automotive OEMs and Mobility Fleet Operators can now provide new real-time intuitive features with Eyeris In-vehicle Scene Understanding AI.

Powerful Use Cases

Eyeris is the only company today delivering a complete portfolio for In-vehicle Scene Understanding (ISU) AI that enables these valuable use cases. Through its leading, defensible, and commercially available technology, Eyeris opens the door to innovative in-cabin categories such as safety, comfort, convenience and services.

Retailers, food suppliers, entertainment companies and other service providers can now customize their offerings to mobility consumers, in real-time, based on in-vehicle contextual data.


Intelligent vehicles can identify when an object is left behind, determine cabin cleanliness for autonomous fleet operators, and provide customized services to enhance productivity.


Personalize each passenger’s zone using Intuitive Comfort Control. Adapt sound, air conditioning, seats and lighting based on their profile, position, orientation, preferences and habits.


Deploy airbags according to each passenger’s body size, position, orientation & activity, monitor the driver’s awareness & emotional state, and recognize foreign objects, unlawful events or a child left behind (HOT CARS Act of 2017).

Why Now?

With the arrival of cameras inside the vehicle cabin, the market is quickly heading toward multi-camera systems, moving beyond driver monitoring to overall In-vehicle Scene Understanding™. Eyeris offers the only technology in the world capable of providing analytics on everything that occurs inside the vehicle, through its vision AI software portfolio and AI chip inference accelerator.

Revolutionizing In-Vehicle Scene Understanding at Every Level

Eyeris gathers in-cabin intelligence at all levels of highly automated vehicles (HAVs), including L2, L3, L4 and L5 as defined by the Society of Automotive Engineers (SAE). Eyeris Human Behavior Understanding, Object Localization, and Surface Classification AI enables Automotive OEMs and Mobility Fleet Operators to deliver powerful use cases, with or without a driver present, starting with safety and comfort and naturally extending to convenience and services.

Enabling a Whole New Breed of Mobility Services

New in-vehicle monetization models will emerge from understanding occupants and connecting them to outside mobility service providers, offering hyper-targeted, individualized products tailored to their anticipated needs and wishes. The future occupant-aware vehicle will give manufacturers and other suppliers the ability to offer personalized products and services based on the billions of bits of rider data. As cars become more autonomous and users are able to conduct more activities from their cars, Eyeris ISU data streams will become even more valuable to future transportation ecosystems.

At the Forefront of the “Third Living Space” Movement

Our emotional connection to the interior space of semi-autonomous and autonomous vehicles will be drastically different from that of traditional vehicles. Soon we won’t need command-based human-machine interactions to control what we call the “Third Living Space”. Instead, it will anticipate our needs intelligently, based on our behavioral data and patterns, through passive interfaces. It will serve as an extension of the home and office where we feel safe and are most productive. It will be an extension of our identity and activity.

We have a front-row seat to understanding and analyzing what occupants do, and expect to be doing, in the estimated 300 hours a year they spend in this “Third Living Space”.