A multi-modal AI engine that uses reinforcement learning and modeling techniques to automatically interpret complex human visual behavior patterns from the face and body.
Visual Behavior Dataset
Over the last few years, we’ve amassed remarkable amounts of facial and body data in different environments, under different lighting conditions, and with different camera types and positionings. This data represents different races, genders, ages, emotions, body sizes, actions, activities, and more. Today, Eyeris holds the world’s largest visual behavior dataset, which serves as the ground truth for training our vision AI algorithms.
Multi-Ethnic Emotional Vocabulary
Our emotion recognition algorithms analyze facial micro-expressions in real time with high speed and accuracy. Designed with embeddability in mind, Eyeris emotion recognition software is lightweight and targeted at most modern embedded systems. Our deep-learning-trained emotion recognition models are highly customizable and can easily be further refined for other setups based on environment, lighting, camera type, positioning, demographics, distance, and more.
Analyze the 7 universal emotions to augment face analytics — a key ingredient in the mix of human behavior understanding (HBU) AI. After all, emotions make and shape us as humans.
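For reference, the seven universal emotions usually attributed to Ekman’s research form a fixed label set for expression classifiers. The sketch below is illustrative only — the label set and the `top_emotion` helper are assumptions, not Eyeris’s internal taxonomy or API.

```python
# Ekman's seven universal emotions -- a common label set for
# facial-expression classifiers (illustrative, not Eyeris's taxonomy).
UNIVERSAL_EMOTIONS = (
    "happiness", "sadness", "anger", "fear",
    "surprise", "disgust", "contempt",
)

def top_emotion(scores):
    """Pick the highest-scoring label from per-emotion confidence scores."""
    return max(scores, key=scores.get)
```

A classifier’s per-frame softmax output over these labels can be reduced to a single dominant emotion this way.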
Targeted for Embedded Systems
Lightweight, multi-platform algorithm support for today’s most common compute architectures, keeping pace with the computer vision state of the art in embeddability.
Customizable Deep Learning Models
A key feature for each unique environment and setup through exclusive model training and optimization. Robust results, fast.
Face Reading Analytics
Our most advanced and most comprehensive suite of face analytics in a single SDK. Derivative analytics can be specifically tailored through behavioral modeling to their respective application and use case.
Using standard RGB and IR camera sensors, our real-time body tracking and modeling algorithms follow people’s movements and estimate articulated human body language and configuration.
Action Recognition & Activity Prediction
Based on defined environments and setups, we leverage interest points from articulated upper-body motion and modeling as a basis to identify low-level human actions and predict high-level activities of interest. We also leverage temporal data when modeling motion patterns and inferring ongoing activities, strengthening our probabilistic predictors.
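To illustrate how temporal data can strengthen a probabilistic predictor, the minimal sketch below accumulates per-frame activity probabilities over a window and reports the most likely ongoing activity. The function name and the dictionary-based frame representation are assumptions for illustration, not Eyeris’s implementation.

```python
from collections import defaultdict

def infer_activity(frame_probs):
    """Accumulate per-frame activity probabilities over a temporal
    window and return the most likely ongoing activity (sketch).

    frame_probs: iterable of {activity_label: probability} dicts,
    one dict per frame.
    """
    totals = defaultdict(float)
    for probs in frame_probs:
        for activity, p in probs.items():
            totals[activity] += p
    return max(totals, key=totals.get)
```

Summing evidence across frames smooths out single-frame misclassifications, which is the basic benefit of temporal modeling over per-frame decisions.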
Designed for real-time processing architectures on the edge, our Human Behavior Understanding suite of analytics targets embeddability and speed at high frame rates for time-sensitive applications, without reliance on network connections. Our edge computing architecture lowers data transport time and increases data availability to the various processes that consume it.
Our face analytics cloud API offers a wealth of user visual behavior data from face analytics. This enables third-party applications to easily and inexpensively integrate our AI solutions into their products and process collected image and video data in near real time. The Eyeris Cloud API empowers web applications to better understand their users’ engagement and visual behavior as they interact with computer-vision-based interfaces.
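As a rough sketch of how a third-party application might integrate with such a cloud API, the Python snippet below assembles an HTTP request that uploads an image for face analytics. The endpoint URL, header, and field names here are hypothetical placeholders, not Eyeris’s documented API.

```python
import json

# Hypothetical endpoint -- illustrative only, not the documented
# Eyeris Cloud API.
API_URL = "https://api.example.com/v1/face-analytics"

def build_request(image_path, api_key, analytics=("emotion", "gaze")):
    """Assemble the pieces of an HTTP POST a client would send to a
    face-analytics cloud endpoint (sketch; field names are assumed)."""
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "files": {"image": image_path},
        "data": {"analytics": json.dumps(list(analytics))},
    }
```

The returned dictionary maps directly onto the keyword arguments of a typical HTTP client’s `post` call, so the integration burden on a third-party product stays small.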
4 Input Modalities
Eyeris computer vision AI supports 4 input modalities: images, image frames, videos, and live camera streams. All common video and image file formats are also supported, including AVI, FLV, MP4, MOV, JPG, BMP, and PNG.
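The formats listed above can be mapped to modalities with a simple dispatch step. The sketch below is an assumption about how a client might classify its input before handing it to the engine (treating an integer source as a camera device index, as OpenCV does); it is not Eyeris’s actual ingestion code.

```python
from pathlib import Path

# Formats listed in the text above.
VIDEO_EXTS = {".avi", ".flv", ".mp4", ".mov"}
IMAGE_EXTS = {".jpg", ".bmp", ".png"}

def classify_input(source):
    """Map an input source to one of the supported modalities (sketch)."""
    if isinstance(source, int):          # camera device index, e.g. 0
        return "live camera stream"
    ext = Path(source).suffix.lower()
    if ext in VIDEO_EXTS:
        return "video"
    if ext in IMAGE_EXTS:
        return "image"
    raise ValueError(f"unsupported format: {ext!r}")
```

Single image frames would arrive through the same image path, typically as in-memory buffers rather than files.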