Building Algorithms: Data Collection and Modeling
All of EyeQuant’s predictive algorithms are derived by applying machine learning to data from studies of real users’ reactions to websites. For example, the attention map is based on eye-tracking data. The process looks something like this:
- Conduct lab-based eye-tracking research and collect aggregate eye-fixation data for the first three seconds of users' experience. Most of our eye-tracking data is collected through research partnerships with leading universities.
- Research visual characteristics of designs that may help predict eye-movements and establish methods of detecting those features in screenshots.
- Use proprietary machine learning processes to identify which visual features drive attention and assign weights that reflect the relative importance of each feature. The attention map is driven by about 50 predictive features, including luminance contrast, saturation, and edge density.
- Regularly update training data and our modeling process. Viewing patterns and design standards change gradually over time, so EyeQuant's data is updated periodically to ensure that predictions are based on up-to-date research.
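The pipeline above can be sketched in miniature: extract per-pixel feature maps from a screenshot, then combine them with learned weights into an attention map. The three feature definitions and the weights below are simplified, hypothetical stand-ins; EyeQuant's actual ~50 features and their learned weights are proprietary.

```python
import numpy as np

def extract_features(image: np.ndarray) -> dict:
    """Compute three illustrative per-pixel features from an RGB image
    (H x W x 3, floats in [0, 1]). Simplified stand-ins for real features."""
    luminance = image.mean(axis=2)
    # Luminance contrast: absolute deviation from the global mean luminance.
    contrast = np.abs(luminance - luminance.mean())
    # Saturation: spread between the strongest and weakest colour channel.
    saturation = image.max(axis=2) - image.min(axis=2)
    # Edge density: gradient magnitude of the luminance channel.
    gy, gx = np.gradient(luminance)
    edges = np.hypot(gx, gy)
    return {"contrast": contrast, "saturation": saturation, "edges": edges}

def attention_map(image: np.ndarray, weights: dict) -> np.ndarray:
    """Weighted sum of feature maps, normalised to [0, 1]."""
    feats = extract_features(image)
    combined = sum(weights[name] * fmap for name, fmap in feats.items())
    lo, hi = combined.min(), combined.max()
    return (combined - lo) / (hi - lo + 1e-9)

# Hypothetical weights; in practice these would be learned from eye-tracking data.
weights = {"contrast": 0.5, "saturation": 0.2, "edges": 0.3}
rng = np.random.default_rng(0)
screenshot = rng.random((64, 64, 3))  # random image standing in for a screenshot
amap = attention_map(screenshot, weights)
print(amap.shape)
```

The key structural idea is the separation between feature extraction (fixed image-processing operations) and the weights (learned from eye-tracking data), which is what allows the weights to be re-fit as new training data arrives.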
You might be wondering how we decide which visual features are important, and what their relative weights should be. The answer is simple: whichever combination of features and weights delivers the most accurate predictions.
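To make that criterion concrete, here is a toy version of the idea: search over candidate weight combinations and keep whichever one best matches measured fixation data. The feature maps, the "measured" map, the grid search, and the correlation score below are all illustrative assumptions, not EyeQuant's proprietary learning process.

```python
import itertools
import numpy as np

def correlation(pred: np.ndarray, measured: np.ndarray) -> float:
    """Pearson correlation between a predicted and a measured fixation map."""
    return float(np.corrcoef(pred.ravel(), measured.ravel())[0, 1])

# Toy feature maps and a synthetic "measured" fixation map.
rng = np.random.default_rng(1)
features = {"contrast": rng.random((32, 32)), "edges": rng.random((32, 32))}
measured = 0.7 * features["contrast"] + 0.3 * features["edges"]

best_score, best_weights = -np.inf, None
for w in itertools.product(np.linspace(0.0, 1.0, 11), repeat=2):
    if abs(sum(w) - 1.0) > 1e-9:
        continue  # keep weights summing to 1 so candidate maps are comparable
    pred = w[0] * features["contrast"] + w[1] * features["edges"]
    score = correlation(pred, measured)
    if score > best_score:
        best_score, best_weights = score, w

print(best_weights)
```

Because the synthetic ground truth was built with weights 0.7 and 0.3, the search recovers them; with real eye-tracking data the same principle applies, just with far more features and a more sophisticated fitting procedure.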
EyeQuant’s research team is a leader in the field of evaluation methods for predictive models. The metrics used to measure accuracy depend on the type of prediction being made; attention predictions, for example, are evaluated using four statistical metrics. This peer-reviewed paper by members of our team provides a detailed look at them: Measures and Limits of Models of Fixation Selection.
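For a flavour of what such metrics look like, here are two measures commonly used in the fixation-prediction literature: Normalized Scanpath Saliency (NSS) and the area under the ROC curve (AUC). This is an illustrative sketch with synthetic data, not a claim about which four metrics EyeQuant uses in production.

```python
import numpy as np

def nss(saliency: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    z = (saliency - saliency.mean()) / saliency.std()
    return float(z[fixations.astype(bool)].mean())

def auc(saliency: np.ndarray, fixations: np.ndarray, n_thresholds: int = 100) -> float:
    """ROC area: how well saliency scores separate fixated from non-fixated pixels."""
    fix = fixations.astype(bool).ravel()
    s = saliency.ravel()
    thresholds = np.linspace(s.min(), s.max(), n_thresholds)
    tpr = np.array([(s[fix] >= t).mean() for t in thresholds])
    fpr = np.array([(s[~fix] >= t).mean() for t in thresholds])
    # Reverse so FPR is increasing, then integrate TPR with the trapezoid rule.
    tpr, fpr = tpr[::-1], fpr[::-1]
    return float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0))

# Synthetic saliency map with fixations concentrated on high-saliency pixels,
# so both metrics should score well above chance (NSS 0, AUC 0.5).
rng = np.random.default_rng(2)
sal = rng.random((32, 32))
fix = sal > 0.8
print(round(nss(sal, fix), 2), round(auc(sal, fix), 2))
```

A random prediction would score an NSS near 0 and an AUC near 0.5, which is what makes these metrics useful for comparing candidate models against each other.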