The Invisible Modality: Developing Gaze Input Techniques for the Internet-of-Things
  The Internet of Things (IoT), a network of physical objects that can communicate, sense and act in the world, is rapidly expanding and is expected to comprise nine billion interconnected devices by 2018. A popular commercial example of an IoT application is smart lighting (e.g., Philips Hue). These systems consist of interconnected light bulbs that can change tone, contrast or colour depending on the user's activity. The de facto input modality for these smart lighting systems, and for most IoT systems, is touch input on a smartphone. This approach has several limitations: the system cannot be operated when the user's hands are busy (e.g., while cooking); the user must always carry a proxy device for interaction (i.e., a smartphone); and multiple physical devices must be indirectly mapped to a graphical interface on a small display. To address these limitations, this proposal suggests the use of gaze as input for IoT devices. Gaze supports hands-free interaction; it empowers users with physical limitations; and it is always present and available. The technique proposed here is based on Orbits, a gaze interaction technique for smartwatches that relies on moving targets to overcome the biggest drawback of gaze input systems: calibration. With this in mind, this project will: a) develop a smart lighting prototype that can be controlled through gaze input; and b) study user performance with this system.
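
  Orbits builds on smooth-pursuit eye movements: each on-screen control moves along a known trajectory, and a control is selected when the user's gaze trajectory correlates strongly with it over a short time window, so no per-user calibration is required. The sketch below illustrates only this matching step, assuming Pearson correlation over a sliding window; the function names, window handling and threshold are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def pursuit_correlation(gaze_xy, target_xy):
    """Pearson correlation between gaze and target trajectories, per axis.

    gaze_xy, target_xy: arrays of shape (N, 2) sampled over the same window.
    Returns the weaker of the x- and y-axis correlations.
    """
    corr_x = np.corrcoef(gaze_xy[:, 0], target_xy[:, 0])[0, 1]
    corr_y = np.corrcoef(gaze_xy[:, 1], target_xy[:, 1])[0, 1]
    return min(corr_x, corr_y)

def select_target(gaze_window, target_windows, threshold=0.8):
    """Return the id of the moving target the user is following, or None.

    target_windows: dict mapping target id -> (N, 2) trajectory over the
    same time window as gaze_window. The threshold is a placeholder;
    in practice it would be tuned empirically.
    """
    best_id, best_corr = None, threshold
    for target_id, trajectory in target_windows.items():
        corr = pursuit_correlation(gaze_window, trajectory)
        if corr > best_corr:
            best_id, best_corr = target_id, corr
    return best_id
```

  Because selection depends only on the relative correlation between gaze and target motion, the approach works with uncalibrated (and therefore offset) gaze estimates, which is what makes it attractive for walk-up IoT control.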

  • Start Date: 1 September 2016

  • End Date: 30 April 2017

  • Activity Type: Externally Funded Research

  • Funder: Carnegie Trust for the Universities of Scotland

  • Value: £7,454

Project Team