TEEP-SLA is a 3-year project (December 2015 to November 2018), supported by Istituto Italiano di Tecnologia (IIT) and by Fondazione Sanità e Ricerca, and funded by Fondazione Roma, that aims to create novel assistive technologies for ALS (Amyotrophic Lateral Sclerosis) patients. In particular, the project aims to satisfy patients’ social interaction and communication needs with innovative patient interfaces and associated robotic technologies.
TEEP-SLA is an Italian acronym for "Tecnologie Empatiche ed Espressive per Persone con SLA" (Empathic and Expressive Technologies for People with ALS), highlighting how such technologies will interpret the internal state of the users in order to facilitate their volitional acts.
The team, belonging to the Biomedical Robotics Lab of the Advanced Robotics Department at IIT, is multidisciplinary, including roboticists, biomedical engineers, interaction researchers, and healthcare professionals. Field studies (involving patients, their relatives, and healthcare professionals) will be an integral part of the work, serving both to refine system specifications and to assess the new technologies. This research will advance the state of the art in brain-computer interface (BCI) control, enabling a greatly enhanced patient experience during the use of assistive communication and social interaction systems, which are usually based on eye-tracking technologies (the first result of the project is an integration of this classic paradigm with a BCI, as illustrated below).
The TEEP-SLA project is organized to provide interaction and communication technologies that are empowered by two complementary processes: empathic and expressive.
The technologies of TEEP-SLA are empathic because they must be able to recognize the internal state of the user, according to psychophysiological constructs of emotion and motivation, and to adapt accordingly both the detection of the user's commands and the feedback returned to the user, making those commands easier to recognize.
The technologies of TEEP-SLA are expressive because they must be able to mediate the users' intentions, both in communicating with other individuals and in controlling devices, by collecting and interpreting ocular movements and physiological signals. Thanks to the contribution of the empathic modules, the result will be an engaging multisignal user interface with adaptive feedback optimized for the user's limitations, skills, needs, and goals.