Cognitive Modeling and Eye-tracking
Proposals
We have submitted several proposals to NSF related to cognitive modeling.
Resources
Eye-tracking resources:
- Cogain.org --- gaze-tracking community site; seems geared toward tool-building for mobility-impaired users and other assistive applications
- Tobii client info --- Andrew Duchowski's Tobii client library at Clemson
Modeling resources:
- CogTool --- visual cognitive modeling tool for UI prototyping from Bonnie John's group at CMU
- ACT-R overview --- tutorial and links from "Theories in HCI" course at Maryland
- ACT-R --- CMU ACT-R research group, with links to the Lisp architecture
- Python ACT-R --- Python alternative to CMU's Common Lisp ACT-R environment (a toy sketch of the production-rule idea follows this list)
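To make the ACT-R links above slightly more concrete, here is a toy sketch of the recognize-act production cycle these architectures are built around. It is plain Python, not the ACT-R or Python ACT-R API; the goal chunk, production names, and counting task are illustrative assumptions, and real ACT-R adds declarative retrieval, sub-symbolic activation, and timing on top of this loop.

```python
# Toy recognize-act cycle in plain Python -- NOT the ACT-R or Python ACT-R API.
# A single goal buffer holds a chunk (here a dict); productions are
# (condition, action) pairs that match on the goal and modify it.

GOAL = {"state": "counting", "current": 2, "target": 5}

def can_count(goal):
    return goal["state"] == "counting" and goal["current"] < goal["target"]

def count(goal):
    print("count:", goal["current"])
    goal["current"] += 1

def can_stop(goal):
    return goal["state"] == "counting" and goal["current"] == goal["target"]

def stop(goal):
    print("stop at:", goal["current"])
    goal["state"] = "done"

PRODUCTIONS = [(can_count, count), (can_stop, stop)]

# Each cycle fires the first production whose condition matches the goal;
# the model halts when nothing matches. Real ACT-R layers declarative memory,
# activation/utility learning, and latency estimates on this loop.
while True:
    for condition, action in PRODUCTIONS:
        if condition(GOAL):
            action(GOAL)
            break
    else:
        break
```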
Reading:
- CMU CogModeling Reading List --- starting point for a collection of modeling papers; scroll down for the list of PDFs
- Real Time Eye Tracking for Human-Computer Interfaces --- Amarnag et al., 2003
- CAEVA: Cognitive Architecture to Evaluate Visualization Applications --- Juarez-Espinosa, 2003
Device Manufacturers
| Manufacturer | URLs | Notes |
|---|---|---|
| Tobii | tobii.com; demos: glasses demo, usability testing with Tobii, gaze interaction | Products include glasses, sensors integrated into 17" displays, and independent sensors. They run an app store for disseminating projects built with the Tobii API (see the gaze-streaming sketch below this table). |
| Mirametrix | mirametrix.com; demos: set-up, overview + web-browsing demo | Webcam-style sensor |
| SensoMotoric Instruments (SMI) | smivision.com | Head-mounted and webcam-style tracking devices |
| LC Technologies | eyegaze.com; demos: overview | Not clear to me whether this is a combined sensor-display |
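For a sense of what integrating any of these devices involves on the software side, below is a hedged sketch of the callback-style streaming interface gaze trackers typically expose. `GazeClient` and everything inside it are hypothetical placeholders (it emits fake random samples), not Tobii's or any other vendor's actual SDK; the point is only the pattern of subscribing a callback and buffering (timestamp, x, y) samples.

```python
# Hypothetical sketch of a callback-based gaze-streaming client.
# GazeClient is a stand-in for a vendor SDK and produces fake samples.

import queue
import random
import threading
import time

class GazeClient:
    """Placeholder for a vendor SDK: delivers (timestamp, x, y) samples at ~60 Hz."""

    def __init__(self, on_sample):
        self._on_sample = on_sample
        self._running = False

    def start(self):
        self._running = True
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._running = False

    def _run(self):
        while self._running:
            # Fake gaze point; a real SDK reports calibrated screen coordinates.
            self._on_sample((time.time(),
                             random.uniform(0, 1280),
                             random.uniform(0, 1024)))
            time.sleep(1 / 60)

if __name__ == "__main__":
    samples = queue.Queue()              # thread-safe buffer for the callback
    client = GazeClient(on_sample=samples.put)
    client.start()
    time.sleep(1.0)                      # record for about one second
    client.stop()
    print("collected", samples.qsize(), "gaze samples")
```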
Other:
Agenda Items
Things to decide:
- Enthusiasm for investing in eye-tracking...
  - David's is high, but we all know how much he'll do...
- Can we enumerate some experiment ideas? Are they useful and novel?
  - The experiments proposed by Radu (nudging) and Caroline (multiview) might yield new hypotheses faster if the collected data includes eye focus (see the fixation-detection sketch after this list)
  - Cognitive modeling to estimate performance with brain-diagram use would have more to work with if the acquired data includes eye tracking
- Head-mounted system, or a device that will sit in front of the display?
- Will we use this in the CAVE or for mobile devices, or just desktops?
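As a concrete example of what "eye focus" data would add to the proposed experiments, here is a simplified dispersion-threshold (I-DT-style) fixation detector over raw gaze samples. The sample format, thresholds, and function name are assumptions for illustration; real analysis would use the vendor's tooling or a vetted library and tracker-specific parameters.

```python
# Minimal, simplified dispersion-threshold (I-DT-style) fixation detector.
# Assumes raw gaze samples as (timestamp_seconds, x_pixels, y_pixels) tuples;
# thresholds below are illustrative, not tuned for any particular tracker.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.10):
    """Return (start_t, end_t, centroid_x, centroid_y) for each fixation."""
    def emit(window):
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        return (window[0][0], window[-1][0], sum(xs) / len(xs), sum(ys) / len(ys))

    fixations, window = [], []
    for sample in samples:
        window.append(sample)
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # Dispersion exceeded: close the previous window as a fixation if it
            # lasted long enough, then restart the window at the current sample.
            if window[-2][0] - window[0][0] >= min_duration:
                fixations.append(emit(window[:-1]))
            window = [sample]
    if window and window[-1][0] - window[0][0] >= min_duration:
        fixations.append(emit(window))
    return fixations

# Example: 60 Hz samples dwelling near (100, 100), then jumping to (500, 400),
# which yields two fixations.
samples = [(i / 60.0, 100 + (i % 3), 100 + (i % 2)) for i in range(30)]
samples += [(0.5 + i / 60.0, 500 + (i % 3), 400 + (i % 2)) for i in range(30)]
print(detect_fixations(samples))
```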