This particular driving demonstration measures the driver's level of attention, which the computer tracks and rates. In the future, that rating could be used to require more human involvement (if the driver is falling asleep or not paying attention), to reduce potentially distracting stimuli by turning off non-essential things (music, phone…), or to offload secondary tasks to the computer.
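To make the idea concrete, here is a minimal sketch of how a rating could drive those graduated responses. Everything here is an assumption for illustration: the function name, the thresholds, and the action labels are invented, not part of Intel's system.

```python
# Hypothetical sketch: thresholds and action names are invented
# for illustration; they are not Intel's actual design.
def choose_intervention(attention_rating: float) -> list[str]:
    """Map a 0.0-1.0 attention rating to the kinds of responses
    described above (higher rating = more attentive driver)."""
    actions = []
    if attention_rating < 0.3:
        # Driver may be drowsy or badly distracted:
        # demand more human involvement (e.g. an audible alert).
        actions.append("alert_driver")
    if attention_rating < 0.6:
        # Reduce non-essential stimuli such as music or phone calls.
        actions.append("mute_media")
        actions.append("hold_calls")
    if attention_rating < 0.8:
        # Hand secondary tasks (messages, navigation chatter)
        # over to the computer.
        actions.append("defer_secondary_tasks")
    return actions
```

A fully attentive driver (rating near 1.0) would trigger no actions, while the responses stack up as the rating falls.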
This works by observing regions of the brain that are called upon when more processing is required, whether because of the complexity of the road or because of external influences (questions, distractions…). Typically, a simple drive without traffic or street signs to read requires much less brain activity than a high-speed race among multiple cars – you knew as much. Here, the computer did a good job of assessing the driver's level of attention: when Paul Crawford started asking the driver questions, the computer's "attention rating" dropped, and human observers nearby could perceive an actual decline in driving quality.
At the moment, Intel is using a head-mounted infrared sensor to look at the activity of the top/front part of the brain. There are more complex setups that capture more signals, but Intel's researchers picked this one for practical reasons: the alternatives are much harder to deploy and less likely to be usable in the real world.
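As a rough illustration of the signal processing involved, here is a toy estimator that turns a stream of raw sensor readings into an attention rating. This is a sketch under stated assumptions only: the class, the calibration values, and the simple linear mapping are all invented for this example and say nothing about how Intel's software actually works.

```python
# Illustrative sketch only: a toy way to turn raw readings from a
# head-mounted sensor into a rough 0.0-1.0 "attention rating".
# All names and constants here are assumptions, not Intel's method.
from collections import deque


class AttentionEstimator:
    def __init__(self, baseline: float, overload: float, window: int = 10):
        # baseline: activity level during a calm, simple drive
        # overload: level observed under heavy distraction
        self.baseline = baseline
        self.overload = overload
        self.readings = deque(maxlen=window)  # smooths out sensor noise

    def update(self, reading: float) -> float:
        """Feed one raw reading; return the current attention rating,
        where 1.0 means full attention on the road."""
        self.readings.append(reading)
        avg = sum(self.readings) / len(self.readings)
        # Map activity in [baseline, overload] onto a rating in
        # [1.0, 0.0] and clamp to that range.
        load = (avg - self.baseline) / (self.overload - self.baseline)
        return max(0.0, min(1.0, 1.0 - load))
```

The averaging window stands in for the filtering any real system would need, since raw physiological signals are noisy.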
The demo was pretty convincing, and although a practical application is still far away, the fact that user attention can be quantified is a great stepping stone, because metrics that are "measured" are much more likely to be improved. Combine that with other technologies that scan for vital signs (drowsiness, heart rate…) and it would not only be possible to offer meaningful safety measures one day, but computers could conceivably manage their masters' workload in real time to keep them at "peak efficiency".