Built into our hands are nerves that send signals to the brain to let it know what we’re touching. For example, they tell the brain whether an object we touched is cold, hard, soft, sharp, and so on. But how does one convey that kind of information to a computer, which has no such nerves?
Researchers at MIT think the answer might lie in simulating those nerves: they have created a sensor-packed glove that can help teach computers to identify objects by touch. Dubbed the “scalable tactile glove”, it has 550 tiny sensors embedded in it, each designed to record pressure signals.
This means that, depending on how we grasp an object and where the various pressure points fall, that information can be used to help computers identify objects. Ultimately, it would allow a computer to classify objects and predict their weight without any visual input. So far the researchers have had some success: the glove identified objects from a dataset they compiled with 76% accuracy. It is also more efficient and cost-effective than other systems, having been made from readily available materials for just $10.
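To give a rough sense of the idea, here is a minimal sketch (not the MIT team’s actual method, which uses a neural network) of how a grid of pressure readings could be turned into an object guess. The sensor count matches the glove’s 550 sensors; the object labels, frame values, and the nearest-centroid rule are purely illustrative assumptions.

```python
# Illustrative sketch only: each glove reading is flattened into a vector of
# 550 pressure values, and an unknown frame is matched to whichever object's
# average pressure pattern (centroid) it sits closest to.
from math import dist        # Euclidean distance between two sequences
from statistics import mean

NUM_SENSORS = 550  # number of pressure sensors in the glove

def centroid(frames):
    """Average pressure frame across several grasps of the same object."""
    return [mean(values) for values in zip(*frames)]

def classify(frame, centroids):
    """Return the object label whose centroid is closest to this frame."""
    return min(centroids, key=lambda label: dist(frame, centroids[label]))
```

A real system would learn far richer features from the spatial layout of the pressure points, but even this toy rule captures the core intuition: different objects press on different parts of the hand in characteristic ways.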
Ultimately, this information could be used to train computers to hold certain objects by understanding how we, as humans, hold them. In the future, that could lead to more nimble prosthetic limbs, or to prosthetics designed for specific tasks.
Filed in AI (Artificial Intelligence). Source: news.mit.edu