If there is one thing that separates humans from robots, it is context. As humans, we understand context and subtext: if we drop something on the ground, we can tell a friend to “pick it up,” and our friend will know what “it” is. Robots, for the most part, still lack this contextual awareness, but that is something researchers are trying to instill.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently published a paper on teaching robots to follow contextual commands. The idea is for robots to be aware of the objects and environment around them, so that users won’t need to be so specific when issuing commands.
According to CSAIL postdoc Rohan Paul, one of the lead authors of the paper, “Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors. This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”
As it stands, plenty of companies are working toward making AI more contextually aware. Google, for example, has reworked Google Translate to take the context of a whole sentence into account rather than translating literally, word for word.