Google Research scientists at Everyday Robots have demonstrated AI-driven robots that combine three fundamental skill sets rarely found in a single machine: chatbot-style language understanding, autonomous navigation, and manual dexterity.
These three functions are usually developed separately. For example, industrial robots can move autonomously to transport parts, or manipulate objects to assemble them, but they typically do not do both.
And factory-style robots typically don’t understand (or listen to) what you say, and they certainly won’t respond in natural language.
In that sense, Google is trying to assemble several technologies that are key to making helper robots useful in the “world of humans,” whereas a factory or an automated bar is, in effect, a “world of robots.”
Typically, robots work in a “structured environment” where everything has its place, and anything unexpected can throw the robot off. By contrast, our human environment is chaotic and full of things that turn up unexpectedly.
Google uses Artificial Intelligence (AI) to teach robots to work and move in this unstructured world alongside people. It’s a monumental effort that looks promising but will require many more years of research.
Like your Google Assistant, the robot should understand basic commands or phrases that apply to its function, such as “clean the spill” or “give me some chips.”
From there, it plots a course of action that may involve several steps to complete the task. This is a higher level of AI, in which several neural networks work together to understand the request, sense the environment, plan the next actions, and carry them out.
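To make that pipeline more concrete, here is a minimal Python sketch of such a “command → plan → sense → act” loop. Every name in it (plan_steps, sense_environment, Skill) is a hypothetical stand-in for illustration, not Google’s actual system, which uses trained neural networks rather than canned plans.

```python
# Hypothetical sketch of a "command -> plan -> sense -> act" loop.
# All names here are illustrative assumptions, not Google's actual API.
from dataclasses import dataclass
from typing import List


@dataclass
class Skill:
    """One low-level capability the robot already has (e.g. 'pick up sponge')."""
    name: str

    def execute(self, observation: dict) -> bool:
        # A real robot would run a learned control policy here;
        # this stand-in just reports success.
        print(f"executing: {self.name} (seeing {observation['objects_seen']})")
        return True


def plan_steps(command: str) -> List[Skill]:
    """Stand-in for a language model that decomposes a command into skills."""
    canned_plans = {
        "clean the spill": ["go to the counter", "pick up the sponge", "wipe the spill"],
        "give me some chips": ["go to the pantry", "pick up the chips", "bring them to the user"],
    }
    return [Skill(step) for step in canned_plans.get(command, [])]


def sense_environment() -> dict:
    """Stand-in for the perception networks; returns a dummy observation."""
    return {"objects_seen": ["sponge", "chips"]}


def run(command: str) -> None:
    # Understand and plan first, then interleave sensing and acting step by step.
    for skill in plan_steps(command):
        observation = sense_environment()
        if not skill.execute(observation):
            print(f"step failed: {skill.name}; a real system would replan here")
            break


run("clean the spill")
```

In a real system, the canned dictionary would be replaced by a language model that decomposes the request and scores which skills are feasible, and the perception step would feed back into replanning whenever a step fails.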
Right now, this is a research project, and Google will need to learn more about its real-world capabilities before envisioning a commercial application. We can’t wait to see what the next step is.