This particular concept was presented to the masses this week via a new patent application published by the U.S. Patent and Trademark Office. Titled “User Profiling for Voice Input Processing,” it describes a system that would identify individual users by their voices when they speak aloud.
At the moment, voice control has been implemented in one form or another on a number of portable devices. Such systems rely on word libraries that define the commands users can speak to interact with the device. Over time, those libraries can grow stupendous in size, bogging down the whole voice input process and taking more time to decipher, taxing the device’s processor.
Apple hopes to avoid this necessary evil with a system that recognizes a user by voice and executes instructions based on that user’s identity. That would make hands-free interaction with the iPhone, for example, more efficient.
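The idea can be sketched roughly as follows: identify the speaker first, then restrict recognition to that user’s much smaller personal word library rather than searching one giant shared one. This is purely an illustrative sketch of the concept in the patent filing; all names, data structures, and the toy voiceprint-matching step are assumptions, not Apple’s actual implementation.

```python
# Hypothetical sketch: per-user voice profiles narrow the vocabulary
# that has to be searched, speeding up voice input processing.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    voiceprint: tuple          # stand-in for a real acoustic feature vector
    vocabulary: set = field(default_factory=set)

def identify_speaker(sample: tuple, profiles: list) -> UserProfile:
    # Toy matching: pick the profile whose voiceprint is closest
    # to the incoming sample (squared Euclidean distance).
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda p: distance(p.voiceprint, sample))

def recognize(phrase: str, sample: tuple, profiles: list):
    # Identify the user, then search only that user's personal
    # library instead of a device-wide word list.
    user = identify_speaker(sample, profiles)
    return phrase if phrase in user.vocabulary else None

profiles = [
    UserProfile("alice", (0.1, 0.9), {"call mom", "play music"}),
    UserProfile("bob", (0.8, 0.2), {"directions home", "play music"}),
]

print(recognize("call mom", (0.12, 0.88), profiles))  # found in alice's library
print(recognize("call mom", (0.79, 0.21), profiles))  # absent from bob's -> None
```

The efficiency gain in this toy version comes from the membership check running against one user’s small vocabulary; a real recognizer would instead constrain its acoustic search space, but the profiling principle is the same.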