Voice as a natural user interface – Hacker Noon

For a long time, humans have been trying to make machines more usable by adding contraptions like the mouse and keyboard; more recently, we added touch interfaces. But what is a more convenient way of instructing a machine than simply telling it what to do? Speech is arguably one of the most natural ways of interacting with anyone or anything. Simple things like printing a file are unnecessarily hard with current systems (worse if you have to configure the printer first). Wouldn't it be nice to just tell it to print this goddamn document?

My bet is that a voice interface will be one of the key components of the ultimate computer. Right now, however, we have some problems. Your pet can recognize you from more than a hundred yards away. Your spouse knows when you are in a sour mood simply from your expressions and body language, and responds accordingly. They know when they've done something wrong.

Your computer doesn’t know if you are sad or angry. At the moment, it doesn’t even know if you’re in the room.

The challenge for us is not to design bigger screens, better sound quality, or better-looking, easier-to-use GUIs. It is to make machines understand and respond to your needs. Computers should be able to understand the various verbal and non-verbal ways in which humans interact with each other every day. When I wink at my friend from across the table in a particular situation, we both understand exactly what is meant. A lot is said without a single word.

What is interesting is that many of you are already thinking about how to solve these problems. Enough technology exists to spark ideas that could very well become solutions.
