What do Bill Gates and PlayMotion Have in Common?

In actuality, not a lot. But apparently we both believe that the keyboard will eventually be deprecated in favor of more natural inputs.

Over the past 30 years, computers have changed dramatically in terms of processing power, graphics capability, and storage capacity. However, the one thing that hasn’t fundamentally changed is how we interact with the computer. We are still tethered via keyboards, mice, joysticks, and gamepads. Even the Nintendo Wii, as cool as it is, has you tethered to the experience through a wireless controller.

We’re doing our best to change that at PlayMotion, and in some cases on a grand scale. Some of our experiences have hundreds, even thousands, of people collaborating simultaneously using natural gestures. We believe that the human body is the ultimate input device. Simply put, it is capable of movements and gestures that cannot be replicated by any traditional input device.

Gates sees diminished role for keyboards

PITTSBURGH – People will increasingly interact with computers using speech or touch screens rather than keyboards, Microsoft Corp. Chairman Bill Gates said.

“It’s one of the big bets we’re making,” he said during the final stop of a farewell tour before he withdraws from the company’s daily operations in July.

In five years, Microsoft expects more Internet searches to be done through speech than through typing on a keyboard, Gates told about 1,200 students and faculty members Thursday at Carnegie Mellon University.

Gates also said the software that is proliferating in various branches of science, including biology and astronomy, must become even more advanced.

“They’re dealing with so much information that … the need for machine learning to figure out what’s going on with that data is absolutely essential,” he said.

Microsoft is trying to establish ties not only with university computer science departments but also with researchers in other scientific areas “to help us understand where new inventions are necessary,” Gates said.

Gates plans to retire as Microsoft’s chief software architect in July and focus on philanthropy.

Hey Bill, it’s one of the big bets we’re making as well, although I don’t think the next point on the curve is as simple as speech recognition and touch screens. It is probably a complex mix of things, including natural, untethered gesture recognition, one of our areas of interest. Let’s face it: touch screens and speech recognition have been around a long time. Granted, the technologies are much better now, but I can still type faster than I can dictate (then type to correct). However, consumers are now adopting touch technology en masse (e.g. the iPhone), as well as speech recognition (e.g. voice dialing on cell phones, Microsoft’s Sync technology for cars, etc.). But we have already come to expect those technologies … we’ve seen them mature over the years.

In my opinion, the next exciting point on the curve is the nexus of computer vision, gesture recognition, and visual immersion. I’ll post some more thoughts on this soon …



  1. Scott, the common denominator in the examples you give is that they are all devices, locations, or contexts where a keyboard and mouse are not feasible. In a typical office, home office, or kitchen, where there is room for a keyboard and mouse, those will continue to be the predominant input devices. Nothing is as easy to use and as productive as those two things, and nothing has shown anywhere near the ability to replace them yet. Face it, no one wants to talk to their computer or touch their monitor or wave their hands in the air if they don’t have to. Bill Gates has been saying this for 10+ years and he is still wrong. Except in non-traditional computing contexts, speech and touch interfaces will not replace the keyboard and the mouse, not for a very long time.

  2. Hi Paul – thanks for stopping by.

    I agree with much of what you are saying. I remember playing around with early speech recognition software 20 years ago. We’re *barely* at the point now where we are starting to see any degree of accuracy. But as with all things tech-related these days, the time-frames are becoming compressed. It won’t take 20 years for the next step on the curve.

    The stuff we do is typically large scale – so yeah, keyboards, etc. are not really feasible. I do think a strong case can be made for the practical applications of gesture recognition, speech recognition, and touch screens – but not on the desktop. Smart homes, for example.


  3. It may be that ‘gestures’ and alternative input devices open up ways to interact with computers in new places that we have never thought of. I look at how my 6-year-old knows what to do with a Wii remote intuitively and have to go ‘hmmmmmm’….. So think ‘add’, not ‘replace’.

  4. Hi again Paul.

    I think you’re right in the sense that it will be an additive process, rather than subtractive. I see the desktop market as having the longest way to go – given the fact that everyone uses a keyboard, mouse, etc. Business productivity could be a market for new input mechanisms, but then again, only if someone proves that you can actually be “more productive” with them.

    I think the huge play for sensory inputs is in large-scale affairs, and in situations where you have one active participant and many watchers (e.g. Minority Report). There are others. The one assumption I can definitely bank on is that the journey to discover the next point on the curve will be a fun one … no matter how it shakes out. :)

