Sunday 5 February 2017

Microsoft's Vision of the Future



In 2009 (a whopping eight years ago), Microsoft released a video showing their vision of what the future of computer UIs might look like (possibly in 2020). I only recently came across this video, but to me, even now, nearly a decade later, some of the technologies seem completely far-fetched - things that are still in the realm of sci-fi and nowhere near ready for consumers.

The first thing this reminded me of is a topic we discussed in class a couple of weeks ago: it takes decades for a tech idea to travel from the furthest reaches of the minds of researchers or visionaries, through the production chain, to the pockets or desks of consumers (although with the current Internet-of-Things trend, tech goes further than just your pocket or your desk!). It takes quite a few more years for new technology to mature and become highly usable and functional. The first touchscreen phone came out in the '90s, and even now, after all these years, companies are still releasing new touch interactions that further enhance productivity and usability. While the layman may believe that technology changes quickly, the reality is that it takes years and years before a single compelling idea can become widespread in the consumer arena.

Now, going back to the video, I wondered: some of these things certainly look impressive, but how far away are we really from having access to them? It seems I am not the only person to have asked this question, because I found my answer in the comments.

Roman Yoshioka provides the following breakdown of present-day approximations to some of the conceptual technologies seen in the video:

Just to give you all some hope, these are the things that are already possible: 
0:08 (Do you have a cat? - Real time translation): Skype translator  
0:38 (Shannon's stuff - Remote classroom): Google Classroom  
0:48 (Work Schedule - Digital calendar): Outlook, Google Calendar and every other calendar app 
1:08 (Flight tracking): I can only recall Cortana and Google Assistant doing this. 
1:35 (Air gesture): Leap Motion  
2:09 (Holographic info): Microsoft HoloLens, though with a huge device on, and Google Glass 
1:44 (Foldable phones): [RUMOR] the Surface Phone 
2:55 (Screen hubs): Surface Hub 
3:28 (Voice assistants): Cortana, Google Assistant, Amazon Alexa, Hound by SoundHound, etc. 
3:51 (Fingerprint recognition): Has been around for quite a while 
4:29 (Smarthomes): Maybe a combination between Alexa and Surface Hub? Possible but not done 
5:12 (Image recognition): Microsoft CaptionBot, Google Photos can do it too  
5:29 (Green roofs): Possible but barely implemented

In other words, it seems we are at the point in time where truly next-gen computer UI is in that experimental, still-developing stage, just as touchscreen phones were still being refined in the '90s.

When I first started taking this class, a single, worrying idea lurked in the back of my head: do we really need any new innovations in UI? The keyboard and mouse are decades old, yet they are still the preferred choice for interacting with PCs. Touchscreens are ideal for smartphones and tablets, and it doesn't seem like there are any better alternatives on the horizon. What else do we really need? But then again, what if the first GUI designer had asked why we needed GUIs when the command line worked so well? "Why do we need more RAM?", as one tech visionary so foolishly asked. It is human nature to be complacent. But this video suggests there are still innumerable things that could become more functional if they sported better UIs. It is our job as scholars of UI to find those things, document them, experiment with them, work with them, refine them, create them. This eight-year-old video has given me the inspiration I needed to believe that there is still so much work to be done before humanity can truly stop and say: yep, now our work is really done.

