Last week Microsoft kindly invited me along to their London offices to try out the snazzy new Hololens.
I’ve used several different virtual reality experiences in the past, and have always been blown away by them.
That’s sort of what I was expecting from the Hololens, but obviously I hadn’t done my research properly.
Rather than VR, it’s described as a ‘mixed reality’ device that ‘blends 3D holographic content into your physical world, giving your holograms real-world context and scale, allowing you to interact with both digital content and the world around you.’
So, here’s what I learned about the Hololens, bearing in mind I knew very little about it prior to the meeting.
A merging of real and virtual worlds
As mentioned, I incorrectly assumed the Hololens was a VR helmet.
In fact it’s an augmented reality device, overlaying virtual items onto your real-world environment.
Here’s the author expertly demonstrating the Hololens hand control, called the ‘air tap’
The virtual items have a fixed location, so you can walk round them and view them from different angles, or wander off and come back to them at a later date.
As such you can still interact with the people around you and carry out normal tasks, with the Hololens enabling additional digital activities rather than shutting you off in an entirely virtual world.
There’s no peripheral vision
Whereas VR headsets totally encase you in a new reality, the Hololens’ virtual images only appear within a small rectangle in the centre of the visor, and only when the Hololens is aimed in their direction.
What this means in practice is that virtual items that appear next to you are essentially invisible unless you turn to face them.
It might not seem like a massive problem, but it does hamper the UX somewhat.
For example, if you get too close to a virtual item then you have to keep moving your head around to take it all in.
And if there’s a virtual item on the floor you can’t just glance down at it; you have to tilt your whole head forwards and keep the Hololens aimed at the object for as long as you want to view it.
It doesn’t feel very natural.
B2B seems a more obvious avenue than B2C
It seems that most of the case studies and proofs of concept have so far been developed by B2B companies.
This seems like the more obvious route for Microsoft to go down currently, as it’s easy to see how the Hololens could enable collaborative working and prototyping, or guide people through some kind of technical work (e.g. a mechanic following repair instructions).
One of the demos I saw was developed by Case Western Reserve University to teach an anatomy class.
A computer generated skeleton appeared in the centre of the room, offering me a virtual look inside the human body.
At various stages of the demo you could see the heart beating, compare different types of bone breaks, and view the circulatory system in action.
It’s these types of functional applications that offer the best route to market, as personally I can’t see any major appeal to consumers.
One might assume that video gamers would be interested in trying it, but in my opinion it’s not going to compete with a VR PlayStation 4 game.
Equally, one of the demos I saw showed how someone might use the Hololens to decorate their living room with virtual pictures, artwork and other novelty items.
It was all quite impressive, but is having a virtual photo on your wall better than just hanging a real photo on the wall?
The Hololens also allows you to have a virtual internet browser hovering in your room, but again, I feel it’s easier just to browse the web on your smartphone or open a laptop rather than donning a headset.
The major drawback in all this is the lack of any peripheral vision: all these virtual trinkets only appear when you’re aiming the Hololens directly at them; otherwise you’re just sitting in an empty room.
All is not lost for the B2C market, though.
Microsoft is currently working with a number of automotive brands, and it’s easy to see how the Hololens could be used in a car showroom to demonstrate different vehicle colours, interiors, wheels, etc.
But I’m dubious about the broader consumer market.
Voice control works well
The voice commands I tried were all simple and pretty much worked first time.
Voice control was a major problem with the now-defunct Google Glass, which reacted randomly to nearly everything the wearer said.
That said, the commands on the Hololens were limited to things like ‘remove’ and ‘adjust’.
It feels like a prototype
The Hololens looks very slick and I enjoyed the demo, but in the 30 minutes or so I was using it the technology suffered a couple of glitches.
The virtual images are supposed to be rooted in one spot, but on occasion they would jump about as I moved around the room.
At one point I was instructed to look to my right to view some kind of virtual toolbox, but it appeared down by my feet instead.
Another demo had to be restarted when the stationary object I was viewing suddenly slid over to the other side of the room.
These small glitches didn’t bother me at a press demo, but I’d be less impressed if I’d actually bought one of the devices.
Hopefully these bugs will all be ironed out in due course.