The Day That Everything Changed

ARKit and ARCore bring immersive computing at scale. What will we build?

June 5, 2017, was the day that everything changed. During Apple’s 2017 Worldwide Developers Conference, the company announced the beta availability of ARKit, the native software layer for blending digital objects with the real world. In the four short months since that announcement, ARKit has taken the world by storm, unleashing a series of developments that will not only put augmented reality at the forefront of mobile computing, but also provide new momentum for virtual reality. And just in time.

AR’s Long and Winding Road

Augmented reality had been wandering in the wilderness for some years. The explosion of the smartphone market led to a crop of startups circa 2010, such as Metaio and Layar, that brought the camera and location services together in innovative applications. These ventures enjoyed mixed success at best, getting acquired or pivoting as the market was slow to materialize.

One notable exception is Vuforia, a supplier of AR middleware that spun out of Qualcomm and was purchased by CAD software giant PTC. Vuforia has continued to find purchase via a combination of enterprise and consumer applications, and powers tens of thousands of AR experiences today. But for years, even Vuforia’s AR was considered a “gimmick” in many applications, not an essential enabling layer of what many of us believe is the next step change in computing: the immersive interface.

Then along came VR.

Back In the Limelight

Looking back on the stratospheric rise of virtual reality since the Oculus Kickstarter five years ago, I am still amazed. I am obviously jaded about this space, having worked in it for over twenty years, so I was very skeptical about the possibilities when I first tried the DK1. It wasn’t just the low resolution, nausea-inducing tracking, and insufferable form factor; those should have been enough to kill this thing in its infancy. No, I was more skeptical about the market, because I had been down this road before. I couldn’t imagine that consumers were ready for an immersive computing experience, because I had seen too many failures, both personal and industry-wide, that were clear indications that the world was not ready for 3D.

Obviously, I was wrong about Oculus. Enough things had changed in recent years, apparently, that folks could see the potential in a fully immersive computing experience. Not just the tech industry, but consumers and customers. And so, game on: here we are, five years into the immersive computing revolution, thanks in large part to the Oculus Rift.

With the resurgence of consumer VR came a parallel renaissance in augmented reality. AR industry players wisely took advantage of the spotlight VR cast on immersive technology, and renewed their marketing efforts, riding on its coattails.

This doesn’t mean that AR was standing still that whole time. Pokémon GO and Snapchat filters shipped as mass-market consumer AR phenomena, followed by Facebook’s camera-based AR, which premiered at F8 this year. Microsoft HoloLens and Google Tango have also been pushing the envelope on industrial AR hardware for several years. But I think it’s fair to say that these projects have enjoyed additional consumer awareness due to the massive buzz that was building thanks to VR.

And right as AR stepped back into the limelight, ready for its closeup, along came ARKit.

Immersion at Scale

On the heels of ARKit’s release, Google announced ARCore, an Android API that brings a subset of Tango’s AR technology to consumer-grade Android phones, notably the Samsung Galaxy S8 and Google’s Pixel line. ARCore is similar in features to ARKit, offering single-plane surface recognition, positional tracking — so you can blend true 3D graphics, not just sprites overlaid on your camera image — and light estimation.

ARKit and ARCore leave out some of AR’s more promising features, some of which require additional hardware support, such as environment reconstruction (aka SLAM). But it’s reasonable to think that, over time, those core technologies will migrate into newer generations of mobile devices. In the interim, smart software libraries like Vuforia can fill the gap with a combination of on-device APIs and cloud-based services (not to mention provide much-needed cross-platform capability across the two operating systems’ APIs).

While the feature sets for ARKit and ARCore don’t run the gamut of what many of us would like to see for “true” AR, this is a significant development nonetheless. Because with these two systems, we now have a baseline AR capability that spans the two most popular mobile ecosystems, and reaches nearly half a billion devices.

And Apple isn’t standing still. Today, the world changed again with the company’s press event about the upcoming iPhones. iOS 11 ships on September 19th, and with it, the ability to deliver AR to between 300 and 400 million phones. The iPhone 8, coming out on September 22nd, has even tighter integration among the components that run AR. And the iPhone X, previewed today, comes with Face ID, based on a TrueDepth camera system that has a depth camera, IR camera, flood illuminator, dot projector, and an on-device neural network on a chip (the A11 Bionic chip… what?) — the better to recognize you and only you, securely. The same face-tracking tech is also used to drive true 3D AR filters as well as Animoji — cartoon characters that speak for you in your instant messages.

It’s safe to say that Apple appears to be doubling down on its AR bet.

Now that the big guys have stepped into AR with full force, the world has taken notice. We’re seeing a huge wave of interest in AR from game developers, storytellers, brand creatives, and enterprise application developers. It’s fair to say that we could be approaching a tipping point for immersive development that we haven’t seen with VR, because now AR promises scale, based on the phone in your pocket, with no need to strap anything funky on your head.

VR Will Have Its Day (Again)

This reinvigorated interest in AR comes at a time when VR is sputtering a bit. Many of us knew that mass-market adoption of VR was going to take a while. Unity’s CEO (and my boss) John Riccitiello spelled this out in a keynote address at the Vision Summit in 2016, a talk which has come to be known as his “gap of disappointment” speech. The basic idea was that, while we’re bullish on the long-term growth of XR, we know that things are going to take longer than a lot of folks expect.

John also did the keynote at this year’s VRLA conference, and his talk there featured a slightly more upbeat, updated version of this concept. He laid out what he thinks it’s going to take, specifically, to reach mass adoption of XR. Have a look if you haven’t seen it.

The biggest takeaway from this talk is that we need two critical things to hit mass scale: 1) a total cost of ownership of under 1,000 dollars, and 2) a couple of hundred million units shipped.

VR is nowhere near that yet. While costs are quickly dropping, we are generously at 10 million units (not counting Google Cardboard; but that’s a debate for another day).

ARKit and ARCore, on the other hand, get us to that mass scale in one stroke.

Now, John wasn’t talking about AR in that particular keynote. He was focused on VR. But these two technologies occupy a spectrum of immersion. At the highest level, VR, AR, and headset-based mixed reality are all about immersive 3D graphics. Yes, there are some points of divergence. But in general, it’s real-time 3D objects, environments, and characters viewed in an immersive, 360-degree environment. The skills you need to build for one translate well to the others, and consumers will become more and more ready, even expectant, for interactive 3D content regardless of its delivery medium. And of course, you can use Unity to build for all of them. Mwahah…

The recent big developments in AR threaten to steal VR’s thunder. We are already seeing a bit of an exodus by developers attracted by the lure of scale and the potential for monetization that it brings. Despite VR offering a more complete and compelling trip, the user base for AR is two orders of magnitude higher. Why invest time and energy into creating something that can be seen by 10 million people, when you can put that same energy (actually, quite a bit less in most cases… AR content tends to be much less complex) into making something for an audience of half a billion? It’s hard to argue with those numbers.

But I am guessing this is simply a stage in the ongoing development of immersive computing. On balance, I think it’s going to be a good thing for AR to get the attention for the next little while. VR needs more time to get the pricing and form factors right. But there’s no replacement for the sheer joy of completely enveloping yourself — head, hands, and body — in an experience that takes you to another place. VR enables magical realms that AR can’t, by offering complete escape. The pendulum may swing to AR for a while, but there are so many great use cases for VR that I believe it will have its day again, and the fresh energy that AR brings to the world of immersive computing should give VR enough lift to keep rolling, too.

The Right Stuff

Whoever controls the high ground of cyberspace controls the Metaverse.

Anybody who doubts that virtual reality on the web is a good idea needs to start paying attention. Last Friday, over 800 people attended the SFHTML5 Meetup, shattering attendance records, to learn about browser-based VR and chat with a high-powered group of thought leaders. For two hours, Mozilla’s Josh Carpenter, Brandon Jones from Google, Leap Motion founder David Holz, DODOcase’s Patrick Buckley, and I dropped beats about creating low-cost, accessible, no-download VR using just JavaScript and your text editor.

After the meetup, we rolled to the Upload party to see the latest VR hardware, killer demos, and body-painted go-go dancers. It was a VR night for the ages. It didn’t bother me that none of the demos I saw at Upload were web-based. Let’s face it: it’s a lot easier and faster right now to build a Unity- or Unreal-based native app for Oculus and Gear VR. But as I’ve misquoted before: we don’t do these things because they’re easy, we do them because they’re hard. It’s going to take a while yet to roll out a VR web, but in the long run we think the effort will be worth it.

And so the intrepid group of warriors that kicked this night off laid it out brilliantly. Virtual reality is for everyone, not just the few: accessible, affordable, and easy to create and share with the world. It’s our shared belief that VR will be on the web in a big way, and that the web itself will have a VR interface in the not-too-distant future.

If you missed it, you can check out the video here.

Virtually Anywhere

OR, how the Web will eat everything in its path – again.

Now that WebGL is truly everywhere, the close-knit, doggedly persistent, and technically masterful group of folks who made it happen over the last several years can take a collective bow. From Vlad Vukićević, WebGL’s creator, to the countless engineers and working group members on browser teams who created a great spec and world-class implementations, to Ricardo Cabello Miguel, aka Mr.doob, for the amazing Three.js, to us camp followers and blogging faithful: congrats, felicidades, and kudos on a job well done!

But we can barely pause to enjoy the moment, because there’s a new game afoot: Virtual Reality. From the Oculus Kickstarter to the Facebook acquisition, this surreal ride has already turned the industry on its ear. New products will be forged, new markets will appear out of nowhere, and new fortunes will surely be made.

And a new war begins.

Ultimately, hardware like the Oculus Rift will become a commodity. That’s just the way of it. In the long term, the big winners will be the applications and content that run on VR hardware. And those applications will need software.

But what software? Whose software?

Do a web search for “Oculus Rift demos.” You will find many portal sites featuring amazing experiences. All of the demos are native code applications for PC or Mac – solitary, walled-garden experiences; massive downloads; not integrated with the Internet. Earlier this month, I was privileged to speak at the first-ever Silicon Valley Virtual Reality Conference. The demos on the show floor were also all native “silo” applications. Given my purview, I naturally asked several of the startup founders at the conference about their plans to use WebGL and other web technology. The reactions ranged from blank stares to outright truculence: it’ll never work, it’s too slow, and why would I want to do that? It’s a familiar tune I’ve been hearing for years.

I imagine we’ll hear it for a while longer. In the rush to snag millions in venture money while it’s popping fresh, developers have flocked to proprietary tools, closed systems, and native apps. It’s hard to blame people. This is the easy way, the obvious way. But we don’t do these things because they are easy; we do these things because they are hard.

The hard way, the right way, the way that will win in the long haul, is to build virtual reality on the Web. Using HTML5, WebGL and CSS3, we can create VR experiences that run virtually anywhere, instantly accessible, with no downloads. Integrated. Connected. Social. Mashable. Hackable. Shareable. You know: the Web. Yes, it is early. And we’ll need some extra support in the browser itself to make it work. But that’s coming.
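
To make the “no downloads” point concrete, here is a minimal sketch (mine, not code from any of the demos) of what real-time 3D in a browser looks like: a handful of lines of JavaScript, assuming only that the Three.js library has been loaded via a script tag, that render spinning geometry in any WebGL-capable browser.

```js
// A minimal Three.js scene: the whole "engine" is a scene, a camera, and a
// WebGL renderer. Assumes three.js has already been loaded via a <script>
// tag, so THREE is available as a global.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// One piece of spinning geometry, shaded by its normals so no lights are needed.
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial());
scene.add(cube);

// The render loop. This is all it takes to put real-time 3D in a web page:
// no download, no install, just a URL.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.005;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```

Getting a scene like that onto a headset adds stereo rendering and head tracking on top, which is exactly the browser support described below.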

Last night I presented early demos of WebGL applications running in Oculus VR using various browser extension hacks. It’s not great yet, it’s still very early, but it’s promising. I also gave a great talk – but Mozilla’s Josh Carpenter (@joshcarpenter) stole the show. Josh threw down about how in 5 years’ time we won’t be navigating flat pages any more, that the whole interface to information will be in 3D, experienced through virtual reality display technology and new interface paradigms. An inspiring few minutes (dude, you had me at “psychedelic”)!

Josh’s manifesto wasn’t just idle speculation. He’s already working on it. In fact, today Mozilla shared that they are working on building Oculus Rift support directly into the Firefox nightly builds. Carpenter and Vukićević gave a live talk on Air Mozilla that outlines their plans. Josh actually dropped the bomb at last night’s meetup, and then was joined by Brandon Jones of Google, who is experimenting with doing the same in Chrome. There are no time frame commitments yet, but knowing these teams, it won’t be long. (Odds are they’ll have great support for the devices long before Oculus or anybody else actually ships a commercial headset.)
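
For the curious, the API these teams were sketching at the time looked roughly like the code below. To be clear, this is a hedged reconstruction rather than code lifted from the nightly builds; names like navigator.getVRDevices, HMDVRDevice, PositionSensorVRDevice, and the vrDisplay fullscreen option were experimental and shifted between releases. But it captures the shape of the idea: enumerate the headset, poll its pose every frame, and ask the browser to present to it.

```js
// Rough sketch of the early, pre-standard WebVR API as it appeared in
// experimental builds. Names and signatures were in flux, so treat this as
// the shape of the idea, not a stable API. Assumes a Three.js scene, camera,
// and renderer like the ones in the snippet above.
navigator.getVRDevices().then(function (devices) {
  // The headset and its head-tracking sensor were exposed as separate devices.
  var hmd = devices.filter(function (d) {
    return d instanceof HMDVRDevice;
  })[0];
  if (!hmd) return; // no headset plugged in

  var sensor = devices.filter(function (d) {
    return d instanceof PositionSensorVRDevice &&
           d.hardwareUnitId === hmd.hardwareUnitId;
  })[0];
  if (!sensor) return;

  function onFrame() {
    // Poll the head pose each frame and apply it to the camera.
    var state = sensor.getState();
    if (state.orientation) {
      camera.quaternion.set(state.orientation.x, state.orientation.y,
                            state.orientation.z, state.orientation.w);
    }
    // A real HMD path renders the scene twice, once per eye, using the
    // per-eye offsets and fields of view the device reports; omitted here.
    renderer.render(scene, camera);
    requestAnimationFrame(onFrame);
  }
  onFrame();

  // "Entering VR" meant requesting fullscreen with the headset attached,
  // typically from a click handler. The call was vendor-prefixed in those
  // builds (mozRequestFullScreen / webkitRequestFullscreen).
  renderer.domElement.mozRequestFullScreen({ vrDisplay: hmd });
});
```

The details will no doubt have changed by the time you read this, but the division of labor is the point: the browser handles device discovery and presentation, and everything else is ordinary JavaScript.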

While these developments roll out, I imagine we are going to continue to see the bulk of VR development on closed platforms using proprietary tools, and active resistance, nay-saying and political posturing. For a while, it will be a war, resembling the fracas over HTML5 on mobile. But ultimately, the Web will win VR as it did mobile, and this, too, will pass.

The language of the Metaverse will be JavaScript. The platform will be the browser. To paraphrase Vukićević: the Web is the Metaverse – just with a 2D interface. Well, it’s time for those walls to start tumbling down. It’s going to be amazing to watch.

In the meantime, native app devs: go forth and build silos. Create your mind-blowing Oculus demos. Stick them in front of investors – and have your checkbook ready to sign before their eyes uncross. Godspeed, I say!

But word to the wise: learn JavaScript and HTML5 too. You’re gonna need it.