Archive for the ‘Apple’ Tag
The W3C VR Workshop team. See us in full 360.
After all this time, VR really is the Web’s Next Big Thing. A two-day workshop brought the Web’s best and brightest together to define the future of the medium.
TL;DR: The web community’s next big push on browser technology puts VR and AR front and center. The Immersive Web is going to happen, and sooner than we think. It will take vision, hard work, compromise, and lots of really smart people.
The New Normal
I’m not really sure how it happened, but within two years WebVR — the technology that connects web browsers to virtual reality hardware — has gone from a wacky idea to the new normal.
WebVR was hatched a few years back by WebGL creator Vlad Vukićević of Mozilla, collaborating with Google Chrome hotshot Brandon Jones, and publicly demoed for the first time at our San Francisco WebGL meetup in the summer of 2014. The demos were crude, and the tracking sucked, but it was a start, and we were excited by the possibilities.
But over the past year, something changed. Seed-stage investors started asking me about WebVR, wanting to know the lay of the land. VentureBeat sang its praises and touted the benefits. Microsoft announced support for WebVR in Edge and contributed to the spec to make it suitable for AR. And in a penny-drop moment, a team at Oculus approached me with their plan to get into the WebVR game in a big way. This led to WebVR being featured in the keynote at Oculus Connect 3, with a demo by my fledgling startup and WebVR prominently placed in the Oculus Developer Portal.
It appears that WebVR has arrived. In a previous post I went into some detail about why I think the timing is now. But that only covers the why. How we are going to get there has been by no means clear, till now.
The Final Frontier
While all this WebVR goodness was brewing, a group from the World Wide Web Consortium (W3C), the people that brought you the Web, organized a Workshop on Web & Virtual Reality. The workshop, hosted by Samsung in San Jose, CA on October 19th and 20th, brought together browser makers, content creators, application developers, and technical experts to discuss enabling technologies and standards for an open VR infrastructure.
The very existence of this meeting showed that the powers-that-be in W3C understand the importance of VR as the new frontier of web development. That’s a big win, in and of itself. But the reality of the event went beyond that. The technical quality of the presentations, the concrete plans shared by product vendors, and the positive energy and spirit of collaboration showed how seriously the industry is taking this initiative. In my view, this workshop was another watershed, in a watershed month for VR that included Daydream View, PlayStation VR and Oculus Connect 3.
The W3C Workshop covered broad ground and when it could, went deep. After two days of lightning talks, panel sessions and breakouts, my ears were fairly bleeding from the information overload. I am sure the organizers will post full meeting notes soon. (See the schedule page for the detailed list of speakers and topics.) In the meantime, here are some highlights.
- Sean White keynote. An old friend from the VRML days, now VP of Technology Strategy at Mozilla, Sean White delivered a homey keynote that hearkened back to early VR and the collaboration that built the web, setting the tone for the rest of the workshop.
- WebVR API Update and Browser Support. There are more tweaks coming in the WebVR APIs, leading to a likely 1.2 version before broad adoption. Google, Mozilla, Samsung and Oculus shared concrete plans and expected ship dates for desktop and mobile VR browsers.
- Lightning talks. A barrage of 5-minute lightning talks covered UI design, accessibility, 360 video formats and metadata, immersive audio, multi-user networking, and declarative VR languages and 3D file formats.
- Breakout sessions. We split the group into sessions on various topics, including high performance VR implementations, hyperlinking in VR, and extending the browser DOM into 3D.
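The WebVR API discussed in those sessions is already callable from script today. Here is a minimal sketch of the 1.x entry point, factored to accept a navigator-like object so the logic can be exercised outside a browser; in a real page you would pass `window.navigator`, and the stub display shown in the comments is hypothetical:

```javascript
// WebVR 1.x entry point: enumerate connected headsets via getVRDisplays().
// Takes a navigator-like object so the wiring is testable outside a browser.
async function findVRDisplay(nav) {
  if (typeof nav.getVRDisplays !== 'function') {
    return null; // this browser has no WebVR support
  }
  const displays = await nav.getVRDisplays();
  // Return the first connected display, or null if none are attached.
  return displays.length > 0 ? displays[0] : null;
}
```

Once you have a display, the 1.x flow is to call `display.requestPresent()` and then drive rendering with `display.requestAnimationFrame()` rather than `window.requestAnimationFrame()`, so the loop runs at the headset's native refresh rate.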
My fingerprints can be seen on a lot of the Workshop’s content, but I take particular pride in one development that I hadn’t even planned for. glTF, the new file format standard for 3D scenes, continues to build steam, with a groundswell of endorsement from industry partners over the last several months. glTF figured prominently in many discussions over the two days. Folks even floated the idea of glTF as a built-in format that browsers could natively read and display (analogous to JPEG images for 2D), with immediate application as 3D favicons, hyperlink transition animations, and built-in layer graphics, e.g. for splash screens and heads-up displays. Whoa. Mind blown.
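Part of what makes native glTF display plausible is that the format's top level is plain JSON, so identifying one is cheap. A toy sniffer, checking only the `asset.version` field the spec defines — this is illustrative, not a real validator:

```javascript
// A glTF scene is a JSON document whose top-level "asset" object carries the
// spec version. This toy sniffer checks only that shape; real validation
// (buffers, accessors, materials, etc.) is far more involved.
function sniffGltf(text) {
  let doc;
  try {
    doc = JSON.parse(text);
  } catch (e) {
    return null; // not JSON, so not glTF
  }
  if (!doc.asset || typeof doc.asset.version !== 'string') {
    return null; // missing the asset.version field the spec defines
  }
  return { version: doc.asset.version, sceneCount: (doc.scenes || []).length };
}
```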
Rip Van VRML
As fun a time as this was for me, it was at times surreal. A generation of brilliant technologists had locked themselves in meeting rooms to design the Metaverse, along the way rehashing ideas we explored two decades before, such as 3D scene representation, VR interface design, and shared presence. In earnestness and with great enthusiasm, the kids at this workshop were reinventing wheels left and right. But how could they know? Many of them were in middle school the first time around… if they were even born yet.
A modern day Rip Van Winkle, I had fallen asleep during a 1996 VRML meeting, and woke up twenty years later in the same room. Then I began to realize that things were different. People were holding these little computers in their hands. The Internet was fast and, apparently, connected everybody on the planet. And you could fit VR equipment in your backpack! Most of all, the people leading the discussions weren’t crazy futurists on the fringe; they were sane futurists working at mainstream companies.
It was 2016, and the real world was ready for the virtual one. While most of the problems are the same as twenty years ago, now we’re looking at them through a post-Facebook, post-Pokemon Go lens, and building on technology that is thousands of times more powerful.
A Community Effort
The W3C Workshop explored a vast landscape of technologies and standards, interface design and best practices, and tools and frameworks. While it was a great kickoff, it was just that: a kickoff. There will be much hard work going forward.
We already have a head start, and some success under our belt. WebVR is maturing and really working now, with 90FPS head tracking and support in many browsers. glTF is a ratified standard from Khronos, with steadily growing support. Much of what we discussed at the workshop simply extends existing standards like HTML Video and Web Audio. So we’re not tackling any of this from a standing start, or from an ivory tower. The output from the workshop will be brought back to working groups at W3C and Khronos, or new groups will form to tackle new pieces of the problem.
That, generally, is how the process will unfold. But it’s not just about process; it’s about people. People were the key to the success of this workshop. The organizers, Dominique Hazael-Massieux, Anssi Kostiainen, and Chris Van Wiemeersch, worked tirelessly to put on a first-class event, extremely well-run with top-notch content. The familiar names of WebGL lore — Josh Carpenter, Brandon Jones, Michael Blix and Laszlo Gombos — have been joined by new ones, like Justin Rogers and Amber Roy of Oculus, Nell Waliczek and Adalberto Foresti of Microsoft, Ada Rose of Samsung, Kevin Ngo and the A-Frame team from Mozilla, and Shannon Norrell and other energetic community-builders. There were numerous positive contributions and, given the headiness of the subject matter, the mood remained light throughout the proceedings. There was a real spirit of cooperation and hope for the future.
If we can bring a fraction of this energy to bear in the coming months, we will make great progress. The movement is growing. We have enough people on this, with big brains, pure motivations and a shared vision. And that’s good… because it takes a village to build a Metaverse.
Virtual Reality app store censorship has claimed its first (non-porn) victim. As reported today in Ars Technica, VR journalist Dan Arthur created Ferguson Firsthand, a 3D recreation of the Michael Brown shooting, packaged as a Google Cardboard app for the iOS store. The app store booted the piece on the grounds that it referred to a “specific event”, and therefore its scope was “too narrow” to be considered a valid application.
I’m sure the appnazi behind this moronic decision was just doing his job, just following orders, as they called it back in 1945. And more’s the pity. In this instance, the result was both tragic and ironic. More than that, it points to a fundamental deficiency of app store models. App stores aren’t set up for timely delivery of topical information. They’re set up for apps. Um, whatever those are. In this case, Arthur created an app to package up a story he wanted to tell, which, in the infinite wisdom of the store, was deemed too insignificant a hunk of content to warrant publication. I imagine if the piece had been included in a larger pack of content, say, Tragic Stories of Police Brutality in America, 2015, then the app store might have approved it. (Would it have?)
Ferguson Firsthand is really a news story. But it’s packaged as an app for technical reasons: at the moment, the only way to deliver virtual reality to people on a mobile device is to package it as an app. With all due respect to its creator, this should never have been an app. It should be a web experience, instantly published and instantly accessible, without restriction and without app store gatekeepers. This is an issue of consumer convenience but, more importantly, an issue of journalistic freedom.
Imagine news sites in the early days of the web. What if, back then, to get your daily news, you had to download a PDF? The web wouldn’t have happened – and you wouldn’t be reading this story right now. Information needs to be free, and the web is the key to that freedom. The Ferguson Firsthand incident is a sad outcome, and a perfect illustration of why we need WebVR, DIYVR, and an open ecosystem for VR in general.
The Metaverse is too big for an app store.
While the tech press was busy fondling itself over porn as the week’s big VR story, a more significant development went largely unreported. In a recent blog post, Oculus Chief Architect Atman Binstock published the lavish minimum hardware specs for the Oculus Rift. Binstock also announced the company’s decision to suspend all OS X and Linux development indefinitely. The news undoubtedly came as a gut-punch to the VR faithful. The lack of universal platform support means that any dreams people might have had about VR for the masses will have to be put on hold — either that or it’s time to look elsewhere for salvation.
At least we can stop deluding ourselves about one thing. The Oculus Rift is for games — period; full stop. The announcement makes this crystal clear, but in hindsight it shouldn’t come as a surprise. We saw early hints of the direction at the first Oculus Connect developer event, where it was evident that our little clubhouse of VR believers had been invaded by refugees from console and mobile gaming. The escalating hardware specs and the omnipresence of shoot-em-up content in the demo salon made it feel more like a GDC than a first-ever conference devoted to building a shared virtual future.
In the months since Connect, the Oculus team has done a respectable job supporting the SDK on other operating systems. And Oculus reps have been gracious whenever asked about applications that are obviously out of their gaming-first comfort zone. So it seems as if the company was really trying for a while there. But in the end it looks like they’ve decided to hunker down. I understand the strategy, and I actually think it’s the right choice for the company. Developing for one platform makes the job easier. Focusing on a well-understood, lucrative product category reduces the business risk. Competition from the Vive and Project Morpheus has raised the stakes — we may have a real dogfight on our hands next year. Last but not least, Oculus is on the hook to ship something soon, and I’m sure Facebook management’s patience isn’t infinite. I suppose it’s better for the Rift to be a success at something than not at all, so: godspeed, Oculus. But where does this leave the rest of us?
There’s hope coming from a couple of quarters. For desktops we have the Vive and OSVR. Valve has a good track record with supporting Mac and Linux, and HTC is committed to supporting all platforms, so it’s reasonable to expect we’ll get some love there. But — hello — nobody has a Vive in hand just yet. They ship over the next few months. OSVR is fully open, so I don’t think it’s out of the question that we are going to have solid cross-platform capability on those devices. Last time I looked, not that many people were using OSVR, but the move by Oculus just might open new inroads for it.
What about WebVR, you may be wondering? Oculus is the only desktop device that browsers support right now. Sign of the times: Josh Carpenter, my pal on the Mozilla VR team, told me they “just bought a bunch of PCs” and he’s got one on his desk next to his Mac Pro. Sigh.
On the mobile side, things are brighter, but still murky. Gear VR is the top choice, but it’s far from ubiquitous, and definitely not cross-platform. Cardboard looks to be the ultimate winner, but we’ll need more high-res phones and faster tracking. I hear the Cardboard team has been staffing up with high-profile talent, so maybe these are on the way soon.
Long story short… there’s no short story. Platforms are proliferating, and each of us is going to have to pick a battle. Oculus has made a choice which ultimately will benefit the industry — by all means go forth and make VR gaming a mainstream category! — but in the short term they have widened the gap between game developers and everyone else.
Finally, you have broader considerations that might follow what you would call the “falling domino” principle. You have a row of dominoes set up, you knock over the first one, and what will happen to the last one is the certainty that it will go over very quickly…
— Dwight D. Eisenhower, April 7, 1954, on the rise of communism
It’s taken a while, but it looks like the final domino is about to fall. The global onslaught of WebGL was already unstoppable – once Microsoft got on board with IE 11 last year – but now it’s official, at least on the desktop. Today at WWDC, Apple announced that WebGL will be turned on by default in Safari on the upcoming Mac OS X 10.10, code-named ‘Yosemite’.
For the 5+% of web users who browse with desktop Safari, this is good news. I would assume it takes desktop WebGL adoption to near 100% – blacklisted cards and ancient desktop hardware being the exceptions.
But of course, the $64B question on everybody’s mind is: what about mobile Safari? That’s what most people care about. Well, Apple didn’t say anything at WWDC, but HTML5 Test, a browser capability testing site, reports that WebGL is running on iOS 8.0! If you’ve got the beta installed, please go try it out and confirm this. I haven’t had a chance to yet. Also, here is an independent confirmation by Jay Moretti, tweeted by AlteredQualia.
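If you want to check a beta device yourself, detection is simple enough to run from the console. Here is the probe factored to take a canvas factory so the logic is testable outside a browser; in a real page you would pass `() => document.createElement('canvas')`:

```javascript
// Returns true if a WebGL context can be created. Takes a canvas factory so
// the probe logic also runs outside a browser. Older WebKit builds exposed
// the context only under the 'experimental-webgl' name, so try both.
function detectWebGL(createCanvas) {
  try {
    const canvas = createCanvas();
    const gl =
      canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
    return gl != null;
  } catch (e) {
    return false; // context creation threw, so no usable WebGL
  }
}
```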
So you could have a beginning of a disintegration that would have the most profound influences...
Thanks to some friends at Amazon, I was fortunate enough to get hold of a Kindle Fire HDX the minute they hit retail. Naturally, the first thing I did was pop open the Silk browser and, hoping against hope, opened a web site with WebGL.
Apple, Google: take notes.
This changes EVERYTHING.
WebGL continues to gain in popularity. New fun showcases are coming out all the time, and we are starting to see experimentation from big media names like Disney with its Find Your Way to OZ movie promo, and The New York Times showing a visualization of last year’s record hang-gliding flight. Google also recently released mobile WebGL support in the latest beta of Chrome for Android, and BlackBerry continues to lead with the most-conformant mobile HTML implementation, which has included WebGL for over a year.
With big brands, great content and growing mobile support, 2013 is already shaping up to be a great year. Is this the year Microsoft relents, and Apple uncorks WebGL for iOS? Here’s hoping.
WebGL just got beat hard with the ugly stick. As reported by Ken Russell of the Chrome team, there was a graphics driver issue on “some Mac OS hardware” that caused a nasty bug which corrupted memory. The culprit turned out to be antialiased rendering, that great technique for smoothing out jaggy lines along edges by blending adjacent pixels. The Chrome team decided to fix the bug by turning antialiasing off on all offending machines until Apple deals with a couple of bugs in their NVIDIA drivers. This sounds safe enough, and I can’t blame them. Better safe than sorry, especially when it comes to trashing computer memory.
Except that it turns out that when citing “some Mac OS hardware,” Ken meant: “The affected machines are those with NVIDIA and Intel GPUs.” Um, that’s pretty much any MacBook. Including mine. Suddenly one day, some awesome WebGL demos I wrote got visibly less awesome after Chrome auto-updated. Then I checked the same demos in Firefox; it turns out Mozilla followed suit and recently pushed a similar fix. The net result is that several of my previously beautiful projects now look shite. I am seriously bummed.
I could investigate one of several techniques for doing my own antialiased rendering with custom GLSL shaders. But that’s R&D and workload I hadn’t budgeted for. What would be much better is for Apple to fix this situation ASAP. Dean Jackson from Apple is on the case (thanks for the email replies, Dino!) and I hope they fix it soon! If you’re having issues like I am, maybe add your voice to the chorus. The email archive for this thread contains info on the already-filed bug reports and an address for filing new bugs with Apple.
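In the meantime, you can at least detect when you’ve been downgraded: a browser may silently hand back a non-antialiased context even when you request one, and `getContextAttributes()` reports what you actually got. A sketch, with the function name being mine:

```javascript
// Request an antialiased WebGL context and report whether the browser
// actually honored the request -- on blacklisted GPUs the browser may
// silently grant a context with antialiasing disabled.
function requestAntialiasedGL(canvas) {
  const gl = canvas.getContext('webgl', { antialias: true });
  if (!gl) {
    return { gl: null, antialiased: false }; // no WebGL at all
  }
  // getContextAttributes() reflects what was granted, not what was requested.
  const attrs = gl.getContextAttributes();
  return { gl: gl, antialiased: !!(attrs && attrs.antialias) };
}
```

If `antialiased` comes back false, that’s the cue to fall back to a shader-based approach like FXAA rather than assuming the hardware smoothed your edges.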
Just a heads-up for everyone who may have wondered why their WebGL got ugly overnight… meantime I am seriously considering (shudder) buying a new Windows laptop to show off my stuff. Somebody stop me.