I’ve been following the Oculus Rift since the Kickstarter a couple of years back, and while I didn’t help kickstart the project, it has always been an intriguing idea. Of course, after a couple rounds of VC funding Oculus ended up being purchased by Facebook for a large chunk of cash, but that’s a different story. Having tested DevKit 1 and DevKit 2, I was really interested to see what changes have been made with the latest prototype. The short answer is that ventilation has improved (less fogging up of the glasses), the display resolution is now higher, the screen refreshes faster, tracking is better, and combined with the VR Audio the experience is more immersive than ever.

To be clear, this is the first time Oculus Crescent Bay has been demonstrated publicly, and the first time ever that Oculus has shown VR Audio to anyone outside the company. They held private screenings for the press and other "VIPs", and on the way there we passed the Oculus booth, which had a long line of people waiting for a chance to try the hardware. Jumping that line to go into a private screening, I couldn’t help feeling a bit sorry for them. As for Crescent Bay, things have come a long way since the last time I tried an Oculus headset (DevKit 2 at NVIDIA in September, if you’re wondering).

For Crescent Bay, Oculus put together a series of short demos that lasted about five minutes in total, by my estimate. All of them used positional audio, so as you turned your head or leaned in, you got a clear sense of the sounds moving around you. This isn’t really anything new – HRTFs (Head-Related Transfer Functions) have been used for positional audio for a while now – but combined with the goggles and stereoscopic 3D it’s very immersive. Oculus has licensed Visisonics’ 3D audio libraries, though they’re obviously doing a lot of customization to make things work with the Rift. I had seen some of the demos before, and some were more in line with what you would expect from indie games; a few, however, were really designed to impress.
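For the curious, the core of HRTF-based spatialization is just convolving a mono source with a pair of direction-dependent impulse responses, one per ear. The toy Python sketch below is not Oculus’ or Visisonics’ actual implementation; it substitutes crude interaural time and level differences (with made-up constants) for real measured HRTF data, but the overall structure is the same:

    import numpy as np
    from scipy.signal import fftconvolve

    FS = 48000  # sample rate in Hz

    def toy_hrir(azimuth_deg):
        """Crude stand-in for a measured HRIR: a pure interaural time and
        level difference. Positive azimuth means the source is to the right.
        Real HRTF libraries use measured responses that also capture the
        filtering of the head and outer ear."""
        az = np.radians(azimuth_deg)
        itd_s = 0.0007 * np.sin(az)                  # ~0.7 ms max ear-to-ear delay
        delay_l = int(round(max(itd_s, 0.0) * FS))   # left ear lags for right-side sources
        delay_r = int(round(max(-itd_s, 0.0) * FS))
        gain_l = 1.0 - 0.3 * np.sin(az)              # crude head-shadow attenuation
        gain_r = 1.0 + 0.3 * np.sin(az)
        h_l = np.zeros(delay_l + 1)
        h_l[delay_l] = gain_l
        h_r = np.zeros(delay_r + 1)
        h_r[delay_r] = gain_r
        return h_l, h_r

    def spatialize(mono, source_azimuth_deg, head_yaw_deg=0.0):
        """Render a mono signal as stereo for headphones. Head tracking enters
        here: the HRIR is chosen for the source direction relative to the head,
        which is what keeps sounds fixed in the world as you turn."""
        h_l, h_r = toy_hrir(source_azimuth_deg - head_yaw_deg)
        left, right = fftconvolve(mono, h_l), fftconvolve(mono, h_r)
        n = max(len(left), len(right))
        left = np.pad(left, (0, n - len(left)))
        right = np.pad(right, (0, n - len(right)))
        return np.stack([left, right], axis=-1)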

One was a cityscape that looked a bit like Gotham, with your view positioned on a platform high in the air. Looking down and stepping off the edge of the platform definitely gives a sense of vertigo, though sadly the demo didn’t let you plummet toward the ground. (And when you look down and can’t see your feet or any other representation of your body, it does pull you out of the experience a bit.) Another sequence has a T-Rex come stomping around a corner, similar to a scene from Jurassic Park. I was admiring the level of detail when the dinosaur puts its face right next to yours, opens its mouth, and roars; the little bits of spittle flying through the air are a nice touch. Finally, there’s a slow-motion on-rails sequence where your view moves forward toward a large alien robot with bullets, missiles, and even cars flying through the air – NVIDIA called this the “car flip demo” back in September. This was one demo where I definitely noticed the increase in visual fidelity from the higher resolution display, with the VR Audio adding to the immersion.

In terms of hardware, Oculus wouldn’t provide very many specifics on the display, but all indications are that they’re using a 2560x1440 OLED panel like the one in the Samsung Galaxy Note 4. While they wouldn’t confirm the actual resolution, they did walk us through the changes they’ve made since DevKit 2. DK1 was the starting point, and it used a 1280x800 60Hz LCD; while it looked okay, pixelation was very visible and there was some ghosting between images. For DK2, Oculus switched to a 1920x1080 OLED display and drove it at 75Hz. They also moved to low persistence, where the image is shown on the OLEDs for 2ms and then the screen is blacked out; this works better with our eyes and greatly reduces ghosting, though some was still present at times with DK2. Crescent Bay increases the refresh rate to 90Hz – still with 2ms of image followed by a blacked-out screen – and that, combined with the increase in resolution, improves the visuals even more. This was the first time I didn't notice any ghosting on an Oculus headset.
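To put those persistence figures in perspective, here’s a quick back-of-the-envelope calculation. The refresh rates and the 2ms lit time come from the above; DK1’s full-persistence behavior, the pixels-per-degree figure, and the head-turn speed are my own illustrative assumptions, not Oculus specs:

    HEADSETS = {
        "DK1 (full persistence)": {"refresh_hz": 60, "lit_ms": 1000.0 / 60},
        "DK2 (low persistence)":  {"refresh_hz": 75, "lit_ms": 2.0},
        "Crescent Bay":           {"refresh_hz": 90, "lit_ms": 2.0},
    }

    PIXELS_PER_DEGREE = 1280 / 90.0   # illustrative: per-eye pixels over per-eye FOV
    HEAD_TURN_DEG_PER_S = 120.0       # a brisk but ordinary head turn

    for name, h in HEADSETS.items():
        frame_ms = 1000.0 / h["refresh_hz"]
        duty = h["lit_ms"] / frame_ms
        # While the panel is lit, a turning head smears the image across the retina.
        smear_px = HEAD_TURN_DEG_PER_S * (h["lit_ms"] / 1000.0) * PIXELS_PER_DEGREE
        print(f"{name}: {frame_ms:.1f} ms frame, lit {duty:.0%} of the time, "
              f"~{smear_px:.1f} px smear at {HEAD_TURN_DEG_PER_S:.0f} deg/s")

With these assumptions a full-persistence panel smears roughly 28 pixels during a head turn, versus about 3 pixels for the 2ms low-persistence modes – which is why the black intervals matter so much for perceived sharpness.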

One thing Oculus wouldn’t comment on is a release date. The hardware at this point could probably ship and people would be really impressed, but there’s still a lot of work to be done on interacting with the environment and on the user interface. I wouldn’t be surprised if Crescent Bay gets released to developers as DevKit 3 later this year, but beyond some cool tech demos this isn’t really something end users would want or need just yet. It could easily be a couple more years before a public release, and by then we might see 4K or even 8K displays in the goggles. More important, however, is that we’ll need compelling games and other software that people can actually use, and that will take time more than anything.

I should also note that I was able to try a few other VR headsets at CES. The first was from SoftKinetic, who mounted a forward-facing 3D camera on the Oculus DK2 to let you interact with the environment using your hands. The demo involved reaching into the space in front of you to “grab” boxes and stack them up; you could then whack them around and knock them over, all with your virtual hands floating in the air. This helped place you in the environment, but it’s still early in development. The second, at the Razer booth, was similar in some ways in that it also involved a forward-facing camera mounted on the goggles. You were supposed to hold your hands in front of you, fire and ice would appear in your left and right hands, and you could then throw them at flaming or freezing floating skulls to “kill” them. It was a game of sorts, and the goggles use different software and hardware than the Oculus Rift, but the demo was a bit raw, at least for me – most of the time my hands wouldn’t actually appear in front of me. Oculus also had Samsung's Gear VR (powered by Oculus) available, but the software being run wasn't at the same level as the Crescent Bay demos, and the hardware seemed more like a cross between DK1 and DK2.

There’s definitely a lot of interesting work being done with VR these days, and compared to what I saw back in the 90s, what we have now is truly impressive. Large polygons have given way to impressively realistic textures and models, and the positional tracking and latency are very good as well. It’s not perfect yet, but we’re getting there. It will be interesting to see who manages to release a consumer product first and what software we’ll end up using, and I’m looking forward to seeing more over the coming years.

Source: Oculus

Comments

  • edzieba - Tuesday, January 13, 2015 - link

    "I was really interested to see what changes have been made with the latest prototype. The short answer is that ventilation has improved (less fogging up of the glasses), the display resolution is now higher, the screen refreshes faster, and combined with the VR Audio the experience is more immersive than ever."

    The most significant changes with Crescent Bay are the increased refresh rate (touched on later in the article), new lenses (a lens/Fresnel hybrid doublet), a light diffusion filter to improve perceived fill factor, and above all, huge improvements to position tracking. Not only is there an increase in fidelity and a further reduction in latency and jitter, but the headset can now be tracked from all angles (due to the rear marker coverage), and the camera's minimum tracking distance has been reduced, increasing the usable tracking volume and moving it closer to the camera.
  • edzieba - Tuesday, January 13, 2015 - link

    There's also the removal of the 'black smear' effect the DK2's panel had (where screen area at 0 brightness would undershoot/overshoot when adjacent to bright objects, whereas screen area one level above black would not), which was a hardware limitation of how the Note 3 panel was driven.
  • bilago - Tuesday, January 13, 2015 - link

    I used the CB prototype at Oculus Connect and there was still black smearing.
  • slatanek - Tuesday, January 13, 2015 - link

    I don't understand how the Rift would be of no interest as of yet. It only depends on the games you wanna play. Everything that relies on a seated experience (in reality) would be much better off using a VR headset than monitors. Take sim-racing, for instance – the Rift is made for this. Even a three-monitor setup won't give you the immersion of sitting in the car. The same goes for flight sims, etc. I think the general idea is that it's made for playing FPSes, while it's actually a perfect match for sims.

    As for the consumer model – 4K would be great, but what graphics card would we use to drive this thing in stereo? I always hoped that the res on this would be something reasonable (1440p max) so that a setup capable of driving the Rift would be semi-affordable. I agree with the Oculus founders – res is not as important as ghosting, latency, refresh rates, etc.
  • jhoff80 - Tuesday, January 13, 2015 - link

    "I don't understand how the Rift would be of no interest as of yet. It only depends on the games you wanna play. "

    Because software- and configuration-wise, it's a mess. You often need to switch between extended and direct modes depending on what game/demo you want to play, many games are built on older versions of the SDK, there are a lot of people who can't run direct mode at all due to crashes, etc.

    Having not used Crescent Bay, I'm not 100% sure, but it sounds like the hardware is pretty much ready (or at least as ready as it can get given current display technology), while on the software side it's just nowhere near consumer-ready.
  • anthill - Tuesday, January 13, 2015 - link

    A 4K screen could work; users with lower-end hardware would just upscale 1080p to 4K. By the time the Rift releases – late this year or early next year, going by what they have said – AMD and NVIDIA will have newer GPUs out. NVIDIA also announced they are working on VR SLI, so maybe when the Rift releases, those who want to run 4K@90Hz will have SLI GTX 980 Ti's or whatever the next high-end NVIDIA card is.
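    Quick napkin math on why upscaling helps, assuming GPU cost scales roughly with the number of shaded pixels (which ignores plenty):

        def pixels_per_second(w, h, hz):
            return w * h * hz

        native_4k = pixels_per_second(3840, 2160, 90)
        upscaled = pixels_per_second(1920, 1080, 90)
        print(f"4K@90Hz native: {native_4k / 1e6:.0f} Mpix/s")
        print(f"1080p@90Hz rendered then upscaled: {upscaled / 1e6:.0f} Mpix/s")
        print(f"ratio: {native_4k / upscaled:.0f}x")  # 4x fewer pixels to shade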

    The big issue Oculus is facing isn't hardware, though; it's going to be the availability of quality content on release day.
  • cmdrdredd - Tuesday, January 13, 2015 - link

    Because flicking my mouse to get a shot off is faster than turning my head.
  • markbanang - Wednesday, January 14, 2015 - link

    Even with current resolutions, reliable 90Hz operation will require pretty beefy graphics hardware, likely costing way more than the Rift itself.

    So what I want to see is foveal tracking integrated into the Rift.

    Only a tiny proportion of your field of view is 'high resolution'; the further you get from the fovea, the lower the resolution gets. If you track the movements of the eye, you can partition your high resolution display into high and low resolution zones. When the fovea is positioned such that you are looking at one part of the display, that area is rendered at full detail while the rest of the screen is rendered at a lower detail level, saving you huge amounts of processing. It would also be really easy to tune: the faster your hardware, the larger the high detail zone would be, and you could dynamically scale the size of the high detail zone for the next frame in response to the length of time it took to render the previous frame.
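    Something like this, to illustrate the feedback loop (a toy Python sketch; all the constants are made up):

        FRAME_BUDGET_MS = 1000.0 / 90   # one 90 Hz frame
        MIN_RADIUS_DEG = 5.0            # never smaller than the fovea plus tracking error
        MAX_RADIUS_DEG = 40.0

        def next_zone_radius(radius_deg, last_frame_ms):
            """Grow the high-detail zone when the last frame came in under
            budget, shrink it when the frame ran over: a simple proportional
            controller on render time."""
            error = (FRAME_BUDGET_MS - last_frame_ms) / FRAME_BUDGET_MS
            radius_deg *= 1.0 + 0.25 * error
            return min(MAX_RADIUS_DEG, max(MIN_RADIUS_DEG, radius_deg))

        # Each frame: render the circle of this radius around the tracked gaze
        # point at full resolution, everything else at reduced resolution.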

    Using this technique you could get the performance you need with far less computation (I can imagine driving a 4K Rift display with a GTX 970, for instance). This technique could be used for monitors too, but there you would need to track both head and eye simultaneously, and the area covered by the fovea would span an arc of many more pixels.

    This technique would, however, require high speed/accuracy eyeball tracking hardware to be integrated into the Rift, along with SDK support, graphics API support, and probably game support, so it will be a long way off. I can see that it might be more likely in a mobile Rift, where making efficient use of GPU resources is even more important.

    Anyway, roll on eye tracking, that's what I say...
  • OG84 - Wednesday, January 14, 2015 - link

    Don't think this will work. Our eyes/brain would clearly notice the transition between high and low resolution; it would be like tunnel vision. Besides that, I don't think there's currently a rendering technique that lets you gradually fade to lower resolutions. You could make different zones with circular clipping, but our eyes would probably notice a very hard border.
  • Lindorfer - Wednesday, January 14, 2015 - link

    You undersell just how high-speed that tracking would have to be. Our eyes saccade (move and fixate) extremely fast – it's much harder than the head tracking Oculus is just managing to resolve now. To realize any benefit without gross artifacts, you would have to reliably render a new frame probably ten times faster than is needed for head tracking on today's Oculus. It turns out that tracking and responding with a relevant new frame fast enough is much harder than just rendering the whole frame at high resolution.
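    Some rough numbers behind that (the saccade speed is a ballpark literature figure, not anything Oculus has published):

        SACCADE_DEG_PER_S = 500.0     # saccades can peak at several hundred deg/s
        HEAD_TURN_DEG_PER_S = 120.0   # a fast head turn, for comparison
        FRAME_MS = 1000.0 / 90        # one 90 Hz frame

        eye_deg = SACCADE_DEG_PER_S * FRAME_MS / 1000.0
        head_deg = HEAD_TURN_DEG_PER_S * FRAME_MS / 1000.0
        print(f"eye can move ~{eye_deg:.1f} deg in one frame")   # ~5.6 deg
        print(f"head moves ~{head_deg:.1f} deg in one frame")    # ~1.3 deg
        # The high-acuity fovea only covers about 2 deg, so within a single
        # frame the gaze can jump far outside the previous high-detail zone.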

    There are cool things that can be done if we get really good eye tracking, e.g. automated and accurate eye calibration, perspective adjustment for the movement of the pupil, and simulated focal adjustments. Performance optimization, however, may be the hardest to achieve.

    If you're interested in a deep dive, these videos with Palmer Luckey and John Carmack address some of the difficulties:
    https://www.youtube.com/watch?v=gn8m5d74fk8&fe...
    https://www.youtube.com/watch?v=8CRdRc8CcGY
