Display processors aren’t a common topic in the press, and few companies advertise their capabilities beyond a simple mention of the maximum supported resolution. As the ecosystem evolves, however, an increasing number of new features is being added into the mix, bringing nuances to the discussion that go beyond resolution, colour depth or gamut.

Two years ago we saw the release of Arm’s new Mali-D71 display processor, which represented a brand-new architecture and the foundation for the company’s upcoming DP IP blocks. The D71 brought to market the bulk of the features required to drive most of today’s higher-resolution or higher-framerate displays, along with robust and smart composition capabilities.

Today’s announcement covers the new D77, an evolutionary upgrade to the D71. The new IP generation brings features that go beyond what one would normally expect of a display processor, expanding its capabilities and, in particular, opening up a slew of new possibilities for AR and VR use-cases.

Currently, display processors mostly act as the compositing engines inside SoCs: they take in the pixel data generated by the GPU or other SoC blocks, composite it into a single surface, and handle all the processing required to achieve this.
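To make that composition step concrete, here’s a minimal software sketch of back-to-front alpha “over” blending across layers (Python/NumPy for readability only; an actual display processor performs this per scanline in fixed-function hardware, alongside scaling, rotation and format conversion):

```python
import numpy as np

def composite_layers(layers):
    """Composite display layers back to front using the premultiplied-alpha
    'over' operator. Each layer is an HxWx4 float array holding RGB values
    premultiplied by alpha, with alpha in the last channel."""
    out = np.zeros_like(layers[0])
    for layer in layers:  # each successive layer sits on top of the result
        out = layer + out * (1.0 - layer[..., 3:4])
    return out
```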

Typically, today’s display controllers lie towards the end of an SoC’s display pipeline, just before the actual physical interface blocks which transform the data into signals for, say, HDMI or MIPI DSI; at that point we find ourselves outside of the SoC, connecting to a display panel’s DDIC SoC. Here Arm promises to provide straightforward solutions and to work closely with the third-party vendors which provide the IP further down the chain.

Being based on the D71, the new Mali-D77 comes with all of its predecessor’s capabilities, with a large emphasis on AR and VR features that promise to vastly improve the experience in products employing the IP.

Among the main new features are “Asynchronous Timewarp”, “Lens Distortion Correction” and “Chromatic Aberration Correction”, which enable some unique new use-cases for display processors, alongside further improvements in the baseline capabilities of the IP, such as more layers as well as higher resolutions and framerates.

Asynchronous timewarp (ATW) is an interesting technique for AR and VR whose main goal is to reduce motion-to-photon latency. In a normal GPU-to-display operation, the display always simply displays the last GPU-rendered frame. The problem with this approach is that the update interval is limited by the actual rendering framerate, which is a characteristic of the GPU’s performance capabilities. This is a hard limitation for AR and VR workloads, which require very high visual framerates in order to provide a better experience and, most importantly, to avoid side-effects such as motion sickness caused by delayed images.

Timewarp disconnects the GPU render from what is actually scanned out to the display. Here, the D77 is able to integrate position updates, such as those from the motion sensors in an HMD, into the most recently rendered GPU frame: it post-processes the frame with the new motion data and delivers an updated image to the display. In this new process, the user effectively sees two different frames displayed even though the GPU will have only rendered one.
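As a heavily simplified illustration of the reprojection at the heart of timewarp, here’s a rotation-only sketch using a pinhole-camera model (Python/NumPy; the function and its inputs are our own illustrative assumptions, not Arm’s implementation, which runs in fixed-function hardware):

```python
import numpy as np

def timewarp_rotation_only(frame, K, R_render, R_scanout):
    """Re-project the last rendered frame to match the newest head
    orientation. `frame` is an HxWx3 image, `K` a 3x3 intrinsics matrix,
    and R_render/R_scanout the world-to-camera rotations at render time
    and at scan-out time."""
    h, w = frame.shape[:2]
    # Homography mapping output pixels back to source pixels: K * R * K^-1,
    # where R rotates from the scan-out pose back to the render pose.
    H = K @ (R_render @ R_scanout.T) @ np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = H @ pix
    src = (src[:2] / src[2]).T.reshape(h, w, 2)
    # Nearest-neighbour sampling, clamped to the frame borders.
    sx = np.clip(src[..., 0].round().astype(int), 0, w - 1)
    sy = np.clip(src[..., 1].round().astype(int), 0, h - 1)
    return frame[sy, sx]
```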

Effectively, this massively reduces the motion-to-photon latency in AR and VR use-cases, even though the actual rendering framerate doesn’t change. Taking this work off the GPU also reduces its processing load, which in turn frees up more performance for the actual rendering of content.

In addition to ATW, the D77 is also able to correct for several optical characteristics of VR lenses, such as the pincushion effect. The IP can be programmed with the characteristics of a given HMD system and will correct for distortions by applying an inverse effect (in this case a barrel distortion) to compensate for the distortion of the lenses.
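A minimal sketch of how such a pre-distortion can work, assuming a simple polynomial radial lens model (the coefficients below are made up for illustration; real values would come from the HMD vendor’s lens calibration):

```python
# Hypothetical radial distortion coefficients for a lens modelled as
# r_lens = r * (1 + K1*r^2 + K2*r^4); real HMDs ship calibrated values.
K1, K2 = 0.22, 0.24

def predistort_uv(u, v):
    """Map an output pixel's lens-centered normalized coordinates to the
    source coordinates to sample. Scaling the sampling radius outwards
    pulls content towards the center, producing the barrel distortion
    that cancels the lens's pincushion effect."""
    r2 = u * u + v * v
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return u * scale, v * scale
```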

This optical compensation also applies to chromatic aberration correction. Similarly, the D77 needs to be aware of the optical characteristics of the lens in use, and will post-process the output image with an inverse effect, eliminating the image artefacts otherwise experienced when viewing through the lens. It’s to be noted that the spatial resolution of the correction achievable here is limited by the actual resolution of the display, as it can’t correct anything smaller than a pixel in dimensions.
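Chromatic aberration correction can be sketched as a per-channel extension of the same remapping, since the lens magnifies red, green and blue slightly differently (reusing the hypothetical coefficients from the sketch above; the per-channel factors are likewise illustrative, not a real lens profile):

```python
def predistort_uv_rgb(u, v):
    """Per-channel pre-distortion: sample each colour channel at its own
    radius so the lens's wavelength-dependent magnification re-aligns
    the channels. The channel offsets are made-up placeholders."""
    r2 = u * u + v * v
    base = 1.0 + K1 * r2 + K2 * r2 * r2
    red = (u * base * 0.994, v * base * 0.994)
    green = (u * base, v * base)
    blue = (u * base * 1.006, v * base * 1.006)
    return red, green, blue
```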

The benefit of running these new techniques on the DP is that they enable significant processing savings on the part of the GPU, which is a much higher-power block.

What this also opens up is a possible future generation of “dumber” HMDs without a GPU, powered by some other external system, yet providing the same latency and optics advantages as described above from a cheaper integrated HMD SoC.

Performance characteristics of the D77 are as follows: 4K60 with 8 layers, or 4K120 with 4 layers. In Android smartphones the higher layer count is a requirement, so I don’t envision 4K120 being used beyond special use-cases.

For VR use-cases, the D77 is able to handle up to 4 VR layers to which the new processing (ATW, corrections, etc.) is applied; the maximum throughput is up to 1440p120 with 4+4 (VR plus non-VR) layers, or 4K90 with 2+2 layers.

Overall, the new Mali-D77 is exciting news for AR and VR. While we’re expecting the IP to be used in smartphones in the next few years, the most exciting news today, in my opinion, is that it enables higher-quality standalone HMDs, expanding Arm’s market beyond the typical smartphone SoC. Arm unofficially described the VR ecosystem as currently sitting in the “trough of disillusionment” following the inflated expectations of the last few years.

The next few years, however, will see significant progress in improving the VR experience and bringing more consistent experiences to more people. The D77 is certainly a first step towards such a future, and we’re excited to see where things evolve.

Comments

  • mode_13h - Wednesday, May 22, 2019

    First, ASW isn't as important as ATW. The point of ATW is to keep users from getting sick. ASW is mainly to hide the lower rendering framerate of the GPU, which certainly improves the experience but isn't as crucial. That said, it would clearly benefit mobile/standalone use-cases and therefore will probably grace a future generation of this DPU.

    As for reaching kHz update rates, once you get above a certain point, I doubt you need fancy interpolation for every one of those frames. Maybe use ASW to double the framerate, then simple ATW to fill in any further frames.

    Finally, Oculus is not Apple or Google. If they go down the road of their own SoC, it will certainly involve using off-the-shelf IP, such as this DPU. Even MS just used Tensilica cores, in the HPU of their Hololens. However, I think current sales volumes and margins are probably too low to justify a custom SoC.
  • skavi - Wednesday, May 15, 2019

    nice, always have thought timewarp should be done on the display processor.
  • ajp_anton - Wednesday, May 15, 2019

    A little cheap to use the same image for the lens's chromatic aberration and the computed inverse, considering one would assume they know very well how to produce the inverse...
  • ballsystemlord - Thursday, May 16, 2019

    Spelling and grammar corrections:
    "Two years ago, we saw the release of Arm's new Mali-D71 display processor which represented a branch new architecture and foundation for the company upcoming DP IP blocks."
    "brand" not "branch":
    "Two years ago, we saw the release of Arm's new Mali-D71 display processor which represented a brand new architecture and foundation for the company upcoming DP IP blocks."

    "...the display always simply display the last GPU render frame."
    Excess words, missing "s". Maybe:
    "..the display always displays the last GPU render frame."

    "eliminating the resulting experienced image artefacts when viewed through the lens."
    i before e... :)
    "eliminating the resulting experienced image artifacts when viewed through the lens."
  • jordanclock - Thursday, May 16, 2019

    Artefact is a legitimate spelling of artifact.
  • mode_13h - Wednesday, May 22, 2019

    Correct. I actually prefer that spelling, when I'm referring to a side-effect.
  • mode_13h - Wednesday, May 22, 2019

    > ... the display always simply display the last GPU render frame.

    I think it should actually be:

    "... the display always simply displays the last GPU-rendered frame."
  • mode_13h - Wednesday, May 22, 2019

    > What this also opens up is a possible new generation of “dumber” HMDs in the future without a GPU, powered by some other external system, yet providing the same latency and optics advantages as described above in a cheaper integrated HMD SoC.

    Actually, what I hope to see is a low-cost SoC with this DPU being used in wireless, PC-based HMDs. Not that you couldn't also put it in a bigger SoC so the HMD can also be used standalone, but I think the key to *really* good wireless PC HMDs is doing the ATW in the HMD, to help offset the latency introduced by the wireless transmission.
