The GPU compute portion of Leo is fairly simple to explain, as you can see in the video below. Lights are one of the most complex parts of a 3D scene to render properly, and various techniques have evolved to make complex lighting, well, basically work. Some work better than others, but most are computationally painful or, like deferred rendering, carry other serious drawbacks.
Leo at GDC
At GDC this year, we asked AMD to explain the GPU compute functions in Leo, and as you can see, they did. Instead of computing illumination globally, Leo breaks the screen up into 32×32 pixel tiles, a bit over 2,000 of them for a 1080p screen, and then uses compute functions to figure out which lights are visible in each tile. This drops the lighting workload per tile by a massive amount, and the culling pass itself takes very little GPU time.
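To make the tile math concrete, here is a minimal CPU-side sketch of the binning idea. It is an assumption-laden simplification: the real demo culls lights against per-tile frusta in a GPU compute shader, while this toy version just bins point lights by their screen-space bounding boxes. The function name and the (x, y, radius) light format are invented for illustration.

```python
def build_tile_light_lists(lights, width=1920, height=1080, tile=32):
    """Bin point lights into 32x32-pixel screen tiles.

    lights: list of (x, y, radius) tuples in pixels -- a screen-space
    stand-in for the per-tile frustum tests the demo runs on the GPU.
    Returns a dict mapping tile index -> list of light indices.
    """
    tiles_x = (width + tile - 1) // tile    # 60 tile columns at 1920 wide
    tiles_y = (height + tile - 1) // tile   # 34 rows at 1080 (last row partial)
    tile_lights = {i: [] for i in range(tiles_x * tiles_y)}
    for li, (x, y, r) in enumerate(lights):
        # Clamp the light's bounding box to the tile grid.
        x0 = max(0, int((x - r) // tile))
        x1 = min(tiles_x - 1, int((x + r) // tile))
        y0 = max(0, int((y - r) // tile))
        y1 = min(tiles_y - 1, int((y + r) // tile))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                tile_lights[ty * tiles_x + tx].append(li)
    return tile_lights

# One light in the middle of the screen touches only a handful of tiles,
# so the shading pass for every other tile skips it entirely.
tiles = build_tile_light_lists([(960.0, 540.0, 48.0)])
print(len(tiles))  # 2040 tiles, the "bit over 2000" for 1080p
```

At 1080p this yields 60×34 = 2,040 tiles, which is where the "a bit over 2,000" figure comes from; each tile's shader then loops only over its own short light list instead of every light in the scene.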
In this case, a forward renderer plus GPU compute means the difference between functional, and very nice if I do say so myself, lighting and a non-functional demo. The new version mentioned in the video can be found here; grab it and play around. We would tell you more about it, but sadly it only runs on Windows.S|A
Disclosure: Although SemiAccurate has a writer named Leo, he is not the Leo in this demo, eerie similarities aside.
Editor's note: You can learn more about this type of material at AFDS 2012, specifically the Heterogeneous Compute and Consumer Graphics tracks. More articles of this type can be found on SemiAccurate's AFDS 2012 links page. Special for our readers: if you register for AFDS 2012 and use promo code SEMI12, you get $50 off.