The idea behind EDID is simple enough. If you had a massively high rez screen (at EDID's introduction, 4K was borderline unimaginable), there wasn't a single cable capable of supporting that rez. You needed two cables, each feeding half the screen. This led to problems when clods like you (and me for that matter) hooked up the wrong cable to the wrong port. Hilarity ensued. Apple didn't release high rez monitors, and the industry stagnated.
AMD broke this blockade with proprietary tech that became DisplayID 1.3. It is hugely complex in its operation, so let us explain in detail. The monitor knows which half of the screen connects to which port, and with DID 1.3 it can send that information back across the DP cable to the video card. The video card then knows which of its ports is connected to which half of the screen and adjusts accordingly. Well, that wasn't actually all that complex now, was it?
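The tile-reporting idea above can be sketched in a few lines. This is a toy model, not the spec's actual byte layout: the field and function names are invented for illustration, but the logic matches what the article describes, with each port reporting which tile of the whole screen it drives.

```python
from dataclasses import dataclass

@dataclass
class TileTopology:
    """Simplified stand-in for a DisplayID 1.3-style tiled-display
    descriptor. Field names are illustrative, not the spec's layout."""
    total_h_tiles: int   # tiles across the full screen
    total_v_tiles: int   # tiles down the full screen
    tile_h_loc: int      # this port's tile column (0-based)
    tile_v_loc: int      # this port's tile row (0-based)

def describe(tile: TileTopology) -> str:
    """What a source can conclude after reading the descriptor."""
    return (f"This port drives tile ({tile.tile_h_loc}, {tile.tile_v_loc}) "
            f"of a {tile.total_h_tiles}x{tile.total_v_tiles} tiled screen")

# Two cables into a 2x1 tiled panel: each port reports its own tile,
# so the GPU can route the correct half down each cable no matter
# which cable the clod in question plugged in where.
left = TileTopology(2, 1, 0, 0)
right = TileTopology(2, 1, 1, 0)
print(describe(left))
print(describe(right))
```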
Luckily for people like me, DID 2.0 does actually make things more complex, and for a good reason. No, we are not just talking about 8K screens like the incredible Dell panel, but about the technologies it includes and other form factors. Just having a Side1/Side2 identifier isn't going to cut it when you have odd shaped VR glasses, round screens, and the rest. You need something much more complex, and if nothing else, the consumer electronics industry is really good at making things more complex. In this case though, the added complexity actually serves a purpose.
The short version of how DID 2.0 works is that it sends data to the display in blocks rather than in the traditional way. Those blocks can be arbitrary sizes: square, non-square, and whatnot. More importantly, the blocks don't have to arrive in order like traditional scan lines. The monitor/display then has to piece them together on its own, meaning it has to be more complex too. That said, you can see how flexible this mechanism is.
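To make the block idea concrete, here is a toy sketch of the reassembly job the display takes on. Everything here is invented for illustration (the header fields and sizes are not DID 2.0's actual format): the point is just that each block carries a header saying where it lands, so blocks of unequal sizes can arrive in any order and the panel still rebuilds the frame.

```python
from dataclasses import dataclass

@dataclass
class Block:
    """A toy block: header fields say where it lands on the panel."""
    x: int        # top-left corner of the block on the panel
    y: int
    w: int        # blocks can be any size, square or not
    h: int
    pixels: list  # h rows of w pixel values

def assemble(width, height, blocks):
    """The display's side of the deal: place each block by its header,
    in whatever order the blocks happen to arrive."""
    frame = [[0] * width for _ in range(height)]
    for b in blocks:
        for row in range(b.h):
            for col in range(b.w):
                frame[b.y + row][b.x + col] = b.pixels[row][col]
    return frame

# Two out-of-order blocks covering a tiny 4x2 "panel": the right half
# arrives first, the left half second, and the frame still comes out whole.
blocks = [Block(2, 0, 2, 2, [[3, 4], [7, 8]]),
          Block(0, 0, 2, 2, [[1, 2], [5, 6]])]
print(assemble(4, 2, blocks))   # [[1, 2, 3, 4], [5, 6, 7, 8]]
```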
While we haven’t gotten the full brief from VESA, you can see how DID 2.0 could be bent into mechanisms like sending the least complex and most quickly rendered part of a scene to the monitor first while the slower bits come later in the frame. If it isn’t possible to do things like this, you can be pretty sure it will be in DID 2.x once someone figures out the benefits for smoother frames.
On top of that, once you have two-way communication between the source and display, you can signal a bunch of other things, starting with very high resolutions. HDR features like high bit-count pixels, high luminance values, and even variable refresh rates are all covered. If values like these can be passed, SemiAccurate would be surprised if the DID 2.0 framework wasn't extensible to arbitrary new values in the future too.
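That kind of extensibility usually comes from tagged capability blocks, so here is a minimal sketch of the idea. The tag values and capability names below are made up for illustration, not taken from the spec: the point is the skip-unknown-tags rule that lets a display advertise values a given source has never heard of without breaking anything.

```python
# Known capability tags for this sketch (values invented, not from the spec)
KNOWN = {0x01: "max luminance", 0x02: "bits per color", 0x03: "VRR range"}

def read_caps(blocks):
    """Interpret the tags we know; silently skip the ones we don't.
    This skip-unknown rule is what makes a tagged format extensible:
    an old source simply hops over blocks added after it shipped."""
    caps = {}
    for tag, payload in blocks:
        name = KNOWN.get(tag)
        if name is not None:
            caps[name] = payload
    return caps

# A display advertising 10-bit color plus a future tag (0x99) this
# source has never seen; the unknown block is ignored, nothing breaks.
blocks = [(0x02, b"\x0a"), (0x99, b"\x01\x02\x03")]
print(read_caps(blocks))
```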
So in the end, DID 2.0 moves us from a one- or two-sided screen identifier to a nearly free-form, block-based signaling scheme with very descriptive headers. It should take a lot of the pain out of VR, odd shaped displays, and the transition to 8K, among other things. Better yet, it should allow the addition of new technologies without the rip-and-replace pain of previous, less flexible standards. DID 2.0 looks to be a good thing.S|A