THE FINAL KEYNOTE of the Common Platform conference was given by IBM’s Garry Patton, and he talked collaboration and tech. The list of new things thrown out was pretty dizzying, but what do you expect from someone who is in charge of semiconductor R&D?
The most interesting part of the whole discussion was how the collaboration chain is expanding because of increased technical complexity. In the past, fabrication processes were developed first, and late in the game the tool vendors were brought in to write the software and simulations needed for design.
From that point, the tools were given to the end users, and they designed their chips with them. It was all bundled up in the end and shipped off to the fab of choice. That whole process doesn’t really work anymore.
For the last node or three, tool vendors have been playing an increasingly important role in making the whole process work. They are being brought in sooner and sooner in the development process, and the tools are somewhat co-developed with the chemistry. The two are inseparable now, but the whole thing was still given to customers as a package when done.
Unfortunately, the whole chain is now so complex, and the designs that use it similarly so, that foundries need design partner input on the tools. This may sound like a good idea, but there is a lot to manage, especially if you have to bring in everyone who can potentially make a chip with a process before the process is fully baked. Herding cats is comparatively easy.
No matter who you bring in, and how you bring them in, the whole idea of collaborative R&D and collaborative design enablement is here to stay. Things are only going to get worse from here on out, and the only way to mitigate some of it is to bring all the players together early. Once again, group hug time.
From there, things went back to tech, and the group hugging was finished for the keynote sessions. It started out on a down note: gate scaling is effectively dead. This is widely known and not a shock to anyone following the industry, but it is still a fairly hard roadblock. While gates won’t get much, if any, smaller from here on out, gate innovation is far from dead.
With that stake planted in the sand, there was lots of new stuff to talk about, starting with a roadmap for the upcoming nodes. The Common Platform guys are still saying that they are on a 2-year shrink cycle, with 20nm coming in 2012, 14nm in 2014, and 11nm in 2016. At least to the 14nm node, and maybe 11nm, the roadmaps seem fairly firm.
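For the curious, the cadence above implies the classic shrink math. This is a back-of-the-envelope sketch, not anything from the keynote: node names are marketing labels, but the traditional target is roughly a 0.7x linear shrink per full node, which halves the area of a given design.

```python
# Illustrative only: compute the linear and area shrink factors between
# the nodes on the Common Platform roadmap. A ~0.7x linear shrink is the
# textbook full-node target; area scales with the square of the linear factor.
nodes = [32, 20, 14, 11]  # nm, per the roadmap above

for prev, nxt in zip(nodes, nodes[1:]):
    linear = nxt / prev        # linear shrink factor between nodes
    area = linear ** 2         # relative area of the same design after shrink
    print(f"{prev}nm -> {nxt}nm: linear {linear:.2f}x, area {area:.2f}x")
```

Note that 20nm to 14nm is a textbook 0.7x step, while 14nm to 11nm is a shallower one, which is one reason the later nodes look softer on the roadmap.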
According to slides shown by Mr Patton, the big step for lithography on 32/28nm is ‘second generation immersion’. 20nm brings Source Mask Optimization (SMO) and 3rd gen immersion to the fray, upping to 4th gen at 14nm. In addition to SMO, 14nm also uses double patterning.
EUV is a hot topic among fab people now, and the Common Platform partners seem to think that it is possible for some 14nm layers, but not definite. I think it is more a question of tool availability, material availability, and the cost of running them than anything else. Even 11nm is listed as “EUV (or DPL)”, DPL being double patterning.
What is Source Mask Optimization (SMO)? That is probably the most interesting bit. SMO is a fancy way of saying you change the (light) source to work around limits of the mask. Masks currently use diffractive optical elements (DOE) to make patterns smaller than the wavelengths used to draw them, but that can’t work forever.
The key to SMO is a pixelated light source, something that gets interesting when you are talking about EUV lasers. If you can change the source on the fly, you can add a whole bag of tricks to mask creation. The closest SemiAccurate got to an explanation was a comparison to a projection TV micromirror array turning pixels on and off. ASML and Zeiss are currently working on one, but not at the EUV level. This should be very fun to watch as it develops.
Moving on to the transistors themselves, the roadmap went out a bit farther. 22nm was listed as using PDSOI and bulk silicon, basically the same choices as we have now. At 14nm, Common Platform will use FinFETs, and 11nm may add ETSOI (Extremely Thin SOI) to the mix. This is likely still in a bit of flux though; we are talking about 2018 for a possible introduction.
On the 8nm, 5nm and 3nm nodes, things get a little more theoretical. OK, a lot more theoretical. 8 and 5nm are listed as using silicon nanowires plus a fully depleted SOI substrate, and 3nm moves on to some kind of carbon, be it nanotubes or otherwise. Given the timelines involved, anything could be a contender below 11nm.
Getting back to the optimistic side, Mr Patton did state that they have working FinFETs in the labs now at 14nm, and they have working carbon ring oscillators at sub-14nm geometries as well. As was said earlier, making one is easy; making a few billion or trillion is a completely different problem. That brings us to the last topic of the day, gate first vs gate last. The Common Platform vendors are going gate last at 20nm, that much is certain.
Why are they doing it? Officially, the pros of gate last outweighed the cons, and the partners had the requisite group hug and decided that it was the right thing to do together. Inside sources tell SemiAccurate that there was no hugging involved, more knives and blood, but the end result is gate last. If you look at some of the yield headaches Llano was having, you might just come to the conclusion that some partners did not want to repeat that at 20nm.
Officially, there were four factors involved in the decision: density, scaling from the previous node, process complexity, and power/performance. Gate first is much denser, with a claimed 10-20% more transistors per unit area. That is of course balanced out by the lower yield of the gate first process, but depending on a lot of variables, the end result, cost, could go either way.
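To see why cost could go either way, it helps to run the numbers. The sketch below uses entirely made-up figures; the article only claims gate first is 10-20% denser and yields worse, so the inputs here are hypothetical illustrations of that tradeoff:

```python
# Back-of-the-envelope cost per good die: wafer cost spread over the dies
# that actually work. All numbers are invented for illustration.
WAFER_COST = 5000.0  # hypothetical cost of one processed wafer, in dollars

def cost_per_good_die(dies_per_wafer, yield_fraction):
    """Wafer cost divided by the number of functional dies per wafer."""
    return WAFER_COST / (dies_per_wafer * yield_fraction)

# Gate first: ~15% more dies per wafer (denser), but assume a lower yield.
gate_first = cost_per_good_die(dies_per_wafer=575, yield_fraction=0.60)
# Gate last: fewer dies per wafer, but assume a higher yield.
gate_last = cost_per_good_die(dies_per_wafer=500, yield_fraction=0.70)

print(f"gate first: ${gate_first:.2f} per good die")
print(f"gate last:  ${gate_last:.2f} per good die")
```

With these particular made-up numbers gate last comes out marginally cheaper, but nudge the yields a few points either way and the answer flips, which is exactly the point.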
Scaling from the previous node is clearly on the gate first side. Gate last needs radically different design rules, so first is much simpler to work with. The flip side of that is once you go to gate last design rules, and if you stay there, the next node has an easier path to gate last too. From 32/28nm though, gate first at 20nm is the way to go if this is the only consideration.
Process complexity is a clear win for gate first too. You put the gate in and you have to protect it for some of the later steps, but that is not the end of the world. Gate last makes you put in a dummy gate, do the rest of the chip, etch the dummy out, and then put in a new gate. This isn’t exactly trivial, but as Intel has shown, it is quite doable. Gate first is much simpler, but Intel and TSMC seem to think the increased yield from gate last is worth the pain.
The last consideration is power/performance. IBM said at a later Q&A session that gate first has a 6% power/performance advantage over gate last. Gate first also needs less strain engineering, but gate last seems to benefit more from it, so once strain is factored in this ends up as a win for gate last.
In the end, it is impossible to pick a winner without hard data. On the 32/28nm node, the Common Platform companies went with gate first, Intel and TSMC are using gate last. Neither side is doing it on a whim, so there are obviously benefits to both methods. On 22/20nm, everyone is going with gate last, so the question is unequivocally answered. Group hug time again?
Where does that leave us? Group hugs are not a bad analogy as it turns out. Everyone involved in the chip making process, from the tool vendors and materials vendors to the chip designers, needs to get involved earlier. Each new node brings more and more complexity to the game, so everyone will eventually be brought into the process, pun intended.
The Common Platform group has laid out a roadmap to 14nm that is pretty solid, and shown tidbits of things after that. Will they all work? Which will end up in use, and which will pan out? Who knows, but it looks like scaling will be with us for some time to come.S|A
Latest posts by Charlie Demerjian (see all)