DFM for Non-PhD's: Part 3 - Real Life Examples
I got some questions after my last installment of this series asking for pictures of defects that caused yield issues in production that could have been avoided during design. It struck me that most designers probably never get a chance to see the manufacturing problems their designs encounter. Since my background is in the fab, I wrongly assumed everyone had lived through the same pain I had. It’s a great question, so I decided to focus this installment on real life examples.
There are actually three basic types of DFM issue that a design can encounter: random, systematic, and parametric. Random defects are defects that occur independent of the design layout, but the probability of the design failing because of them is dependent on the layout. Here are some examples of the defects I am talking about.
The image on the left is a composite wafer map showing the location of all the particle defects that occurred on this wafer during processing. By composite I mean the sum of the particles that occurred at various points during the manufacturing process and are located within various layers of the design.
This comes from a paper I did with LSI at DesignCon this year. We used the Calibre YieldEnhancer tool to find opportunities for via doubling that their router missed on four different designs. We then ran Calibre YieldAnalyzer to assess the Critical Area impact of adding the extra vias and the yield impact it would have in production. You can see that the design yields were increased by up to 2% by making these few incremental changes on top of what the router had already done, on a process that was already running at mature yields. On a high volume product, 2% could mean a lot of extra profit. Imagine the impact of a broad range of changes throughout the design flow.
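The relationship between critical area and random-defect yield can be sketched with the classic Poisson yield model, Y = exp(-D0 × Ac). The numbers below are invented for illustration only (they are not from the LSI paper), but they show how a small reduction in critical area from via doubling translates into a yield gain on the order of a percent or two:

```python
import math

def poisson_yield(defect_density, critical_area):
    """Classic Poisson yield model: Y = exp(-D0 * Ac).

    defect_density: killer defects per cm^2 (D0)
    critical_area:  layout area sensitive to a killer defect, cm^2 (Ac)
    """
    return math.exp(-defect_density * critical_area)

# Hypothetical numbers, chosen only to illustrate the effect.
d0 = 0.5          # defects/cm^2
ac_before = 0.40  # critical area before via doubling, cm^2
ac_after = 0.36   # slightly reduced critical area after doubling vias

y_before = poisson_yield(d0, ac_before)
y_after = poisson_yield(d0, ac_after)
print(f"yield before: {y_before:.3f}, after: {y_after:.3f}")
```

Reducing critical area lowers the expected count of killer defects landing on sensitive layout, which is why even incremental via doubling moves the yield.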
The second type of DFM issue that a design can encounter is systematic defects. These are defects that only occur when a particular layout construct interacts with a particular process variation. Again, the problem is statistical, in that the process only exhibits the particular variation a small percentage of the time, and only a narrow range of layout constructs are susceptible to the variation. Several examples are shown below.
In this first example an electrical short and an electrical open are shown that were caused by variation in the lithography process interacting with these particular layout constructs. You can see that the bulk of the patterns are produced without issue and the problem was very localized. These locations print perfectly fine at nominal litho dose and focus, but at one edge of the process variation these spots image improperly. These particular locations have a non-zero probability of failing, but the probability is not 100%. Tools like Calibre Litho Friendly Design (LFD) are used to identify these types of litho sensitivities.
In this second example an electrical short is shown that was caused by the interaction of the previous layers with the CMP process. You can see in the picture on the left that all the lower levels of metal were aligned with the same spacing and width. This caused a slight thickness variation on each layer that added up as each layer was polished. Then in the top layer the layout was different, and the depression had accumulated to the point that the CMP process did not clear all the copper in the depressed area, leaving a slight amount of copper bridging the two wires. Again, these particular locations have a non-zero probability of this occurring, but the probability is not 100%. Tools like Calibre CMPAnalyzer (CMPA) are used to identify these types of CMP sensitivities, and tools like Calibre YieldEnhancer are used to do “smart” fill to correct them.
In this example an electrical open is shown which is caused by the migration of small voids (bubbles, essentially) in the copper metal that move to a point of stress relief and accumulate into a void large enough to cause an open. This phenomenon occurs when large areas of copper are in proximity to a single via; the via tends to act as a point of stress relief. Again, the probability of it occurring is non-zero but not 100%. As the graph on the right shows, the probability varies dramatically with the width of the wire in this particular test structure.
In this example a non-problem becomes a problem in a very limited combination of multiple layout dimensions. The dielectric deposition process that covers poly and active prior to cutting the local interconnect (LI) holes produces “keyholes” at certain gate spacings, as shown in the picture on the right. Normally these are no problem and do not affect anything about the circuit. However, when two LI cuts with small spacing between them are made between these gates, as shown in the layout on the left, an unexpected problem occurs. The keyhole acts as a tunnel between the two LI cuts, and when the titanium liner is deposited in the cut, small amounts of Ti diffuse into the tunnel. If the LI cuts are close enough together, then the tunnel is short enough for the diffused Ti from each side to touch, causing a short as shown in the picture in the middle. Again, the probability of it occurring is non-zero but not 100%, and it is highly dependent on both the gate and LI spacing simultaneously.
In this final example, electrical shorts have an increased probability of occurring when min-width metal wires at min spacing run beside each other for long distances. The cause is surface tension from evaporating water during the develop rinse and dry step. It is very sensitive to feature dimensions and has a non-zero but not 100% probability of occurring.
For the last three examples there is no dedicated process-simulator-based DFM solution in the EDA industry for identifying these types of issues. In these cases people use Calibre YieldAnalyzer to create statistically based recommended rule analysis reports for these issues as they find them. We call this type of analysis Critical Feature Analysis (CFA). The idea is to take multi-dimensional measurements of the layout and relate them mathematically to generate some level of empirical model of the probability or risk of these types of occurrences, and then to roll up the statistical probability at the block or chip level. Armed with this information, the designer can prioritize the various features by sensitivity and drive down the overall statistical probability of failure. This in turn improves the yield. An example of this was demonstrated by Samsung in the Common Platform joint paper at SPIE this year, shown below.
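The “roll up” step in CFA can be sketched as simple probability arithmetic: if each instance of a risky feature class fails independently with some small empirical probability, the chip survives only if every instance survives. The feature classes, counts, and per-instance probabilities below are invented for illustration:

```python
import math

def chip_pass_probability(feature_counts, fail_probs):
    """Roll up per-feature failure probabilities to a chip-level
    survival probability, assuming independent failures:
    P(chip good) = product over feature classes of (1 - p_i)^n_i.
    """
    log_p = 0.0
    for n, p in zip(feature_counts, fail_probs):
        log_p += n * math.log1p(-p)  # log((1-p)^n), numerically stable
    return math.exp(log_p)

# Hypothetical feature classes (e.g. single vias beside wide copper,
# long min-spaced parallel runs, tight LI-cut pairs) with invented
# counts and empirically fitted per-instance failure probabilities.
counts = [1_000_000, 250_000, 50_000]
probs = [1e-8, 5e-8, 2e-7]

print(f"chip-level pass probability: {chip_pass_probability(counts, probs):.4f}")
```

Because the per-instance probabilities are tiny but the counts are huge, reducing the number of instances of the most sensitive feature classes is what moves the chip-level number, which is exactly how the CFA reports let a designer prioritize.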
The table on the left shows the difference in the MCD score between the DFM-enhanced and the nominal design. MCD is the Common Platform implementation of the Calibre YieldAnalyzer CFA solution. They ran the two versions of the layout side by side on a test chip. The table on the right shows that the DFM version yielded ~8% better than the non-optimized one. The MCD score doesn’t predict the exact amount, but there is a strong statistical correlation between the improvement of these DFM quality scores from CFA and the yield in production.
The last type of DFM issue that a design can encounter is parametric variability. This might not be accurately called “yield loss” as it depends on your product specifications. However, different layout configurations can experience much more variation than others in a way that doesn’t cause a short or open but causes a variation in some product performance measure. Again I will use a litho example.
In this example the L-shaped piece of poly rounds off when printed on the wafer. Because the bend is so close to the active area edge, it affects the gate length at the edge of this transistor. The difference will vary as the alignment and exposure vary during processing. By moving the bend farther away, or by reducing how far the bend runs parallel to the active edge, the designer can reduce the variation he or she will see in production. Recommended rules in general are layout guidelines that relate to statistical yield loss and parametric variability. In other words, they are rules that you don’t always have to follow, but the more of them you follow, the greater the reduction in statistical variability you will see in the product. The following are good examples.
The left example shows that increasing the contact-to-gate spacing from the minimum design rule to the recommended rule reduces the Ioff leakage of the transistor by 35%. A 35% change in one transistor may not be critical, but if a statistically significant number of transistors have room to make this change, then it will have a statistically significant impact on the chip leakage. The second example shows a 10% change in the resistivity of poly as the width varies from the min DRC rule to the RR. The third example indicates a significant change in the IDsat of a transistor as the gate spacing is changed from min DRC to RR. The bottom line is summed up well in the following data from ARM.
In this experiment ARM shows five different implementations of the same cell. The graph shows how the performance of the cell varied with the different implementations, and the table shows a change in relative yield between the approaches. All of them passed DRC and LVS! Design does make a difference, and using the DFM tools to guide your optimization will make a difference.
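The statistical argument behind the contact-to-gate example above is just scaling arithmetic: a fixed per-device improvement matters in proportion to how many devices have room for it. The 60% figure below is an invented assumption; the 35% Ioff reduction is the number cited earlier:

```python
def chip_leakage_reduction(fraction_movable, per_device_reduction):
    """Estimate chip-level leakage reduction when a fraction of the
    transistors can move from the min design rule to the recommended
    rule, each seeing the same per-device Ioff reduction.
    """
    return fraction_movable * per_device_reduction

# Assuming (hypothetically) 60% of transistors have room for the RR
# spacing, and each sees the 35% Ioff reduction cited above:
print(f"{chip_leakage_reduction(0.60, 0.35):.0%} chip leakage reduction")
```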
I hope these examples help you better understand the importance of investing in DFM tools, practices and methodologies.