
The Power of Population in Semiconductor Device Reliability

When we introduce a new chip, we plan and execute a comprehensive reliability qualification. The plan is built on a number of reliability stresses (infant mortality testing, early life failure rate testing, and long-term life testing for failure rate prediction), all performed on a small population of samples pulled from early production lots.
Because the device sample sizes are limited, we assign a confidence level to our failure rate predictions using the "industry standard" chi-square adjustment, in the hope that the prediction will be closer to the real field failure rate.
This is the "standard" approach in the semiconductor industry because testing very large sample sizes of chips is not economically feasible, especially for small and fabless semiconductor companies.
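As a rough illustration of how that chi-square adjustment works, the sketch below computes the upper bound on the failure rate in FIT (failures per 10^9 device-hours) at a chosen confidence level. The sample size, test duration, acceleration factor, and confidence level are hypothetical values picked for the example, not figures from any particular qualification plan.

# Sketch (illustrative only): the chi-square upper bound commonly used to put a
# confidence level on a failure rate estimated from a small qualification sample.
# The sample size, test duration, acceleration factor, and confidence level below
# are assumptions for the example, not values from the article.
from scipy.stats import chi2

def fit_upper_bound(failures, devices, hours, accel_factor, confidence=0.60):
    """Upper-bound failure rate in FIT (failures per 1e9 device-hours)."""
    dof = 2 * failures + 2                      # degrees of freedom with 'failures' observed fails
    device_hours = devices * hours * accel_factor
    return chi2.ppf(confidence, dof) / (2.0 * device_hours) * 1e9

# Hypothetical HTOL qualification: 3 lots x 77 devices, 1000 hours, AF = 50, zero failures.
print(f"{fit_upper_bound(0, 231, 1000, 50):.1f} FIT upper bound at 60% confidence")

Note that with zero observed failures the bound is driven entirely by the accumulated device-hours, which is exactly why the size of the tested population matters so much.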
IBM Corp.'s Semiconductor Division calls this practice "finding the tip of the iceberg," since it is "only indicating if there are major catastrophic failure mechanisms." IBM and other major semiconductor manufacturers therefore stress large sample sizes in ongoing reliability testing of the outgoing device population.
This approach requires the capability and facilities for ORT (ongoing reliability testing) of tens of thousands of devices per year. Only major dedicated manufacturers such as Intel, National Semiconductor, and Micron do this.
Only after 2-3 years of intensive ongoing reliability testing of samples from the outgoing population, combined with field failure information, can one make a reliability assessment and a meaningful prediction for the maturing semiconductor product.
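To illustrate why population size matters, the hypothetical comparison below applies the same chi-square bound to a small qualification sample and to an ORT-scale population, with zero observed failures in both cases. The device counts and the acceleration factor are illustrative assumptions, not data from any manufacturer.

# Hypothetical comparison: the same 60%-confidence chi-square bound for a small
# qualification sample versus an ORT-scale population, zero failures in both cases.
# Device counts and the acceleration factor are illustrative assumptions.
from scipy.stats import chi2

def fit_upper_bound(failures, device_hours, confidence=0.60):
    return chi2.ppf(confidence, 2 * failures + 2) / (2.0 * device_hours) * 1e9

ACCEL = 50                                          # assumed acceleration factor
qual = fit_upper_bound(0, 231 * 1000 * ACCEL)       # ~231 devices for 1000 hours
ort = fit_upper_bound(0, 30000 * 1000 * ACCEL)      # tens of thousands of devices per year
print(f"qualification sample: {qual:7.2f} FIT upper bound")
print(f"ORT population:       {ort:7.2f} FIT upper bound")

With no change in the observed failure count, the much larger device-hour base of the ORT population alone pulls the upper bound down by roughly two orders of magnitude, which is the "power of population" in the title.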

