A Statistical Design Method for Giga Bit Memory Arrays and
Beyond
C. Caillat (*), E. Carman, J.M. Daga, C. Ouvrard and P. Bauser,
Innovative Silicon SA, Lausanne, Switzerland
(*) now with Micron Technology Belgium
Outline
Introduction
Principle of the Method & Theoretical Background
Application Example: Case of a 1Gb Z-RAM Floating-Body Memory Array
Conclusions and Perspectives
Introduction
- This paper deals with a statistical sampling method intended to study worst-case configurations of electrical parameters in large memory arrays.
- The goal is to help assess the robustness of designs to local parameter fluctuations in a reasonable number of simulation runs.
- These fluctuations can impact functionality through critical electrical responses, for example the voltage read margin resulting from the variation of sense-amplifier characteristics and cell signal fluctuations.
- Hence, worst cases are defined as the combinations of parameters yielding the largest response degradation within a given array (of fixed size).
- To identify such worst cases for relevant responses, the classical approaches are Monte-Carlo sampling (CPU intensive) or corner sampling (risk of over-design and/or non-functional corners).
- Our approach is based on a sampling of iso-probable extreme events.

Application boundaries:
- Normally distributed and independent input variables.
- Non-exotic response: the response must be monotonic with respect to the input parameter variations. Example: the signal should not decrease and then increase again as a parameter grows.
- Local fluctuations only: die-to-die components are not considered here.
Illustration of the Worst Case Margin
[Figure: probability density functions (PDF) vs. V/I, showing the cell signal distribution and the sense-amp margin distribution, and the margin at a given probability (number of events per array) between two points x1 and x2.]

Questions: for a given array size, what is the lowest observable margin (= x2 − x1), and for which (x1, x2) pair(s) does it occur?
Sampling Principle
- Assuming two independent random parameters (x1, x2), there are several ways to sample the parameter space.
- A Monte-Carlo simulation approach fails to reach the extreme cases on very large arrays, and trying to extrapolate from this region would be inaccurate.
- A classical DOE approach may deviate substantially from the actual worst cases and show false non-functionality or non-linearity.
- The contour of iso-probable events can be calculated theoretically.
- This proposal: sampling exclusively on the extreme horizon of iso-probability = very few runs!

[Figure: (x1, x2) parameter space showing the event horizon in an array.]
Joint Probability Density Function
Note: illustration examples and formulas are given with a two-parameter case, and generalized to n dimensions hereafter.

Key theoretical element: the joint Probability Density Function (PDF) of two normally distributed variables (x, y) of averages (µx, µy) and standard deviations (σx, σy), respectively (*):

f(x, y) = (1 / (2π·σx·σy)) · exp(−[ (x − µx)²/(2σx²) + (y − µy)²/(2σy²) ])    (1)

Iso-density curve definition for (x, y) pairs, with z = constant (the curve is an ellipse, shown in red in the illustration to the right):

((x − µx)/σx)² + ((y − µy)/σy)² = z²    (2)

(*) AKA bivariate normal distribution
Iso-Probability Sets
- It can be demonstrated through basic statistics (see details in the appendix) that the above iso-density is also an iso-probability curve.
- The corresponding probability is z, expressed as a standardized sigma value of the normal distribution.
- In other words, choosing (x, y) pairs such that

((x − µx)/σx)² + ((y − µy)/σy)² = 6²    (z = 6 in this example)

guarantees that these events all belong to an iso-probability of ±6 sigma, i.e. have one chance out of 1 billion to happen: they represent the set of worst cases that we are looking for.
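A minimal Python sketch of this construction (the means and sigmas below are assumed values for illustration, not taken from the paper): sweeping an angle in standardized space and mapping back to physical units generates points on the z = 6 iso-probability ellipse, all satisfying equation (2):

```python
import numpy as np

# Assumed means and standard deviations of two independent normal parameters
mu_x, sigma_x = 0.0, 1.0
mu_y, sigma_y = 5.0, 2.0

z = 6.0  # iso-probability level, in standardized sigmas
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)

# Points on the iso-probability ellipse: standardized radius equals z
x = mu_x + sigma_x * z * np.cos(theta)
y = mu_y + sigma_y * z * np.sin(theta)

# Every point satisfies ((x-mu_x)/sigma_x)^2 + ((y-mu_y)/sigma_y)^2 = z^2
r2 = ((x - mu_x) / sigma_x) ** 2 + ((y - mu_y) / sigma_y) ** 2
print(np.allclose(r2, z ** 2))  # prints True
```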
Sense-Amplifier Case and General Formulas
- In practice, however, not all parameters belong to the same population. Thus, the sense-amplifier (SA) is a sub-population of the entire array: it is shared by many cells. The excursion of the associated variables is hence bounded by a frontier defined as follows, with ZSA representing the lowest probability of occurrence for a SA in the array, expressed in standardized sigmas:

((XSA − µSA)/σSA)² ≤ ZSA²,  with XSA ~ N(µSA, σSA)    (3)

- Finally, the above formulas (2) and (3) can be generalized to an n-dimensional parameter space:

Σ_{i=1..n} ((Xi − µi)/σi)² = Z²    (4)   (iso-probability n-dimensional ellipsoid for a set of random variables Xi)

Σ_{i ∈ SA} ((Xi − µi)/σi)² ≤ ZSA²    (5)   (generalized boundary condition for the subset of SA-related variables)
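Formulas (4) and (5) translate into a simple sampling procedure: draw an isotropic direction by normalizing a Gaussian vector (the classical way to pick a point uniformly on a sphere), scale it onto the radius-Z hypersphere, and reject draws whose SA-related coordinates violate the bound. A hedged sketch in standardized-sigma space (the function name, seed and ZSA value below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def iso_probability_samples(n_dims, z, n_samples, sa_idx=(), z_sa=None):
    """Draw points uniformly on the radius-z sphere in standardized-sigma
    space, rejecting draws whose SA-related coordinates violate bound (5)."""
    samples = []
    while len(samples) < n_samples:
        g = rng.standard_normal(n_dims)   # isotropic direction (normalized Gaussian)
        p = z * g / np.linalg.norm(g)     # project onto the sphere: satisfies (4)
        if z_sa is not None and np.sum(p[list(sa_idx)] ** 2) > z_sa ** 2:
            continue                      # SA excursion exceeds Z_SA: reject
        samples.append(p)
    return np.asarray(samples)

# 3 standardized variables, 6-sigma surface, SA variables bounded at 4 sigma
pts = iso_probability_samples(3, z=6.0, n_samples=500, sa_idx=(0, 1), z_sa=4.0)
print(np.allclose(np.sum(pts ** 2, axis=1), 36.0))  # all points satisfy (4)
```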
Examples of Resulting Sampling Surfaces
[Figure: (left, two variables) iso-probability curve example for a 1 Gb array with 512k SA, 156 samples; scales represent standardized sigma values; the excursion of the SA variations is bounded between −ZSA and +ZSA. (right, three variables) iso-probability surface example for a 1 Gb array with 512k SA, two SA variables (XSA, YSA) and one cell variable, 2000 samples, scales in sigma: (a) 3-D view, (b) projection matrix.]
Foreword: Response Analysis
- Once a sampling plan is established, the next steps are:
  - to simulate a subset of samples on the selected iso-probability surface, i.e. corresponding to the requested level of occurrence (such as 1 event per array of 1 Gb, for example);
  - to extract the relevant electrical responses from the simulation tool (SPICE, for example);
  - then to search for the minimum (or maximum) of these responses, and to check whether these conditions guarantee circuit functionality or not.
- We applied the method of this paper to assess the robustness of a 1 Gb Z-RAM floating-body memory design to cell and sense-amplifier fluctuations during a read '1' operation.
- We will focus here on the case of a simple response, a signal margin, that does not require actually simulating the experimental points, but only calculating them with a known formula (see hereafter).
Z-RAM Floating-Body Memory Cell
- The memory cell studied is a Z-RAM vertical double-gate floating-body memory device. It has been characterized and optimized in TCAD. The metric monitored is the charge transferred during a 3 ns read '1', as illustrated below. This is referred to as the cell signal hereafter and expressed in arbitrary units (a.u.).

[Figure: typical read '1' and '0' waveforms used for transient cell characterization in TCAD; transient simulations during a read '1' (a) or a read '0' (b) operation. Scale: density of electron current (A·cm−2). T = 366 K. 40 nm pillar diameter assumed.]
Main Cell Parameter Variations
- After a sensitivity study, two physical parameters were found to significantly impact the cell signal: the Random Dopant Fluctuation (RDF) and the pillar diameter. The relative effects of these parameters, with respect to the nominal conditions, have been characterized in TCAD for a ±6σ variation range using simplifying assumptions (continuous doping, simple diameter offset).
- The linearity of the effects allows a linear interpolation over the full range.
- Note: the Poisson distribution was used to calculate the extreme doping values. The diameter range assumes 3σ = 1 nm of local fluctuation.
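The interpolation step can be sketched as follows; the two end-point effect values below are placeholders for illustration, not the paper's TCAD data:

```python
import numpy as np

# Hypothetical TCAD characterization of the RDF effect on cell signal (a.u.)
# at the -6 sigma and +6 sigma extremes; placeholder numbers, not real data.
EFFECT_MINUS_6, EFFECT_PLUS_6 = -7.0, 5.0

def rdf_effect(sigma_value):
    """Linear interpolation of the cell-signal effect over the +/-6 sigma range."""
    return np.interp(sigma_value, [-6.0, 6.0], [EFFECT_MINUS_6, EFFECT_PLUS_6])

print(rdf_effect(0.0))  # midpoint between the two characterized end points
```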
Sense-Amplifier Variations
- The SA is composed of a pre-amplifier connected to the memory cells through the bit lines (tbl), plus a latch connected to the output of the pre-amplifier (paout).
- The variations of the sense-amplifier switching point due to transistor mismatch have been characterized with the actual circuit configuration, using HSPICE coupled to a classical Monte-Carlo generator to cover a ±3σ excursion range.
- Technique used: the cell signal was swept until residual fails on the output (saout) were eliminated, in order to find the critical SA signal (switching point) due to degraded pre-amplifier output and latch offset.

[Figure: (left) pre-amplifier and latch schematic with signals tbl, paout, saout, saoutb and reference ref. (right) saout latching waveforms vs. cell signal (a.u.) over 1000 Monte-Carlo runs (read '1'), T = 366 K, time scale 0 to 4 ns: 997 PASS, 3 FAIL.]
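The sweep-until-no-fails technique can be mimicked with a toy stand-in for the HSPICE Monte-Carlo population; the switching-point sigma and the sweep step below are assumptions for illustration, not characterized values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for 1000 HSPICE Monte-Carlo samples of the SA switching
# point (a.u.); the 0.1 a.u. sigma is an assumed value.
switching_points = 0.1 * rng.standard_normal(1000)

# Sweep the cell signal upward until residual fails are eliminated; a run
# "fails" when the signal does not exceed that sample's switching point.
signal, step = 0.0, 0.005
while np.sum(signal <= switching_points) > 0:
    signal += step

critical_signal = signal  # estimated critical SA signal for this sample set
print(int(np.sum(critical_signal <= switching_points)))  # prints 0
```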
Sampling Surface
- With the 3 parameters listed above, we have generated a sampling plan using a tool that we developed under Excel/VBA. The 500 samples generated belong to an iso-probability set of 1 event out of 1 Gb and are represented in the projection matrix below. Scales are the relative cell signal (a.u.).

[Figure: scatterplot matrix of the 500 samples over the three parameters: RDF effect (a.u., about −7 to 5), diameter effect (a.u., −2 to 2) and SA offset (a.u., −1 to 1).]
Resulting Cell Signal Excursion
- The studied response is a signal margin, defined as the difference between the cell signal during a read '1' and the SA switching point:

Signal Margin = Mavg + (SRDF + Sdiam) − SAoffset

where Mavg is the average margin (constant term), SRDF and Sdiam are the cell signal fluctuations due to Random Dopant Fluctuation (RDF) and device diameter (diam) variations, and SAoffset is the SA switching point (offset) fluctuation. A successful read '1' operation requires a positive signal margin.

- Within the above sampling plan, the calculated signal margin is always greater than the functional limit, with a minimum value of 1.65 (a.u.), so we expect a fully functional 1 Gb array.

[Figure: 3-D scatter plot of the signal margin vs. SA offset and RDF effect, with the functional limit plane; the minimum signal margin of 1.65 (a.u.) occurs at a worst-case combination of SRDF, Sdiam and SAoffset.]
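The margin evaluation over a sampling plan can be sketched as follows, with placeholder constants (the average margin and the per-sigma sensitivities below are assumed, not the paper's values). For a response that is linear in the parameters, the minimum over the iso-probability sphere also has a closed form, which echoes the "analytical solutions" perspective mentioned in the conclusions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder constants (a.u.): average margin and per-sigma sensitivities
# of the three margin contributors. Assumed values, not the paper's data.
M_AVG = 8.0
K_RDF, K_DIAM, K_SA = 0.8, 0.4, 0.5

# 500 sample directions on the 6-sigma sphere (3 standardized variables)
g = rng.standard_normal((500, 3))
pts = 6.0 * g / np.linalg.norm(g, axis=1, keepdims=True)

# Signal Margin = Mavg + (SRDF + Sdiam) - SAoffset, each term linear in its
# standardized sigma coordinate
s_rdf, s_diam, sa_off = K_RDF * pts[:, 0], K_DIAM * pts[:, 1], K_SA * pts[:, 2]
margin = M_AVG + (s_rdf + s_diam) - sa_off

# For a linear response, the analytic minimum over the sphere is known:
analytic_min = M_AVG - 6.0 * np.sqrt(K_RDF**2 + K_DIAM**2 + K_SA**2)
print(margin.min() >= analytic_min, margin.min() > 0.0)
```

With these (assumed) sensitivities the analytic minimum stays positive, so every sampled margin does too; the sampled minimum can only approach the analytic one from above.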
Conclusions & Perspectives
- We propose a fast and accurate method to predict worst-case configurations of parameters within large memory arrays.
- We have applied this method to assess the functionality of a 1 Gb Z-RAM floating-body memory array with only 500 runs.
- The method applies, in principle, to any other memory array facing similar local fluctuation challenges (Flash, DRAM, SRAM…).
- There are several ways to improve this method:
  - confidence intervals can be calculated for the response;
  - the extremum search can be refined (better accuracy for the prediction), either by numerical methods (search for a local extremum) or by analytical solutions in particular cases (linear combinations of parameters, for example);
  - "Importance Sampling" methods can be used to obtain the full distributions and to take into account non-normal input parameters and/or non-monotonic responses.
THANK YOU FOR YOUR ATTENTION
APPENDIX
Cumulative Probability Calculation [1/4]
Demonstrations below are made using standardized variables X, Y:

X = (x − µx)/σx,   Y = (y − µy)/σy

Hence, the joint PDF reads:

f(X, Y) = (1/2π) · exp(−(X² + Y²)/2)

and the iso-density curve reads:

X² + Y² = Z²

Reasoning to find the cumulative probability corresponding to a given pair (X, Y):
- The point M(X, Y) belongs to the iso-density circle of radius Z. We want to calculate PZ, the cumulative probability corresponding to this event (M, Z), i.e. for all events beyond this circle.
- Z being known, X and Y are not independent; hence, calculating the cumulative probability requires taking that into account: the PDF to be integrated is actually a conditional distribution function that can be expressed as a function of a unique variable, to be found.
- The cumulative probability is then the integral of this function between the position M and +∞ along this unique variable.
- On the other hand, the integral is obtained by summing the PDF f(x, y) along the successive intervals Δz between iso-density circles centered on the origin O.
- Moving from the circle of radius Z to the circle of radius Z + Δz transforms M into M' along the direction of the vector OM: this gives the direction along which to integrate the PDF.

[Figure: circles of radius Z and Z + Δz centered on the origin O of the (X, Y) plane, with the points M and M' along the direction OM.]
Cumulative Probability Calculation [2/4]
- With the above considerations, the cumulative probability PZ can be defined as the integral of the conditional distribution along x (f_x|y), with y depending on x through the linear equation y = (Y/X)·x:

P(x ≥ X | y = (Y/X)·x) = ∫_X^{+∞} f_x|y(x) dx

- Observing that this is equivalent to a sum along a radius of direction OM, it is more natural to express this integral as a function of the radius z:

PZ = P(z ≥ Z | X² + Y² = Z²) = ∫_Z^{+∞} p(z) dz

p being the conditional distribution along the direction OM.
Cumulative Probability Calculation [3/4]
- Based on that, it is more convenient to express the PDF in cylindrical coordinates:

f(z, θ) = (1/2π) · exp(−z²/2)

- In this coordinate system, the condition on the direction OM is expressed as a constant angle θM. Hence:

PZ = P(z ≥ Z | θ = θM) = ∫_Z^{+∞} p(z | θM) dz

- The expression of the conditional distribution p is found by applying Bayes' theorem to PDFs:

p(z | θM) = f(z, θM) / pθM

where pθM is the marginal probability defined as:

pθM = ∫ f(z, θM) dz
Cumulative Probability Calculation [4/4]
- The marginal probability pθM can be calculated:

pθM = ∫ f(z, θM) dz = (1/2π) · ∫ exp(−z²/2) dz = (2π)^(−1/2)

- Therefore, the conditional distribution function reads:

p(z | θM) = f(z, θM) / pθM = (2π)^(−1/2) · exp(−z²/2)

- Finally, the cumulative probability is:

PZ = (2π)^(−1/2) · ∫_Z^{+∞} exp(−z²/2) dz

which is the tail cumulative probability of the unidimensional standard normal distribution.
- A similar reasoning can be made in an n-space, using hyperspherical coordinates and integrating along a radius of the hypersphere. The result is the same.
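This result can be checked numerically: integrating the standard normal PDF from Z to infinity reproduces the closed-form tail 1 − Φ(Z), and at Z = 6 that probability is indeed about one chance in a billion, as quoted earlier. A self-contained sketch (the truncation bound and step count are arbitrary numerical choices):

```python
import math

def normal_tail(Z, upper=12.0, n=200_000):
    """Midpoint-rule integration of the standard normal PDF from Z to
    'infinity' (truncated at 'upper', where the remaining mass is negligible)."""
    h = (upper - Z) / n
    s = sum(math.exp(-(Z + (k + 0.5) * h) ** 2 / 2.0) for k in range(n))
    return s * h / math.sqrt(2.0 * math.pi)

Z = 6.0
p_numeric = normal_tail(Z)
p_closed = 0.5 * math.erfc(Z / math.sqrt(2.0))   # 1 - Phi(Z), closed form

print(abs(p_numeric - p_closed) / p_closed < 1e-6)  # numeric matches closed form
print(p_closed < 1e-9 < 2 * p_closed)               # roughly one chance in 1e9
```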