This anomalous empirical success has been largely explained using the concept of effective dimension (see Background reference).
The concept is reviewed here because it strongly impacts the manner in which we will map circuit problems into a successful QMC form. For a subset u of the input dimensions, let C_u denote the unit cube in the dimensions that belong to u. Using the ANOVA decomposition, the variance of f can be written as sigma^2(f) = sum over subsets u of sigma_u^2(f), where sigma_u^2(f) is the variance of the ANOVA component f_u of f. Definition 1: The effective dimension of f in the truncation sense is the smallest integer d_t such that the sum of sigma_u^2(f) over all u contained in {1, ..., d_t} is at least 0.99 * sigma^2(f). Definition 2: The effective dimension of f in the superposition sense is the smallest integer d_s such that the sum of sigma_u^2(f) over all u with |u| <= d_s is at least 0.99 * sigma^2(f). Effective dimension is relevant for two important reasons.
First, it is widely invoked to help explain why QMC has been so strikingly efficient on certain high-dimensional problems, e.g., in financial pricing.
These tasks seem to have low effective dimension. For example, in a pricing task with a long time horizon, money today is much more valuable than money tomorrow, which reduces the impact of many dimensions of the problem. It is an open question whether this behavior obtains in circuit analysis. Second, effective dimension is essential to optimally map problems into QMC form, which we discuss next. Suppose, for example, there are random threshold voltages to sample. Ideally, it should not matter whether any particular voltage is mapped to x 1 or to x 37. Unfortunately, this is not the case.
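To make Definition 1 concrete, consider a toy additive model f(x) = sum_i w_i * x_i with geometrically decaying weights (the weights are hypothetical, chosen only for illustration). For an additive function, the only nonzero ANOVA variance components are the one-dimensional ones, so the truncation dimension can be computed directly:

```python
import numpy as np

# Toy additive model f(x) = sum_i w_i * x_i over x ~ U(0,1)^20 (weights hypothetical).
# For additive f, sigma^2_u is nonzero only for singleton sets u = {i},
# with sigma^2_{i} = w_i^2 * Var(x_i) = w_i^2 / 12.
w = 0.5 ** np.arange(20)
var_i = w ** 2 / 12.0
cum = np.cumsum(var_i) / var_i.sum()
d_t = int(np.argmax(cum >= 0.99)) + 1  # smallest d_t capturing 99% of the variance
print(d_t)  # -> 4: only the first four dimensions really matter
```

Under these weights the effective dimension in the truncation sense is 4, even though the nominal dimension is 20; QMC exploits exactly this kind of gap.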
All low-discrepancy sequences (LDSs) are imperfect, and usually show degraded uniformity as dimension increases.
This takes the form of pattern dependencies (see Background reference), illustrated in FIG. To deal with these pattern effects, the important variables should be mapped to the early, best-behaved dimensions of the LDS. If this can be done, then very low error can still be achieved. For problems with a time-series random-walk structure, there are good mapping techniques (see Background reference), but these are not applicable in the case of circuit yield analysis.
Two strategies are suggested; the latter is concentrated on here. The measure of sensitivity used is the absolute value of Spearman's rank correlation coefficient (see Background reference). This is similar to Pearson's correlation, but more robust in the presence of non-linear relationships.
Suppose R_i and S_i are the ranks of corresponding values of a parameter and a metric over n sample points; their rank correlation is then given as r_s = 1 - 6 * sum_i (R_i - S_i)^2 / (n * (n^2 - 1)). This approach has a two-fold advantage. First, it helps reduce the truncation dimension, since all the important dimensions are the first few. Second, the rank correlation can be computed cheaply, by first running a smaller pilot Monte Carlo run. For multiple metrics, the sum of the rank correlation values across all the metrics is used.
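A minimal sketch of this rank-correlation-based ordering, assuming a small pilot Monte Carlo run with hypothetical parameter and metric data (the closed-form Spearman formula below assumes no ties, which holds for continuous samples):

```python
import numpy as np

def spearman_abs(x, y):
    """|Spearman rank correlation| between parameter samples x and metric samples y."""
    n = len(x)
    rx = np.argsort(np.argsort(x))  # ranks 0..n-1 (no-ties assumption)
    ry = np.argsort(np.argsort(y))
    d = rx - ry
    return abs(1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1)))

# Hypothetical pilot run: 200 samples of 3 parameters and one metric.
rng = np.random.default_rng(0)
params = rng.standard_normal((200, 3))
metric = 5.0 * params[:, 2] + 0.5 * params[:, 0] ** 3 + 0.1 * rng.standard_normal(200)

scores = [spearman_abs(params[:, j], metric) for j in range(3)]
order = np.argsort(scores)[::-1]  # most important variable goes to dimension 1
print(order)
```

The resulting ordering maps the most influential parameter to the first (most uniform) Sobol' dimension; for multiple metrics, `scores` would be summed across metrics before sorting.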
One final problem is now confronted: the error bound of Equation 6 for QMC is very difficult to compute.
Also, it is only an upper bound on the error: it does not provide a practical way to measure the actual error when the exact solution is unknown. In a standard Monte Carlo scenario, one would simply run and compare several different pseudo-random samplings. But QMC generates deterministic samples: each run yields the same points. Hence, the method used here randomizes the points by scrambling the digits of the original LDS. Other methods have also been introduced (see Background reference).
All these randomized sequences maintain the uniformity properties of the original LDS. Owen's original scrambling uses a large amount of memory; hence, a more scalable, but less powerful, version is used, described in Background reference. In this discussion, the performance of the scrambled Sobol' points is compared against that of standard Monte Carlo on three different testcases. The testcases and the experiments are now discussed.
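The comparison methodology can be sketched as follows, using a smooth toy integrand in place of a circuit metric (in the actual flow, metric values come from SPICE-level simulation, not a closed-form function). Each independently scrambled Sobol' run gives a different estimate, and the spread across runs serves as a practical error measure, just as for the pseudo-random runs:

```python
import numpy as np
from scipy.stats import qmc

# Toy smooth integrand standing in for a circuit metric.
def g(x):
    return np.prod(1.0 + 0.2 * (x - 0.5), axis=1)  # exact integral over [0,1]^d is 1

d, n, runs = 8, 1024, 10
mc_est, rqmc_est = [], []
rng = np.random.default_rng(1)
for r in range(runs):
    mc_est.append(g(rng.random((n, d))).mean())            # pseudo-random Monte Carlo
    pts = qmc.Sobol(d=d, scramble=True, seed=r).random(n)  # scrambled Sobol' points
    rqmc_est.append(g(pts).mean())

# Spread across independently randomized runs estimates the integration error.
print(np.std(mc_est), np.std(rqmc_est))
```

For this smooth integrand the scrambled-Sobol' spread is far smaller than the pseudo-random spread at the same sample count, which is the behavior the testcases below probe on real circuits.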
All samples were evaluated using detailed circuit simulation in Cadence Spectre. Results for all testcases will be analyzed together later in this embodiment. The RDF is modeled as a normally distributed threshold-voltage (V_t) variation: V_t = V_t0 + dV_t, where dV_t is zero-mean Gaussian and V_t0 is the nominal threshold voltage. There are a total of 31 statistical variables in this problem. Ten Monte Carlo runs of 50, pseudo-random points each were run. Results are discussed later in this embodiment. As an illustrative example, consider how the rank correlation-based variable-dimension mapping works for this testcase.
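A sketch of how such normally distributed V_t samples could be drawn from QMC points via the inverse-CDF transform (the nominal value, sigma, and sample count below are hypothetical placeholders, not values from this testcase):

```python
import numpy as np
from scipy.stats import norm, qmc

VT0, SIGMA, D = 0.30, 0.03, 31  # hypothetical nominal V_t (V), RDF sigma, 31 variables
u = qmc.Sobol(d=D, scramble=True, seed=0).random(256)  # uniforms in [0,1)^31
vt = VT0 + SIGMA * norm.ppf(u)  # inverse-CDF transform: V_t = V_t0 + N(0, SIGMA^2)
print(vt.shape)
```

Each row of `vt` is one 31-dimensional statistical sample, ready to be written into a Spectre netlist as per-device threshold voltages.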
The variables are sorted according to decreasing importance |r_s|; this is the order in which they would be mapped to the dimensions of the Sobol' sequence. The three most important parameters are labeled: (1) t_ox, the global gate-oxide variation; (2) P_tg1, the V_t variation of the pMOS device in the input transmission gate Tg1; and (3) the V_t variation of the nMOS device in the inverter Inv1. Given how the input was timed in the testbench, these measures of importance make intuitive sense. The second testcase is an SRAM column.
Only one cell is being accessed, while all the other wordlines are turned off. RDF on all devices, including the write driver and column mux, is considered, along with one global gate-oxide variation. All variations are assumed to be normally distributed. The assumed V_t standard deviation is too large for the 90 nm process, but is in the expected range for more scaled technologies.
The write time is measured as a multiple of the fanout-of-4 inverter delay (FO4). The value being estimated is a high percentile of the write time. Ten Monte Carlo runs of 20, pseudo-random points each were run.
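Estimating a high percentile from the sampled write times is then straightforward. The distribution below is a synthetic stand-in for simulator output, and the 99th percentile is shown only as an example, since the exact percentile is elided in the text:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic, slightly right-skewed write-time samples in FO4 units
# (a stand-in for Spectre simulation results).
write_time = (10.0 + 0.8 * rng.standard_normal(20000)
              + 0.2 * rng.standard_normal(20000) ** 2)
p99 = np.percentile(write_time, 99)  # high-percentile estimate of the write time
print(p99)
```

With QMC, the same percentile computation is applied to the metric values obtained at the scrambled Sobol' points, and the spread across scrambles gauges the estimation error.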
The results are discussed later in this embodiment. The third testcase is a low-voltage CMOS bandgap reference. This bandgap is able to provide reference voltages below 1 V, and is built using standard CMOS technology.
This circuit was chosen for its relevance in today's low-voltage designs, and also to test QMC on a circuit with highly non-linear behavior. The opamp used is a standard single-ended, RC-compensated, two-stage opamp (see Background reference). The circuit includes diodes. The statistical variables comprise local variation parameters and one global t_ox variation.
The yield integral can be written in the form of Equation 1, similar to what was done for the MSFF discussed earlier. Ten Monte Carlo runs of 10, pseudo-random points each were run.
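The Equation 1 form referred to above is an integral of a pass/fail indicator over the unit cube. A minimal sketch of estimating such a yield with scrambled Sobol' points, where the metric model and specification limit are invented purely for illustration:

```python
import numpy as np
from scipy.stats import norm, qmc

SPEC = 1.0  # hypothetical specification limit on the circuit metric

def passes(x):
    """Pass/fail indicator evaluated at uniform points x in [0,1)^d."""
    z = norm.ppf(x)  # map uniform QMC points to Gaussian statistical parameters
    metric = 0.8 + 0.05 * z.sum(axis=1) / np.sqrt(x.shape[1])  # toy metric model
    return metric < SPEC  # 1 inside the pass region, 0 outside

pts = qmc.Sobol(d=6, scramble=True, seed=3).random(4096)
yield_est = passes(pts).mean()  # QMC estimate of the yield integral
print(yield_est)
```

In the real flow, `passes` would be replaced by a Spectre simulation of each sample point followed by a comparison of the measured metric against its specification.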