Monte-Carlo-based Leakage Analysis of VLSI Circuits using Graphic Processing Units
xii, 181 p.
Owing to the rapid expansion of the mobile application market, power consumption has become a cardinal limiting factor in chip design. In particular, leakage power consumption (also called static power consumption) accounts for almost half of the overall power consumed, and it is drawn even while the chip is idle. Leakage power is directly related to the battery life of mobile devices and, furthermore, to the overall power consumption. It is therefore very important to reduce the leakage power of VLSI chips, especially mobile VLSI chips, and accurate leakage estimation has consequently become one of the most important steps in VLSI chip design. However, the aggressive scaling down of semiconductor process technology has increased process uncertainty and, with it, the variation in process parameters. This increased process-parameter variation results in larger fluctuations in the performance of VLSI chips; the leakage current, in particular, is known to be extremely sensitive to process variation. Worst-case corner analysis has been used successfully to deal with process-parameter variation when the variation is not too severe and die-to-die (D2D) variation is the main contributor. However, the continuous scaling down of process technology has worsened process-parameter variation to a serious level, and within-die (WID) variation has become comparable in magnitude to D2D variation. As a result, worst-case corner analysis has become overly pessimistic in many designs, invalidating the analysis results, particularly during gate-level design. Several statistical approaches based on the first-order exponential-polynomial model (first-order model) have been proposed to account for process variation, including WID variation, and they have been used successfully when the minimum feature size is larger than about 100 nm.
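To make the first-order exponential-polynomial model concrete, the sketch below evaluates it for one cell: the logarithm of the leakage current is approximated as a linear function of the process-parameter deviations. The coefficients and the choice of two parameters are hypothetical illustrations, not values from the dissertation.

```python
import numpy as np

def first_order_leakage(dp, a0, a):
    """First-order exponential-polynomial model: log(I_leak) is
    approximated as a0 + sum_i a_i * dp_i, where dp holds the
    deviations of the process parameters from their nominal values
    (e.g. gate length, threshold voltage). Coefficients are assumed
    to come from a prior characterization step."""
    return np.exp(a0 + np.dot(a, dp))

# Hypothetical fitted coefficients for one cell type.
a0 = np.log(1e-8)            # nominal leakage of 10 nA
a  = np.array([-0.8, -1.2])  # log-leakage sensitivities to two parameters

nominal = first_order_leakage(np.zeros(2), a0, a)
shifted = first_order_leakage(np.array([0.1, -0.05]), a0, a)
```

At zero deviation the model returns the nominal leakage; a positive net sensitivity term scales the leakage exponentially, which is why the model is fitted in the log domain.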
However, as process technology entered the deep-submicron era, the nonlinear relationship between certain process parameters and the logarithm of the leakage current became much stronger, and the first-order model exhibits significant errors in recent process technologies. A look-up table (LUT)-based leakage current model, which captures this nonlinearity for accurate leakage modeling, is more accurate than the first-order model, but it has high computational complexity; consequently, Monte-Carlo-based leakage simulation using the LUT-based model requires a long runtime to complete a leakage analysis of a recent large VLSI circuit. This dissertation presents a novel gate-level leakage analysis method that considers the effect of process variation with an accurate gate-level leakage current model. The proposed method is based on Monte-Carlo (MC) simulation, and the proposed model is developed for this MC simulation. The proposed leakage current model combines the LUT-based model with the first-order model and takes advantage of both. For accuracy, the process parameters with a strong nonlinear relationship to the logarithm of the leakage current are treated as nonlinear parameters and modeled in a LUT; the remaining parameters are treated as linear parameters and handled by the first-order model for efficiency. A characterization method for the proposed model is also presented. The proposed MC simulation uses this model to evaluate the leakage current of each cell in a circuit, and it is executed on multiple graphics processing units (GPUs) to exploit maximum parallelism and overcome the high computational complexity of the MC simulation.
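The split between LUT-modeled nonlinear parameters and first-order linear parameters can be sketched as follows. The nonlinear part interpolates a pre-characterized table of log-leakage values, and the linear part adds first-order corrections in the log domain; the grid, table values, and sensitivities below are hypothetical, and the real model may use a multi-dimensional LUT rather than the 1-D table shown here.

```python
import numpy as np

def hybrid_leakage(dp_nl, dp_lin, lut_grid, lut_logI, a_lin):
    """Hybrid leakage model (sketch): the strongly nonlinear parameter
    (e.g. a channel-length deviation) is handled by interpolating a
    characterized LUT of log-leakage values, while the remaining
    parameters use first-order (linear-in-log) sensitivities."""
    log_i = np.interp(dp_nl, lut_grid, lut_logI)  # nonlinear part from LUT
    log_i += np.dot(a_lin, dp_lin)                # linear corrections
    return np.exp(log_i)

# Hypothetical characterization data for one cell.
lut_grid = np.linspace(-0.1, 0.1, 5)                            # nonlinear parameter deltas
lut_logI = np.log(1e-8) - 20.0 * lut_grid + 60.0 * lut_grid**2  # nonlinear in log
a_lin = np.array([-1.0])                                        # linear-parameter sensitivity

i_nom = hybrid_leakage(0.0, np.zeros(1), lut_grid, lut_logI, a_lin)
i_var = hybrid_leakage(0.05, np.array([0.02]), lut_grid, lut_logI, a_lin)
```

The LUT keeps the accuracy where the log-leakage curves, while the cheap linear terms keep the per-cell evaluation fast enough for a Monte-Carlo inner loop.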
In particular, the NVIDIA CUDA platform is used to implement the proposed MC simulation method, and the implementation is optimized to maximize the utilization of the GPUs. In experiments using a 22 nm predictive technology model and the ISCAS-85 benchmark circuits, the proposed method showed small average relative errors of less than 8% in all statistics, whereas the conventional first-order-model-based methods showed average errors larger than 90% in all statistics. Compared with LUT-based MC simulation, the proposed method was more efficient with comparable accuracy. In terms of runtime, the proposed method completed a leakage analysis of an OpenSparc T2 core of 4.5 million gates in less than two minutes using three GPUs. Although this runtime is longer than the 40 seconds of VCA, the fastest analytic statistical leakage estimation (SLE) method, a runtime of less than two minutes for 4.5 million gates is still acceptable; moreover, if more GPUs are available, the runtime can be reduced further. The computational complexity of the proposed method therefore poses no problem. In experiments using a 32 nm industrial transistor model and the ISCAS-85 benchmark circuits, the proposed method also showed small average relative errors of less than 10% in all statistics, whereas the conventional first-order-model-based methods showed average errors larger than 20% in almost all statistics. Compared with LUT-based MC simulation, the proposed method was more than 100 times faster with comparable accuracy. When three GPUs were used, the proposed method completed a leakage analysis of the 4.5-million-gate OpenSparc T2 core in less than one minute, which is faster than VCA, which took 92 seconds for the same circuit.
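The overall MC flow, with its D2D/WID decomposition, can be illustrated with a small CPU-side numpy sketch: each sample draws one die-to-die shift shared by all cells plus independent within-die shifts per cell, evaluates a per-cell leakage model, and sums over the chip. This stands in for the dissertation's CUDA implementation; the one-parameter exponential model and all numbers below are illustrative assumptions.

```python
import numpy as np

def mc_leakage_stats(n_samples, n_cells, sigma_d2d, sigma_wid, a0, a1, seed=0):
    """Monte-Carlo leakage analysis (CPU sketch of the GPU flow):
    - d2d: one die-to-die (D2D) parameter shift per MC sample,
      shared by every cell on the die;
    - wid: independent within-die (WID) shifts per cell;
    - per-cell leakage from a simple exponential model, summed per
      sample to give the chip leakage, then reduced to statistics.
    In the dissertation, the per-cell evaluation and the per-sample
    reduction are the parts mapped onto CUDA threads."""
    rng = np.random.default_rng(seed)
    d2d = rng.normal(0.0, sigma_d2d, size=(n_samples, 1))
    wid = rng.normal(0.0, sigma_wid, size=(n_samples, n_cells))
    cell_leak = np.exp(a0 + a1 * (d2d + wid))  # per-cell leakage, broadcast over cells
    total = cell_leak.sum(axis=1)              # chip leakage per MC sample
    return total.mean(), total.std()

mean, std = mc_leakage_stats(2000, 100, 0.03, 0.05, np.log(1e-8), -5.0)
```

Because the samples and the cells are independent along their respective axes, both loops parallelize trivially, which is what makes a multi-GPU mapping attractive for circuits of millions of gates.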
In conclusion, I expect that the proposed MC-based method with the proposed gate-level leakage current model can provide accurate leakage analysis results within a practical runtime.