Digital calibration techniques are widely used to linearize pipelined A/D converters (ADCs). However, their power dissipation can be prohibitively high, especially when high-order gain calibration is required. For high-order gain calibration, this paper proposes a design methodology that optimizes the data precision (number of bits) within the digital calibration unit, minimizing the power dissipation of the calibration unit without degrading the linearity of the pipelined ADC. A 90-nm FPGA synthesis of a 2nd-order digital gain-calibration unit shows that the proposed optimization methodology yields a 59% reduction in power dissipation.
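As an illustrative sketch (not the paper's implementation), the trade-off the abstract describes can be seen by applying a 2nd-order gain correction with coefficients quantized to different bit widths and observing the residual error. The function names, the correction model y = a1·d + a2·d², and the coefficient values are assumptions for illustration only.

```python
# Hypothetical sketch of 2nd-order digital gain calibration with
# fixed-point (quantized) coefficients; all names and values are assumed.

def quantize(x, bits, full_scale=2.0):
    """Round x to a signed fixed-point grid with `bits` bits over ±full_scale/2."""
    step = full_scale / (1 << bits)
    return round(x / step) * step

def calibrate(d, a1, a2, bits):
    """Apply y = a1*d + a2*d^2 with both coefficients quantized to `bits` bits."""
    a1q = quantize(a1, bits)
    a2q = quantize(a2, bits)
    return a1q * d + a2q * d * d

# Hypothetical calibration coefficients and normalized ADC output codes.
a1, a2 = 1.02, -0.015
codes = [i / 512 - 1.0 for i in range(1024)]
ideal = [a1 * d + a2 * d * d for d in codes]   # full-precision correction

for bits in (8, 12, 16):
    err = max(abs(calibrate(d, a1, a2, bits) - y)
              for d, y in zip(codes, ideal))
    print(f"{bits:2d}-bit coefficients: max residual error = {err:.2e}")
```

Reducing the coefficient word length shrinks the arithmetic (and hence power), at the cost of a larger residual error; the paper's methodology chooses the smallest precision that keeps this error below the ADC's linearity requirement.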