Digital calibration techniques are widely used to linearize pipelined analog-to-digital converters (ADCs). However, their power dissipation can be prohibitively high, especially when high-order gain calibration is required. For high-order gain calibration, this paper proposes a design methodology that optimizes the data precision (number of bits) within the digital calibration unit, so that the power dissipation of the calibration unit is minimized without degrading the linearity of the pipelined ADC. A 90-nm FPGA synthesis of a 2nd-order digital gain-calibration unit shows that the proposed optimization methodology yields a 59% reduction in power dissipation.
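To illustrate the trade-off the abstract describes, the following is a minimal sketch (not the paper's implementation) of a 2nd-order digital gain correction whose coefficients are quantized to a selectable bit width. The coefficient values, function names, and bit widths are illustrative assumptions; in hardware, a narrower coefficient word means smaller multipliers and lower power, at the cost of residual gain error.

```python
def quantize(value, bits, full_scale=1.0):
    """Round a coefficient to a fixed-point grid with `bits` fractional bits.

    Models the finite data precision of the calibration unit's
    coefficient registers (values here are illustrative).
    """
    step = full_scale / (1 << bits)
    return round(value / step) * step

def calibrate(x, a1, a2, coeff_bits):
    """Apply a 2nd-order gain correction y = a1*x + a2*x^2
    using coefficients quantized to `coeff_bits` fractional bits."""
    q1 = quantize(a1, coeff_bits)
    q2 = quantize(a2, coeff_bits)
    return q1 * x + q2 * x * x

# Wider coefficient words track the ideal correction more closely;
# narrower ones reduce arithmetic cost but leave residual error.
fine = calibrate(0.5, 1.02, -0.004, 24)    # high-precision coefficients
coarse = calibrate(0.5, 1.02, -0.004, 8)   # reduced-precision coefficients
residual = abs(fine - coarse)              # error introduced by truncation
```

The design-methodology question the paper addresses is choosing `coeff_bits` (and the internal datapath widths) just large enough that `residual` stays below the ADC's quantization noise floor, so no linearity is lost while the digital power is minimized.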

Document information

Published on 01/01/2008

Volume 2008, 2008
DOI: 10.1109/icecs.2007.4511078
Licence: CC BY-NC-SA license
