

A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

By

XIAOCHUAN GUO

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT

OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2002


To my parents.


ACKNOWLEDGEMENTS

I would like to express my sincere gratitude to my advisor, Dr. John G. Harris, for

his continued guidance, support, and help throughout the past four years of Ph.D. graduate

studies. Without his patience and encouragement, this work would have been impossible.

He allowed me the freedom to explore by myself, but was always attentive and pertinent

in critical moments. Appreciation is also extended to Dr. Jose C. Principe, Dr. Robert

M. Fox, and Dr. Joseph N. Wilson for their interest and participation on my supervisory

committee. I would like to thank Julian Chen at Texas Instruments for the discussions and

suggestions. Also, many thanks go to Michael Erwin for his design work in the early

stage of this project. I also would like to thank my colleagues in the analog group and the

Computational Neuroengineering Lab for their discussions of ideas and their friendships.

My wife, Liping Weng, has my deepest appreciation for all the suffering she had to

endure during the past two years, when she was working in San Diego and I was working

on this research project in Gainesville. Only a great love would be able to stand such a long

and involuntary separation, giving the support I needed at the same time. I am especially

grateful to my parents and to my sisters in China for their love and support.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS

ABSTRACT

CHAPTERS

1 INTRODUCTION
  1.1 Background
  1.2 Motivation
  1.3 Author's Contribution
  1.4 Dissertation Structure

2 SOLID-STATE IMAGERS
  2.1 Introduction
  2.2 Principles of Solid-State Imaging
    2.2.1 Generation of Charge Carriers
    2.2.2 Collection of Generated Charge Carriers
    2.2.3 Transportation of Collected Charge Carriers
  2.3 Charge-Coupled Devices (CCDs)
  2.4 CMOS Image Sensors
  2.5 Performance Limitations
    2.5.1 Quantum Efficiency
    2.5.2 Dark Current
    2.5.3 Fixed Pattern Noise (FPN)
    2.5.4 Temporal Noise
    2.5.5 Dynamic Range (DR)

3 TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGER
  3.1 Introduction
  3.2 SNR and DR Analysis of Photodiode CMOS APS
    3.2.1 Signal Voltage
    3.2.2 Photon Current Shot Noise
    3.2.3 Dark Current Shot Noise
    3.2.4 Photodiode Reset Noise
    3.2.5 Readout Circuit Noise
    3.2.6 Signal-to-Noise Ratio (SNR)
    3.2.7 An Example
  3.3 Existing High Dynamic Range Image Sensors
    3.3.1 Nonlinear Optical Signal Compression
    3.3.2 Multisampling
    3.3.3 Time-Based High Dynamic Range Imagers
  3.4 Principles of the TBAR CMOS Imager
    3.4.1 High Dynamic Range Capability of the TBAR Imager
    3.4.2 Asynchronous Readout
  3.5 Architecture and Operation of TBAR Imagers
    3.5.1 TBAR with On-Chip Memory: TBARMEM
    3.5.2 TBAR without On-Chip Memory: TBARBASE
  3.6 Error Analysis and Simulation
    3.6.1 Errors Caused by Limited Throughput of TBAR Architectures
    3.6.2 MATLAB Simulation of Errors
    3.6.3 TBARBASE Imager with Throughput Control
  3.7 Summary

4 TBAR IMAGER CIRCUIT DESIGN AND ANALYSIS
  4.1 Introduction
  4.2 Pixel Design
    4.2.1 Pixel Operation and Digital Control Circuitry
    4.2.2 Photodiode Design
    4.2.3 Comparator Design
  4.3 Asynchronous Readout Circuit Design
    4.3.1 Design Methodology
    4.3.2 Asynchronous Circuit Design
  4.4 Timing Analysis
    4.4.1 Reset
    4.4.2 Pixels Firing Simultaneously
    4.4.3 Finite State Machine Model
  4.5 Summary

5 TBAR IMAGER TESTING AND CHARACTERIZATION
  5.1 Introduction
  5.2 Testing Setup
  5.3 Testing and Characterization
    5.3.1 Power Consumption
    5.3.2 Dark Current
    5.3.3 Dynamic Range
    5.3.4 Temporal Noise
    5.3.5 Conversion Gain
    5.3.6 Fixed Pattern Noise
  5.4 Summary

6 CONCLUSION
  6.1 Summary
  6.2 Future Directions

REFERENCES

BIOGRAPHICAL SKETCH


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

By

Xiaochuan Guo

December 2002

Chairman: John G. Harris
Major Department: Electrical and Computer Engineering

Charge-coupled devices (CCDs) have been the basis for solid-state imaging since

the 1970s. However, during the last decade, interest in CMOS imagers has increased sig-

nificantly since they are capable of offering System-on-Chip (SoC) functionality, lower

cost and lower power consumption than CCDs. Furthermore, by integrating innovative cir-

cuits on the same chip, the performance of CMOS image sensors can be extended beyond

the capabilities of CCDs. Dynamic range is an important performance criterion for all im-

age sensors. In this work, it is demonstrated that due to fundamental limitations of signal

headroom and noise floor, the dynamic range of conventional CMOS imagers is limited to

60-70 dB. However, many scenes have a dynamic range of more than 100 dB.

Instead of obtaining illumination information from the voltage domain, as is usually

done, a time-based asynchronous readout (TBAR) CMOS imager is proposed to obtain

ultra-high (more than 120 dB) dynamic range. In this imager, the integration time is unique,


depending on the illumination level of each pixel. Therefore, by reading out the integration

time of each pixel, the illumination level can be recovered.

In this dissertation, the signal-to-noise ratio (SNR) and dynamic range (DR) of the

conventional photodiode CMOS active pixel sensor is first analyzed. Then, the TBAR

imager is proposed to achieve ultra-high dynamic range. The operation of this imager

is described to show the high dynamic range capability. Circuit design and analysis are

presented. The fabricated TBAR imager is tested and characterized. The dissertation concludes with a discussion of future work.

CHAPTER 1
INTRODUCTION

In this chapter, the background and motivation of this research project are discussed,

followed by a summary of the author’s major contributions. The chapter is concluded with

an outline of the dissertation structure.

1.1 Background

In recent decades, personal computer (PC) multimedia and Internet applications have

generated enormous demands for digital images and videos. As a result, there has been

significant research and development on electronic imaging devices.

Today, there are two competing technologies for solid state imaging: charge cou-

pled devices (CCDs) and CMOS imagers. CCDs were invented by W. Boyle and G. Smith

of Bell Labs in 1970. After about 30 years of evolution, CCDs have established a dom-

inant position as the basis for electronic imaging [1]. CCDs are a type of charge storage

and transport device. The basic architecture of CCDs is a closely coupled array of metal-

oxide-semiconductor (MOS) capacitors. Charge carriers are generated and collected under the MOS capacitors and then transported. After years of study of CCD physics, technology

and applications, CCDs have found widespread application in the digital imaging area.

Long before the resurgence of CMOS image sensors in the 1990s, Weckler [2] and Dyck [3] had introduced MOS image sensors as early as the 1960s. NMOS, PMOS and bipolar technologies have all been exploited for imaging applications. However, larger fixed-pattern noise (FPN) [4] and larger pixel sizes made MOS and CMOS image sensors inferior to

CCDs. The resurgence of CMOS image sensors in the 1990s is largely due to the demands

to create highly functional single-chip imaging systems where low cost, not performance,

is the driving factor [5]. Since it is usually not feasible to integrate analog and digital


circuitry using CCD processes, CCD camera systems usually require multiple chips to

implement camera functionalities like analog-to-digital converters (ADCs), timing control

and signal processing. Therefore, system cost and power dissipation for CCDs are usually

higher than for CMOS imagers. After years of effort from both universities and industry,

CMOS imagers are gaining ground in the low-end digital imaging market. Camera-on-a-

chip solutions can be found in Smith et al. [6] and Loinaz et al. [7]. There are also CMOS

image sensors for digital still camera (DSC) applications, ranging from 800k pixels [8] to several megapixels [9].

1.2 Motivation

The motivation of this dissertation work is to present an architecture to improve

the dynamic range (DR) of CMOS image sensors. The DR can be defined as the ratio

of the highest illuminant level to the lowest illuminant level that an imager can measure

and process with acceptable output quality. The DR of conventional CMOS imagers is

usually limited to about 60-70 dB because the photodiode has a nearly linear response to

illuminance and all pixels have the same exposure time. However, many typical scenes

have a DR of more than 100 dB. As standard CMOS technology continues to scale, the DR tends to worsen because of decreased signal headroom.

Since there is no single integration time that is satisfactory for all pixels, we found that dynamic range can be dramatically improved if each pixel has a unique integration time

depending on the illuminant level associated with each pixel. Therefore, time information,

not an electric voltage, is used to represent the illuminant level. To read out this time in-

formation effectively, an asynchronous readout circuit is applied. Compared with common

synchronous digital circuits, asynchronous readout circuits have the potential to achieve

higher speed, lower power, improved noise performance and electromagnetic compatibility (EMC), although the design of asynchronous circuits is more involved.


1.3 Author’s Contribution

The author’s main contributions in this work are the following:

1. Analysis of the noise and dynamic range of photodiode CMOS imagers. The analysis shows the fundamental limitations of the dynamic range of the conventional CMOS active pixel sensor (APS) architecture.

2. Development of a novel time-based asynchronous readout (TBAR) CMOS imager architecture, which is capable of ultra-wide dynamic range. Some unique issues associated with this architecture are discussed.

3. Design and analysis of this imager in the AMI 0.5 µm CMOS technology. Some circuit design issues are discussed.

4. The test and characterization of this imager. The testing results prove the high dynamic range capability of the TBAR imager. Characterization techniques specific to time-domain image sensors are developed.

1.4 Dissertation Structure

This dissertation comprises six chapters whose contents are outlined below:

Chapter 1 introduces the background and motivation of this research project.

Chapter 2 introduces the fundamentals of solid-state imaging, followed by a discussion of the architectures and operation of the most important existing solid-state imagers.

Chapter 3 analyzes the DR and SNR of conventional photodiode CMOS imagers. This analysis is used to show the fundamental limitations of the DR. Some existing

time-based CMOS imager architectures are then discussed. The principles of the TBAR

CMOS imager are then presented. One unique issue of this architecture, namely the errors introduced by limited readout throughput, is presented and its influence on image quality is discussed in detail.

Chapter 4 discusses the design and analysis of the TBAR CMOS imager. There are

three parts in the TBAR CMOS imager: photodiode, analog and digital circuitry. The de-

sign and analysis of each part will be discussed. Detailed timing analysis is also presented.

Chapter 5 discusses the test and characterization of a 32 × 32 TBAR imager fabricated using an AMI 0.5 µm process through MOSIS.


Chapter 6 concludes this research project and discusses possible directions for future related research.

CHAPTER 2
SOLID-STATE IMAGERS

2.1 Introduction

This chapter introduces the physics of solid-state imaging. The architectures and

operations of the two most important solid-state imagers, CCD and CMOS active pixel

sensor (APS), are reviewed. The chapter concludes with a description of the performance limitations of CCD and CMOS imagers.

2.2 Principles of Solid-State Imaging

Solid-state imaging is based on the physical principles of converting quanta of light

energy (photons) into a measurable quantity (electric voltage, electric current) [1]. There-

fore, solid-state imaging is a physical process of generation, collection and transportation

of charge carriers.

2.2.1 Generation of Charge Carriers

If the energy of the photons impinging on and penetrating into a semiconductor substrate (silicon for CCD and CMOS technology) is higher than the bandgap energy of the semiconductor, they can generate electron-hole pairs. The electrons are released from

the valence band, leaving holes behind. We use Planck’s relationship between energy and

photon frequency:

E = hν = hc/λ    (2.1)

where E is the energy of a photon, h is Planck's constant (6.63 × 10⁻³⁴ J·s), ν is the frequency, c is the speed of light (3.0 × 10¹⁰ cm/s) and λ is the wavelength of the photon.

To be able to generate electron-hole pairs in semiconductors, the energy of photons

must be greater than the bandgap energy of the semiconductor. For silicon, the bandgap


energy Eg is 1.11 eV [10]; therefore, the maximum photon wavelength to which silicon

can respond is as follows:

hc/λ = Eg    (2.2)

λ = hc/Eg ≈ 1120 nm    (2.3)

which is in the infrared region. The wavelength of visible light ranges from about 350 nm to 750 nm [11]. Thus, silicon is a semiconductor capable of visible-light imaging, although sometimes a color-compensating filter is used to eliminate infrared wavelengths (e.g., see page 98 of Blanksby [12]).
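As a quick check of Equation 2.3, the cutoff wavelength can be computed directly from the constants given above (a minimal sketch; the constant values are the standard ones, with Eg = 1.11 eV as quoted in the text):

```python
# Cutoff wavelength of silicon from lambda = h*c / Eg (Eq. 2.3).
H = 6.63e-34   # Planck's constant, J*s
C = 3.0e8      # speed of light, m/s
Q = 1.602e-19  # elementary charge, C (joules per eV)

def cutoff_wavelength_nm(eg_ev: float) -> float:
    """Longest photon wavelength (nm) that can create an electron-hole pair."""
    eg_joules = eg_ev * Q
    return H * C / eg_joules * 1e9  # metres -> nanometres

lam = cutoff_wavelength_nm(1.11)  # silicon bandgap: about 1120 nm (near infrared)
```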

2.2.2 Collection of Generated Charge Carriers

Once electron-hole pairs are generated, an applied electric field separates electrons

and holes; otherwise, the electrons and holes will eventually recombine. Moreover,

since it is very difficult to measure a single electron or hole, a better way is to integrate

charge carriers into a charge packet over a certain period of time.

Both separation and integration of charges can be done with a capacitor. Two types

of capacitors are commonly used: MOS capacitors and metallurgical p/n junction capaci-

tors. MOS capacitors are used in CCDs and photogate CMOS imagers while metallurgical

capacitors are used in photodiode CMOS imagers. Figure 2.1(a) shows a MOS capaci-

tor. If the gate voltage is high enough, there is a depletion region under the gate. The

photon-generated electrons will be swept towards and stored at the Si–SiO₂ interface. In

Figure 2.1(b), the p/n junction is precharged to some positive voltage relative to the sub-

strate. When photon-generated electrons are swept towards n-type-silicon, the p/n junction

is discharged and Vn begins to drop.

Note there are also electron-hole pairs generated in the neutral bulk of the semicon-

ductor material. Some of these carriers can be collected via diffusion. The efficiency of this

process depends on the wavelength of the impinging light and the diffusion length of the


[Figure omitted: (a) MOS capacitor with gate, oxide, and p-substrate; (b) p/n junction with an n+ region at voltage Vn in a p-substrate]

Figure 2.1: Charge carrier generation and integration in silicon.

semiconductor material. Detailed analysis of charge generation can be found in Chapter 5

of Theuwissen [1].

2.2.3 Transportation of Collected Charge Carriers

Since the amount of charge is proportional to the illuminance level for a given

charge collection time, after the charge carriers are collected, the illumination information

is stored in a charge packet. The next step is to send this information out for measurement

or processing. Since there are usually thousands of charge packets (pixels) on

an image sensor, efficiently transporting the information in these packets is a significant

issue.

There are two primary approaches to reading out the many charge packets. One is

to send the charge packets out serially, as is done in a shift register. This is the principle of

charge-coupled devices (CCDs). CMOS image sensors adopt a quite different approach.

Here, the information in the charge packets is multiplexed onto a sensing line that is shared

by many pixels. The detailed operations of CCDs and CMOS imagers will be discussed in

the next two sections.


2.3 Charge-Coupled Devices (CCDs)

As shown in Figure 2.2, CCDs consist of many closely coupled MOS capacitors.

By properly clocking the gates (φ1, φ2, ..., φn) of these MOS capacitors, charge can be

transported from the photon-collecting site to the readout site serially.

[Figure omitted: CCD readout structure showing clocked gates φ1, φ2, ..., φn over a p-substrate, a light shield, and an n+ readout node]

Figure 2.2: Readout structure of a CCD.

Since a charge packet has to undergo hundreds of transfers before it can reach the

output site, the transportation between two adjacent MOS capacitors must be nearly perfect. If the charge transport efficiency is defined as the fraction of a charge packet transported through a CCD cell, normalized to the original charge packet, then this efficiency has to be very close to 1 for a properly working CCD. To see why, note that if the charge transport efficiency is only 99% and there are 500 CCD cells, only 0.66% of the original charge carriers will reach the readout site.
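The 0.66% figure follows directly from compounding the per-cell efficiency over 500 transfers, a minimal sketch of the arithmetic:

```python
# Fraction of a charge packet surviving n_transfers charge transfers,
# each with per-cell transport efficiency eta (Section 2.3 example).
def surviving_fraction(eta: float, n_transfers: int) -> float:
    return eta ** n_transfers

frac = surviving_fraction(0.99, 500)  # about 0.0066: only ~0.66% of the charge arrives
```

Conversely, an efficiency of 0.99999 per cell leaves more than 99% of the packet intact after 500 transfers, which is why practical CCDs require transport efficiencies extremely close to 1.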

The nearly perfect charge transportation requirement makes the technology for

CCDs somewhat more complicated. The complexity is a result of the relatively large

number of technological steps and the consequently long throughput time in the

production facilities [1]. As a result, CCD technology is more costly than CMOS technol-

ogy. Furthermore, to provide nearly perfect charge transportation, proper timing is crucial.

Many CCDs use several different supply voltages for timing clocks, which adds further

system complexity.


2.4 CMOS Image Sensors

Unlike CCDs, CMOS image sensors are built upon standard CMOS technology.

Since it is not possible to get nearly perfect charge transport devices with standard CMOS

technology, most CMOS imagers use XY addressable architectures, as shown in Figure 2.3,

where a row/column address selects a particular pixel to read out. In Figure 2.3, the two-

dimensional spatial information in each pixel can be addressed by row and column de-

coders. When the row decoder selects one line (Row Select m is high), the MOS switches

associated with each pixel are turned on. All the pixels in the selected row will put their

information on the corresponding column line. The information on each column line is

then processed by sample-and-hold and other signal processing circuits associated with

each column. The column decoder is used to select which column to output.

[Figure omitted: pixel array with a row decoder driving Row_Select lines, column lines feeding per-column sample-and-hold and signal processing circuits, and a column decoder selecting the output]

Figure 2.3: Readout architecture of a CMOS addressable image sensor.

Note in Figure 2.3, a column line is shared by many pixels in the same column. The

parasitic capacitance on this column line is much larger than the photon-sensing capacitor

inside each pixel. To deal with this problem, a buffer is usually added inside each pixel. Such sensors are therefore called active pixel sensors (APS).
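The XY-addressed readout described above can be sketched as nested row/column selection (an illustrative software model only; the 2 × 2 array and its pixel values are hypothetical, not taken from this design):

```python
# Illustrative model of XY-addressable readout (Figure 2.3):
# the row decoder selects one row; every pixel in that row drives its
# column line; the column decoder then multiplexes columns to the output.
def read_frame(pixel_array):
    rows = len(pixel_array)
    cols = len(pixel_array[0])
    output = []
    for row in range(rows):              # row decoder: Row_Select_row goes high
        column_lines = pixel_array[row]  # whole row sampled onto the column lines
        for col in range(cols):          # column decoder scans the sampled row
            output.append(column_lines[col])
    return output

frame = [[1, 2], [3, 4]]     # hypothetical 2x2 pixel values
scanned = read_frame(frame)  # pixels delivered in raster order
```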


[Figure omitted: pixel schematic with photodiode D1, reset transistor M1, source-follower M2, and row-select switch M3 driving the column output]

Figure 2.4: Pixel schematic of a photodiode APS.

One of the commonly used APS architectures is the photodiode APS [5]. The

schematic of a pixel is shown in Figure 2.4. Each pixel contains a photodiode and three MOS transistors. Reset transistor M1 is used to precharge the photodiode. M2 is

a voltage follower, acting as a buffer between the small capacitance of the photodiode and

the large capacitance of the column line. M3 is a switch to select the pixel.

Arguably the biggest advantage of CMOS image sensors is their capability of inte-

grating many circuits onto the same chip. CCD imagers usually consist of several chips for

readout amplification, timing control, analog-to-digital conversion and signal processing.

As a result, system complexity and cost are increased. In contrast, it is quite easy to

integrate all these functions onto the same chip for CMOS image sensors by using standard

CMOS technology.

2.5 Performance Limitations

To properly design a solid-state imager, it is very important to understand perfor-

mance limitations, which primarily include quantum efficiency, dark current, fixed pattern

noise (FPN), temporal noise and dynamic range.

2.5.1 Quantum Efficiency

Quantum efficiency η is defined as the number of collected electrons divided by

the number of photons impinging on the device. Since not every photon can generate an


electron which reaches the collecting region, quantum efficiency is always smaller than 1.

Quantum efficiency is also wavelength dependent. More detailed discussion of quantum

efficiency can be found in Theuwissen [1].

2.5.2 Dark Current

Even if image sensors are not illuminated at all, a small number of charge carriers

are still generated and collected by the pixels. This effect is called the dark current, Jdark, expressed in A/cm².

The dark current is due to the thermal generation of charge carriers. It can happen

either at the SiO₂–Si interface or in the bulk of the silicon; however, due to the large irregularity at the SiO₂–Si interface, this interface is considered to be the principal

source of dark current. The dark current has two effects on image sensors. Because dark

currents vary from pixel to pixel, nonuniformities are introduced, which contribute to the

fixed pattern noise (FPN). On the other hand, like any other current, there is shot noise

associated with dark current. Dark current shot noise is part of the total temporal noise and

deteriorates the sensor’s signal-to-noise-ratio (SNR) and dynamic range.

Since dark current is a thermal generation process, it is highly dependent on tem-

perature, doubling every 8 °C. In order to achieve an extremely small dark current in some

applications, the image sensors are cooled down. Another way to achieve smaller dark

current is to isolate the photon-collection area from the SiO₂–Si interface. This is called

pinned photodiode technology. Many CCDs use this technology to decrease the dark cur-

rent. Some CMOS imagers using this technology can be found in Guidash et al. [13],

Inoue et al. [14], and Yonemoto et al. [15]. However, extra implantation steps are usu-

ally needed. Dark currents vary over a wide range depending on the technology: from 3 pA/cm² to 100 pA/cm² for CCDs, and from 206 pA/cm² to 4 nA/cm² [12] for photodiode CMOS imagers.
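To put these current densities in perspective, they can be converted into electrons per pixel per second, and scaled with temperature using the doubling-per-8 °C rule quoted above (a rough sketch; the 1 nA/cm² density, 7.5 µm pixel pitch, and 25 °C reference temperature are assumed for illustration, not values from this text):

```python
# Dark-current electrons per pixel per second from J_dark (A/cm^2),
# plus temperature scaling via the "doubles every 8 degrees C" rule.
Q = 1.602e-19  # elementary charge, C

def dark_electrons_per_s(j_dark_a_per_cm2: float, pixel_um: float) -> float:
    """Thermally generated electrons per second on a square pixel."""
    area_cm2 = (pixel_um * 1e-4) ** 2  # micrometres -> centimetres, squared
    return j_dark_a_per_cm2 * area_cm2 / Q

def scale_with_temperature(j_dark: float, t_celsius: float, t_ref: float = 25.0) -> float:
    """Dark current rescaled assuming it doubles for every 8 C rise."""
    return j_dark * 2.0 ** ((t_celsius - t_ref) / 8.0)

rate = dark_electrons_per_s(1e-9, 7.5)    # ~3500 electrons/s for 1 nA/cm^2, 7.5 um pixel
hot = scale_with_temperature(1e-9, 41.0)  # 16 C hotter -> 4x the dark current
```

A few thousand electrons per second is small over a short exposure but becomes significant for long integration times, which is why cooled operation matters for low-light applications.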


2.5.3 Fixed Pattern Noise (FPN)

Due to the device and circuit nonuniformity between pixels, there is a static pattern

even if the imager is under uniform illumination. This is called fixed pattern noise (FPN)

because the pattern is fixed in time, although random in space. The most important components of FPN

are the following:

• Dark Current FPN

As explained in the last subsection, the dark current is a process of thermal gen-

eration of charge carriers. Because neither generation sites nor generation rates are

uniform, the dark current is also nonuniform. Dark current FPN is a dominant source

of FPN in low light conditions [16].

• Pixel Response FPN

Due to the nonuniformity of geometry, layer thickness, doping levels or doping pro-

file, the photon response is different from pixel to pixel. This phenomenon is more

obvious in high light conditions.

• Readout Circuit FPN

This problem is much less severe for CCDs because all the pixels share the same

readout circuit. For CMOS image sensors, each pixel experiences a different readout

path. Any nonuniformity of readout path will cause FPN. However, by applying a

double-delta-sampling (DDS) [17] circuit, it is reported [16] that the readout-circuit FPN can be suppressed to a level negligible compared to dark current FPN and

pixel response FPN.

2.5.4 Temporal Noise

Like any electronic system, the solid-state imager is not immune to noise. Temporal

noise sets the lowest level of illuminance which sensors can detect. The major components

of the temporal noise are:


• Dark Current Shot Noise

The dark current is due to the collection of randomly generated charge carriers,

which can be described by a Poisson distribution [1]. For Poisson distributions, the variance is equal to the mean of the distribution [18]. If the dark current

charge is Qdark, then the dark current shot noise, in number of electrons, is

ndark = √(Qdark/q)    (2.4)

Dark current shot noise can also be expressed as a voltage:

vdark = ndark·q/C = √(q·Qdark)/C    (2.5)

where C is the total capacitance at the photodiode, which includes the photodiode capacitance, the drain capacitance of M1 and the gate capacitance of M2 in Figure 2.4.

• Photon Current Shot Noise

As is the case for dark current shot noise, the charge carrier distribution due to

photon-generated current is also a Poisson distribution. Therefore, the photon current shot noise in number of electrons, nphoton, is

nphoton = √(Qphoton/q)    (2.6)

• Photodiode Reset Noise

As shown in Figure 2.4, before the integration of charges, photodiode D1 is reset by transistor M1. Due to the channel thermal noise of M1, there are voltage fluctuations on the photodiode each time D1 is reset. Because of the nature of thermal noise, the

reset noise can be expressed (in voltage) [19] as

vreset = √(kT/C)    (2.7)


where k is Boltzmann's constant (k = 1.38 × 10⁻²³ J/K) and T is the temperature

in Kelvin. If the reset noise is expressed in number of electrons:

nreset = Qreset/q = vreset·C/q = √(kTC)/q    (2.8)

• Readout Circuit Noise.

To convert photon-generated charge carriers into a measurable electric voltage and

current and to buffer signals to the outside world, both CCDs and CMOS imagers

need readout circuits. Readout circuits will inevitably introduce noise due to the

thermal noise and1/f noise of MOS transistors.
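The shot-noise and reset-noise expressions above can be checked numerically; a minimal sketch (function names are mine, and the 5 fF capacitance is only an illustrative value):

```python
import math

Q_E = 1.602e-19  # electron charge (C)
K_B = 1.38e-23   # Boltzmann's constant (J/K)

def shot_noise_electrons(charge):
    """Shot noise of a Poisson-distributed charge packet, in electrons
    (Equations 2.4 and 2.6): n = sqrt(Q / q)."""
    return math.sqrt(charge / Q_E)

def shot_noise_volts(charge, cap):
    """The same noise referred to the photodiode voltage (Equation 2.5):
    v = sqrt(q * Q) / C."""
    return math.sqrt(Q_E * charge) / cap

def reset_noise_volts(cap, temp=300.0):
    """kTC reset noise of the photodiode (Equation 2.7)."""
    return math.sqrt(K_B * temp / cap)

# A 5 fF photodiode at room temperature has roughly 0.9 mV of reset noise:
v_reset = reset_noise_volts(5e-15)
```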

2.5.5 Dynamic Range (DR)

Dynamic range (DR) is another important performance criterion. It is defined as the
ratio of the maximum measurable signal (saturation level) to the noise floor. The
saturation level of CCDs is usually limited by the charge well capacity, while the
saturation level of CMOS imagers is limited by the power supply voltage. The noise
floor is the total noise when there is no signal. It consists of dark current shot
noise, reset kTC noise, and readout circuit noise. Because these three noise sources
are independent in nature, the noise floor in number of electrons can be expressed as

$$n_{floor} = \sqrt{n_{dark}^2 + n_{reset}^2 + n_{read}^2} \qquad (2.9)$$

where n_read is the equivalent readout circuit noise in number of electrons. The
noise floor can also be expressed in volts:

$$v_{floor} = \sqrt{v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (2.10)$$

where v_read is the readout noise in voltage. If v_sat is the signal saturation
level of the photodiode, the DR (in dB) is

$$DR = 20\log\frac{v_{sat}}{v_{floor}} = 10\log\frac{v_{sat}^2}{v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (2.11)$$

The DR of CMOS image sensors will be discussed further in the next chapter.
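Equations 2.10 and 2.11 translate directly into code; a minimal sketch (helper names are mine):

```python
import math

def noise_floor(v_dark, v_reset, v_read):
    """Total noise floor of independent noise sources (Equation 2.10)."""
    return math.sqrt(v_dark**2 + v_reset**2 + v_read**2)

def dynamic_range_db(v_sat, v_floor):
    """Dynamic range as the ratio of saturation level to noise floor
    (Equation 2.11)."""
    return 20.0 * math.log10(v_sat / v_floor)

# A 1 V saturation level over a 1 mV noise floor corresponds to 60 dB:
dr = dynamic_range_db(1.0, 1e-3)
```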


CHAPTER 3
TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGER

3.1 Introduction

In this chapter, a time-based asynchronous readout (TBAR) CMOS imager will

be presented at the system level. At first, the dynamic range of the standard photodiode

CMOS active pixel sensor (APS) is analyzed to show the dynamic range limitations of

this architecture, followed by a discussion of various existing approaches for improving

dynamic range. The center part of this chapter is the demonstration of the principles of

the TBAR CMOS imager. One unique issue of the readout architecture is that errors are

introduced when the throughput of the readout circuit is too low to accommodate the array

firing rate. This issue will be analyzed in detail. The simulation results show that the

introduced errors are negligible for moderate size high dynamic range scenes.

3.2 SNR and DR Analysis of Photodiode CMOS APS

The photodiode active pixel sensor (APS) is the most commonly used CMOS im-

ager architecture. In this section, the SNR and DR of the photodiode APS will be analyzed

in detail to show the limitations to DR of the APS architecture.

Figure 3.1 shows a schematic of a photodiode APS pixel and the associated readout

circuit. As explained in Chapter 2, M1 is used to precharge the photodiode D1. After M1

turns off, the cathode of the photodiode becomes floating. Photogenerated electrons will

begin to discharge the photodiode. M2 is used to buffer the photodiode voltage to the

column line, which is shared by all the pixels in the same column. M3 is a switch, and the

pixel will put its signal voltage on the column line when M3 is on. M4 is used to set a bias

current for the column line. The signal voltage on the column line then goes through other

analog signal processing and analog-to-digital converter (ADC) circuits.



Figure 3.1: Schematic of Photodiode Active Pixel Sensor (APS).

As explained in Chapter 2, SNR can be defined as

$$SNR = 10\log\frac{v_{sig}^2}{v_{ph}^2 + v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (3.1)$$

The DR in Equation 2.11 is repeated here for convenience:

$$DR = 10\log\frac{v_{sat}^2}{v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (3.2)$$

where

• vsig: The photon-introduced signal (in volts).

• vph: Photon current shot noise (in volts).

• vdark: Dark current shot noise (in volts).

• vreset: Photodiode reset noise (in volts).

• vread: Readout circuit noise (in volts).


• vsat: The saturation (maximum) signal (in volts).

To calculate dynamic range, each of the components in Equation 3.2 (vsat, vdark,
vreset and vread) needs to be determined. To do this, some terms are defined:

• L: Incoming light illuminance (in lux, 1 lux = 1/683 W/m²).

• A: Photosensitive area of the photodiode (in m²).

• η: Quantum efficiency.

• Jdark: Dark current per unit area (in A/m²).

• λ: Wavelength (in m).

• Tint: Integration time (in s).

• Cph: Total capacitance at the cathode of the photodiode (in farads).

Since the quantum efficiency is a function of wavelength, to simplify analysis, the
impinging light is assumed to be monochromatic green light (λ = 555 nm = 5.55 × 10⁻⁷ m).

3.2.1 Signal Voltage

The number of photons per unit time with light illuminance L is

$$N = \frac{LA}{683\,h\nu} \qquad (3.3)$$

The photon current Iph is

$$I_{ph} = N\eta q \qquad (3.4)$$

$$= \frac{LA\,\eta q}{683\,h\nu} \qquad (3.5)$$

$$= \frac{A\,\eta q\,\lambda}{683\,hc}\,L \qquad (3.6)$$


where h is Planck's constant, ν is the photon frequency and c is the speed of light.
If the integration time is Tint, the signal voltage at the photodiode is

$$v'_{sig} = \frac{I_{ph}T_{int}}{C_{ph}} \qquad (3.7)$$

However, the readout architecture in Figure 3.1 limits the signal range. The maximum
signal range is roughly

$$v_{sig,max} = V_{dd} - V_{th,M1} - V_{th,M2} - V_{sat,M2} - V_{sat,M4} \qquad (3.8)$$

where Vdd is the power supply voltage. Vth,M1 and Vth,M2 are the threshold voltages
of M1 and M2, respectively; due to the back gate effect, they are functions of the
photodiode voltage. Vsat,M2 is the overdrive voltage of M2, and Vsat,M4 is the
drain-source voltage necessary to keep M4 in the saturation region for a given bias
current. Thus, the true signal is the minimum of v'sig and vsig,max:

$$v_{sig} = \min(v'_{sig},\, v_{sig,max}) \qquad (3.9)$$
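Equations 3.6 through 3.9 chain together into a small helper; a sketch (function names are mine, the physical constants are standard SI values):

```python
H = 6.626e-34      # Planck's constant (J*s)
C_LIGHT = 2.998e8  # speed of light (m/s)
Q_E = 1.602e-19    # electron charge (C)

def photocurrent(lux, area, eta, wavelength):
    """Photocurrent of Equation 3.6: I_ph = A*eta*q*lambda*L / (683*h*c)."""
    return area * eta * Q_E * wavelength * lux / (683.0 * H * C_LIGHT)

def signal_voltage(lux, t_int, cap, area, eta, wavelength, v_sig_max):
    """Integrated signal voltage, clipped by the readout range
    (Equations 3.7-3.9)."""
    v_raw = photocurrent(lux, area, eta, wavelength) * t_int / cap
    return min(v_raw, v_sig_max)
```

With the example parameters of Section 3.2.7, a very bright pixel simply returns the clipped value v_sig_max.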

3.2.2 Photon Current Shot Noise

As shown in Equation 2.6, the photon current shot noise (in number of electrons) is

$$n_{photon} = \sqrt{\frac{Q_{photon}}{q}} = \sqrt{\frac{I_{ph}T_{int}}{q}} \qquad (3.10)$$

Photon current shot noise (in volts) can be expressed as

$$v_{ph} = \frac{n_{photon}\,q}{C_{ph}} \qquad (3.11)$$

$$= \frac{\sqrt{I_{ph}T_{int}\,q}}{C_{ph}} \qquad (3.12)$$

3.2.3 Dark Current Shot Noise

The dark current of a pixel is

$$I_{dark} = J_{dark}A \qquad (3.13)$$

From Equation 2.5, the dark current shot noise is

$$v_{dark} = \frac{\sqrt{q\,Q_{dark}}}{C_{ph}} = \frac{\sqrt{q\,I_{dark}T_{int}}}{C_{ph}} = \frac{\sqrt{q\,J_{dark}A\,T_{int}}}{C_{ph}} \qquad (3.14)$$


3.2.4 Photodiode Reset Noise

The reset noise of Equation 2.7 is repeated here:

$$v_{reset} = \sqrt{\frac{kT}{C_{ph}}} \qquad (3.15)$$

3.2.5 Readout Circuit Noise

Readout noise is the total thermal and 1/f noise of M1, M2 and M4, plus the noise
from the analog signal processing and ADC circuits [20]. A detailed analysis of
readout noise has been presented by Degerli et al. [21].

3.2.6 Signal-to-Noise Ratio (SNR)

The full expression for SNR can now be written as

$$SNR = 10\log\frac{v_{sig}^2}{v_{ph}^2 + v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (3.16)$$

$$= 10\log\frac{\left(\min(v'_{sig},\, v_{sig,max})\right)^2}{\dfrac{I_{ph}T_{int}\,q}{C_{ph}^2} + \dfrac{q\,J_{dark}A\,T_{int}}{C_{ph}^2} + \dfrac{kT}{C_{ph}} + v_{read}^2} \qquad (3.17)$$
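Equation 3.17 can be evaluated directly; a sketch with standard constants (the noise terms under the logarithm are all variances in V²):

```python
import math

Q_E = 1.602e-19  # electron charge (C)
K_B = 1.38e-23   # Boltzmann's constant (J/K)

def snr_db(v_sig, i_ph, j_dark, area, t_int, cap, v_read, temp=300.0):
    """SNR of Equation 3.17, with every noise term referred to the
    photodiode voltage (variances in V^2)."""
    var_photon = i_ph * t_int * Q_E / cap**2         # photon shot noise
    var_dark = Q_E * j_dark * area * t_int / cap**2  # dark current shot noise
    var_reset = K_B * temp / cap                     # kTC reset noise
    return 10.0 * math.log10(v_sig**2 /
                             (var_photon + var_dark + var_reset + v_read**2))
```

Note that doubling the signal with all noise terms held fixed raises the SNR by exactly 20·log10(2) ≈ 6 dB.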

3.2.7 An Example

In order to demonstrate the relative importance of the various noise sources and the
resulting SNR, the data of a typical photodiode imager are used here as an example: a
CMOS imager implemented in AMI 0.5 µm technology with

• Photosensitive area of the photodiode: A = 14 µm².

• Quantum efficiency: η = 0.4 at λ = 0.555 µm.

• Mean dark current: Jdark = 1 nA/cm².

• Integration time: Tint = 30 ms.

• Total capacitance at the cathode of the photodiode: Cph = 5 fF.

• Signal saturation voltage: vsig,max = 1 V.

• Readout circuit noise (assumed from [20]): vread = 300 µV.

From Equation 3.6 and Equation 3.13, the photocurrent is Iph = 3.66 × 10⁻¹⁵ A/lux,
and the dark current is 0.14 × 10⁻¹⁵ A/pixel. The photocurrent and dark current are
shown in Figure 3.2. While the mean dark current is constant, the photocurrent is
proportional to illuminance in this model. Also note that scene illuminations range
from 10⁻³ lux for night vision, 10² to 10³ lux for indoor lighting, to 10⁵ lux for
bright sunlight, to higher levels for direct viewing of other light sources such as
oncoming headlights [22].
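These two numbers follow directly from Equations 3.6 and 3.13; a quick numeric check with standard SI constants:

```python
H, C_LIGHT, Q_E = 6.626e-34, 2.998e8, 1.602e-19  # SI constants

area = 14e-12        # photodiode area: 14 um^2 in m^2
eta = 0.4            # quantum efficiency at 555 nm
wavelength = 555e-9  # m
j_dark = 1e-9 / 1e-4 # 1 nA/cm^2 converted to A/m^2

# Photocurrent per lux (Equation 3.6 with L = 1 lux):
i_ph_per_lux = area * eta * Q_E * wavelength / (683.0 * H * C_LIGHT)
# Dark current per pixel (Equation 3.13):
i_dark = j_dark * area

print(i_ph_per_lux)  # ~3.7e-15 A/lux
print(i_dark)        # ~1.4e-16 A/pixel
```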


Figure 3.2: Photocurrent and dark current as a function of illuminance.

Figure 3.3: Signal and temporal noise as a function of illuminance.

In Figure 3.3, the signal, the various sources of noise and the total noise (in
volts) are shown. In the low light region, kTC reset noise is the dominant noise
source. As the illuminance level increases, photocurrent shot noise becomes more and
more important. The noise floor can be easily calculated as

$$v_{floor} = \sqrt{v_{dark}^2 + v_{reset}^2 + v_{read}^2} = 0.97\,\mathrm{mV} \qquad (3.18)$$

The DR of the APS is the ratio of the photodiode saturation voltage to the noise floor:

$$DR = 10\log\frac{v_{sig,max}^2}{v_{floor}^2} = 10\log\frac{(1\,\mathrm{V})^2}{(0.97\,\mathrm{mV})^2} \approx 60\,\mathrm{dB} \qquad (3.19)$$
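Plugging the example values into Equations 3.14, 3.15, 3.18 and 3.19 reproduces these figures; a sketch:

```python
import math

Q_E, K_B = 1.602e-19, 1.38e-23  # electron charge (C), Boltzmann constant (J/K)

cap = 5e-15       # photodiode capacitance (F)
t_int = 30e-3     # integration time (s)
i_dark = 1.4e-16  # dark current per pixel (A)
v_read = 300e-6   # readout circuit noise (V)
temp = 300.0      # K

v_dark = math.sqrt(Q_E * i_dark * t_int) / cap           # Equation 3.14
v_reset = math.sqrt(K_B * temp / cap)                    # Equation 3.15
v_floor = math.sqrt(v_dark**2 + v_reset**2 + v_read**2)  # Equation 3.18
dr_db = 20.0 * math.log10(1.0 / v_floor)                 # Equation 3.19, vsig,max = 1 V

print(v_floor)  # ~0.97 mV
print(dr_db)    # ~60 dB
```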

From Equation 3.19, in order to increase dynamic range, either the photodiode saturation

voltage needs to be increased or the noise floor to be decreased. As mentioned before, the

photodiode saturation voltage is limited by the power supply voltage, which is continually

scaling down for CMOS technology, driven by low-power and low-voltage requirements


for digital circuits. Also, from the expression of the noise floor in Equation 3.18, it is very

difficult to decrease any of the components. Therefore, it is very hard for the conventional

CMOS APS architecture to achieve high dynamic range (beyond 80 dB). Existing CCD

imagers have a dynamic range from 60 to 80 dB, while the dynamic range of CMOS

imagers is usually lower than 70 dB [12, 15].

3.3 Existing High Dynamic Range Image Sensors

From the dynamic range analysis of the last section, the DR of the conventional

CMOS imager is usually less than 70 dB. For consumer photography film, the dynamic

range is about 60 dB [23].

However, scene illuminations have a much higher dynamic range than 60 dB. A

typical scene encountered in an outdoor environment can easily have a dynamic range of

more than 100 dB. To deal with such a wide range of illuminance levels, researchers have

proposed a number of solid-state imagers intended for high dynamic range applications.

3.3.1 Nonlinear Optical Signal Compression

As shown in Figure 3.2, the photodiode photocurrent is a linear function of illumi-

nant level to the first order. Since signal voltage can only have a dynamic range of 60 dB as

explained in the last section, if there is a linear mapping between photocurrent and signal

voltage (as in conventional CMOS and CCD architectures), the dynamic range of the illu-

minant level is also limited to 60 dB. To extend the dynamic range, nonlinear compression

between photocurrent and signal voltage has been proposed by some researchers.

In Mead’s photodiode [24], the photocurrent is fed into the source of a diode-

connected MOS transistor in the weak inversion region, where transistors have a loga-

rithmic current-voltage characteristic. The resulting voltage is logarithmically related to

the light intensity and a DR of more than 100 dB can be achieved. However, this archi-

tecture is very sensitive to device parameter variations and noise. Any small fixed pattern


noise (FPN) and temporal noise in the voltage domain results in much larger variations in

light intensity domain due to the nature of the signal compression.

To reduce FPN, some researchers have proposed on-chip FPN cancellation. In Loose et
al. [25], the calibration information is stored on a capacitor inside each pixel. The
FPN is reduced to 3.8% of a decade, which is about 10% linearly. In Kavadias et al.
[26], a simpler on-chip calibration is used: each pixel is read out twice, once
stimulated by the photocurrent and once by a reference current. The difference of the
two readouts cancels the most important source of FPN, the threshold voltage
variations of the MOS transistors. The calibrated FPN is 2.5% of the saturation
level. Since the authors claim sensitivity over 6 decades of light, the FPN is 15% of
a decade, which corresponds to a 140% error linearly.

Other signal compression methods besides logarithmic compression are possible. In
Decker et al. [27], the sensor's current versus charge response curve is compressed
by a lateral overflow gate, e.g., the reset transistor gate in a CMOS APS. The charge
collection well capacity is adjusted by the overflow gate voltage. The imager
achieved an optical dynamic range of 96 dB. FPN is reduced to 0.24% of the saturation
level by applying correlated-double-sampling (CDS) circuits.

3.3.2 Multisampling

The idea of multisampling is to capture the details of high light regions by using a

short integration time and the details of low light regions by using a long integration time.

A high dynamic range image can be achieved by combining two or more images captured

at different integration times. Yadid et al. achieved a dynamic range of 109 dB by dual

sampling [22]. In Yang et al. [28], 96 dB dynamic range is obtained by sampling nine

times, with integration time doubled each time.

One obvious requirement for multisampling methods is that the scene must be still.

Another difficulty arises when combining different samples into a high dynamic range

image. The multisampling imagers in Yadid et al. [22] and Yang et al. [28] assume strict

linear response of signal voltage to illumination. However, in the actual photodiode APS,


the relationship between signal voltage and illumination is only approximately linear due

to the nonlinear nature of the photodiode capacitance and the gain of the readout circuits.

Therefore, some errors will inevitably be introduced when trying to recombine samples

linearly.

3.3.3 Time-based High Dynamic Range Imagers

Unlike the conventional CMOS APS imager, time-based imagers get optical illu-

mination information from the time domain. In Yang’s work [29], the photodiode voltage

is compared with a fixed reference voltage. Once the reference voltage is reached, the pixel

outputs a pulse and is then reset. Basically, each pixel acts as a free-running continuous

oscillator. A brighter pixel outputs pulses more frequently than a darker pixel. The time

interval between pulses represents the illuminance level on that pixel. The pulses
are read out column by column. An asynchronous counter on each row counts the output
pulses

for a fixed time. The time interval between pulses can be calculated from the number of

pulses during this fixed time and the light illuminance can be recovered. A dynamic range

of 120 dB was achieved. One problem associated with this architecture is the very
long time needed to read out a frame.

Instead of reading out these pulses with a number of counters, the pulses can be
sampled out [30]. The sampling of a free-running oscillator is equivalent to a
synchronous first-order Σ-∆ modulator. The author claims that an acceptably low
number of samples (8k) is required to achieve a high dynamic range of 6k + 25 dB by
sweeping through a set of binary weighted frequencies. A dynamic range of 104 dB is
achieved if the minimum sampling frequency is 1 Hz.

In Culurciello et al. [31], a quite different readout scheme was used. When a pixel
outputs a pulse, the address of this pixel is read out using an address-event circuit
[32]. This process is also continuous because, in contrast with the conventional
CMOS APS imager, there is no global imager reset that signifies the start of a frame.
As in Yang's work [29], by


In Brajovic et al. [33] and Ni et al. [34], histogram equalization imagers were
implemented by storing cumulative histogram information on a capacitor inside each
pixel. If time stamp information is stored instead, such a sensor can be used as a
high dynamic range imager.

3.4 Principles of TBAR CMOS Imager

In this section, the principles of time-based imagers are first demonstrated to show

how high dynamic range is achieved [35]. Then, the motivations for the proposed asyn-

chronous readout architectures are presented [36].

3.4.1 High Dynamic Range Capability of TBAR Imager

To avoid ambiguity, all the signal voltages in the drawings and equations in this
chapter are the photocurrent-induced voltage drops across the photodiode. Let Vph
denote the actual voltage at the photodiode in Figure 3.1. If the initial voltage
(right after resetting the photodiode) is Vreset, the signal voltage is

$$V_{sig} = V_{reset} - V_{ph} \qquad (3.20)$$

This relationship is clear from Figure 3.4. The photodiode is reset (precharged) to
Vreset initially. At T0, the reset transistor is opened and the photocurrent begins
to discharge the photodiode, so the voltage at the photodiode begins to drop. With
the signal voltage Vsig defined as shown in Figure 3.4, Vsig is nearly proportional
to the illuminance level and the integration time, which is convenient for analysis.

From the SNR and DR analysis in Section 3.2, the fundamental factors limiting

the dynamic range (DR) of conventional CMOS imagers [37] are the photodiode’s nearly

linear response and an exposure time that is the same for all pixels. To make this point

clearer, the data from the example in Section 3.2 are used here to show the response of a

photodiode under various illuminance levels.


Figure 3.4: Relationship of photodiode voltage and signal voltage.

As shown in Figure 3.5, since the photodiodes in most CMOS imagers operate in

integration mode, the signal level of each photodiode increases with time at a rate propor-

tional to the illuminance on that photodiode. After a pre-determined exposure time (which

is the same for all pixels), the analog signal level across each photodiode is read out.

In Figure 3.5, the signal saturation level is 1 V and the photodiode noise floor is
0.97 mV (see Equation 3.18). If we fix the exposure time for all pixels and assume
that the minimum acceptable SNR is 1 (0 dB), the maximum dynamic range is about 60 dB
(20 log(1 V / 0.97 mV)), no matter which integration time is chosen. For example, if
the integration time is set to 14 µs, information for pixels with illuminance below
100 lux and above 10⁵ lux is lost. While the integration time can be increased to
14 ms to capture the information of pixels with illuminance as low as 0.1 lux, pixels
with illuminance above 100 lux are then all saturated, and the optical DR is still
60 dB.
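This fixed-exposure window can be made concrete; a sketch in which the sensitivity value (~0.73 V/(lux·s), i.e., Iph/Cph per lux from the example) is my own estimate:

```python
import math

def usable_lux_range(t_int, sensitivity, v_floor, v_sat):
    """Illuminance window for a fixed exposure: below lux_min the signal
    stays under the noise floor, above lux_max the photodiode saturates."""
    return v_floor / (sensitivity * t_int), v_sat / (sensitivity * t_int)

S = 0.73  # sensitivity in V/(lux*s), estimated from Iph/Cph of the example
lo1, hi1 = usable_lux_range(14e-6, S, 0.97e-3, 1.0)  # ~95 .. ~1e5 lux
lo2, hi2 = usable_lux_range(14e-3, S, 0.97e-3, 1.0)  # ~0.095 .. ~100 lux
# The window slides with t_int, but its width is always v_sat / v_floor:
dr_db = 20.0 * math.log10(hi1 / lo1)  # ~60 dB for any integration time
```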

Figure 3.5: The response of typical photodiodes under various illuminance for
conventional CMOS imagers.

As an example to illustrate how the TBAR imager achieves a high dynamic range,
consider the situation in Figure 3.6. In Figure 3.6(a), four pixels are located at
addresses (m1, n1), (m2, n2), (m3, n3) and (m4, n4), with illuminance of L1, L2, L3
and L0, respectively. Instead of reading out the analog voltage across each
photodiode at a predetermined

exposure time, there is a comparator inside each pixel. When the voltage on a photodiode

Vph drops below a global reference voltage Vref (or equivalently, when the signal
voltage rises above a global signal reference Vsig,ref), the comparator inverts, and
the pixel generates a pulse (i.e., it has "fired"), as shown in Fig. 3.6(b). After a
pixel has fired, it is disabled for the rest of the frame. The time at which a pixel
fires is uniquely determined by the illuminance on that pixel. For example, if the
illuminance Lk at a pixel is k times larger than the unit illuminance L0, the signal
voltage is proportional to the integration time and illuminance until saturation:



Figure 3.6: Scheme for TBAR imager.

$$V_{sig} = S\,T_{int}\,L \qquad (3.21)$$

where S is the sensitivity (in V/(lux·s)) of the photodiode optical response. The
integration time tk for a pixel with illuminance Lk is

$$t_k = \frac{V_{sig,ref}}{S\,L_k} \qquad (3.22)$$

Since the reference level is the same for all pixels, from Figure 3.6(a),

$$L_k t_k = k L_0 t_k = L_0 T_0 = \frac{V_{sig,ref}}{S} \qquad (3.23)$$

Thus, the relative illumination level k is

$$k = \frac{T_0}{t_k} \qquad (3.24)$$


So, for two pixels at addresses (m1, n1) and (m2, n2) with illuminance L1 and L2 that
fire at times t1 and t2, respectively, the following relationship holds:

$$\frac{L_1}{L_2} = \frac{t_2}{t_1} \qquad (3.25)$$

Thus, the illuminance is computed from the measured time domain information. In this
way, an analog-to-digital converter (ADC), which is required for conventional imagers
to output digital values, can be replaced with a digital counter that reports the
time when each pixel fires. As shown in Fig. 3.6(c), the imager must also report the
positions (addresses) of the pixels as they fire to the receiver (a PC or DSP
processor) in order to reconstruct images.
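Given a stream of (address, firing time) events, the reconstruction of Equation 3.24 is a one-liner; a sketch with hypothetical event data:

```python
def relative_illuminance(events, t_ref):
    """Recover illuminance relative to a reference integration time T0,
    using k = T0 / t_k (Equation 3.24, fixed reference voltage)."""
    return {addr: t_ref / t for addr, t in events.items()}

# Hypothetical event stream: pixel (row, col) -> firing time in seconds
events = {(0, 0): 0.001, (0, 1): 0.01, (1, 0): 0.1}
k = relative_illuminance(events, t_ref=0.1)
# Pixel (0, 0) fired 100x sooner than (1, 0), so it is 100x brighter.
```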

Since each pixel has a unique integration time determined by its illuminance level
(as shown in Equation 3.22), the DR is no longer limited in the same way as for the
conventional imagers in Figure 3.5. As evident from Equation 3.25, in this time-based
scheme with a fixed reference voltage, the dynamic range is the ratio of the longest
and shortest firing times:

$$DR_{max} = 20\log\frac{L_{max}}{L_{min}} = 20\log\frac{t_{longest}}{t_{shortest}} \qquad (3.26)$$

To get a rough idea of how wide a dynamic range a single pixel can achieve, consider
the following example. Suppose the pixel operates in still imaging mode; the longest
pixel integration (firing) time is then limited by the dark current, and is usually a
few tens of seconds. From our measurement results of a TBAR imager, this time is no
less than 20 seconds. On the other hand, the shortest integration time is designed to
be 1 µs in our TBAR imager implementation. This gives a single pixel dynamic range of
more than 140 dB (20 log(20 s / 1 µs) = 146 dB).
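Equation 3.26 with these numbers:

```python
import math

def time_based_dr_db(t_longest, t_shortest):
    """Single-pixel dynamic range of Equation 3.26."""
    return 20.0 * math.log10(t_longest / t_shortest)

# Dark-current-limited 20 s down to the designed 1 us minimum:
dr = time_based_dr_db(20.0, 1e-6)  # ~146 dB
```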

The dynamic range of an image array is more involved. If each pixel's light
integration time can be accurately recorded, the dynamic range of an array is the
same as that of a single pixel. However, as we will see later, practical issues put
limitations on the readout circuits, and as a result, the readout circuits may
introduce errors into the integration time. Therefore,


the dynamic range of an image array is determined by the readout circuit architecture, the

amount of acceptable errors, the size of the images and the nature of the scenes. This topic

will be further discussed.

Another advantage of the time-based imager operating in still imaging mode is its
higher SNR compared with conventional APS CMOS imagers. To see why this is true, the
SNR expression in Equation 3.17 is repeated here:

$$SNR = 10\log\frac{v_{sig}^2}{v_{ph}^2 + v_{dark}^2 + v_{reset}^2 + v_{read}^2} \qquad (3.27)$$

$$= 10\log\frac{\left(\min(v'_{sig},\, v_{sig,max})\right)^2}{\dfrac{I_{ph}T_{int}\,q}{C_{ph}^2} + \dfrac{q\,J_{dark}A\,T_{int}}{C_{ph}^2} + \dfrac{kT}{C_{ph}} + v_{read}^2} \qquad (3.28)$$

Since the photocurrent shot noise (in number of electrons) is proportional to the
square root of the signal (in number of electrons) as in Equation 3.10, the
photocurrent shot noise is the dominant noise source in moderate or high light
situations (as is obvious from Figure 3.3). Equation 3.28 can then be approximated as

$$SNR \approx 10\log\frac{\left(\min(v'_{sig},\, v_{sig,max})\right)^2}{\dfrac{I_{ph}T_{int}\,q}{C_{ph}^2}} = 10\log\frac{C_{ph}V_{sig}}{q} \qquad (3.29)$$

Thus the SNR is proportional to the photodiode signal level Vsig. For the TBAR imager
scenario shown in Figure 3.6, the signal level of every pixel reaches the signal
reference level Vsig,ref when the pixel fires. Since Vsig,ref can be very close or
equal to the maximum signal level, the SNR of each pixel is the same and is close or
equal to the maximum possible SNR. On the contrary, for conventional CMOS APS
imagers, as shown in Figure 3.5, because the integration time is fixed for all
pixels, the signal levels (which are always smaller than the maximum signal level) of
pixels with different illuminance will differ. For pixels with low light illuminance,
the SNR is much smaller than the maximum possible SNR (10 log(Cph·Vsig,max/q)) due to
the small signal level.
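The shot-noise-limited SNR of Equation 3.29 makes this comparison concrete; a sketch (the 10 mV APS signal level is only an illustrative value):

```python
import math

Q_E = 1.602e-19  # electron charge (C)

def shot_limited_snr_db(cap, v_sig):
    """Photon-shot-noise-limited SNR of Equation 3.29:
    SNR = 10 * log10(Cph * Vsig / q)."""
    return 10.0 * math.log10(cap * v_sig / Q_E)

# Every TBAR pixel fires at the same Vsig,ref, so all share the peak SNR:
snr_tbar = shot_limited_snr_db(5e-15, 1.0)   # ~45 dB
# A dim conventional-APS pixel that only reached 10 mV does much worse:
snr_aps = shot_limited_snr_db(5e-15, 0.01)   # ~25 dB
```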

For video applications, the achievable maximum integration time is limited by the
video frame time, which is a few tens of milliseconds (e.g., 30 ms). This may limit
the dynamic range if a constant reference voltage is used as in Figure 3.6. However,
the dynamic range can be increased by varying the reference voltage appropriately.
Also, under room light conditions, the illuminance is often not strong enough to
cause many pixels to fire within a frame time with a constant, high reference
voltage, and the information of these pixels would be lost. By varying the reference
voltage [34], the imager can guarantee that these pixels fire.
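The effect of sweeping the reference can be simulated directly; a sketch assuming a linear photodiode response with sensitivity S (the ramp shape and the numeric values are illustrative):

```python
def firing_time(lux, sensitivity, vref_of_t, t_frame, dt=1e-4):
    """First time the integrated signal S*L*t crosses the (possibly
    time-varying) signal reference, or None if the pixel never fires
    within the frame. vref_of_t maps time to the reference voltage."""
    steps = int(t_frame / dt)
    for i in range(1, steps + 1):
        t = i * dt
        if sensitivity * lux * t >= vref_of_t(t):
            return t
    return None

S, T_FRAME = 0.73, 0.03  # sensitivity (V/(lux*s)), frame time (s)
# A 0.2 lux pixel never reaches a fixed 1 V reference within 30 ms:
fixed = firing_time(0.2, S, lambda t: 1.0, T_FRAME)  # None
# Ramping the reference down toward 0 V guarantees a crossing:
swept = firing_time(0.2, S, lambda t: 1.0 - t / T_FRAME, T_FRAME)
```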


Figure 3.7: Varying reference voltage for a wider dynamic range in video mode.

To illustrate this idea, consider the example in Figure 3.7. If the frame time is
Tframe and the reference signal voltage is kept constant at Vsig,ref, any pixel with
illuminance below L3 does not fire within Tframe, so the imager cannot process and
output valid data for those pixels. However, if the reference voltage is swept as
shown, pixels with illuminance below L3 (such as L4) will fire by the end of the
frame, so the imager can output data (pixel firing times and addresses) that, along
with the known reference voltage at that time, allow those pixels' illuminance to be
found. The disadvantage of performing this operation is that the pixels with an
illuminance below L3 will now fire at a lower signal level, so they will have a worse
SNR than the pixels that fired when the reference voltage was higher (as mentioned
above). However, conventional APS imagers also suffer from a degraded SNR for pixels
at low illuminance levels, but without the advantage of an increased dynamic range.

In the TBAR architecture, the analog photodiode voltage is converted into a digital
pulse inside each pixel. This approach brings several advantages. First, in
conventional CMOS APS architectures, analog signals have to go through a voltage
follower, sample-and-hold, gain amplifier and ADC before being converted into digital
values. Along this long analog signal path, various noise sources are inevitably
introduced. Since the analog photodiode voltage is converted into a digital value at
a very early stage, the TBAR architecture is much less sensitive to these noise
sources. Another advantage concerns power consumption. The power consumption of the
sample-and-hold, gain amplifier and ADC of the CMOS APS comprises most of the imager
power budget [7]. In contrast, none of these power-hungry components are needed in
the TBAR architecture. The most power-hungry components are the thousands of
comparators, one inside each pixel. However, by making the MOS transistors of the
comparators operate in the subthreshold region, low power consumption can be
achieved.

3.4.2 Asynchronous Readout

For a time-based image sensor, after digital pulses are generated inside each pixel,
the next step is to correctly record the firing time of each pixel. This is not a
trivial task considering that there are tens of thousands of pixels and that image
sensor pixel firings are massively parallel. This time recording process can be
considered a time-domain analog-to-digital conversion (ADC) because the continuous
firing time is converted into the digital numbers of a counter.


The recorded firing time information must be accurate; otherwise, errors will be
introduced when images are reconstructed using the recorded times. If the throughput
of the readout channels is not high enough, some fired pixels will have to wait to
get their firing times recorded. In this situation, errors are introduced into their
time records.

Researchers have proposed several approaches. In Yang [29], there are no global reset
signals and pixels are reset right after firing. The temporal information is read out
column by column by a counter at each row. Since it requires several seconds to read
out the low light pixels in one column, it may take minutes to read out a whole
frame. Basically, this method is only suitable for still imaging applications.
McIlrath [30] and Kummaraguntla [38] proposed multiple sampling of each pixel.
Because pixels are digitized to either one or zero, this method is similar to reading
a memory. Because time-based imagers rely on temporal information to reconstruct the
original image, any difference between firing time and sampling time will introduce
errors. Thus there is a tradeoff between sampling rate and errors: a higher sampling
rate can reduce errors at the price of higher power consumption and more memory
(needed to store the samples).
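The sampling-rate/error tradeoff follows from Equation 3.24: since the reconstructed illuminance goes as 1/t, a timestamp error dt maps to a relative illuminance error of roughly dt/t. A sketch (the firing time and rates are illustrative values):

```python
def worst_case_illuminance_error(t_fire, f_sample):
    """Worst-case relative illuminance error when a firing at t_fire is
    only observed at the next sampling instant: dL/L ~ dt / t_fire."""
    return (1.0 / f_sample) / t_fire

# A bright pixel firing at 100 us, observed at two sampling rates:
err_fast = worst_case_illuminance_error(1e-4, 1e6)  # 1% at 1 MHz
err_slow = worst_case_illuminance_error(1e-4, 1e4)  # 100% at 10 kHz
```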

Similar to the huge throughput of the pixel-level voltage-domain ADC CMOS imagers in
Yang et al. [28] and Kleinfelder et al. [39], a pixel-level time-domain ADC CMOS
imager [40] also benefits from huge throughput. Inside each pixel, there is a 1-bit
ADC and an 8-bit counter. Except for time-domain quantization errors, pixel firing
times are recorded accurately. However, the total number of transistors in a pixel is
214 and the pixel size is an unacceptably large 50 × 50 µm² in a 0.35 µm technology.

To reduce the pixel area and, at the same time, keep adequate throughput, this author
proposes two readout architectures: TBAR MEM and TBAR BASE. Unlike most digital
circuits in use today, the readout circuits of TBAR imagers operate asynchronously.
Asynchronous circuits have the potential to achieve higher speed, lower power, and
improved noise and electromagnetic compatibility (EMC), although the design of
asynchronous control circuits is difficult and error prone [41]. Also, the readout
protocol is data driven: only pixels that have fired transmit their data. This is
quite different from sampling methods, where data is read out whether the pixels have
fired or not. As a result, TBAR imagers significantly reduce power and memory
requirements. The architectures and operations of TBAR imagers are described in the
next section.

3.5 Architecture and Operation of TBAR Imagers

In this section, the architecture and operation of two types of TBAR imagers,

TBAR MEM and TBAR BASE, are described at the system level. Detailed analysis of

each circuit component will be presented in the next chapter.

3.5.1 TBAR with On-chip Memory: TBAR_MEM

Figure 3.8: TBAR_MEM: a TBAR imager with on-chip memory. (Block diagram: an
M × N pixel array with per-pixel control logic, a row-request memory and time counter, a
row arbiter, and an on-chip frame memory with row and column write-enable lines.)

The block diagram of the TBAR_MEM image sensor is shown in Figure 3.8. An

M × N pixel array is located in the center of the imager. Inside each pixel, there is a

photodiode with reset transistor, a comparator with an autozero circuit to cancel the offset

(not shown), and a pixel control logic block. On the left hand side of the imager, there is

Page 42: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

35

an M × 1 word memory (one word per row), which is used to record firing times. At the

bottom of the pixel array, there is a frame memory that is the same size as the pixel array.

The reason for including an on-chip frame memory is to utilize its parallel writing and high

speed to increase readout throughput. On the right hand side, a row arbiter chooses one

and only one row when pixels in multiple rows fire simultaneously.

As mentioned earlier, writing the digital counter value into a memory can be con-

sidered as a time-domain analog-to-digital conversion process. If the architecture of Andoh

et al. [40] is considered a pixel-level time-domain ADC, TBAR_MEM is a column-wise
ADC. Although the throughput of TBAR_MEM is not as high as that of Andoh et al. [40],

there is no need to put a digital counter and memory inside each pixel, and as a result, the

number of transistors inside each pixel is dramatically reduced.

The TBAR_MEM imager operates as follows:

1. When the voltage across a pixel's photodiode drops below the comparator's
reference voltage, the pixel makes a row request by pulling down the Row_request~ line,¹
which is shared by all pixels in the same row.

2. When there is at least one active Row_request~ signal, two things happen.
First, the digital counter value is written into the row request memory at the positions
where Row_request~ is active; these are the recorded firing times to be saved in the frame
memory. Second, the row arbiter chooses one and only one row by asserting the
corresponding Row_sel signal.

3. The Row_sel signal plays two roles. In addition to selecting a row of pixels, the
row arbiter's Row_sel signal also selects the corresponding row in the frame memory.
Then, the pixels in the selected row that have fired send out column request signals (in
parallel) that select the corresponding columns in the frame memory. Now that the frame
memory's row and column addresses have been selected, the counter value (stored after
the row request) is loaded to those addresses.

4. After finishing the on-chip memory write, the TBAR_MEM control circuit sends a
signal, Memory_Write_Done, back to the pixel array. Together with the Row_sel signal,
Memory_Write_Done disables the pixels that have been written to the on-chip frame
memory. Consequently, these pixels withdraw their Row_request~ signals. At this
point, TBAR_MEM finishes a readout cycle, and the row arbiter is allowed to select a new
row to begin another cycle.
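The four steps above can be condensed into a minimal software model (a hypothetical Python sketch; the real circuit is asynchronous hardware, and `min()` merely stands in for the arbiter's arbitrary choice):

```python
# Minimal, hypothetical software model of one TBAR_MEM readout cycle.
# `pending` maps a row index to the set of columns whose pixels have fired
# and are holding Row_request~ low; `counter` is the global time counter.

def readout_cycle(pending, counter, row_memory, frame_memory):
    """Service one row; returns the row read out, or None if idle."""
    if not pending:
        return None
    # Step 2: latch the counter value for every requesting row, then let
    # the row arbiter pick one and only one row (min() is a stand-in).
    for row in pending:
        row_memory[row] = counter
    row = min(pending)
    # Step 3: all fired pixels in the chosen row write their recorded
    # firing time into the frame memory in parallel.
    for col in pending[row]:
        frame_memory[(row, col)] = row_memory[row]
    # Step 4: Memory_Write_Done disables the serviced pixels, which
    # withdraw their row request.
    del pending[row]
    return row
```

For example, with `pending = {1: {0, 3}, 2: {5}}` and a counter value of 7, one cycle writes time 7 to pixels (1, 0) and (1, 3) in parallel and leaves row 2 pending for the next cycle.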

It is important to calculate the throughput of the TBAR_MEM imager. The throughput
is defined here as the number of pixels the imager can output per second. From the

¹ A signal with ~ means it is an active-low signal throughout this dissertation.

Page 43: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

36

operation of TBAR_MEM described above, the throughput turns out to be image dependent.
To demonstrate this point, assume we have an M × N TBAR_MEM imager. If all of

the pixels are firing closely in time, during one frame memory writing operation, N pixels

can be read out; throughput is high in this situation. On the contrary, if the pixel array
fires sparsely, there may be only one pixel read out during one frame memory writing

operation. This corresponds to a low throughput. Thus, both the lowest and the highest

throughput cases need to be considered.

From the operation of the TBAR_MEM imager, the time of one readout cycle is

Tcycle = max(Trow_memory, Trow_arbiter + Tcol_request) + Tframe_memory + Tpixel_disable   (3.30)

where:

• Trow_memory: the time delay of writing the row request memory.

• Trow_arbiter: the time delay of the row arbiter choosing one row.

• Tcol_request: the time delay of pixels sending column request signals to the on-chip
frame memory.

• Tframe_memory: the time delay of writing the on-chip frame memory.

• Tpixel_disable: the time delay of disabling pixels which have already been read out.

Since row memory writing occurs in parallel with row arbitration and column re-

quest, their delays do not sum. Also note that the row request time is not included because

the cycle starts and ends at row arbitration. At this point, row request has already finished.

The throughput of an M × N (M=128 and N=128 in this example) TBAR_MEM

imager using AMI 0.5µm CMOS technology can be estimated from CADENCE SpectreS

simulations. Since it is impossible to simulate such a large array in CADENCE, the time
delays were estimated by simulating a 4 × 4 array loaded with the parasitic capacitance

Page 44: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

37

of a 128 × 128 array. From CADENCE SpectreS simulations, the average row arbitration
time is 0.5 × (K+1) × 0.5 ns for M = 2^K rows. This gives an average row arbitration time
of 2 ns. Tcol_request and Tpixel_disable are 3 ns and 3.7 ns, respectively. The access time of

state-of-the-art embedded-SRAM is less than 2 ns [42]. If a conservative access time of 5

ns is used in Equation 3.30, the cycle time of TBAR_MEM is

Tcycle = max(5, 2 + 3) + 5 + 3.7 = 13.7 ns

The highest and lowest throughput can then be calculated using the cycle time. The
highest throughput occurs when N pixels are read out during one on-chip frame memory
writing operation:

Throughput_max = N / Tcycle = 128 / 13.7 ns = 9.34 Gpixels/s   (3.31)

The lowest throughput happens when only one pixel is read out during one on-chip

frame memory writing:

Throughput_min = 1 / Tcycle = 1 / 13.7 ns = 73.0 Mpixels/s   (3.32)

It is also interesting to find out how much time it takes to read out one frame of a 128 × 128 scene:

Tmin = (128 × 128) / Throughput_max = 1.75 µs   (3.33)

Tmax = (128 × 128) / Throughput_min = 224 µs   (3.34)


From the above analysis, thanks to its massively parallel writing ability, the peak
throughput of TBAR_MEM is quite high (9.34 Gpixels/s for an array built in a 0.5 µm
CMOS technology). However, this architecture is not feasible for our university research
project because it is to be fabricated through an educational MOSIS process, which limits
us to certain processes and chip areas. Therefore, another TBAR architecture, TBAR_BASE,
is proposed that eliminates the on-chip memory, at the expense of lower throughput. This

architecture will be discussed next.

3.5.2 TBAR without On-chip Memory: TBAR_BASE

In this section, the operation of a TBAR_BASE imager is described at the system

level. The throughput of this architecture is calculated. A 32× 32 version of this archi-

tecture has been successfully implemented and tested by the author. A detailed analysis of

each circuit component will be presented in the next chapter.

Figure 3.9: TBAR_BASE imager block diagram. (Block diagram: pixel array with
per-pixel control logic, row and column arbiters, row and column address encoders, a
column latch, a time counter, a throughput control circuit, and the row interface.)

The block diagram of a TBAR_BASE imager is shown in Figure 3.9. As with
TBAR_MEM, each pixel contains a photodiode, a comparator (with an autozero circuit to

Page 46: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

39

cancel the offset), and a pixel control logic block. Row and column arbiters, which were

previously implemented by Boahen [32], are needed in case there are conflicting pixel

firings. Row and column address encoders output the digital pixel addresses. Meanwhile,
the control circuit outputs a pulse indicating that the address is valid. The throughput
control circuit is used to control how fast TBAR_BASE reads out pixels. In order to increase the speed

at which the imager can output pulse events and the corresponding pixel addresses, all

circuits operate asynchronously without a global clock. A counter also outputs the
time information.

The TBAR_BASE imager operates as follows:

1. When the voltage across a pixel's photodiode drops below the comparator's
reference voltage, the pixel makes a row request by pulling down the Row_request~ line,
which is shared by all pixels in the same row.

2. The row arbiter randomly selects a row that is making a row request. This row's
address is stored by the row address encoder.

3. When the row arbiter selects a row with the Row_select line, the pixels that have
fired in that row are allowed to make column requests by pulling down the Col_request~
line, which is shared by all pixels in the same column.

4. The pixels in the selected row that are making column requests put their firing
states into the latch cells.

5. Once the column requests are latched, the pixels in the selected row that had
been making column requests are disabled from firing again for the rest of the frame by
their pixel control blocks. As a result, the Row_request~ signal from this row is withdrawn.
After that, if there are other valid Row_request~ signals, a new row arbitration is allowed
to start. However, the row interface circuit prevents a new Row_select until all
valid data inside the latch cells have been processed. Column arbitration begins on the requests
in the column latch cells. Note that the throughput control circuit can control the column
arbitration speed.

6. During column arbitration, the column arbiter randomly and sequentially selects
the latched column requests. When a column is selected, its address, the latched row
address, and the time are read out. The time represents information about each pixel's
illuminance.

7. Once column arbitration is complete, i.e., all valid data inside the latches have
been processed, the row interface circuit allows a new Row_select signal. At this
point, a readout cycle is finished.
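For comparison with TBAR_MEM, the cycle above can be sketched in the same style (a hypothetical Python model; `min()` again stands in for the arbiter's choice):

```python
# Minimal, hypothetical software model of one TBAR_BASE readout cycle.
# `pending` maps a row index to the set of columns that have fired;
# `out` collects (row, col, time) address events as seen by a receiver.

def tbar_base_cycle(pending, counter, out):
    """Arbitrate one row, latch its column requests, then read the
    latched columns out one at a time through column arbitration."""
    if not pending:
        return
    row = min(pending)                  # stand-in for the row arbiter
    latched = sorted(pending.pop(row))  # latch requests; pixels disabled
    for col in latched:                 # sequential column arbitration
        out.append((row, col, counter))
```

The key architectural difference from TBAR_MEM is visible in the final loop: the latched columns leave the chip one address at a time, which is why the peak throughput of TBAR_BASE is lower.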

After a receiver (DSP, PC, or ASIC) receives every pixel's firing time information
(from the counter) and the corresponding addresses, each pixel's illuminance can be found


using Equation 3.24 (if the reference level is fixed) or other equations (depending on how

the reference voltage is varying in Figure 3.7).

Because the number of valid pixel firings put into latch cells simultaneously de-

pends on scenes, the readout throughput is also scene dependent. The highest throughput

occurs when all the pixels in the selected row have fired and are put into latches simul-

taneously, and the lowest throughput occurs when only one pixel is put into the latch.

Consequently, the cycle time is also different. As with the TBAR_MEM imager, we can
estimate the throughput of a 128 × 128 TBAR_BASE imager.

Assuming the I/O circuits of a TBAR_BASE imager do not limit throughput, for
an M × N TBAR_BASE imager the smallest and largest cycle times are

Tcycle_min = [Trow_arbiter + Tcol_request + N × (Tcol_arbiter + Tcol_encoder) + Trow_interface] / N   (3.35)

Tcycle_max = Trow_arbiter + Tcol_request + Tcol_arbiter + Tcol_encoder + Trow_interface   (3.36)

where

• Tcol_encoder: the time delay for column address encoding.

• Trow_interface: the time delay from finishing the data in the latches to allowing a new
row select.

• Trow_arbiter, Tcol_request and Tcol_arbiter: same as defined in the TBAR_MEM
throughput calculation.

We calculated the throughput of a 128 × 128 TBAR_BASE imager using the
time delay information extracted from CADENCE SpectreS simulations. Trow_arbiter,
Tcol_request, Tcol_encoder and Trow_interface are 2 ns, 3 ns, 2 ns and 3.7 ns, respectively. The
column arbitration time is designed to be controllable. If it is free running (i.e., the
throughput control circuit does not limit throughput), the average Tcol_arbiter is 2 ns.

These numbers give the following cycle time:


Tcycle_min = [2 + 3 + 128 × (2.0 + 2.0) + 3.7] / 128 ≈ 4.07 ns

Tcycle_max = 2.5 + 3 + 2 + 2 + 3.7 = 13.2 ns

The corresponding throughput is

Throughput_max = 1 / Tcycle_min = 1 / 4.07 ns = 245.7 Mpixels/s   (3.37)

Throughput_min = 1 / Tcycle_max = 1 / 13.2 ns = 75.8 Mpixels/s   (3.38)

Compared with the TBAR_MEM architecture, the minimum throughput is slightly
better, while the maximum throughput is much smaller. This is because the parallel write
to TBAR_MEM's on-chip frame memory takes no extra time, whereas in TBAR_BASE,
reading out the data in the latch cells must go through a long sequential column arbitration
process. The time it takes to read out one frame of a 128 × 128 scene with the TBAR_BASE
imager is:

Tmin = (128 × 128) / Throughput_max = 66.7 µs   (3.39)

Tmax = (128 × 128) / Throughput_min = 216 µs   (3.40)
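The best-case TBAR_BASE numbers can likewise be checked in a few lines of Python (a sketch using the stated free-running delay values; the worst-case sum, which uses the 2.5 ns worst-case row arbitration figure, is left out):

```python
# Numerical check of Equations 3.35, 3.37, and 3.39 for a 128 x 128
# TBAR_BASE imager in free-running mode (delays in nanoseconds).
T_row_arbiter, T_col_request = 2.0, 3.0
T_col_arbiter, T_col_encoder, T_row_interface = 2.0, 2.0, 3.7
N = 128

# Best case: a full row of N latched pixels amortizes the row overhead,
# but each pixel still pays its own column arbitration and encoding.
T_cycle_min = (T_row_arbiter + T_col_request
               + N * (T_col_arbiter + T_col_encoder)
               + T_row_interface) / N               # ~4.07 ns per pixel

throughput_max = 1 / (T_cycle_min * 1e-9)           # ~245.7 Mpixels/s
frame_time_min = N * N * T_cycle_min * 1e-9         # ~66.7 us
```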

3.6 Error Analysis and Simulation

In the last section, two TBAR architectures, TBAR_MEM and TBAR_BASE, were
proposed and their throughputs were calculated. In this section, errors caused by the limited


throughput of the TBAR imagers are further investigated. MATLAB simulators have been
built for both the TBAR_MEM and TBAR_BASE imagers. These simulators are quite useful

because they quantify the errors for real-world high dynamic range images. Simulation

results of four high dynamic range images are presented.

3.6.1 Errors Caused by Limited Throughput of TBAR Architectures

As discussed in the last section, one issue with the TBAR architecture is the sit-

uation when many pixels are firing closely in time. Due to the limited throughput of the

readout circuits, TBAR imagers are unable to read out pixel firings in real time. Therefore,

the readout time may have some delay relative to the real firing time. Errors are introduced

when postprocessors reconstruct images using the readout time, instead of the true firing

time.

The situation described above is illustrated in Figure 3.10 using the TBAR_BASE
architecture as an example, where a 3 × 3 pixel array is under exactly the same illuminance
L. Although those pixels fire at exactly the same time T (assuming no mismatch between
these pixels), their corresponding output address pulses occur at different times (from t1
to t9) because a TBAR_BASE imager can only output one pixel address at a
time. The difference in time depends on how fast the TBAR_BASE imager can output
these pulses, i.e., its throughput. Instead of using the time position T
for all pixels, the receiver has to use t1, t2, ..., t9 to reconstruct the original image.
Since the temporal position of the address pulses represents illuminance, errors are
introduced. The reconstructed image will obviously be nonuniform.

From Equation 3.21, the true illuminance for these pixels is

L = Vsig,ref / (S·T)   (3.41)

However, when the illuminance is reconstructed from the available time information tk,
the reconstructed illuminance is

L̂k = Vsig,ref / (S·tk)   (3.42)


Figure 3.10: Errors introduced by uniform illuminance on a 3 × 3 pixel array of a
TBAR_BASE imager. (The figure shows the photodiode signals crossing Vref at the
common firing time T and the nine output address pulses spread from t1 to t9.)

The relative errors introduced are

Error = (L − L̂k)/L = (tk − T)/tk = ∆t/(T + ∆t)   (3.43)

where ∆t = tk − T is defined as the time delay of the output pulses. From Equation 3.43,
if the TBAR imagers have a high throughput, which means a smaller ∆t, the error will
be smaller. Also, for the same time delay ∆t, errors are more severe for high-illuminance
pixels, where the firing time T is small.
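Equation 3.43 is easy to explore numerically (a small Python helper; the function name is ours):

```python
# Relative reconstruction error from Equation 3.43: a pulse delayed by
# dt relative to the true firing time T under-reports the illuminance.

def relative_error(T, dt):
    """Error = dt / (T + dt), where tk = T + dt is the readout time."""
    return dt / (T + dt)
```

For a fixed delay dt, the error shrinks as T grows, which is why bright (early-firing) pixels are hit hardest: with dt = 3.5 µs, a pixel firing at T = 10 µs suffers roughly a 26% error, while a pixel firing at T = 30 ms suffers only about 0.012%.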

For the TBAR_MEM imager, errors caused by uniform illumination do not exist,
because the row request memory records the firing time correctly. However, if
many pixels are under very close, but not exactly the same, illumination, some errors will still

exist due to the limited throughput.


There are several factors which determine the amount of error (due to the time

delay). One important factor is the nature of the imaged scenes. The more uniform the

scene is, the more errors there will be. Also, since larger images have higher pixel firing
rates, errors are more likely to occur. In addition, the speed of the TBAR imagers also affects

errors. For a given image, the higher throughput a TBAR imager has, the fewer errors will

occur. The pulse output rate of a TBAR imager can be improved at the architecture level

(the TBAR_MEM architecture, for example), or at the circuit level.

Generally, it is difficult to give precise analytical results for the amount of error
because it is scene dependent. However, a worst-case situation for the TBAR_BASE imager
can be analyzed here. For the TBAR_BASE architecture shown in Figure 3.9, the worst-case
time delay for HDR images with a size of 128 × 128 can be calculated using the following
assumptions:

• 5% of pixels are under exactly the same illuminance. After analyzing seven 8-bit,

grayscale images in MATLAB, we found that there were at most 4% of pixels un-

der uniform illumination. However, we must remember that these images are 8-bit

quantized versions of original scenes. It is reasonable to assume that quantization er-

rors and dynamic range limitations in the 8-bit images inflate the amount of uniform

illumination. Thus, 5% uniform illuminance should be a safe worst case for HDR

images.

• The time interval (∆t1 in Figure 3.10) between two adjacent output addresses belonging
to two pixels in the same row under uniform illuminance is 2.5 ns. ∆t1 is due to the
time needed for column arbitration. The time interval (∆t2 in Figure 3.10) between two
adjacent output addresses belonging to two pixels in different rows under uniform
illuminance is about 11 ns. ∆t2 is due to the time needed for disabling pixels, row
arbitration, and latching the column request. This delay information comes from
a CADENCE SpectreS circuit simulation shown in Figure 3.11. Once again, the


simulation is carried out on a 4 × 4 array with the load capacitance of a 128 × 128 array
using AMI 0.5 µm CMOS technology. Note that the throughput control circuit is not
used in this simulation. Therefore, the asynchronous circuit is free running at the

maximum speed.

• Every row contains at least one uniformly illuminated pixel. This assumption will

bring the worst delay because ∆t2 is longer than ∆t1.

Using the above assumptions, the worst-case delay ∆t is

∆t = ∆t2 × M + ∆t1 × M × N × 5%   (3.44)
   = 11 ns × 128 + 2.5 ns × 128 × 128 × 5%
   = 3.5 µs

where M is the number of rows and N is the number of columns.

Note that the errors for the same amount of time delay vary depending on when
these pixels fire. From Equation 3.43, if the firing time T is 10 µs, the worst relative
error is 3.5 µs/13.5 µs = 26%. If these pixels fire around 30 ms, the worst relative error is
3.5 µs/30.0035 ms = 0.012%. If we assume the lowest acceptable SNR is 0 dB, as in the
definition of DR, the smallest acceptable firing time is 3.5 µs for the TBAR_BASE
architecture. If we further assume the longest firing time is 20 seconds (limited by the dark
current), this gives a dynamic range of 135 dB.
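The worst-case delay and the resulting dynamic range follow directly (a numerical sketch of Equation 3.44 and the DR estimate, using the assumptions listed above):

```python
import math

# Worst-case delay (Equation 3.44) and the resulting dynamic range for
# a 128 x 128 TBAR_BASE imager.
M = N = 128
dt2, dt1 = 11e-9, 2.5e-9       # row-change and same-row pulse intervals
uniform_fraction = 0.05        # 5% of pixels at identical illuminance

delta_t = dt2 * M + dt1 * M * N * uniform_fraction   # ~3.5 us

# SNR reaches 0 dB when the firing time equals the delay, so the
# shortest usable firing time is delta_t; the longest (20 s) is
# limited by the dark current.
t_min, t_max = delta_t, 20.0
dynamic_range_db = 20 * math.log10(t_max / t_min)    # ~135 dB
```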

3.6.2 MATLAB Simulation of Errors

In this subsection, MATLAB simulators are built to model TBAR_MEM and
TBAR_BASE at the system level. These simulators turn out to be very useful because
they can predict the performance of the imagers for a given circuit architecture and speed.
Simulations are performed to show the amount of error for several high dynamic range
(HDR) images with different sizes (160 × 180 and 480 × 720) using both the TBAR_MEM
and TBAR_BASE architectures.


Figure 3.11: Output pulses of a 4 × 4 pixel array under uniform illumination (CADENCE SpectreS circuit simulation).


In these two simulators, the numerical values of the test images, which have no
physical meaning, have to be mapped into photocurrents in the MATLAB simulation.

Two assumptions are made. The first assumption is that photocurrents are proportional to

the HDR images’ numerical values. The second assumption is used to decide a reference

point. The dark current is assumed to correspond to the minimum non-zero numerical

value. Also, the imagers are assumed to be working in video mode (integration time is

30 ms) and the reference signal voltage drops linearly from 2 volts to 0 volts during the time

interval from 15 ms to 30 ms. The reference voltage is shown in Figure 3.12.

Figure 3.12: Reference voltage of the TBAR imager simulators. (The reference is 2 V
until 15 ms and then ramps linearly to 0 V at 30 ms; photodiode signals with photocurrents
I1 through I4 cross it at firing times t1 through t4.)

After each pixel’s photocurrent is decided, its firing time can be computed as:

t = 2C/I second,                        0 < t < 15 ms
t = 0.03 / (1 + 0.0075 × I/C) second,   15 ms ≤ t ≤ 30 ms     (3.45)

where C is the capacitance at the photodiode and I is the photocurrent.
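Equation 3.45 can be written as a small helper (a sketch assuming the reference waveform described above; `i_over_c` denotes the discharge slope I/C in volts per second, and the two branches meet continuously at 15 ms):

```python
# Firing-time mapping used by the simulators (Equation 3.45): the
# photodiode discharges at slope I/C; the threshold is 2 V until 15 ms,
# then ramps linearly to 0 V at 30 ms.

def firing_time(i_over_c):
    """Firing time in seconds for a pixel with slope I/C > 0 (V/s)."""
    t = 2.0 / i_over_c               # crossing the fixed 2 V threshold
    if t < 0.015:
        return t
    # Ramping-threshold region: t = 0.03 / (1 + 0.0075 * I/C).
    return 0.03 / (1.0 + 0.0075 * i_over_c)
```

A bright pixel with I/C = 2000 V/s fires at 1 ms; as I/C approaches zero, the firing time approaches the 30 ms end of the frame.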

After having each pixel’s firing time, the output time can be determined by TBAR

imager simulators. The delay times of 160× 180 images used in MATLAB simulators

are again extrapolated from CADENCE SpectreS circuit simulations using AMI 0.5µm

CMOS technology. For simplicity, the delay times of image size of 480× 640 are the same

as those of image size of 160× 180 assuming they are implemented in more advanced


(faster) technologies (e.g., 0.18 µm CMOS technology). The time delays of TBAR_MEM

are:

• Time delay from a pixel firing to making a row request: 1 ns.

• Access time of row request memory and frame memory: 5 ns.

• Average time delay of a row arbitration: 2 ns.

• Time delay from row selection to column request: 3 ns.

• Time delay from finishing the latched data to the next row select: 3.7 ns.

The time delays of TBAR_BASE (free-running mode) are:

• Time delay from a pixel firing to making a row request: 1 ns.

• Time delay of a row arbitration: 2 ns.

• Time delay of a column arbitration: 2 ns.

• Time delay from row selection to output of a pulse: 3 ns.

• Time delay of the column encoder: 2 ns.

• Time delay from finishing the latched data to the next row select: 3.7 ns.

Before showing simulation results of real high dynamic range images, it is inter-

esting to look at the effects of limited throughput on images with large uniform illumination

areas. Figure 3.13 is a scene with 4 uniformly illuminated areas. They have photocurrents

equivalent to 4, 16, 64 and 255 times the dark current. Figures 3.14(a) and (b) show the
reconstructed images using the TBAR_MEM simulator. There is no error in this case,
because the row request memory of TBAR_MEM can accurately record the firing times.

However, if the scene illumination is very close, but not perfectly uniform, there are still


errors. Figures 3.14(c) and (d) show the reconstructed images using the TBAR_BASE
simulator. We can see that, due to the limited throughput, the reconstructed image has random

horizontal bands.

Figure 3.13: A 128 × 128 scene, ‘squares’, with uniformly illuminated areas.

The following MATLAB simulations show the amount of error for four high dynamic
range (HDR) images. The original images have a size of 480 × 720
pixels. These images are subsampled to get HDR images with a size of 160 × 180, which is
slightly larger than the QCIF (144 × 176) image format. QCIF is about the size of
the images used for hand-held PDAs and videophones. The four HDR images used in the
MATLAB simulators come from Paul Debevec's graphics research group at the University
of Southern California [43]. They are nave, groveC, rosette and vinesunset. These four
images use a floating-point representation and have dynamic ranges from 88 dB to 168
dB. After subsampling to a size of 160 × 180, there are slight changes to the DR. The numerical
values and dynamic ranges of these images are shown in Table 3.1.

These four images are also displayed in Figure 3.15 and Figure 3.16. Since MAT-

LAB can only display an 8-bit (48 dB) image, two figures are used to display each high
dynamic range image.

The mean relative errors between the original images and the reconstructed images

of four 160 × 180 and four 480 × 720 HDR images using the TBAR_MEM architecture


Figure 3.14: Reconstructed ‘squares’. (a) Using TBAR_MEM (normal display). (b) 10×
brighter to exaggerate errors. (c) Using TBAR_BASE (normal display). (d) 10× brighter
to exaggerate errors.

are shown in Table 3.2. The errors introduced by the TBAR_BASE architecture are shown
in Table 3.3.

To get an idea of how significant these errors are, one of the reconstructed images
with the highest mean relative error (0.378%), rosette from TBAR_BASE, is displayed in
Figure 3.17. There is no recognizable difference from Figure 3.16(a)(b). Also, to compare
with other noise sources, the noise (error) caused by the photocurrent shot noise alone
for the lena image, shown in Figure 3.18(a), is computed using Equation 3.12. It gives a
mean relative error of 0.9%. The lena image with photocurrent shot noise is displayed in
Figure 3.18(b).


Table 3.1: Four 480 × 720 and four 160 × 180 HDR images used in the MATLAB simulation.

Images with a size of 480 × 720
Image Name          nave           groveC         rosette        vinesunset
Minimum Value       1.69 × 10^-5   5.53 × 10^-4   2.63 × 10^-5   1.44 × 10^-3
Maximum Value       4.27 × 10^3    8.80 × 10^2    8.28 × 10^1    3.61 × 10^1
Dynamic Range (dB)  168            124            130            88

Images with a size of 160 × 180
Image Name          nave           groveC         rosette        vinesunset
Minimum Value       1.69 × 10^-5   9.13 × 10^-4   2.63 × 10^-5   1.58 × 10^-3
Maximum Value       2.59 × 10^3    4.93 × 10^2    7.43 × 10^1    3.31 × 10^1
Dynamic Range (dB)  164            115            129            86

Table 3.2: The mean relative errors introduced by TBAR_MEM for four HDR images with
sizes of 160 × 180 and 480 × 720.

Images with a size of 160 × 180
Image Name           nave           groveC         rosette        vinesunset
Mean Relative Error  9.87 × 10^-7   6.35 × 10^-7   7.31 × 10^-7   1.59 × 10^-7

Images with a size of 480 × 720
Image Name           nave           groveC         rosette        vinesunset
Mean Relative Error  6.38 × 10^-5   4.28 × 10^-5   7.36 × 10^-6   1.69 × 10^-6

From the simulation results in Table 3.2, Table 3.3, Figure 3.17 and Figure 3.18,

we make the following observations:

• The errors caused by the limited throughput of TBAR_MEM and TBAR_BASE are
negligible for these four HDR images, compared with the photocurrent shot noise
and the fixed pattern noise (FPN). The photocurrent shot noise alone will cause a
mean relative error of 0.9% for the lena image. The FPN is typically 0.2% of the

saturation level [16]. The mean relative error caused by the FPN is even higher

because not all pixels have a signal level close to the saturation level.

• The larger the image size, the more significant the errors are. This is not surpris-

ing because the probability of collision is higher for larger image sizes. From the

simulation results, however, the errors are negligible for image sizes up to 480×720.


Figure 3.15: Images nave and groveC. (a) nave (bright part). (b) nave (dark part: ×100
brighter for display). (c) groveC (bright part). (d) groveC (dark part: ×20 brighter for
display).

• The errors caused by TBAR imagers are image dependent.

• The TBAR_MEM architecture suffers fewer errors than TBAR_BASE, thanks to
its parallel-writing on-chip frame memory.

In the last section, we computed the throughput of the TBAR_MEM and
TBAR_BASE architectures. It is interesting to look at the firing rates of these four HDR

images and understand better how the throughput of the TBAR imagers affects errors. The

firing rates are computed as the number of firings per second in a small period. The max-

imum and mean firing rates of the four HDR images are shown in Table 3.4. The firing


Figure 3.16: Images rosette and vinesunset. (a) rosette (bright part). (b) rosette (dark
part: ×100 brighter for display). (c) vinesunset (bright part). (d) vinesunset (dark part:
×10 brighter for display).

rates and the relative errors of the rosette and vinesunset images with sizes of 160 × 180
and 480 × 720 are shown in Figure 3.19 and Figure 3.20, respectively. The time bin size
of these two figures is 5 µs.

From Figure 3.19 and Figure 3.20, we have two useful observations:

• Recall from the throughput estimation in the last section that the minimum throughputs
of TBAR_MEM and TBAR_BASE are 73.0 and 75.8 Mpixels/second. From the
image firing rate plots in Figure 3.19(b) and Figure 3.20(b), rosette has a firing
rate of more than 100 Mpixels/second in the region from 27 ms to 30 ms, while vinesunset


Table 3.3: The mean relative errors introduced by TBAR_BASE for four HDR images with
sizes of 160 × 180 and 480 × 720.

Images with a size of 160 × 180
Image Name           nave           groveC         rosette        vinesunset
Mean Relative Error  1.47 × 10^-5   6.96 × 10^-7   2.84 × 10^-6   2.58 × 10^-7

Images with a size of 480 × 720
Image Name           nave           groveC         rosette        vinesunset
Mean Relative Error  2.39 × 10^-4   1.93 × 10^-4   3.78 × 10^-3   6.71 × 10^-6

Figure 3.17: Reconstructed rosette by the TBAR_BASE imager with 0.378% mean relative
error. (a) Bright part. (b) Dark part (×100 brighter for display).

has a maximum firing rate of only 73.8 Mpixels/second. This indicates that
rosette is likely to have more errors than vinesunset, which is verified by the results in
Table 3.2 and Table 3.3.

• Images with different sizes have different firing rates. The images with a size of 160
× 180 have firing rates roughly 10 times lower than those with a size of 480 × 720.
Since the firing rates of the 160 × 180 images are much lower than the
throughput of TBAR_MEM and TBAR_BASE, the errors are negligible, as
confirmed by the errors in Table 3.2 and Table 3.3.

3.6.3 TBAR_BASE Imager with Throughput Control

From the error calculations in the last subsection, the errors are negligible for moderate-size images (such as the QCIF image format) for both the TBAR_MEM and



Figure 3.18: Image lena. (a) Original image. (b) Image lena with 0.9% mean relative error due to photocurrent shot noise.

Table 3.4: The maximum and mean firing rates of four HDR images with sizes of 160 × 180 and 480 × 720.

Images with size of 160 × 180
  Image Name                         nave   groveC  rosette  vinesunset
  Max Firing Rate (MPixels/second)   9.2    15.6    20.4     7.8
  Mean Firing Rate (MPixels/second)  0.96   0.96    0.96     0.96

Images with size of 480 × 720
  Image Name                         nave   groveC  rosette  vinesunset
  Max Firing Rate (MPixels/second)   89.6   105     190      73.8
  Mean Firing Rate (MPixels/second)  11.4   11.4    11.4     11.4

TBAR_BASE architectures. However, one practical issue arises: how to capture the address pulses generated by the free-running TBAR_BASE imager shown in Figure 3.11. This is not a trivial issue because the duration of a pulse is only about 2 ns, which places very demanding requirements on I/O circuit design [44] and testing. One possible solution is to use an on-chip buffer (registers or SRAM), but this would make the design more complicated and consume more silicon area. Also, considering that the intended test instrument is an Agilent 1693A logic analyzer with a maximum transient clock speed of 200 MHz, the output pulse rate in Figure 3.11 is controlled by an external clock to make testing easier. This brings two advantages:

brings two advantages:



Figure 3.19: The firing rates of rosette (a) with the size of 160 × 180, (b) with the size of 480 × 720, and the relative errors of rosette (c) with the size of 160 × 180, (d) with the size of 480 × 720.

1. The throughput of the imager is controllable. The data generation rate of the imager can be slowed down if the I/O or testing equipment cannot keep up with it. Although this approach may reduce the throughput, the testability justifies it for this prototype design.

2. Since the throughput is controlled by an external clock, the output addresses are actually synchronous with a clock. Therefore, although the internal imager circuits are asynchronous, the output addresses are synchronous. This solution incorporates the asynchronous circuits in a synchronous environment [45].

The current design has been simulated using clock rates ranging from 20 MHz to 66 MHz. If a 20 MHz clock is used, it takes one clock period to output one pixel, no matter where it is located, as shown in Figure 3.21, where the c_latch signal is one phase of a



Figure 3.20: The firing rates of vinesunset (a) with the size of 160 × 180, (b) with the size of 480 × 720, and the relative errors of vinesunset (c) with the size of 160 × 180, (d) with the size of 480 × 720.

two-phase clock. If a 66 MHz clock is used, it takes 1 clock period (15 ns) to output each

pixel firing simultaneously in the same row. For pixels firing simultaneously in different

rows, it takes two periods (30 ns) to output each of them. This is shown in Figure 3.22.

It is not difficult to calculate the throughput. With a 20 MHz clock, the throughput

is fixed at:

$$\mathrm{Throughput} = \frac{1}{T_{cycle}} = 20\ \mathrm{Mpixels/s} \qquad (3.46)$$


Figure 3.21: Output addresses using a 20 MHz clock to control throughput under uniform illumination.

With a 66 MHz clock, the throughput is different. For an M × N array, the maximum throughput occurs when the latches hold N valid data in one cycle:

$$\mathrm{Throughput}_{max} = \frac{N}{N \times T_{cycle} + T_{cycle}} \approx \frac{1}{T_{cycle}} = 66\ \mathrm{Mpixels/s}$$

The minimum throughput occurs when there is only one firing state inside the latch cells each cycle:

$$\mathrm{Throughput}_{min} = \frac{1}{T_{cycle} + T_{cycle}}$$


Figure 3.22: Output addresses using a 66 MHz clock to control throughput.

$$\approx \frac{1}{2 \times T_{cycle}} = 33\ \mathrm{Mpixels/s}$$
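A minimal sketch of these throughput figures (T_cycle is one period of the external control clock; the cost model follows the text: N same-row pixels cost N + 1 cycles, a lone pixel costs 2 cycles):

```python
# Throughput under external clock control, per the formulas above.
def fixed_throughput(f_clk):
    """20 MHz mode: one pixel per clock period, wherever it is."""
    return f_clk  # pixels/second

def bounded_throughput(f_clk, n):
    """66 MHz mode for an M x N array: best case N pixels in N + 1
    cycles, worst case 1 pixel in 2 cycles."""
    t_cycle = 1.0 / f_clk
    t_max = n / (n * t_cycle + t_cycle)   # -> ~1/T_cycle for large N
    t_min = 1.0 / (t_cycle + t_cycle)     # = 1/(2 T_cycle)
    return t_max, t_min

print(fixed_throughput(20e6) / 1e6)        # 20.0 Mpixels/s
t_max, t_min = bounded_throughput(66e6, 1000)
print(round(t_max / 1e6, 1), t_min / 1e6)  # ~66 and 33 Mpixels/s
```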

The errors introduced by 20 MHz and 66 MHz throughput control are estimated from MATLAB simulations on the four 160 × 180 HDR images and are listed in Table 3.5. Compared with the errors in Table 3.2 and Table 3.3, there is a slight increase in errors. However, they are still negligible compared with photon shot noise and fixed pattern noise.

In this research project, a 32 × 32 TBAR_BASE imager with throughput control has been designed and tested due to its feasibility and testability in the university environment. The error simulations discussed in the last several subsections demonstrate that the


Table 3.5: The mean relative errors of four HDR images with size of 160 × 180 under different throughput control clocks.

TBAR_BASE with 20 MHz Throughput Control
  Image Name           nave        groveC      rosette     vinesunset
  Mean Relative Error  6.93×10^−5  1.57×10^−5  3.22×10^−5  3.08×10^−6

TBAR_BASE with 66 MHz Throughput Control
  Image Name           nave        groveC      rosette     vinesunset
  Mean Relative Error  3.64×10^−5  4.17×10^−6  9.67×10^−6  1.10×10^−6

TBAR_MEM architecture suffers from fewer errors for the same image size compared to the TBAR_BASE architecture. The disadvantage of the TBAR_MEM architecture is the large on-chip memory needed. The TBAR_BASE imager does not need on-chip memory, at the expense of a smaller peak throughput. The throughput control circuit of TBAR_BASE forces the address pulses to synchronize with an external clock, which makes testing much easier. Since both TBAR_MEM and TBAR_BASE imagers share the same idea (a time-based imager with asynchronous readout) and the same circuit components (pixel, arbiter and asynchronous control circuits), after a 32 × 32 TBAR_BASE imager is successfully fabricated and tested, it can easily be extended to make a larger imager or a TBAR_MEM imager if the on-chip memory is available.

3.7 Summary

In this chapter, the DR of the photodiode CMOS APS imager is analyzed to show the limitations to DR with the conventional CMOS APS imager architecture. The operations of a number of existing HDR imagers are investigated. A time-based asynchronous readout (TBAR) imager is introduced to achieve HDR, and its principles and operations are described. Two different architectures, TBAR_BASE and TBAR_MEM, are proposed. One unique issue with TBAR imagers, errors introduced by limited throughput, is discussed. The throughput of TBAR_BASE and TBAR_MEM is computed. MATLAB simulations demonstrate that the errors introduced by TBAR imagers are negligible for the four moderate-size (up to 480 × 720) HDR test images. The reason for adding the throughput


control circuit is explained. In the next chapter, the design of a 32 × 32 TBAR_BASE imager with throughput control will be presented.


CHAPTER 4
TBAR IMAGER CIRCUIT DESIGN AND ANALYSIS

4.1 Introduction

In this chapter, a 32 × 32 TBAR_BASE imager is designed using an AMI 0.5 µm CMOS process. While the system architecture was discussed in the last chapter, this chapter focuses on the design at the circuit level. Section 4.2 will discuss the pixel design for the TBAR imagers, which includes a photodiode, a comparator and a digital control circuit. The asynchronous readout circuit design is presented in Section 4.3. Since the imager operates asynchronously, correct timing is crucial; the timing analysis is discussed in Section 4.4. This chapter is concluded in Section 4.5.

4.2 Pixel Design

The block diagram of the fabricated and tested TBAR_BASE imager is shown in Figure 4.1. Each of the blocks will be discussed in detail in this chapter. Compared with Figure 3.9, Figure 4.1 does not have a counter since the test equipment, an Agilent 1693A logic analyzer, has an integrated counter.

There are 32 × 32 pixels in this design. The pixel schematic is shown in Figure 4.2. Inside each pixel, there is a photodiode, a comparator and digital control circuitry. The pixel operation and the digital control circuitry will be explained first, followed by the photodiode and comparator design.

4.2.1 Pixel Operation and Digital Control Circuitry

In this section, the operation of one pixel of the TBAR_BASE imager will be described. The circuit diagram of one pixel located at row m and column n is shown in Figure 4.2. D1 is a photodiode. The comparator is implemented using an opamp (which



Figure 4.1: TBAR_BASE imager block diagram.

will be discussed in detail later). RES and RES~ are global reset signals. JOIN is another global signal, which will be explained soon. The row_request~ signal is the request signal going to the row arbiter. row_sel is the selection signal, coming through the row interface circuit from the row arbiter. latch_row is used to prevent the pixel from firing again after this pixel has put its firing status inside the latch. The cox~ signal puts the pixel firing status into the latch after the row of this pixel is selected by row_sel. The row_request~(m), row_sel(m) and latch_row(m) signals are shared by all pixels in the same row m, while cox~(n) is shared by all pixels in the same column n.

The pixel will always be in one of the four operation states: resetting, integrating,

firing and latching:



Figure 4.2: TBAR_BASE pixel schematic.

4.2.1.1 Resetting

During the reset phase,RES is high andRES∼ is low. The comparator is in a unit-

gain feedback configuration to reset the photodiode. Assuming the amplifier gain is large

enough and the charge injection of switch M16 is negligible, the photodiode is reset at the

voltage ofVreset + Voff , whereVoff is the offset voltage of the comparator. Also, during

the reset phase,JOIN is low and M17 is off. As a result, the output of the comparator is

isolated with reset of the circuit.

M18 is used to speed up transition during firing. During reset, the pixel should not

output a request signal. That means voltage at nodeA should be low (pixel is disabled),

ensured byM13. SinceM13 is stronger thanM18, A is low during reset phase.

M30 is used to reset the latch formed by two inverters:INV 3 andINV 4. This

latch stores information whether the pixel is in thelatching state.

4.2.1.2 Integrating

WhenRES goes low, the photodiodeD1 is to be discharged at the rate propor-

tional to photocurrent and the pixel is in the integration phase. The non-inverting node of


the opamp is connected to the reference voltage Vref. Since the initial voltage at D1 is approximately Vreset, which is higher than Vref, the output of the comparator goes from Vreset to zero. Also, M13 is disabled and M17 is turned on during the integration phase. The output of the comparator is then able to control the voltage at node A.

Attention needs to be paid to the timing of JOIN. JOIN is used to isolate the comparator output from node B during the reset phase. This is necessary because the comparator output voltage is high (equal to Vreset) while node B is low during the reset phase. If M17 is turned on at the same time RES goes low, the voltage at node B will be decided by charge sharing between the charge stored at the output of the comparator and node B. This voltage is a digital '1', as verified by SPICE simulation. However, pixels should not fire immediately after RES goes low. Therefore, M17 needs to be turned on ∆t later than RES goes low, waiting for the output of the opamp to go back to '0', where it should be. The CADENCE SpectreS simulation shows ∆t should be more than 1 µs.

4.2.1.3 Firing

During integration, the photodiode is discharged by the photocurrent. When the

voltage atD1 drops below the reference voltageVref , the comparator output goes to high.

As a result,A is high androw request˜ is pulled down to low. Therefore, this pixel fires

and sends a request signal to the row arbiter.

4.2.1.4 Latching

If the row arbiter selects (by makingrow sel(m) high) this row after receiving the

row request(m) from this pixel, this pixel will sendcox˜ (n) to the column latch. After

the corresponding latch makes sure that the firing status (either ‘1’ or ‘0’) is inside the

latch, it sends alatch row(m) signal to let the pixel withdrawrow request˜ (m) signal.

Meanwhile, the pixel is disabled by the pull-down transistorM32. This pixel can not fire

again until the next reset phase turns offM32.


There are 30 transistors in each pixel. The layout of one pixel is shown in Figure 4.3. Note that the area that is not shaded is light-sensitive. The pixel area is 37.5 µm × 34.8 µm. The photo-sensitive area is 4.8 µm × 5.1 µm. Compared with the pixel size of about 5 µm × 5 µm for state-of-the-art CMOS image sensors, this pixel size is much larger. However, this design is implemented in a 0.5 µm CMOS technology. In [39], there are 37 transistors in one pixel and the pixel size is 9.7 µm × 9.7 µm using a 0.18 µm CMOS technology. Thus, we expect that the TBAR imager pixel size can be reduced to about 9 µm × 9 µm if a 0.18 µm technology is used. Furthermore, it is generally acknowledged that decreasing the pixel size much beyond 5 µm × 5 µm is not needed because of the diffraction limit of the camera lens [46]. Therefore, TBAR imagers will benefit from further scaling in terms of reduced pixel size, which is not the case for conventional APS image sensors because their pixel size has already reached its fundamental limit. In [47], the authors estimated the number of transistors each pixel can hold as the CMOS technology scales, assuming a 5 µm × 5 µm pixel size with a constant fill factor of 30%. For 0.10 µm technology, about 20 analog transistors or 100 digital transistors can be put into one pixel. This is more than enough for TBAR imagers.

4.2.2 Photodiode Design

The photodiode design is not a trivial issue. The optimal design should achieve high

sensitivity, low dark current and low cross-talk. One effective way to increase the photo-

sensitivity is to deepen the photo-conversion region [48]. To reduce dark current, a pinned

photodiode is used to suppress the surface state [13, 14, 15]. Cross-talk can be reduced

by careful layout and adjusting the doping profile [48]. Note that device simulators are

frequently used to determine optimal layouts and doping profiles in designing photodiodes.

For this research project, we do not have the luxury of adjusting the doping profiles

to achieve an optimal photodiode. Instead, native P/N diodes of a standard AMI 0.5 µm CMOS process are used as photodiodes. There are at least three types of photodiodes


Figure 4.3: The layout of one pixel.

available in the AMI 0.5 µm CMOS process: P-substrate/N-diffusion, P-substrate/N-well and P-diffusion/N-well. In this design, a P-substrate/N-well diode, as shown in Figure 4.4, is used.

The total capacitance at the cathode node of the photodiode is the sum of the junction capacitance of the P-substrate/N-well, the gate capacitance of an input PMOS transistor of the comparator (with a width of 1.5 µm and a length of 1.2 µm) and the drain capacitance of the reset PMOS transistor (M16 in Figure 4.2, with a width of 1.5 µm and a length of 0.6 µm):

$$C_{pd} = C_{N\text{-}well} + C_{gate} + C_{drain} \qquad (4.1)$$

The drawn area of the N-well is 3.6 µm × 3.9 µm ≈ 14 µm², giving rise to a junction capacitance CN-well = 0.56 fF from the MOSIS N-well capacitance data (40 aF/µm² between



Figure 4.4: N-well/P-sub photodiode.

N-well and substrate). The input PMOS transistors of the comparator are in the weak inversion region. The gate capacitance can be modeled as the series combination of a gate oxide capacitance Cox and a substrate depletion capacitance [49]. From the MOSIS data, the gate oxide capacitance of a 1.5 µm (W) × 1.2 µm (L) PMOS transistor is 4.3 fF. The depletion capacitance depends on the substrate doping density; for a transistor in weak inversion [49]:

$$C_b = A \sqrt{\frac{q \varepsilon_s N_{ch}}{2 \psi_s}} \qquad (4.2)$$

where A is the area of the gate, εs is the permittivity of silicon (1.04 × 10⁻¹² F/cm), Nch is the channel doping density (1.7 × 10¹⁷ cm⁻³ from MOSIS data) and ψs is the surface potential (about 0.7 V in weak inversion). From the above equation, the depletion capacitance Cb is 3.2 fF. Thus, the gate capacitance is

$$C_{gate} = \frac{C_{ox} C_b}{C_{ox} + C_b} = 1.8\ \mathrm{fF} \qquad (4.3)$$

Also from the MOSIS data, the drain diffusion capacitance of the reset PMOS transistor is about 1.6 fF. Thus the total capacitance is

$$C_{pd} = C_{N\text{-}well} + C_{gate} + C_{drain} = 4.0\ \mathrm{fF} \qquad (4.4)$$
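The hand calculation of Equations 4.1 through 4.4 can be replayed with the quoted MOSIS values (a sketch; all capacitances in fF):

```python
# Photodiode node capacitance, Equations 4.1-4.4 (values in fF).
C_nwell = 0.56                        # 14 um^2 N-well at 40 aF/um^2
C_ox    = 4.3                         # gate oxide of 1.5 um x 1.2 um PMOS
C_b     = 3.2                         # depletion capacitance, Eq. 4.2
C_gate  = C_ox * C_b / (C_ox + C_b)   # series combination, Eq. 4.3
C_drain = 1.6                         # drain diffusion of reset PMOS
C_pd    = C_nwell + C_gate + C_drain  # Eq. 4.4
print(round(C_gate, 1), round(C_pd, 1))  # -> 1.8 4.0
```

The SpectreS-simulated 4.6 fF quoted next is close to this 4.0 fF hand estimate.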


From the CADENCE SpectreS simulation, the total capacitance at the photodiode is about

4.6 fF, which matches well with the hand calculations.

4.2.3 Comparator Design

The operation of one pixel of the TBAR imager has been described in Section 4.2.1.

The only analog part of the TBAR imager is the comparator inside each pixel. The design

of the comparator will be discussed in this section.

As shown in the pixel diagram of Figure 4.2, the comparator output flips when the

voltage at the photodiode drops below the reference voltage. Modern high-speed com-

parators typically have one or two stages of preamplification followed by a track-and-latch

stage [50]. However, this architecture is not quite suitable for the TBAR imager. For a

track-and-latch type of comparator, the internal nodes are reset before entering the latch

(comparison) mode. That means, we need to decide when we want to make a comparison.

This is not a problem for synchronous systems, such as A/D converter applications, where

a clock can indicate when the comparison is to occur. For a TBAR imager, only one com-

parison is needed during one frame time for each pixel, but the time of the comparison is

unknown (in fact, this is the information the TBAR imager intends to capture). In other

words, we do not know when to switch the comparator from the track mode into the latch

mode. Another issue is that this architecture usually needs dozens of transistors, which

would make the pixel unacceptably large.

In this design, an opamp is used as a comparator. For conventional applications,

the main drawback of this approach is the slow response time since the opamp output has

to slew a large amount of output voltage and settles too slowly [19]. However, for TBAR

imager applications, it is not necessarily an issue. The slow response time of the opamp

will cause a delay from the time when the photodiode voltage reaches the reference voltage to the time when the pixel sends the row request signal row_request~. This amount of time delay may be different for different illumination levels (i.e., different photodiode

discharging rates). When the original image (illumination information) is reconstructed


using Equation 3.24, there will be some distortions from the (assumed) linear relationship

between illumination and image numerical value due to the additional opamp delay. Fortu-

nately, these nonlinear distortions are fundamentally different from the errors discussed in

Section 3.6, where pixels firing at the same time (i.e., pixels are under same illumination)

will have output pulses at different times. Assuming that there is no mismatch among these

comparators, the time delays introduced by the slow response of comparators are the same

for all pixels under the same illumination. Therefore, pixels with the same illuminance

still get the same reconstructed numerical values. Note that the nonlinear relationship be-

tween the illuminance and the output image numerical value exists even in conventional

CCD and CMOS APS imagers, because of the nonlinearities of the photodiode capacitance

and readout circuit gain with the signal level. Furthermore, some very nonlinear functions

(such as gamma correction [12]) are often applied to final digital output values from image

sensors to accommodate human vision. In short, the slow response time of the opamp

will introduce nonlinear distortion, and unlike FPN, this nonlinear distortion is acceptable

for imaging applications.

The requirements for this opamp are low power, small area, high enough gain and fast enough response time. Low power is essential because there are tens of thousands of opamps in the imager array. To keep the power consumption of a TBAR imager with 100,000 pixels below 50 mW, the total current for one pixel must be less than 100 nA for a 5 V power supply voltage. Therefore, the MOS transistors of the opamp operate in the subthreshold region. Also, the opamp cannot be too large due to the limited size of the pixel. To reset the voltage of different photodiodes to a fixed value and to effectively cancel the offset of the opamps during autozeroing, enough gain is required. However, too much gain will make the opamp too slow. Therefore, there is a trade-off between the opamp gain and the response time.
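The 100 nA per-pixel budget follows directly from the numbers in this paragraph:

```python
# Per-pixel current budget: 50 mW total, 5 V supply, 100,000 pixels.
power_budget = 50e-3   # W
supply = 5.0           # V
pixels = 100_000
i_per_pixel = power_budget / (supply * pixels)
print(i_per_pixel)     # 1e-07 A, i.e. 100 nA per pixel
```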


4.2.3.1 Opamp Gain Requirement without Considering Offset

For most conventional CMOS APS image sensors, the reset of the photodiode is simply done by an NMOS transistor (shown in Figure 2.4). Fixed pattern noise is reduced by the correlated double sampling (CDS) circuit. However, for TBAR imagers, once the signal passes through the comparator, it is a digital voltage, and it is not possible to apply an analog-voltage CDS circuit to cancel offsets. In contrast, an opamp working in a unity-gain feedback configuration is used to reset the photodiode during the reset phase in this TBAR imager design. In this way, the random offset of the comparator (an opamp in this design), which gives rise to fixed pattern noise, as well as 1/f noise, can be greatly reduced. This technique is also called autozeroing [51].

To see how much gain is needed for the opamp, a perfectly matched opamp is considered first, as shown in Figure 4.5.


Figure 4.5: Using an opamp to reset the photodiode.

Assume the gain of the opamp is A. From the small-signal model of the opamp, we have

$$\Delta v_{out} = A \Delta v_{in} \qquad (4.5)$$

where ∆vout and ∆vin are small deviations from the opamp DC operating points at the output and input, respectively.


From Figure 4.5, the photodiode reset voltage Vph has to satisfy the following equation:

$$V_{ph} - V_{out,dc} = A (V_{reset} - V_{ph}) \qquad (4.6)$$

where Vout,dc is the opamp output voltage when the differential input voltage is zero. From this equation, the photodiode reset voltage can be expressed as

$$V_{ph} = V_{reset} + \frac{V_{out,dc} - V_{reset}}{A + 1} \qquad (4.7)$$

Without offset voltages, the desired photodiode reset voltage Vph should be the same for all pixels. However, from Equation 4.7, Vph is a function of the opamp gain A. If there is a gain variation δA from the nominal gain A0 (i.e., A = A0 + δA), the photodiode reset voltage is

reset voltage is

Vph = Vreset +Vout,dc − Vreset

A0 + δA + 1

≈ Vreset +Vout,dc − Vreset

A0 + 1(1− δA

A0 + 1)

≈ Vreset +Vout,dc − Vreset

A0

− (Vout,dc − Vreset)δA

A20

(4.8)

assuming δA is much smaller than A0 and A0 is much larger than 1. Thus, the reset voltage variation is ∆Vph = (Vout,dc − Vreset)δA/A0². This variation depends on the difference between the opamp output bias voltage Vout,dc and the photodiode reset voltage Vreset, the opamp nominal gain A0, and the opamp gain variation δA/A0.

If the photodiode reset voltage is equal to the opamp output bias voltage, the reset voltage variation is zero from Equation 4.7. However, in practice there may be some difference between the desired photodiode reset voltage and the opamp output bias voltage. To see how much gain is required, assume:

1. The variation of the photodiode reset voltage should be limited to less than 1 mV.

2. Vout,dc − Vreset is less than 1 volt.

3. The opamp gain variation δA/A0 is 0.1.


This gives rise to an opamp gain requirement of about 100, which is not difficult to meet.
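Plugging the three assumptions into the variation expression ∆Vph = (Vout,dc − Vreset)δA/A0² confirms that A0 = 100 just meets the 1 mV limit:

```python
# Reset-voltage variation under the stated worst-case assumptions.
dv = 1.0        # Vout,dc - Vreset, worst case (V)
rel_var = 0.1   # delta_A / A0
A0 = 100.0
delta_vph = dv * rel_var / A0   # = dv * delta_A / A0^2
print(delta_vph)                # 0.001 V, i.e. exactly the 1 mV limit
```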

4.2.3.2 Opamp Gain Requirement Considering Offset

The offset of the CMOS amplifier stems from the threshold voltage and device

W/L ratio mismatch of the source-coupled differential pair, as well as from the mismatch

of load elements [52]. The offset of the opamp might be on the order of 1 mV to 10 mV for

typical CMOS processes [51]. Random mismatches of pixels in an array introduce FPN

for a TBAR imager. An autozeroing circuit is used to effectively reduce the opamp offset

and 1/f noise [51].


Figure 4.6: Opamp offset cancellation using autozeroing circuit.

The autozeroing offset cancellation circuit is shown in Figure 4.6. During the reset phase, the opamp is connected in a closed-loop configuration. It is not difficult to show that the voltage at the photodiode is

$$V_{ph} = V_{reset} + V_{off} + \frac{V_{out,dc} - V_{reset}}{A + 1} - \frac{V_{off}}{A + 1} \qquad (4.9)$$

When the opamp is used as a comparator, switches S1 and S2 are open, while switch S3 is closed. The voltage at the photodiode right after switch S1 opens is

$$V_{ph} = V_{reset} + V_{off} + \frac{V_{out,dc} - V_{reset}}{A + 1} - \frac{V_{off}}{A + 1} + \frac{Q_{inj}}{C} \qquad (4.10)$$

where Qinj is the charge injection from switch S1, and C is the total capacitance at the cathode of the photodiode. From Equation 4.10, we can see that the opamp offset has been stored


on the photodiode, in addition to a residual offset [51]

$$V_{off,res} = -\frac{V_{off}}{A + 1} + \frac{Q_{inj}}{C} \qquad (4.11)$$

Considering that the Voff of the opamp is usually less than 50 mV, a gain of 100 should be enough to reduce the offset voltage below 1 mV. The charge injected into the photodiode comes from the channel charge of the MOS transistor switch S1:

$$Q_{ch} = W L C_{ox} (V_{GS} - V_{th}) \qquad (4.12)$$

where W and L are the gate width and length of the MOS transistor switch S1, respectively. When switch S1 turns off, Qch goes to both sides of the switch through capacitive coupling and resistive conduction. However, in fast switching-off conditions, the percentage of charge injected into either side approaches 50 percent [53].

For a minimum-size PMOS switch transistor with W = 1.5 µm, L = 0.6 µm, Cox = 2.4 fF/µm², and VGS − Vth = 1 V, using the photodiode capacitance of 4.6 fF, the charge-injection-introduced voltage at the photodiode is

$$V_{ph,inj} = \frac{Q_{inj}}{C} = \frac{1}{2} \frac{Q_{ch}}{C} = \frac{1}{2} \frac{W L C_{ox} (V_{GS} - V_{th})}{C} = 0.23\ \mathrm{V}$$

This is quite substantial. With an assumption of 2% mismatch in the charge injection, it will introduce about 5 mV of photodiode reset voltage variation.
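The charge-injection numbers can be replayed from Equation 4.12 and the values above:

```python
# Charge injection of the minimum-size PMOS reset switch (Eq. 4.12).
W, L = 1.5, 0.6     # gate width and length (um)
C_ox = 2.4          # gate oxide capacitance (fF/um^2)
V_ov = 1.0          # V_GS - V_th (V)
C = 4.6             # photodiode capacitance (fF)

Q_ch = W * L * C_ox * V_ov    # channel charge (fC)
V_inj = 0.5 * Q_ch / C        # half the charge lands on the photodiode
print(round(V_inj, 2))        # -> 0.23 (V)
print(round(0.02 * V_inj * 1e3, 1))  # 2% mismatch -> ~4.7 mV variation
```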

4.2.3.3 Opamp Design

From a gain requirement of about 100 and a total current of less than 100 nA,

the opamp in Figure 4.7 can be used. The reason why a simpler opamp is not used is

illustrated in Figure 4.8. Note that the photodiode is connected to the opamp inverting

input node. After the photodiode is reset and the pixel goes into the integrating phase, the



Figure 4.7: Opamp schematic.

output voltage of the opamp, Vout, goes from the reset voltage Vreset to about 0 V. Because the photodiode is floating, due to the gate-drain overlap capacitance Cov of M1, the change in voltage at the photodiode is given by

$$\Delta V_{ph} = \frac{C_{ov}}{C_{ph} + C_{ov}} \Delta V_{out} \qquad (4.13)$$

Since the photodiode capacitance is only a few femtofarads, the overlap capacitance Cov is comparable to Cph. Therefore, ∆Vph is quite significant because of the large swing of ∆Vout. Also, Cov depends on the transistor size. From CADENCE SpectreS simulations, changes of a few hundred mV have been observed. More importantly, any mismatch in the size of M1 will introduce different charge injection. Cadence simulations show a firing time difference of 10% if the M1 length has a mismatch of 25%, given a 200 pA photocurrent. In contrast, the drain of the inverting input transistor M2 in Figure 4.7 is at low impedance, thanks to the diode-connected transistor M4. Therefore, the drain node of the opamp input transistor in Figure 4.7 does not experience the drastic voltage changes seen in Figure 4.8.
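Equation 4.13 can be illustrated numerically; the text gives no exact Cov, so the 0.5 fF here is a hypothetical value chosen only to show that coupling of a few hundred mV is plausible (the ~3 V output swing is likewise an assumed Vreset-to-0 excursion):

```python
# Overlap-capacitance coupling onto the floating photodiode (Eq. 4.13).
C_ov  = 0.5   # fF, hypothetical gate-drain overlap capacitance of M1
C_ph  = 4.6   # fF, photodiode capacitance
dVout = 3.0   # V, assumed opamp output swing from ~Vreset to ~0 V

dVph = C_ov / (C_ph + C_ov) * dVout
print(round(dVph, 2))   # a few hundred mV, as observed in simulation
```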



Figure 4.8: 5-transistor Opamp charge injection.

Gain. To find the gain of this opamp, note first that all transistors are working in the weak inversion region since the total bias current is less than 100 nA. The weak inversion drain current is [52]

$$I_D = \frac{W}{L} q X D_n n_{p0} \exp\left(\frac{k_2}{V_T}\right) \exp\left(\frac{V_{GS} - V_t}{n V_T}\right) \left[1 - \exp\left(\frac{-V_{DS}}{V_T}\right)\right] \qquad (4.14)$$

where X is the thickness of the region in which ID flows, Dn is the diffusion constant for electrons, np0 is the equilibrium concentration of electrons in the substrate, k2 is a constant, and n = (1 + Cjs/Cox), in which Cjs is the depletion-region capacitance and Cox is the oxide capacitance. Let

$$I_S = \frac{W}{L} q X D_n n_{p0} \exp\left(\frac{k_2}{V_T}\right) \exp\left(\frac{-V_t}{n V_T}\right) \qquad (4.15)$$

Equation 4.14 is then simplified to

$$I_D = I_S \exp\left(\frac{V_{GS}}{n V_T}\right) \left[1 - \exp\left(\frac{-V_{DS}}{V_T}\right)\right] \qquad (4.16)$$

Note that unlike in strong inversion, where VDS ≥ Vov = VGS − Vt is needed to enter saturation, the drain current is almost constant (saturated) when VDS is larger than a few VT. Also, from Equation 4.16, the transconductance is

$$g_m = \frac{\partial I_D}{\partial V_{GS}} = \frac{I_D}{n V_T} \qquad (4.17)$$


The gain of the opamp in Figure 4.7 is

$$A = \frac{g_m}{g_{ds,M5} + g_{ds,M7}} = \frac{\dfrac{I_D}{n V_T}}{\dfrac{I_D}{V_{A,M5}} + \dfrac{I_D}{V_{A,M7}}} = \frac{\dfrac{1}{n V_T}}{\dfrac{1}{V_{A,M5}} + \dfrac{1}{V_{A,M7}}} \qquad (4.18)$$

given that the transistor pairs M3/M5, M4/M6, and M7/M8 are of the same size. From CADENCE simulations, n = 1.75, VA,M5 = 9.5 V and VA,M7 = 7.6 V. This gives a gain of 93.
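Equation 4.18 with these simulated values reproduces the quoted gain (VT ≈ 25.9 mV at room temperature is assumed):

```python
# Opamp gain from Eq. 4.18 with the simulated small-signal values.
n = 1.75
V_T = 0.0259           # thermal voltage at room temperature (V)
V_A5, V_A7 = 9.5, 7.6  # Early voltages of M5 and M7 (V)

A = (1 / (n * V_T)) / (1 / V_A5 + 1 / V_A7)
print(round(A))        # -> 93
```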

To find out how large the gain variation is, simulations are done at the design corners. MOS transistor models with threshold voltage variations of ±100 mV are used. The simulations are also done at two different temperatures: 27°C and 100°C. The gain ranges from 97 to 100, within 10% of the nominal value of 93.

Speed. The first pole is set by the time constant at the opamp output. Note that the opamp operates open-loop during the comparison phase. Assuming negligible influence from zeros and higher-order poles, the -3 dB frequency is

\[
f_{-3\,\mathrm{dB}} = \frac{1}{2\pi\tau} = \frac{1}{2\pi} \cdot \frac{g_{ds5} + g_{ds7}}{C_{out}} \tag{4.19}
\]

where τ is the time constant of the open-loop opamp and C_{out} is the total capacitance at the opamp output, which includes the load capacitance and the parasitic capacitances of transistors M5 and M7. The gain-bandwidth product is

\[
GBW = A f_{-3\,\mathrm{dB}}
    = \frac{g_m}{g_{ds,M5} + g_{ds,M7}} \cdot \frac{1}{2\pi} \cdot \frac{g_{ds5} + g_{ds7}}{C_{out}}
    = \frac{g_{m,M1}}{2\pi C_{out}} \tag{4.20}
\]
\[
\phantom{GBW} = \frac{I_{ds,M1}}{2\pi n V_T C_{out}} \tag{4.21}
\]

Table 4.1 shows the simulation results with a load capacitance C_L = 10 fF, where τ = 1/(2π f_{-3dB}) is the time constant of the open-loop opamp.


Table 4.1: The opamp performance.

Total Current   A         f-3dB    τ         GBW       Phase Margin
64 nA           39.6 dB   48 kHz   3.32 µs   3.3 MHz   45°

The slew rate is SR = I_bias/C_out. A C_out of 10 fF gives a slew rate of 3.6 V/µs.
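The τ and slew-rate figures can be cross-checked from the quoted numbers. A sketch; the 36 nA output-branch current is an assumption inferred from the stated 3.6 V/µs slew rate and 10 fF load, not a value given in the text:

```python
import math

f_3db = 48e3                       # Hz, from Table 4.1
tau = 1.0 / (2 * math.pi * f_3db)  # open-loop time constant
c_out = 10e-15                     # F, load C_L = 10 fF
i_branch = 36e-9                   # A (assumed output branch current)
slew_rate = i_branch / c_out       # SR = I_bias / C_out

print(round(tau * 1e6, 2))   # 3.32 (us), matching Table 4.1
print(slew_rate / 1e6)       # 3.6 (V/us)
```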

Offset Cancellation. CADENCE simulations have been done to predict the performance of the offset cancellation. Two comparators with different sizes are used in the pixel schematic in Figure 4.2. One comparator (comp_norm) has perfectly matched input transistors M1 and M2 in Figure 4.7. The other comparator (comp_mis) has a 25% length mismatch between M1 and M2. The firing times of pixels at node A in Figure 4.2 are listed in Table 4.2, with and without autozero cancellation, at different photodiode current levels. For the 25% length mismatch, the errors (in the time domain) decrease from 0.95% to 0.21% and from 0.87% to 0.26% for photocurrents of 200 pA and 2 pA, respectively.

Table 4.2: The firing time of pixels with and without the autozeroing circuit.

                Ipd = 200 pA                      Ipd = 2 pA
                comp_norm   comp_mis   Error      comp_norm   comp_mis   Error
w/o autozero    52.41 µs    51.91 µs   0.95%      5.077 ms    5.033 ms   0.87%
w/ autozero     54.79 µs    54.67 µs   0.21%      5.302 ms    5.288 ms   0.26%
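The time-domain errors in Table 4.2 are the relative firing-time differences between the matched and mismatched comparators; a sketch for the 200 pA column:

```python
# Relative firing-time error between comp_norm and comp_mis, in percent.
def firing_error(t_norm, t_mis):
    return abs(t_norm - t_mis) / t_norm * 100.0

print(round(firing_error(52.41e-6, 51.91e-6), 2))  # ~0.95, w/o autozero
print(round(firing_error(54.79e-6, 54.67e-6), 2))  # ~0.22, w/ autozero (quoted as 0.21%)
```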

When the TBAR_BASE imager was designed and submitted, the author did not yet realize that a high-gain amplifier was unnecessary, and the opamp shown in Figure 4.9 was used. This amplifier achieves high gain (79 dB) at the expense of speed. The CADENCE simulation results in the AMI 0.5 µm technology are listed in Table 4.3, and the offset cancellation effects are shown in Table 4.4.

Table 4.3: The opamp performance.

Total Current   DC Gain   Unity-Gain Bandwidth   Phase Margin
54 nA           79 dB     5.1 MHz                50°


Figure 4.9: Opamp used in the tested TBAR imager. (Two-stage schematic with transistors M1–M9, bias voltages Vp1, Vp2, Vn, inputs Vin+ and Vin−, and output Vout.)

Table 4.4: The firing time of pixels with and without the autozeroing circuit using the opamp in Figure 4.9.

                Ipd = 200 pA                      Ipd = 2 pA
                comp_norm   comp_mis   Error      comp_norm   comp_mis   Error
w/o autozero    54.41 µs    55.39 µs   1.8%       5.168 ms    5.266 ms   1.9%
w/ autozero     59.03 µs    59.00 µs   0.05%      5.629 ms    5.617 ms   0.2%

4.3 Asynchronous Readout Circuit Design

4.3.1 Design Methodology

The system diagram of the TBAR_BASE imager is shown in Figure 4.1. First, note that this is a mixed-signal system: the photodiode and comparator inside each pixel are analog, while the rest of the circuits are digital. Second, the digital circuits operate asynchronously. This prevents adopting the standard digital design methodology, in which a high-level language (Verilog or VHDL) is used to describe and simulate the digital circuits, from which a transistor-level implementation is then synthesized automatically using a library of standard cells. Part of the difficulty lies in the fact that most commercially available tools and libraries target synchronous systems. Although some CAD tools are available from


the academic research community, major EDA vendors have not yet included such tools in

their product portfolios [41].

For this 32 × 32 TBAR_BASE imager, there are about 38k transistors. Full SPICE simulation proved impractical: it takes more than one day to simulate about 20 pixels. For larger arrays such as 128 × 128, the computation would be prohibitive.

Some researchers have called for behavioral modelling of analog and mixed-signal circuits [54], for two reasons: the increasing size of analog systems and the heterogeneity of the waveform types of interest. An example of the latter is the phase-locked loop (PLL), which operates in the frequency domain. For a mixed-signal system, the ability to perform hierarchical simulation is important: in one simulation, different blocks in a system can be simulated using different simulation engines. For example, the digital parts are represented as abstract logic cells and simulated using VHDL, while the analog parts are represented at the transistor level and simulated using SPICE.

The author was not able to design the TBAR imager using the above approach. One reason is the tight schedule; another is that the mixed-signal simulation tools did not work at the time. Instead, after carefully simulating each building block, we simulated the whole imager for about two dozen pixel firings using CADENCE SpectreS. Circuits were verified with extracted parasitic capacitors at the process and temperature corners. Since this is a high-speed digital design, the package parasitic inductors and capacitors were also included in the simulation.

4.3.2 Asynchronous Circuit Design

4.3.2.1 Pixel digital control circuit

The pixel digital control circuit has been drawn in Figure 4.2 and described in

Subsection 4.2.1.

4.3.2.2 Arbiter

Row and column arbiters are used to choose one row/column when many rows/columns make simultaneous requests. The arbiters used in the TBAR imager are from the paper by Boahen [32]. Arbiters with many inputs are built from two-input arbiter cells. A column arbiter is not necessary in the TBAR_MEM architecture, as mentioned in Chapter 3.

Figure 4.10: A two-input arbiter cell circuit: (a) schematic and (b) symbol. (Lower ports: request_in_1/request_in_2 (LIn_1, LIn_2) and select_out_1~/select_out_2~ (LOut_1~, LOut_2~); upper port: request_out (ROut) and select_in (RIn~).)

The schematic and symbol of a two-input arbiter cell are shown in Figure 4.10. A two-input arbiter cell has two lower ports and one upper port. Each lower port has one request_in (LIn in the schematic) input pin and one select_out (LOut~ in the schematic) output pin. The upper port has one request_out (ROut in the schematic) output pin and one select_in (RIn~ in the schematic) input pin.

When at least one of the two active-high request signals, LIn_1 and LIn_2, makes a request, the arbiter cell relays the request to the upper level by driving ROut high, through a modified OR gate at the lower right of the schematic. When this arbiter cell is selected (acknowledged) by RIn~ from the upper level, two situations may arise: if both LIn_1 and LIn_2 are making requests, one of the LOut~ pins is selected arbitrarily by the flip-flop at the left of the schematic; if only one of the LIn pins is making a request, that port is selected. CADENCE SpectreS simulation in the AMI 0.5 µm technology shows that the forward propagation delay (from LIn to ROut) is about 0.4 ns, and the downward delay (from RIn~ to LOut~) is about 0.3 ns. The simulation results are shown in Figure 4.11.

Figure 4.11: Time delay simulation of a 2-input arbiter cell.

Row and column arbiters have many inputs. They are built from two-input arbiter cells using the binary-tree architecture shown in Figure 4.12. Requests are relayed up the tree, while the selection is relayed down the tree. At any time, only one input port (at the left side of the tree) is selected, provided there is a request from that port. When there are requests from many input ports, only one port is selected arbitrarily.
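The request/grant behavior of the tree can be sketched behaviorally (this models the request and select logic only, not Boahen's transistor circuit; the left-first tie-break below is a deterministic stand-in for the flip-flop's arbitrary choice):

```python
def arbitrate(requests):
    """Return the index of one requesting port, or None if none request."""
    n = len(requests)
    if n == 1:
        return 0 if requests[0] else None
    left, right = requests[: n // 2], requests[n // 2:]
    # Two-input cell: the OR of the child requests is relayed upward;
    # when acknowledged, exactly one requesting side is granted.
    if any(left):                      # deterministic tie-break in this sketch
        return arbitrate(left)
    if any(right):
        return n // 2 + arbitrate(right)
    return None

print(arbitrate([0, 1, 0, 1, 0, 0, 0, 1]))  # 1: exactly one requester granted
print(arbitrate([0] * 8))                   # None: no requests pending
```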

Figure 4.12: An eight-input arbiter tree. (Inputs request_1 through request_8 on the left, with corresponding outputs select_1~ through select_8~.)

4.3.2.3 Row Interface

The row interface is used to control whether a row can be selected by the row arbiter. It is located between the pixel array and the row arbiter, as shown in Figure 4.1. There is one row interface circuit for each row. The circuit is shown in Figure 4.13. It is implemented using an asymmetric C-element described by Martin [55]. This is a state-holding operator with two inputs. One input, arbiter_sel~, is the selection signal coming from the arbiter. The other input, row_sel_enable, comes from the latch control circuit. There are three possibilities:

• If arbiter_sel~ is inactive, row_sel is inactive, regardless of the state of row_sel_enable.

• If both arbiter_sel~ and row_sel_enable are active, the output row_sel is active.

• If arbiter_sel~ is active and row_sel_enable is inactive, row_sel keeps the previous state.
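The three cases above can be captured in a small behavioral model of the asymmetric C-element (a sketch of the state-holding function only, not the transistor circuit; signals are modeled active-high, with arbiter_sel taken as the inverse of arbiter_sel~):

```python
class RowInterface:
    """Asymmetric C-element behavior of the row interface (Figure 4.13)."""

    def __init__(self):
        self.row_sel = 0  # held state

    def update(self, arbiter_sel, row_sel_enable):
        if not arbiter_sel:
            self.row_sel = 0      # not selected by the arbiter: inactive
        elif row_sel_enable:
            self.row_sel = 1      # both inputs active: row selected
        # else: arbiter_sel active, enable inactive -> hold previous state
        return self.row_sel

ri = RowInterface()
print(ri.update(1, 1))  # 1: selected
print(ri.update(1, 0))  # 1: previous state held
print(ri.update(0, 1))  # 0: deselected
```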

Figure 4.13: Row interface circuit. (Inputs arbiter_sel and row_sel_enable; outputs row_sel and arbiter_request; row_request~ comes from the pixel row.)

4.3.2.4 Latch and Latch Control

Latch and latch control circuits have been described by K. Boahen [32]. These circuits are used to increase the throughput of the address-event readout originally proposed by Mahowald [56]. The latch cell and latch control circuit schematics are shown in Figure 4.14 [32]. There is one latch cell for each column and one latch control circuit for the whole array. The functions of some important control signals are:

• b controls whether data on cox~ are allowed to enter the latch cells.

• g~ indicates whether there are still valid data inside the latch cells.

• col_request_n is the column arbiter request signal; col_sel_n is the acknowledge signal coming through the throughput control circuit from the column arbiter.

• lp monitors whether there are data on the cox~ lines.

• row_sel_enable goes to the row interface circuit (shown in Figure 4.13). It allows a new row selection signal to enter the pixel array.


Figure 4.14: Latch cell and latch control circuits. (a) Latch cell, with signals cox~, lp, b, bx, g~, col_request_n, col_select_n, RES~, and internal nodes G and H. (b) Latch control circuit, with signals lp, g~, b, latch_data_ready, row_address_trigger, and row_sel_enable.

• latch_data_ready becomes valid after the data on the cox~ lines have entered the latch. This signal, together with row_sel, disables the pixels that have fired in the selected row.

• row_address_trigger lets the row address encoder update the output row address.

Also note that:

1. cox~ comes from the pixel (Figure 4.2) and is shared by all pixels in the same column.

2. col_request_n and col_select_n are the request and select signals for the column arbiter tree, respectively.


3. lp, b, and g~ are shared by all latch cells.

The operation of this asynchronous circuit has been described by Boahen [32] using the Communicating Hardware Processes (CHP) description language [57]. It will become clearer after the timing analysis in Section 4.4.

4.3.2.5 Throughput Control

The throughput control circuit, shown in Figure 4.15, is used to control how fast the readout circuit can run. TG1 and TG2 are two transmission gates. Since they are controlled by two non-overlapping clocks φ1 and φ2, at any time at most one transmission gate is open. col_arbiter_sel is the selection signal from the column arbiter, and col_select_n goes to the latch cell. col_encoder_in is the input to the column address encoder. By using two transmission gates, we can control how fast the selection signals from the column arbiter go into the latch cells simply by adjusting the clock speed.

Figure 4.15: Throughput control circuit. (Transmission gates TG1 and TG2, gated by the non-overlapping clocks φ1/φ1~ and φ2/φ2~, connect col_arbiter_sel to col_select_n and col_encoder_in; RES resets the path.)
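The non-overlapping gating of TG1 and TG2 can be illustrated with a simple two-phase clock generator (a sketch; the period and dead-time values are arbitrary illustrative choices):

```python
def two_phase_clock(n_samples, period=8, dead=1):
    """Generate phi1/phi2 so that at most one phase is high at any sample."""
    phi1, phi2 = [], []
    half = period // 2
    for t in range(n_samples):
        p = t % period
        phi1.append(1 if p < half - dead else 0)            # phi1 high early
        phi2.append(1 if half <= p < period - dead else 0)  # phi2 high late
    return phi1, phi2

p1, p2 = two_phase_clock(16)
assert all(not (a and b) for a, b in zip(p1, p2))  # never both gates open
print(p1[:8])  # [1, 1, 1, 0, 0, 0, 0, 0]
print(p2[:8])  # [0, 0, 0, 0, 1, 1, 1, 0]
```

Slowing the clock directly slows the rate at which column selections reach the latch cells, which is the throughput-control mechanism described above.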


4.3.2.6 Address Encoder and I/O

A standard digital address encoder and an I/O circuit are used in this design. Their

descriptions are omitted.

4.4 Timing Analysis

In this section, CADENCE SpectreS simulation results of the 32 × 32 TBAR_BASE imager are presented to give a better understanding of the operation of the asynchronous readout circuit. The circuit is analyzed under two situations: imager reset and pixels firing simultaneously. We summarize the timing analysis using a finite state machine (FSM) model.

4.4.1 Reset

When RES is active, the TBAR imager is reset. Below are some important signal states in each building block.

4.4.1.1 Pixel and Row Arbiter

RES = 0 → A = 0 → row_request~ = 1 → row_sel = 0

4.4.1.2 Latch Cell and Latch Control

• all cox~ = 1 → lp = 0 → latch_data_ready = 0

• RES~ = 0 → col_request_n = 0 →(column arbiter)→ col_select_n = 0 → g~ = 1

• (lp = 0) AND (g~ = 1) → row_sel_enable = b = 1

In summary, during the reset phase, row_sel_enable = 1 allows the row interface to pass the row selection signal from the row arbiter, and b = 1 means the firing states can go into the latch cells.


4.4.2 Pixels Firing Simultaneously

In this simulation, there is a 32 × 32 pixel array. Assuming all pixels at rows (1, 5, 9, 13, 17, 21, 25, 29) and columns (1, 5, 9, 13, 17, 21, 25, 29) have exactly the same photocurrent, the following events happen:

• Row Arbitration.

Pixels located at rows (1, 5, 9, 13, 17, 21, 25, 29) and columns (1, 5, 9, 13, 17, 21, 25, 29) fire simultaneously → row_request~(1, 5, 9, 13, 17, 21, 25, 29) ↓ →(row arbiter)→ the row arbiter picks one row arbitrarily. In this example, row 29 is selected (i.e., row_sel_29 ↑) → the fired pixels in row 29 send their states to the latch cells: cox~(1, 5, 9, 13, 17, 21, 25, 29) ↓.

Figure 4.16 shows this process: row_request_29~ ↓ → row_sel_29 ↑ → cox~(1, 5, 9, 13, 17, 21, 25, 29) ↓. From this figure, we can see that the delay from row_request~ ↓ to row_sel_29 ↑ is about 5.5 ns, and the delay from row_sel_29 ↑ to cox~(1, 5, 9, 13, 17, 21, 25, 29) ↓ is 0.5 ns.

• Latch Cell and Column Arbitration.

{ at least one cox~ is active; b = 1 } → gxo(1, 5, 9, 13, 17, 21, 25, 29) = 1 →(column arbiter)→ fx(1) = 1

where fx(1) is one of the column arbiter selection signals, picked arbitrarily by the column arbiter. This is shown in Figure 4.17.

• Latch Control (when there is at least one valid data value inside the latches).

{ cox~ = 0 → lp = 1; b = 1 } →(g~ = 0)→ b = 0 (row_sel_enable = 0) → { latch_data_ready = 1 → fired pixels are disabled; row_address_trigger ↑ → row address output is enabled }

Figure 4.16: Waveforms of row arbitration.

In summary, when data (cox~ signals) are already inside the latch cells (i.e., g~ is low) and there are valid data on the cox~ lines (i.e., lp is high), the latch control circuit takes three actions:

1. It drives b (row_sel_enable) low. This forbids a new row selection by disabling the row interface circuit.

2. It disables the fired pixels of the selected row by enabling the latch_data_ready signal.

3. It generates a row_address_trigger signal to output the row address.

The above timing is shown in Figure 4.18.

• Disabling Pixels.


Figure 4.17: Waveforms of column arbitration.

When the data are inside the latch, the pixels that have put data into the latch cells are disabled, as shown in Figure 4.19. Following the above case, row 29 has been selected and is now being disabled:

{ latch_data_ready = 1 →(row_sel_29 = 1)→ latch_row_29 = 1; if node A in pixel(29, j) = 1 } → pixel(29, j) is reset → { row_request~(29) = 1 →(row arbiter)→ row_sel_i = 0; all cox~ = 1 → lp = 0 }

Since all the fired pixels in the previously selected row have been disabled, the pixel request signals, cox~, are high.

• Throughput Control


Figure 4.18: Waveforms of latch control circuit.

By using two non-overlapping clock signals to control data transport between the column arbiter and the latch cells, throughput control is made possible. The column address is actually synchronous with the control clock, as shown in Figure 4.20. Note that ci_down and c_latch in Figure 4.20 are the two non-overlapping clock signals φ1 and φ2 in Figure 4.15, respectively.

• Latch Control (when all the valid data inside the latch cells have output their addresses).

When the last datum (in this example, at column 29) in the latch cells has output its column address, gxi_29 (col_select_29 in Figure 4.14(a)) goes down right after c_latch goes down. This causes g~ to rise; as a result, b (and row_sel_enable) goes high. Thus, the row interface circuit is open and a new row selection is made possible. Up to this point, the TBAR readout circuit has finished outputting the addresses of the fired pixels in one selected row. This process is shown in Figure 4.21.

Figure 4.19: Disabling pixels.

4.4.3 Finite State Machine Model

From the above timing analysis and simulation results, we have seen that the asynchronous readout circuit is quite complicated. More insight can be gained by modelling the operation of the readout circuit as a finite state machine (FSM), as shown in Figure 4.22. There are four states in this FSM: reset, standby, data-in-latch, and column-address-output.

1. Reset (State I)

When RES = 1, the TBAR imager is reset. The important signal states have been described in Section 4.4.1.

2. Standby (State II)


Figure 4.20: Column Address Output.

When RES goes down, the TBAR imager enters the standby state, waiting for pixel firing. Since there is no firing, the important signal states are the same as in State I.

3. Data-in-latch (State III)

After one of the pixels fires, the TBAR imager goes from State II to State III. In this state, the fired pixels in the row selected by the row arbiter send their firing states to the latch cells. The column arbiter begins to process the data inside the latch cells (i.e., it outputs the addresses of the cells with valid data). As described above, three actions are taken by the latch control circuit:

(a) Making b (row_sel_enable) low. This forbids a new row selection by disabling the row interface circuit.


Figure 4.21: Latch control signals after all data are output.

(b) Disabling the fired pixels of the selected row by enabling the latch_data_ready signal.

(c) Generating a row_address_trigger signal to output the row address.

4. Column-address-output (State IV)

The active latch_data_ready disables the fired pixels, and all cox~ signals are cleared. The column arbiter and column address encoder keep outputting the addresses of the valid data inside the latch cells at a rate controlled by the throughput control circuit. When all valid data in the latch cells have been processed (g~ = 1), row_sel_enable becomes valid and a new row selection (if there is any) can be started. The TBAR imager goes back to the standby state (State II).
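The four states and their transitions can be sketched as a minimal next-state function (state names follow the text; the signal defaults and string encoding are illustrative):

```python
RESET, STANDBY, DATA_IN_LATCH, COL_ADDR_OUT = "I", "II", "III", "IV"

def next_state(state, res=0, firing=0, latch_data_ready=0, g_bar=1):
    """One step of the readout FSM in Figure 4.22 (behavioral sketch)."""
    if res:
        return RESET                       # RES = 1 forces State I
    if state == RESET:
        return STANDBY                     # RES released
    if state == STANDBY:
        return DATA_IN_LATCH if firing else STANDBY
    if state == DATA_IN_LATCH:
        return COL_ADDR_OUT if latch_data_ready else DATA_IN_LATCH
    # COL_ADDR_OUT: back to standby once all latched data are read (g~ = 1)
    return STANDBY if g_bar else COL_ADDR_OUT

s = RESET
for step in ({}, {"firing": 1}, {"latch_data_ready": 1}, {"g_bar": 1}):
    s = next_state(s, **step)
print(s)  # "II": back in Standby after one full readout cycle
```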


Figure 4.22: Finite state machine model of the asynchronous readout circuit. (States: I Reset, II Standby, III Data-in-Latch, IV Column-address-output; transitions on RES, pixel firing, latch_data_ready = 1, and g~.)

4.5 Summary

In this chapter, the design and analysis of the TBAR_BASE imager are discussed. An N-well photodiode is used to convert photons into charge carriers. The only analog part in the imager is a comparator. The reasons for using an opamp as a comparator are explained, and the gain requirement of this opamp is analyzed in terms of the reduction of fixed pattern noise (FPN). The readout circuit operates in asynchronous mode. Each part of the asynchronous readout circuit (arbiters, row interface, latch cells, latch control, and throughput control circuits) is discussed, and simulation results are presented. The readout circuit analysis is completed with a finite state machine (FSM) model.


CHAPTER 5
TBAR IMAGER TESTING AND CHARACTERIZATION

5.1 Introduction

Techniques for the testing and characterization of CCD and CMOS APS image sensors have been developed over many years. The characterization of CCD image sensors is described in Theuwissen's book [1]. Excellent descriptions of the testing and characterization of CMOS APS image sensors can be found in Blanksby [12], Gamal et al. [58], and Yang et al. [59]. However, although several time-based CMOS image sensors have been reported, testing and characterization techniques for time-based imagers are not well established, to the best of the author's knowledge.

There are two challenges in testing and characterizing the fabricated TBAR imager. As described in Chapter 3, the TBAR imager represents illuminance information in the time domain. This is quite different from CCD and CMOS APS image sensors, in which illuminance is represented in the voltage domain. As a result, some common testing and characterization techniques based on the voltage domain cannot be directly applied to the TBAR imager. The other challenge comes from the testing equipment. Since this TBAR imager is the first CMOS imager built in our lab, we lack some specialized testing equipment. This makes the characterization of the TBAR imager incomplete; however, the best estimates are given whenever possible.

The remainder of the chapter is organized as follows. Section 5.2 describes some of the testing equipment we used and the testing setup. The characterization of the TBAR imager is presented in Section 5.3. The chapter is summarized in Section 5.4.


5.2 Testing Setup

The 32 × 32 TBAR image sensor is fabricated in an AMI 0.5 µm CMOS process through MOSIS. The fab ID is T25Q-BL. The die micrograph is shown in Figure 5.1. The total die area is 3 mm × 3 mm. The chip is packaged in a 40-pin DIP. The pin connections are shown in Figure 5.2. The DIP-40 package from MOSIS has substantial pin parasitic inductance, ranging from 3.2 nH to 8.2 nH. To reduce the simultaneous switching noise (SSN) [60] caused by fast output-driver switching, signals with a high rate of current change use the lower-inductance pins (pins 8–16 and 25–36). Also, the analog, digital, and pad circuitries use different power supply pins to avoid corrupting the sensitive analog signals.

Figure 5.1: Micrograph of the TBAR_BASE imager.

The major testing equipment is an Agilent PC-hosted 1693A logic analyzer. It has

three measurement modes: a state mode with 200 MHz clock rate and 256K memory, an


Figure 5.2: Pin connections of the TBAR imager (top view). (40-pin DIP; the pins include separate analog, digital, and pad supplies VDD_A/GND_A, VDD_D/GND_D, VDD_pad/GND_pad; bias and reference inputs vb_p1, vb_p2, vb_n, v_ref, v_reset; control signals RES, RES~, EN_1, EN_2, clock, φ1, φ2, JOIN; handshake outputs g~, row_sel_enable, row_address_trigger; and address outputs row_A0–row_A4 and col_A0–col_A4.)

asynchronous timing mode with a 400 MHz/800 MHz sampling rate and 512K/1M-deep memory (full/half channel), and a transitional timing mode with 200 MHz and 256K memory. The power supply is an Agilent 6651A 8 V/50 A DC power supply. The reference voltage Vref and reset signal RES are generated by a National Instruments 8-channel NI-6713 PC-hosted D/A PCI card. It is a 12-bit D/A with a maximum update rate of 1 MS/s. The FIFO buffer can hold 16K samples. The nominal output voltage range is from -10 V to 10 V, which gives an LSB of 5 mV. The RMS analog output noise is 200 µV.

The testing setup is shown in Figure 5.3. A small printed circuit board (PCB) was built to hold the TBAR imager, voltage regulators, bias generators, and connectors. The 1693A logic analyzer captures the address outputs and pixel firing times from the TBAR imager. A simple MATLAB program reconstructs images using the captured data.


Figure 5.3: Experimental setup for testing and characterization of the TBAR imager. (A light source and lens illuminate the TBAR imager on the test PCB, which carries the voltage regulators and bias generators; an Agilent 1693A logic analyzer captures the address outputs, an Agilent 6651A DC power supply provides power, and a workstation with the NI-6713 D/A PCI card generates the reference voltage.)

5.3 Testing and Characterization

In this section, the performance of the TBAR imager is characterized in terms of power, dark current, FPN, temporal noise, conversion gain, and dynamic range. Quantum efficiency and sensitivity are not measured because we do not have a monochromator. Without an integrating sphere to generate uniform illumination, we are also unable to estimate FPN.

5.3.1 Power Consumption

Since the TBAR imager has separate power supplies for the analog, digital, and pad circuits, the power consumption of these three components can be measured separately. The current is measured using a Keithley 617 programmable electrometer. At 30 frames/second, the analog current is 0.24 mA and the digital circuit (without pads) consumes 0.38 mA. The total current is 0.62 mA (without pads). With a 5 V power supply, this 32 × 32 TBAR imager consumes 3.1 mW at 30 frames/second.


5.3.2 Dark Current

As explained in Chapter 2, dark current is caused by charge carrier generation and collection even when there is no illumination at all. To measure the dark current, the TBAR imager is completely covered with a ceramic lid. The smallest measured firing time is 24.7 s. Given a voltage swing of 2 V, the largest dark discharge rate is 81 mV/s. Note that the measurement is done at room temperature (about 25 °C).
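The 81 mV/s figure follows directly from the swing and the minimum dark firing time; a quick check with the measured values:

```python
swing = 2.0                 # V, comparator voltage swing
t_min_dark = 24.7           # s, smallest firing time under dark conditions
rate = swing / t_min_dark   # worst-case dark discharge rate
print(round(rate * 1e3))    # 81 (mV/s)
```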

Figure 5.4: Dark current generation at the photodiode. (Cross-section showing the n-well/p-substrate photodiode D1 and the p+/n-well source diffusion of reset transistor M16 at node Vph, with the electric field directions E of the two junctions.)

We also find that many pixels do not fire at all under the dark condition. This was quite surprising at first. After carefully examining the circuits around the photodiode, we believe it is caused by the leakage current of a PMOS transistor. As shown in Figure 4.2, PMOS transistor M16 is used to reset photodiode D1 during the reset phase. After reset, M16 is turned off and the cathode of D1 is floating. From Figure 5.4, there are two diodes at the Vph node: one is the n-well/p-substrate photodiode D1, and the other is the p+/n-well diffusion diode forming the source of PMOS transistor M16. The directions of the electric fields of these two diodes are also shown in Figure 5.4. While the thermally generated charge carriers from photodiode D1 discharge the capacitor at the cathode of the photodiode, the thermally generated charge carriers from the M16 diffusion diode (i.e., the leakage current of the diffusion diode) charge the capacitor. For some pixels, if the leakage current of the diffusion diode is larger than the photodiode dark current, the voltage at the cathode of the photodiode, Vph, will never drop. This is why some pixels never fire when they are completely dark.

Figure 5.5: Dark current distribution. (32 × 32 map, linearly scaled for display.)

The dark current distribution in the array is shown in Figure 5.5. Note that the dark current is linearly scaled from 0 to 255 for display. A spatially random distribution pattern is observed in this figure. This is not surprising, because the dark current is largely due to generation centers (i.e., traps), which tend to be randomly distributed.

5.3.3 Dynamic Range

Given a fixed reference voltage, the dynamic range of the TBAR imager is determined by the ratio of the longest integration time to the shortest integration time, as in Equation 3.26. The dark current determines the longest integration time, which is more than 20 seconds from the measurements. The designed shortest integration time is 1 µs. This gives a theoretical dynamic range of 146 dB. The measured shortest integration time is 1.61 µs, because we do not have a strong enough light source.
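Both dynamic-range figures follow from the firing-time ratio. A sketch: 20 s and 1 µs are the theoretical limits quoted above, while 255 ms and 1.61 µs are the extreme firing times reported later for the Figure 5.6 scene:

```python
import math

def dynamic_range_db(t_max, t_min):
    """Dynamic range as the longest/shortest integration-time ratio, in dB."""
    return 20 * math.log10(t_max / t_min)

print(round(dynamic_range_db(20.0, 1e-6)))       # 146 dB, theoretical
print(round(dynamic_range_db(255e-3, 1.61e-6)))  # 104 dB, measured scene
```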

Figure 5.6: A high dynamic range picture taken by the TBAR imager (without postprocessing). (Panels (A)–(F) show the same 32 × 32 image scaled ×1, ×10, ×100, ×1000, ×10000, and ×100000.)

Since we do not have a proper mount to hold the lens, we simply place a lens on top of the TBAR imager. It turns out that this optical setup limits the dynamic range we can measure. In order to take a high-dynamic-range picture, there must be both very bright pixels and very dark pixels in the sensor array. However, since the lens is not tightly mounted on the TBAR imager, light leaks into the pixel array when a very bright light source is present. As a result, we are unable to get a very dark pixel when a strong light source is present. Nevertheless, we still manage to capture a 104 dB picture, which is far beyond the dynamic range of conventional CCD and CMOS APS image sensors. The scene is an incandescent lamp covered by black tape with a hole in the middle, as shown in Figure 5.6. To display the dark part of the image, the data in Figure 5.6 (B), (C), (D), (E), and (F) are scaled 10, 100, 1000, 10000, and 100000 times relative to the data in Figure 5.6


Figure 5.7: Histogram equalization of Figure 5.6.

(A). The histogram-equalized image is displayed in Figure 5.7. Also note that the smallest firing time is 1.61 µs and the largest firing time is 255 ms.

Figure 5.8 shows two image samples taken by the TBAR imager. These two images have dynamic ranges of 57 dB and 58 dB, respectively.

5.3.4 Temporal Noise

For conventional CCD and CMOS APS image sensors, temporal noise is defined as the output voltage fluctuation under stable illumination. For the TBAR imager, since the illumination is represented in the time domain, the firing time fluctuations are measured to estimate the temporal noise. The major limitation on the accuracy of this measurement is the stability of the light source: if the light source is unstable, we cannot tell how much of the firing time fluctuation is contributed by the circuit temporal noise and how much by the light source fluctuation. A 3 V DC battery-powered flashlight is used as the light source. There is no observable battery voltage fluctuation on the oscilloscope.


Figure 5.8: Two image samples ‘UF’ and ‘Lamp’ (without postprocessing).

For a pixel (m,n), the mean and variance of the firing time are estimated as follows:

\bar{t}(m,n) = \frac{1}{K} \sum_{k=1}^{K} t_k(m,n) \qquad (5.1)

\sigma^2_{t,(m,n)} = \frac{1}{K-1} \sum_{k=1}^{K} \left( t_k(m,n) - \bar{t}(m,n) \right)^2 \qquad (5.2)

where K is the number of samples (frames) and t_k(m,n) is the firing time of pixel (m,n) in the k-th frame. The relative temporal noise is defined as the ratio of the firing-time standard deviation to the mean firing time:

tn(m,n) = \frac{\sigma_{t,(m,n)}}{\bar{t}(m,n)} \qquad (5.3)
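For clarity, the per-pixel statistics of Equations 5.1–5.3 can be computed directly from a stack of captured firing-time frames. The sketch below (Python with NumPy; the array shapes and synthetic data are illustrative, not from the actual test setup) mirrors the measurement described later, where 45 frames of a 32 × 32 array were captured:

```python
import numpy as np

def temporal_noise(frames):
    """Relative temporal noise per pixel (Equations 5.1-5.3).

    frames: array of shape (K, M, N), the firing time of every pixel
    in each of K captured frames.
    """
    t_mean = frames.mean(axis=0)        # Eq. 5.1: mean firing time
    t_var = frames.var(axis=0, ddof=1)  # Eq. 5.2: unbiased variance
    return np.sqrt(t_var) / t_mean      # Eq. 5.3: sigma_t / t_mean

# Illustrative data: 45 frames, ~75 ms mean firing time, 0.345 ms jitter
rng = np.random.default_rng(0)
frames = rng.normal(75e-3, 0.345e-3, size=(45, 32, 32))
print(temporal_noise(frames).mean())    # roughly 0.0046 (0.46%)
```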

Figure 5.10 shows the temporal noise of different pixels in the image ‘flash light’ shown in Figure 5.11. One interesting observation is that although the image ‘flash light’ has a drastically wide photocurrent range of 79 dB (the firing times range from 28.9 µs to 241.77 ms), the temporal noise is almost constant for all pixels. The explanation for


Figure 5.9: Two histogram equalized image samples ‘UF’ and ‘Lamp’.

this is that the dominant noise source is the photocurrent shot noise. Because the reference voltage of the comparators is constant, all pixels accumulate the same number of photoelectrons. This gives rise to the same noise, assuming the photoelectron count follows a Poisson distribution (Equation 2.6). Figure 5.12 shows the firing time of one pixel at position (5, 5) across different frames. The memory depth of the logic analyzer limits the number of frames to 45. For this particular pixel, the mean firing time is 74.96 ms and the standard deviation is 345.1 µs, which gives a relative temporal noise of 0.460%.

5.3.5 Conversion Gain

Conversion gain characterizes the signal generated per photoelectron. It indicates

the sensitivity of the sensor. An accurate determination of the conversion gain also enables

a determination of the photodiode’s quantum efficiency. The conversion gain is defined by

g = \frac{v}{n} \qquad (5.4)

where v is the signal voltage at the photodiode and n is the number of photoelectrons.

An excellent reference on determining the conversion gain using a statistical model is B. Beecken and E. Fossum [61]. If the dominant noise source is the photon shot noise, the conversion gain can be determined by using the fact that shot noise obeys the Poisson distribution. From Equation 5.4, the mean and the variance of the signal


Figure 5.10: The temporal noise of different pixels in the image ‘flashlight’.

voltage at the photodiode are

\bar{v} = g \bar{n} \qquad (5.5)

\sigma_v^2 = g^2 \sigma_n^2 \qquad (5.6)

For a Poisson distribution, the variance is simply the mean. Thus,

\sigma_n^2 = \bar{n} \qquad (5.7)

Since only the signal voltage can be directly measured for CCD and CMOS APS image sensors, the conversion gain is represented as a function of the mean and the variance of the signal voltage. From Equations 5.5–5.7,

g = \frac{\bar{v}}{\bar{n}} = \frac{\bar{v}}{\sigma_n^2} = \frac{\bar{v}}{\sigma_v^2 / g^2} = g^2 \frac{\bar{v}}{\sigma_v^2} \qquad (5.8)

From Equation 5.8, the conversion gain is

g = \frac{\sigma_v^2}{\bar{v}} \qquad (5.9)
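Equation 5.9 is easy to check numerically. In the sketch below (the conversion-gain value and sample counts are made up for illustration), n is drawn from a Poisson distribution and v = g·n, so the variance-to-mean ratio of v recovers g:

```python
import numpy as np

# Check Equation 5.9: for Poisson-distributed photoelectron counts n and
# signal voltage v = g*n, var(v)/mean(v) = g^2*var(n)/(g*mean(n)) = g.
rng = np.random.default_rng(1)
g_true = 42e-6                          # assumed conversion gain, V/e-
n = rng.poisson(20_000, size=200_000)   # photoelectron counts per sample
v = g_true * n                          # signal voltages
g_est = v.var(ddof=1) / v.mean()        # Eq. 5.9
print(f"estimated gain: {g_est * 1e6:.1f} uV/e-")  # close to 42
```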

Since the output from the TBAR imager is a firing time instead of a voltage, it is desirable to represent the conversion gain as a function of the firing-time statistics. Before we


Figure 5.11: Image ‘flash light’. (a) Bright part. (b) Dark part (1000 times brighter for display).

start the derivation, note that the conversion gain is also determined by the total capacitance at the cathode of the photodiode. If n electrons cause a voltage of v on a capacitance C_{ph}, the conversion gain is

g = \frac{v}{n} = \frac{1}{n} \frac{Q_n}{C_{ph}} = \frac{nq}{n C_{ph}} = \frac{q}{C_{ph}} \qquad (5.10)

where Q_n is the total charge at the photodiode and q is the charge of an electron.

Assuming the comparator noise and the readout circuit delay can be neglected, the firing time is the time when the voltage at the photodiode reaches the reference voltage V_{ref}. This assumption is valid when there are enough photon-generated charge carriers at the photodiode so that the photocurrent shot noise is the dominant noise source. At this time, there are N photoelectrons accumulated at the cathode of the photodiode. We note that

N = \frac{Q}{q} = \frac{(V_{reset} - V_{ref}) C_{ph}}{q} \qquad (5.11)

Because the photoelectron generation and collection process is a Poisson process, it is shown in [18] that the time distance t = t_N − t_0 from a fixed point t_0 (the time right after


Figure 5.12: The firing time of one pixel.

reset in this case) to the time t_N when the N-th electron arrives has an Erlang distribution:

f_N(t) = \frac{\lambda^N}{(N-1)!} t^{N-1} e^{-\lambda t} \qquad (5.12)

where \lambda is the Poisson parameter. The mean \bar{t} and variance \sigma_t^2 of the Erlang distribution are given by

\bar{t} = \frac{N}{\lambda} \qquad (5.13)

\sigma_t^2 = \frac{N}{\lambda^2} \qquad (5.14)

From the above two equations, the relationship between the temporal noise \sigma_t / \bar{t} and the total number of charge carriers N is found:

\frac{\sigma_t^2}{\bar{t}^2} = \frac{N / \lambda^2}{N^2 / \lambda^2} = \frac{1}{N} \qquad (5.15)

Note that \sigma_t / \bar{t} is the temporal noise defined in the last section. Since the charge capacity N is the same for all pixels, assuming no mismatch between pixels, Equation 5.15 implies that all pixels should have the same temporal noise regardless of their different firing times.
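This prediction is easy to verify numerically. The sketch below (Python; the photocurrent values are arbitrary, chosen only to represent very different illumination levels) draws firing times from the Erlang distribution of Equation 5.12 and confirms that the relative spread is 1/√N independent of the photocurrent:

```python
import numpy as np

# Equation 5.15: the N-th photoelectron arrival time is Erlang(N, lambda),
# so sigma_t / t_mean = 1/sqrt(N) regardless of the photocurrent lambda.
rng = np.random.default_rng(2)
N = 47_260                              # measured charge capacity
predicted = 1.0 / np.sqrt(N)            # ~0.46%
for lam in (1e6, 1e9):                  # dim and bright pixels, e-/s
    t = rng.gamma(shape=N, scale=1.0 / lam, size=20_000)  # Erlang draws
    measured = t.std(ddof=1) / t.mean()
    print(f"lambda = {lam:.0e} e-/s: {measured:.5f} (predicted {predicted:.5f})")
```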


The temporal noise measurements in the last section confirm this fact: despite the large difference in firing times, the ‘flash light’ image shows almost the same temporal noise at each pixel.

From Equation 5.15, the charge capacity can be found. Using the measurement data in the last section, the charge capacity is

N = \frac{1}{(\sigma_t / \bar{t})^2} = \frac{1}{(0.460\%)^2} = 47{,}260 \text{ electrons} \qquad (5.16)

The total capacitance at the cathode of the photodiode is

C_{ph} = \frac{Q_{ch}}{V_{reset} - V_{ref}} = \frac{qN}{V_{reset} - V_{ref}} = \frac{1.6 \times 10^{-19} \times 47{,}260}{2} = 3.78 \text{ fF} \qquad (5.17)

We calculated this capacitance to be 4.0 fF in Chapter 4, so the measurement matches the hand calculation well. From Equation 5.10, the conversion gain is

g = \frac{q}{C_{ph}} = \frac{1.6 \times 10^{-19}}{3.78 \text{ fF}} = 42\ \mu\text{V/electron} \qquad (5.18)

This concludes the discussion of conversion gain.
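The measurement chain of Equations 5.16–5.18 reduces to a few lines of arithmetic. The sketch below reproduces the numbers above from the measured relative temporal noise, taking V_reset − V_ref = 2 V from Equation 5.17:

```python
q = 1.6e-19              # electron charge, C (value used in the text)
tn = 0.460e-2            # measured relative temporal noise
v_swing = 2.0            # Vreset - Vref, volts

N = 1.0 / tn ** 2                  # Eq. 5.16: charge capacity, electrons
C_ph = q * N / v_swing             # Eq. 5.17: photodiode capacitance, F
g = q / C_ph                       # Eq. 5.18: conversion gain, V/electron

print(f"N = {N:.0f} e-, C_ph = {C_ph * 1e15:.2f} fF, g = {g * 1e6:.0f} uV/e-")
# N = 47259 e-, C_ph = 3.78 fF, g = 42 uV/e-
```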

5.3.6 Fixed Pattern Noise

Fixed pattern noise (FPN) is the spatial variation of the output pixel values under uniform illumination, caused by device mismatches across an image sensor. Both CCD and CMOS image sensors have FPN, but FPN is particularly problematic for CMOS APS image sensors because each pixel may have a different signal path.

To estimate FPN, the pixel output values of a sensor array of M × N pixels are measured under uniform illumination. The measurement is repeated K times to obtain K


frames. An average frame is calculated by averaging those K frames to reduce the effect of temporal noise on the output values. For one pixel located at position (m,n), its temporal average firing time is

\bar{t}(m,n) = \frac{1}{K} \sum_{k=1}^{K} t_k(m,n) \qquad (5.19)

where k denotes the k-th frame. Given this average frame, the spatial mean and variance can be estimated as

\bar{t}_s = \frac{1}{MN} \sum_{n=1}^{N} \sum_{m=1}^{M} \bar{t}(m,n) \qquad (5.20)

\sigma_{t,s}^2 = \frac{1}{MN-1} \sum_{n=1}^{N} \sum_{m=1}^{M} \left( \bar{t}(m,n) - \bar{t}_s \right)^2 \qquad (5.21)

The FPN in the time domain can then be defined as

FPN_t = \frac{\sigma_{t,s}}{\bar{t}_s} \qquad (5.22)
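As with the temporal noise, Equations 5.19–5.22 translate directly into code. A minimal sketch follows (Python; the 2% pixel-to-pixel mismatch in the synthetic data is purely illustrative):

```python
import numpy as np

def fpn_time_domain(frames):
    """Time-domain FPN (Equations 5.19-5.22).

    frames: array of shape (K, M, N), firing times captured under
    (ideally uniform) illumination.
    """
    t_avg = frames.mean(axis=0)   # Eq. 5.19: temporally averaged frame
    t_s = t_avg.mean()            # Eq. 5.20: spatial mean
    sigma_s = t_avg.std(ddof=1)   # Eq. 5.21: spatial standard deviation
    return sigma_s / t_s          # Eq. 5.22

# Synthetic check: 2% pixel-to-pixel mismatch plus small temporal noise
rng = np.random.default_rng(3)
pixel_means = 50e-3 * (1 + rng.normal(0, 0.02, size=(32, 32)))
frames = pixel_means + rng.normal(0, 0.1e-3, size=(45, 32, 32))
print(f"FPN = {fpn_time_domain(frames):.2%}")   # close to 2%
```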

Unfortunately, we do not have a uniform light source at the time this dissertation is being written. As a result, the author is unable to estimate the FPN of the TBAR imager rigorously. However, to get a rough idea of the FPN performance, we measure the FPN under room-light conditions without a lens on top of the TBAR imager. The averaged frame is shown in Figure 5.13. The measured FPN of this image is 2.02%. Again, since this image is not taken under uniform illumination, the author emphasizes that this number is a rough estimate at best. Figure 5.14 shows the shape of a wire sitting on top of the TBAR imager without using a lens. There is no noticeable FPN in this image.

5.4 Summary

In this chapter, the testing setup is first described. A 32× 32 TBAR imager is

characterized in terms of power consumption, dark current, dynamic range, temporal noise

and conversion gain. Because the signal is in the time domain, some characterization

techniques used in this chapter differ from those usually used for conventional voltage-domain CCD and CMOS APS image sensors. The characterization is incomplete


Figure 5.13: A temporally averaged image under room-light conditions without using a lens. This image is scaled up to exaggerate the FPN.

due to the lack of equipment to characterize FPN, quantum efficiency and sensitivity. The

performance of the TBAR imager is summarized in Table 5.1.

Table 5.1: The performance of the TBAR imager.

Technology: 0.5 µm AMI standard CMOS
Supply voltage: 5 V
Transistors per pixel: 30
Array size: 32 × 32
Pixel size: 37.5 µm × 34.8 µm
Photosensitive area: 4.8 µm × 5.1 µm
Power dissipation at 30 frames/s: 3.1 mW (without pad power)
Dark current (room temperature): ≤ 81 mV/s; ≤ 1.25 nA/cm²
Conversion gain: 42 µV/electron
Dynamic range (one pixel, measured): 140 dB
Dynamic range (array, measured): 104 dB (limited by the optical equipment)


Figure 5.14: A wire sitting above the sensor without using a lens.


CHAPTER 6
CONCLUSION

6.1 Summary

In this research project, we investigated the fundamental limitations on the dynamic range of solid-state image sensors. Two time-based asynchronous readout (TBAR) CMOS imager architectures are proposed to improve the dynamic range: TBAR_BASE, without on-chip memory, and TBAR_MEM, with higher readout throughput at the expense of on-chip memory. The TBAR imager performance is analyzed and simulated at the system level to demonstrate its high dynamic range capability. A 32 × 32 TBAR_BASE imager was designed in a 0.5 µm CMOS technology. Time-domain testing and characterization techniques were developed because the common voltage-domain characterization techniques cannot be directly applied. The imager performs as expected, achieving high dynamic range at moderate power.

It is the author's belief that the TBAR_BASE imager is poised to be a strong competitor among high dynamic range CMOS image sensors, especially for small to moderate imager sizes. The author believes that the TBAR_BASE imager offers one of the best trade-offs in terms of performance, power, and memory requirements. For example, the FPN of logarithm-based high dynamic range imagers is not satisfactory even with FPN cancellation techniques; multi-sampling techniques usually need a large amount of memory and have high power consumption; and many other time-based high dynamic range imagers have overly complicated image reconstruction procedures.

Our TBAR imagers represent and reconstruct images in a natural and simple way: the time interval from pixel reset to firing. For moderate-size images, the TBAR_BASE architecture provides high enough throughput to accurately reconstruct images without


using on-chip memory. The power is low because each pixel needs to be read out only once per frame. The major drawback is that there are about 30 transistors inside each pixel. However, as CMOS technology continues to scale, we expect this problem to be alleviated.

6.2 Future Directions

It has taken the author two years to develop this idea and build a 32 × 32 TBAR_BASE imager. The author sincerely hopes this work will be continued to make the TBAR imager a serious contender in the CMOS image sensor market. Possible research directions are:

1. The throughput of the TBAR imager needs to be improved. It is a fundamental limiting factor in applying the TBAR imager to large images (e.g., more than 1 megapixel). The throughput can be improved at both the architecture level and the circuit level.

2. We have built a 32 × 32 TBAR_BASE imager using a 0.5 µm CMOS technology. However, to better demonstrate the capabilities of the TBAR imager, a larger imager (e.g., 128 × 128) needs to be designed and fabricated in a more advanced technology (e.g., 0.18 µm CMOS).

3. One of the contributing factors to the large number of transistors inside a pixel is the autozeroing circuit, which is intended to cancel FPN. Because we do not have a uniform light source, we were unable to measure how effective the autozeroing circuit is at FPN cancellation. This issue needs to be further addressed.

4. In the author's experience, asynchronous circuit design is tedious and error-prone, largely because the design tools we have mainly target synchronous system design. However, some asynchronous circuit design tools exist in the academic research community. It may become imperative to incorporate automatic design tools as the imager size increases.

5. Because we do not have some specialized optical equipment in our lab, the author was unable to measure FPN, quantum efficiency, and sensitivity. This work should be done in the future to make the characterization complete.

6. The outputs of a TBAR imager are drastically different from those of CCDs and CMOS APS imagers: the pixel positions are reported sequentially according to their light intensities. In other words, the image is sorted by light intensity. This feature makes certain signal processing tasks, e.g., histogram equalization and object tracking, easier. This topic may need to be further investigated.


REFERENCES

[1] A. Theuwissen. Solid-State Imaging with Charge-Coupled Devices. Kluwer Academic Publishers, New York, 1995.

[2] G. Weckler. Operation of p-n junction photodetectors in a photon integration mode.IEEE Journal of Solid-State Circuits, 2(1):65–69, January 1967.

[3] R. Dyck and G. Weckler. Integrated arrays of silicon photodetectors image sensing.IEEE Transaction on Electron Devices, 15(4):196–201, April 1968.

[4] P. Fry, P. Noble, and R. Rycroft. Fixed pattern noise in photomatrices.IEEE Journalof Solid-State Circuits, 5(3):250–254, May 1970.

[5] E. Fossum. CMOS imager sensors: Electronic camera-on-a-chip.IEEE Transactionon Electron Devices, 44(10):1689–1698, October 1997.

[6] S. Smith, J. Hurwitz, M. Torrie, D. Baxter, A. Murray, P. Likoudis, A. Holmes,M. Panaghiston, R. Henderson, S. Anderson, P. Denyer, and D. Renshaw. A single-chip CMOS306 × 244 NTSC video camera and a descendant coprocessor device.IEEE Journal of Solid-State Circuits, 33(12):2104–2111, December 1998.

[7] M. Loinaz, K. Singh, A. Blanksby, D. Inglis, K. Azadet, and B. Ackland. A 200-mw,3.3-v, CMOS color camera IC producing352× 288 24-b video at 30 frames/s.IEEEJournal of Solid-State Circuits, 33(12):2092–2102, December 1998.

[8] J. Hurwitz, P. Denyer, D. Baxter, and G. Townsend. A 800k-pixel colour CMOSsensor for consumer still camera. InProceedings of the SPIE Electronic ImagingConference, volume 3019, pages 115–124, April 1997.

[9] J. Hurwitz, M. Panaghiston, K. Findlater, R. Henderson, T. Bailey, A. Holmes, andB. Paisley. A 35mm film format CMOS image sensor for camera-back applications.In ISSCC Digest of Technical Papers, pages 48–49, 2002.

[10] B. Streetman and S. Banerjee.Solid State Electronic Devices. Prentice Hall, UpperSaddle River, New Jersey, 2000.

[11] J. S. Lim. Two-Dimensional Signal and Image Processing. Prentice-Hall, UpperSaddle River, New Jersey, 1990.

[12] A. Blanksby. Colour Cameras in Standard CMOS. PhD thesis, University of Ade-laide, South Australia, 1998.

[13] R. Guidash, T. Lee, P. Lee, D. Sackett, C. Drowley, M. Swenson, L. Arbaugh, R. Holl-stein, F. Shapiro, and S. Domer. A 0.6µm CMOS pinned photodiode color imagertechnology. InIEDM Technical Digest, pages 927–929, 1997.

[14] I. Inoue, H. Nozaki, H. Yamashita, T. Yamaguchi, H. Ishiwata, H. Ihara, R. Miya-gawa, H. Miura, N. Nakamura, Y. Egawa, and Y. Matsunaga. New low voltage buriedphoto-diode for CMOS imager. InIEDM Technical Digest, pages 883–886, 1999.

115

Page 123: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

116

[15] K. Yonemoto, H. Sumi, R. Suzuki, and T. Ueno. A CMOS image sensor with asimple FPN-reduction technology and a hole accumulated diode. InISSCC Digest ofTechnical Papers, pages 102–103, 2000.

[16] A. Blanksby and M. Loinaz. Performance analysis of a color CMOS photogate imagesensor.IEEE Transaction on Electron Devices, 47(1):55–64, January 2000.

[17] R. Nixon, S. Kemeny, B. Pain, C. Staller, and E. Fossum.256 × 256 CMOS activepixel sensor camera-on-a-chip.IEEE Journal of Solid-State Circuits, 31(12):2046–2050, December 1996.

[18] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1991.

[19] D. Johns and K. Martin.Analog Integrated Circuit Design. John Wiley & Sons, NewYork, 1996.

[20] Y. Degerli, F. Lavernhe, P. Magnan, and J. Farre. Analysis and reduction of signalreadout circuitry temporal noise in CMOS image sensors for low-light levels.IEEETransaction on Electron Devices, 47(5):949–962, May 2000.

[21] Y. Degerli, F. Lavernhe, P. Magnan, and Jean Farre. Non-stationary noise responseof some fully differential on-chip readout circuits suitable for CMOS image sen-sors.IEEE Trans. on Circuits and Systems-II: Analog and Digital Signal Processing,46(12), 1999.

[22] O.Yadid-Pecht and E. Fossum. Wide intrascene dynamic range CMOS APS usingdual sampling.IEEE Transaction on Electron Devices, 44(10):1721–1723, October1997.

[23] P. Greenspun.Making Photographs. [Online.] http://www.photo.net/photo/tutorial/,accessed 09/30/2002.

[24] C. Mead.Analog VLSI and Neural Systems. Addison-Wesley, Boston, Massachusetts,1989.

[25] M. Loose, K. Meier, and J. Schemmel. A self-calibrating single-chip CMOS camerawith logarithmic response.IEEE Journal of Solid-State Circuits, 36(4):586–596,April 2001.

[26] S. Kavadias, B. Diericks, D.Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts. Alogarithmic response CMOS image sensor with on-chip calibration.IEEE Journal ofSolid-State Circuits, 35(8):1146–1152, August 2000.

[27] S. Decker, R. D. McGrath, K. Brehmer, and C. G. Sodini. A256 × 256 CMOSimaging array with wide dynamic range pixels and column-parallel digital output.IEEE Journal of Solid-State Circuits, 33(12):2081–2091, December 1998.

[28] D. Yang, A. E. Gamal, B. Fowler, and H. Tian. A640×512 CMOS image sensor withultra-wide dynamic range floating-point pixel-level ADC.IEEE Journal of Solid-State Circuits, 34(12):1821–1834, December 1999.

[29] W. Yang. A wide-dynamic-range, low-power photosensor array. InISSCC Digest ofTechnical Papers, pages 230–231, 1994.

Page 124: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

117

[30] L. McIlrath. A low-power low-noise ultrawide-dynamic-range CMOS imager withpixel-parallel A/D conversion.IEEE Journal of Solid-State Circuits, 36(5):846–853,May 2001.

[31] E. Culurciello, R. Etienne-Cummings, and K. Boahen. Arbitrated address event rep-resentation digital image sensor. InISSCC Digest of Technical Papers, pages 92–93,2001.

[32] K. Boahen. A throughput-on-demand address-event transmitter for neuromorphicchips. InAdvanced Research in VLSI, pages 72–86, 1999.

[33] V. Brajovic and T. Kanade. A sorting image sensor. InProc. of the 1996 IEEE Int.Conf. on Robotics and Automation, pages 1638–1643, Minneapolis, Minnesota, April1996.

[34] Y. Ni, F. Devos, M. Boujrad, and J. H. Guan. Histogram-equalization-based adaptiveimage sensor for real-time vision.IEEE Journal of Solid-State Circuits, 32(7):1027–1036, July 1997.

[35] X. Guo, M. Erwin, and J. Harris. Ultra-wide dynamic range CMOS imager using pixel-threshold firing. In Proceedings of World Multiconference on Systemics, Cybernetics and Informatics, volume 15, pages 485–489, Orlando, Florida, July 2001.

[36] X. Guo, M. Erwin, and J. Harris. A time-based asynchronous readout (TBAR) CMOSimager for high-dynamic range applications. InProceedings of IEEE Sensors, pages712–717, Orlando, Florida, June 2002.

[37] S. Mendis, S. Kemeny, R. Gee, B. Pain, Q. Kim, C. Staller, and E. Fossum. CMOS active pixel image sensors for highly integrated imaging systems. IEEE Journal of Solid-State Circuits, 32(2), February 1997.

[38] R. Kummaraguntla.Time Domain Quantization CMOS Image Sensor System Designand Architecture. Master’s thesis, University of Florida, Gainesville, FL, 2001.

[39] S. Kleinfelder, S. Lim, X. Liu, and A. El Gamal. A 10 000 frames/s CMOS digitalpixel sensor. IEEE Journal of Solid-State Circuits, 36(12):2049–2059, December2001.

[40] F. Andoh, H. Shimamoto, and Y. Fujita. A digital pixel image sensor for real-timereadout.IEEE Transaction on Electron Devices, 47(11):2123–2127, November 2000.

[41] C. H. Van Berkel, M. B. Josephs, and S. M. Nowick. Applications of asynchronous circuits. Proceedings of the IEEE, 87(2):223–233, February 1999.

[42] H. Shimizu, K. Ijitsu, H. Akiyoshi, K. Aoyama, H. Takatsuka, K. Watanabe,R. Nanjo, and Y. Takao. A 1.4-ns access 700-MHz 288-kb SRAM macro with ex-pandable architecture. InISSCC Digest of Technical Papers, pages 190–191, 1999.

[43] P. Debevec.Recovering High Dynamic Range Radiance Maps from Photographs.[Online.] http://www.debevec.org/Research/HDR/, accessed 09/30/2002.

[44] S. Dabral and T. Maloney.Basic ESD and I/O design. Wiley, New York, 1998.

[45] J. Kessels and P. Marston. Designing asynchronous standby circuits for a low-power pager. Proceedings of the IEEE, 87(2):257–267, February 1999.

Page 125: A TIME-BASED ASYNCHRONOUS READOUT CMOS IMAGE SENSOR

118

[46] Hon-Sum Wong. Technology and device scaling considerations for CMOS imagers.IEEE Transaction on Electron Devices, 43(12):2131–2142, December 1996.

[47] A. Gamal, D. Yang, and B. Fowler. Pixel level processing-why, what and how? InProceedings of the SPIE Electronic Imaging Conference, volume 3650, pages 2–13,March 1999.

[48] M. Furumiya, H. Ohkubo, Y. Muramatsu, S. Kurosawa, and Y. Nakashiba. Highsensitivity and no-cross-talk pixel technology for embedded CMOS image sensor. InIEDM Technical Digest, pages 701–704, 2000.

[49] Y. Tsividis. Operation and Modeling of the MOS Transistor. McGraw-Hill, NewYork, 1999.

[50] R. Yukawa. A CMOS 8-bit high-speed A/D.IEEE Journal of Solid-State Circuits,20(3):775–779, June 1985.

[51] C. Enz and G. Temes. Circuit techniques for reducing the effects of op-amp imper-fections: Autozeroing, correlated double sampling, and chopper stabilization.Pro-ceedings of the IEEE, 84(11):1584–1614, 1996.

[52] P. Gray, P. Hurst, S. Lewis, and R. Meyer.Analysis and Design of Analog IntegratedCircuits. 4th Ed. John Wiley & Sons, New York, 2001.

[53] J. Shieh, M. Patil, and B. Sheu. Measurement and analysis of charge injection inMOS analog switches.IEEE Journal of Solid-State Circuits, 22(2):277–281, April1987.

[54] A. Abidi. Behavioral modeling of analog and mixed-signal ICs. In Proceedings of the IEEE 2001 Custom Integrated Circuits Conference, pages 443–450, 2001.

[55] A. Martin. Synthesis of asynchronous VLSI circuits.Caltech Computer ScienceTechnical Report CS-TR-92-03, California Institute of Technology, Pasadena, CA,1991.

[56] M. Mahowald.VLSI analogs of neuronal visual processing: A synthesis of form andfunction. PhD thesis, California Institute of Technology, Pasadena, CA, 1992.

[57] A. Martin. Programming in VLSI: From communicating processes to delay-insensitive circuits.Caltech Computer Science Technical Report CS-TR-89-01, Cali-fornia Institute of Technology, Pasadena, CA, 1989.

[58] A. Gamal, B. Fowler, H. Min, and X. Liu. Modeling and estimation of FPN com-ponents in CMOS image sensors. InProceedings of the SPIE Electronic ImagingConference, volume 3301, pages 168–177, San Jose, California, January 1998.

[59] D. Yang, H. Tian, B. Fowler, X. Liu, and A. Gamal. Characterization of CMOS imagesensors with nyquist rate pixel level ADC. InProceedings of the SPIE ElectronicImaging Conference, volume 3650, pages 52–62, San Jose, California, March 1999.

[60] R. Senthinathan and J. Prince. Application specific CMOS output driver circuit de-sign techniques to reduce simultaneous switching noise.IEEE Journal of Solid-StateCircuits, 28(12):1383–1388, December 1993.

[61] B. Beecken and E. Fossum. Determination of the conversion gain and the accuracy ofits measurement for detector elements and arrays.Applied Optics, 35(19):3471–3477,July 1996.


BIOGRAPHICAL SKETCH

Xiaochuan Guo was born in Hanzhong, China. He earned his Bachelor of Science

in applied physics from the Shanghai JiaoTong University (Shanghai, China) in July 1994.

After working as a design engineer in Shanghai for two years, he began graduate studies in

electrical engineering at the Shanghai JiaoTong University in September 1996. In August

1998, he transferred to the University of Florida in Gainesville, Florida. Since then, he

has been a Ph.D. student in the Computational NeuroEngineering Laboratory in the De-

partment of Electrical and Computer Engineering at the University of Florida. His present

interests are in the areas of analog and mixed-signal integrated circuit design.
