Geethanjali College of Engineering and Technology
Faiz Ahmed – (11R11D7001) M.tech - ECE
3-D Transistor
Contents

1) Introduction
   a) Transistor
   b) History
2) Moore's Law
   a) Prediction
   b) Present
3) Transistor Technologies
   a) 1st Transistor – Thermionic Triode (1907)
   b) 1st Semiconductor Transistor – Point-Contact Transistor (1947)
   c) Other New Semiconductor Technologies for Transistors
      i) Traditional Planar Transistor
      ii) Ballistic Transistor
      iii) Carbon Nanotube FET
4) 3-D Transistor
5) Merits of 3-D Transistor
6) Conclusion
7) References
1) Introduction
Transistor

A transistor is a semiconductor device used to amplify and
switch electronic signals and power. It is composed of a
semiconductor material with at least three terminals for
connection to an external circuit. A voltage or current applied
to one pair of the transistor's terminals changes the current
flowing through another pair of terminals. Because the
controlled (output) power can be higher than the controlling
(input) power, a transistor can amplify a signal.
The transistor is the fundamental building block of modern
electronic devices, and is ubiquitous in modern electronic
systems. Following its development in the early 1950s, the
transistor revolutionized the field of electronics and paved the
way for smaller and cheaper radios, calculators, and computers,
among other things.
The thermionic triode, a vacuum tube invented in 1907,
propelled the electronics age forward, enabling amplified radio
technology and long-distance telephony. The triode, however,
was a fragile device that consumed a lot of power. Physicist
Julius Edgar Lilienfeld filed a patent for a field-effect transistor
(FET) in Canada in 1925, which was intended to be a solid-state
replacement for the triode.
From November 17, 1947 to December 23, 1947, John Bardeen
and Walter Brattain at AT&T's Bell Labs in the United States,
performed experiments and observed that when two gold point
contacts were applied to a crystal of germanium, a signal was
produced with the output power greater than the input. The
transistor is the key active component in practically all modern
electronics. Many consider it to be one of the greatest
inventions of the 20th century.
Advantages
The key advantages that have allowed transistors to replace
their vacuum-tube predecessors in most applications are:

- Small size and minimal weight, allowing the development of miniaturized electronic devices.
- Highly automated manufacturing processes, resulting in low per-unit cost.
- Lower possible operating voltages, making transistors suitable for small, battery-powered applications.
- No warm-up period: no cathode heaters are required after power application.
- Lower power dissipation and generally greater energy efficiency.
- Higher reliability and greater physical ruggedness.
- Extremely long life; some transistorized devices have been in service for more than 50 years.
- Complementary devices available, facilitating the design of complementary-symmetry circuits, something not possible with vacuum tubes.
- Insensitivity to mechanical shock and vibration, thus avoiding the problem of microphonics in audio applications.
Limitations
- Silicon transistors typically do not operate at voltages higher than about 1000 volts (SiC devices can be operated as high as 3000 volts). In contrast, vacuum tubes have been developed that can operate at tens of thousands of volts.
- High-power, high-frequency operation, such as that used in over-the-air television broadcasting, is better achieved with vacuum tubes because of the higher electron mobility in a vacuum.
- Silicon transistors are much more vulnerable than vacuum tubes to the electromagnetic pulse generated by a high-altitude nuclear explosion.
- Vacuum tubes create a distortion, the so-called tube sound, that some people find more pleasing to the ear.
History
Very-large-scale integration (VLSI) is the process of
creating integrated circuits by combining thousands of
transistors into a single chip. VLSI began in the 1970s when
complex semiconductor and communication technologies
were being developed.
During the 1920s, several inventors attempted devices that
were intended to control the current in solid-state diodes and
so convert them into triodes. Success, however, had to wait
until after World War II. The wartime effort to improve silicon
and germanium crystals for use as radar detectors led to
improvements both in fabrication and in the theoretical
understanding of the quantum-mechanical states of carriers in
semiconductors, and after the war the scientists who had been
diverted to radar development returned to solid-state device
development. With the invention of the transistor at Bell Labs
in 1947, the
field of electronics took a new direction, shifting from
power-hungry vacuum tubes to solid-state devices.
Another problem was the size of circuits. A complex circuit,
such as a computer, depended on speed: if its components were
too large or the wires interconnecting them too long, the
electric signals could not travel fast enough through the
circuit, making the computer too slow to be effective.
Jack Kilby at Texas Instruments found a solution to this
problem in 1958. Kilby's idea was to make all the components
and the chip out of the same block (monolith) of semiconductor
material. When his colleagues returned from vacation, Kilby
presented the idea to his superiors and was allowed to build a
test version of his circuit. By September 1958 he had the first
integrated circuit ready [1]. Although it was fairly crude and
had some problems, the idea was groundbreaking. By making all
the parts out of the same block of material and adding the
metal needed to connect
them as a layer on top of it, there was no more need for
individual discrete components. No more wires and
components had to be assembled manually. The circuits
could be made smaller and the manufacturing process
could be automated. From here the idea of integrating all
components on a single silicon wafer came into existence,
which led to small-scale integration (SSI) in the early 1960s,
medium-scale integration (MSI) in the late 1960s, and
large-scale integration (LSI) and VLSI in the 1970s and 1980s,
with tens of thousands of transistors on a single chip (later
hundreds of thousands, and now millions).
Developments
The first semiconductor chips held two transistors each.
Subsequent advances added more and more transistors,
and, as a consequence, more individual functions or
systems were integrated over time. The first integrated
circuits held only a few devices, perhaps as many as ten
diodes, transistors, resistors and capacitors, making it
possible to fabricate one or more logic gates on a single
device. This era is now known retrospectively as small-scale
integration (SSI). Improvements in technique led to devices
with hundreds of logic gates, known as medium-scale integration
(MSI), and then to large-scale integration (LSI), i.e. systems
with at least a thousand logic gates. Current technology has
moved far past this mark: today's microprocessors have many
millions of gates and billions of individual transistors.
As microprocessors become more complex due to technology
scaling, microprocessor designers have encountered several
challenges which force them to think beyond the design plane,
and look ahead to post-silicon:
Power usage/Heat dissipation – As threshold voltages have
ceased to scale with advancing process technology,
dynamic power dissipation has not scaled proportionally.
Maintaining logic complexity when scaling the design
down only means that the power dissipation per area will
go up. This has given rise to techniques such as dynamic
voltage and frequency scaling (DVFS) to minimize overall
power.
Process variation – As photolithography techniques tend
closer to the fundamental laws of optics, achieving high
accuracy in doping concentrations and etched wires is
becoming more difficult and prone to errors due to
variation. Designers now must simulate across multiple
fabrication process corners before a chip is certified ready
for production.
Stricter design rules – Due to lithography and etch issues
with scaling, design rules for layout have become
increasingly stringent. Designers must keep ever more of
these rules in mind while laying out custom circuits. The
overhead for custom design is now reaching a tipping
point, with many design houses opting to switch to
electronic design automation (EDA) tools to automate
their design process.
Timing/design closure – As clock frequencies tend to scale
up, designers are finding it more difficult to distribute and
maintain low clock skew between these high frequency
clocks across the entire chip. This has led to a rising
interest in multicore and multiprocessor architectures,
since an overall speedup can be obtained by lowering the
clock frequency and distributing processing.
First-pass success – As die sizes shrink (due to scaling), and
wafer sizes go up (to lower manufacturing costs), the
number of dies per wafer increases, and the complexity of
making suitable photomasks goes up rapidly. A mask set
for a modern technology can cost several million dollars.
This non-recurring expense deters the old iterative
philosophy involving several "spin-cycles" to find errors in
silicon, and encourages first-pass silicon success. Several
design philosophies have been developed to aid this new
design flow, including design for manufacturing (DFM),
design for test (DFT), and Design for X.
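The motivation for DVFS in the first challenge above follows directly from the dynamic-power relation P = αCV²f: lowering voltage and frequency together cuts power superlinearly. A minimal sketch, using made-up illustrative values for the activity factor and switched capacitance:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Dynamic switching power of CMOS logic: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# Illustrative (assumed) values: activity factor 0.1, 1 nF switched capacitance.
nominal = dynamic_power(0.1, 1e-9, 1.0, 2.0e9)   # 0.2 W at 1.0 V, 2.0 GHz
scaled  = dynamic_power(0.1, 1e-9, 0.8, 1.5e9)   # 0.096 W at 0.8 V, 1.5 GHz
print(scaled / nominal)                          # 0.48: more than half the power saved
```

Because voltage enters squared, a modest voltage reduction accounts for most of the saving even before the frequency drop.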
Moore’s Law
Prediction
In 1965, Gordon Moore sketched out his prediction of the pace of silicon technology. According to Moore's Law, the number of transistors on a chip roughly doubles every two years, so feature sizes keep shrinking. Decades later the prediction still holds, driven largely by sustained investment in process technology and manufacturing.
His prediction, popularly known as "Moore's Law," states that transistor density on integrated circuits doubles about every two years. Intel continues to drive Moore's Law to increase functionality and performance and
decrease costs, bringing growth to industries worldwide.
Present
Moore’s Law is not a law of science founded in scientific
investigation but an uncannily accurate projection based on
observation.
At the present time, researchers are struggling to keep Moore's
Law on track. Processor clock rates have stalled, as chip
designers have struggled to control energy costs and heat
dissipation, but the industry’s response has been
straightforward — simply increase the number of processor
“cores” on a single chip, together with associated cache
memory, so that aggregate performance continues to track or
exceed Moore’s Law projections.
In his original 1965 article, Moore observed that the
complexity (i.e., the number of components per chip) for
minimum component cost had increased at a rate of roughly a
factor of two per year. He expected this rate to continue, if
not increase, over the short term; over the longer term the
rate of increase was more uncertain, although he saw no reason
to believe it would not remain nearly constant for at least ten
years, implying that by 1975 the number of components per
integrated circuit for minimum cost would be 65,000.
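That 65,000 figure is simply ten annual doublings applied to the chips of 1965. A quick arithmetic check (the starting point of about 64 components is an assumption for illustration, not a figure from this report):

```python
components_1965 = 64          # assumed 1965 starting complexity, for illustration
doublings = 1975 - 1965       # one doubling per year in Moore's 1965 projection
components_1975 = components_1965 * 2 ** doublings
print(components_1975)        # 65536, i.e. roughly 65,000
```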
Anyone working in the computer industry will at some time hear
of Moore's Law because of its ability to predict future
transistor density and thus performance. In 1965, just four
years after the first planar integrated circuit (not the
microprocessor) was introduced, Dr. Gordon E. Moore, who later
co-founded Intel, observed exponential growth in the number of
transistors that could be manufactured on a chip, and he
predicted that this exponential growth would continue. As it
turned out, the industry has been able to manufacture
microprocessor chips that at least doubled the number of
transistors every year or two, while the cost per transistor
has dropped over time.
Moore's law says that computer power doubles for the same cost
about every two years, implying rapidly falling cost,
increasing power, and proliferation. If this continues, the
equivalent price of a $600 iPhone (in 2010) would be $18.75 in
2020 and $0.59 in 2030, with overall power per cost improving
roughly 1,000,000 times by 2050. How should we account for this
possible scenario in our investment strategies and plan for its
potential impact? What products and services have good
potential today but could be enormous if Moore's law continues?
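The price figures above follow from halving the cost every two years from a $600 baseline in 2010. A minimal sketch of that arithmetic:

```python
def projected_price(base_price, years, doubling_period=2):
    """Equivalent price after `years` if cost halves every `doubling_period` years."""
    return base_price / 2 ** (years / doubling_period)

print(projected_price(600, 10))             # 18.75 (2010 -> 2020, five halvings)
print(round(projected_price(600, 20), 2))   # 0.59  (2010 -> 2030, ten halvings)
print(2 ** (40 // 2))                       # 1048576: the ~1,000,000x factor by 2050
```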
[Table not reproduced in this copy: projected relative computing power if Moore's Law continues at its current pace.]
If this technological progress continues for another forty years,
computing hardware in 2050 will be more than one million
times more powerful than today. And that’s building on a base
that already seems amazingly advanced.
Transistor Technologies
1st Transistor – Thermionic Triode (1907)
A triode is an electronic amplification device having three active electrodes. The term most commonly applies to a vacuum tube (or valve in British English) with three elements: the filament or cathode, the grid, and the plate or anode. The triode vacuum tube was the first electronic amplification device; it propelled the electronics age forward by enabling amplified radio technology and long-distance telephony. Triodes were widely used in consumer electronics until the 1950s, when bipolar junction transistors replaced them. The word is derived from the Greek τρίοδος, tríodos, from tri- (three) and hodós (road, way), originally meaning the place where three roads meet.
The cathode, heated either directly or indirectly by a filament, produces an electron cloud by thermionic emission. This electron stream is attracted to the positively charged plate (anode), inducing a current. Applying a negative DC voltage ("bias") to the control grid repels some of the electron stream back toward the cathode; sufficient bias turns the tube off by blocking all current from the cathode. Conversely, increasing the positive DC voltage on the plate attracts more electrons toward it. As the grid bias is increased,
more of the electron current is repelled, resulting in a smaller current at the plate. If an AC signal is superimposed on the grid bias, the small swing in grid voltage produces a much larger swing in plate current, so an amplified copy of the signal appears at the plate.
The triode is very similar in operation to the n-channel JFET; it is normally on, and progressively switched off as the grid/gate is pulled increasingly negative of the source/cathode.
1st Semiconductor Transistor - Point Contact Transistor (1947)
A point-contact transistor was the first type of solid-state electronic transistor ever constructed. It was made by researchers John Bardeen and Walter Houser Brattain at Bell Laboratories in December 1947.[1] They worked in a group led by physicist William Bradford Shockley. The group had been working together on experiments and theories of electric field effects in solid state materials, with the aim of replacing vacuum tubes with a smaller, less power-consuming device.
The critical experiment, carried out on December 16, 1947, consisted of a block of germanium, a semiconductor, with two very closely spaced gold contacts held against it by a spring. Brattain attached a small strip of gold foil over the point of a plastic triangle — a configuration which is
essentially a point-contact diode. He then carefully sliced through the gold at the tip of the triangle. This produced two electrically isolated gold contacts very close to each other.
The piece of germanium used had a surface layer with an excess of electrons. When an electric signal traveled in through the gold foil, it injected holes (points which lack electrons). This created a thin layer which had a scarcity of electrons.
A small positive current applied to one of the two contacts influenced the current flowing between the other contact and the base on which the block of germanium was mounted. In fact, a small change in the first contact's current caused a greater change in the second contact's current, so the device was an amplifier. The first contact is the "emitter" and the second the "collector": the low-current input terminal of the point-contact transistor is the emitter, while the high-current output flows between the base and the collector. This differs from the later bipolar junction transistor, invented in 1951 and operating as transistors still do, in which the low-current input terminal is the base and the two high-current output terminals are the emitter and collector.
Other new Semiconductor Technologies for Transistor
Traditional Planar Transistor
In 1959, Dawon Kahng and Martin M. (John) Atalla at Bell Labs invented the metal–oxide–semiconductor field-effect transistor (MOSFET) as an offshoot to the patented FET design.[22] Operationally and structurally different from the bipolar junction transistor,[23] the MOSFET was made by putting an insulating layer on the surface of the semiconductor and then placing a metallic gate electrode on that. It used crystalline silicon for the semiconductor and a thermally oxidized layer of silicon dioxide for the insulator.
The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET) is a transistor used for amplifying or switching electronic signals. Although the MOSFET is a four-
terminal device with source (S), gate (G), drain (D), and body (B) terminals,[1] the body (or substrate) of the MOSFET often is connected to the source terminal, making it a three-terminal device like other field-effect transistors. When two terminals are connected to each other (short-circuited) only three terminals appear in electrical diagrams. The MOSFET is by far the most common transistor in both digital and analog circuits, though the bipolar junction transistor was at one time much more common.
The MOSFET is used in digital complementary metal–oxide–semiconductor (CMOS) logic,[25] which uses p- and n-channel MOSFETs as building blocks. Overheating is a major concern in integrated circuits since ever more transistors are packed into ever smaller chips. CMOS logic reduces power consumption because no current flows (ideally), and thus no power is consumed, except when the inputs to logic gates are being switched. CMOS accomplishes this current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct and a low voltage on the gates causes the reverse. During the switching time as the voltage goes from one state to another, both MOSFETs will conduct briefly. This arrangement greatly reduces power consumption and heat generation.
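The complementary arrangement described above can be sketched as a tiny behavioral model (an illustration of the logic, not a circuit simulation): in either static state exactly one of the two devices conducts, so there is no DC path from supply to ground and essentially no static power draw.

```python
def cmos_inverter(input_high):
    """Behavioral model of a CMOS inverter: the nMOS conducts on a high input,
    the pMOS conducts on a low input; in a static state only one is on."""
    nmos_on = input_high
    pmos_on = not input_high
    assert nmos_on != pmos_on     # no static supply-to-ground path
    return pmos_on                # output is high exactly when the pMOS pulls up

print(cmos_inverter(True))    # False: high in, low out
print(cmos_inverter(False))   # True: low in, high out
```

Power is consumed only during the brief transition when both devices conduct, which is why CMOS dissipation tracks switching activity rather than gate count.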
Difficulties arising due to MOSFET size reduction
Producing MOSFETs with channel lengths much smaller than a micrometre is a challenge, and the difficulties of semiconductor device fabrication are always a limiting factor in advancing integrated circuit technology. In recent years, the small size of the MOSFET, below a few tens of nanometers, has created operational problems.
Higher subthreshold conduction
As MOSFET geometries shrink, the voltage that can be applied to the gate must be reduced to maintain reliability. To maintain performance, the threshold voltage of the MOSFET has to be reduced as well. As threshold voltage is reduced, the transistor cannot be switched from complete turn-off to complete turn-on with the limited voltage swing available; the circuit design is a compromise between strong current in the "on" case and low current in the "off" case, and the application determines whether to favor one over the other. Subthreshold leakage (including subthreshold conduction, gate-oxide leakage and reverse-biased junction leakage), which was ignored in the past, now can consume upwards of half of the total power consumption of modern high-performance VLSI chips.[29][30][31]
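The exponential sensitivity of leakage to threshold voltage can be illustrated with the standard subthreshold model I = I0·exp((Vgs − Vth)/(n·VT)). All parameter values below are assumptions chosen for illustration, not measured figures:

```python
import math

def subthreshold_current(vgs, vth, i0=1e-7, n=1.5, vt=0.026):
    """Subthreshold current model: I = I0 * exp((Vgs - Vth) / (n * VT))."""
    return i0 * math.exp((vgs - vth) / (n * vt))

# Off-state leakage (Vgs = 0) rises about 10x for every ~90 mV cut in Vth,
# since n * VT * ln(10) is roughly 90 mV at room temperature:
ratio = subthreshold_current(0.0, 0.36) / subthreshold_current(0.0, 0.45)
print(round(ratio))   # 10
```

This decade-per-90-mV behavior is why lowering Vth for speed trades directly against off-state power.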
Increased gate-oxide leakage
The gate oxide, which serves as the insulator between the gate and the channel, should be made as thin as possible to increase the channel conductivity and performance when the transistor is on and to reduce subthreshold leakage when the transistor is off. However, with current gate oxides around 1.2 nm thick (which in silicon is about 5 atoms thick), the quantum mechanical
phenomenon of electron tunneling occurs between the gate and channel, leading to increased power consumption.
Insulators that have a larger dielectric constant than silicon dioxide (referred to as high-k dielectrics), such as group IVb metal silicates e.g. hafnium and zirconium silicates and oxides are being used to reduce the gate leakage from the 45 nanometer technology node onwards. Increasing the dielectric constant of the gate dielectric allows a thicker layer while maintaining a high capacitance (capacitance is proportional to dielectric constant and inversely proportional to dielectric thickness). All else equal, a higher dielectric thickness reduces the quantum tunneling current through the dielectric between the gate and the channel. On the other hand, the barrier height of the new gate insulator is an important consideration; the difference in conduction band energy between the semiconductor and the dielectric (and the corresponding difference in valence band energy) also affects leakage current level. For the traditional gate oxide, silicon dioxide, the former barrier is approximately 8 eV. For many alternative dielectrics the value is significantly lower, tending to increase the tunneling current, somewhat negating the advantage of higher dielectric constant.
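The capacitance trade-off described above is commonly expressed as equivalent oxide thickness (EOT): the SiO2 thickness that would give the same capacitance per unit area, since C ∝ k/t. A sketch with illustrative numbers (the k ≈ 20 value for a hafnium-based dielectric is an assumed round figure):

```python
def eot_nm(physical_thickness_nm, k):
    """Equivalent oxide thickness: the SiO2 thickness (k = 3.9) giving the
    same capacitance per unit area, since C is proportional to k / t."""
    K_SIO2 = 3.9
    return physical_thickness_nm * K_SIO2 / k

# A 5 nm high-k film (k ~ 20, assumed) matches the capacitance of under
# 1 nm of SiO2 while staying physically thick enough to suppress tunneling:
print(eot_nm(5.0, 20.0))   # 0.975
```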
Increased junction leakage
To make devices smaller, junction design has become more complex, leading to higher doping levels, shallower junctions, "halo" doping and so forth,[32][33] all to decrease drain-induced barrier lowering (see the section on junction design). To keep these complex junctions in place, the annealing steps formerly used to remove damage and electrically active defects must be curtailed[34] increasing junction leakage. Heavier doping is also associated with thinner depletion layers and more recombination centers that result in increased leakage current, even without lattice damage.
Lower output resistance
For analog operation, good gain requires a high MOSFET output impedance; that is, the MOSFET current should vary only slightly with the applied drain-to-source voltage. As devices are made smaller, the influence of the drain competes more successfully with that of the gate because of the growing proximity of these two electrodes, increasing the sensitivity of the MOSFET current to the drain voltage. To counteract the resulting decrease in output resistance, circuits are made more complex, either by adding more devices (for example, cascode and cascade amplifiers) or by feedback circuitry using operational amplifiers.
Lower transconductance
The transconductance of the MOSFET decides its gain and is proportional to hole or electron mobility (depending on device type), at least for low drain voltages. As MOSFET size is reduced, the fields in the channel increase and the dopant impurity levels increase. Both changes reduce the carrier mobility, and hence the transconductance. As channel lengths are reduced
without proportional reduction in drain voltage, raising the electric field in the channel, the result is velocity saturation of the carriers, limiting the current and the transconductance.
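The proportionality between transconductance and mobility, and its loss under velocity saturation, can be sketched with the textbook long-channel and velocity-saturated limits. All numbers below are illustrative assumptions in consistent SI-style units:

```python
def gm_long_channel(mobility, cox, w_over_l, vov):
    """Long-channel square-law transconductance: gm = mu * Cox * (W/L) * Vov."""
    return mobility * cox * w_over_l * vov

def gm_velocity_saturated(width, cox, vsat):
    """Velocity-saturated limit: gm approaches W * Cox * vsat,
    independent of channel length and overdrive voltage."""
    return width * cox * vsat

# Halving the carrier mobility halves the long-channel gm directly:
ratio = gm_long_channel(0.04, 0.01, 10, 0.2) / gm_long_channel(0.02, 0.01, 10, 0.2)
print(ratio)   # 2.0
```

The second function captures the point in the text: once carriers saturate, gm stops improving with further channel-length reduction.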
Interconnect capacitance
Traditionally, switching time was roughly proportional to the gate capacitance of gates. However, with transistors becoming smaller and more transistors being placed on the chip, interconnect capacitance (the capacitance of the metal-layer connections between different parts of the chip) is becoming a large percentage of capacitance.[35] [36] Signals have to travel through the interconnect, which leads to increased delay and lower performance.
Process variations
With MOSFETS becoming smaller, the number of atoms in the silicon that produce many of the transistor's properties is becoming fewer, with the result that control of dopant numbers and placement is more erratic. During chip manufacturing, random process variations affect all transistor dimensions: length, width, junction depths, oxide thickness etc., and become a greater percentage of overall transistor size as the transistor shrinks. The transistor characteristics become less certain, more statistical. The random nature of manufacture means we do not know which particular example MOSFETs actually will end up in a particular instance of the circuit. This uncertainty forces a less optimal design because the design must work for a great variety of possible component MOSFETs. See process variation, design for manufacturability, reliability engineering, and statistical process control.[37]
Modeling challenges
Modern ICs are computer-simulated with the goal of obtaining working circuits from the very first manufactured lot. As devices are miniaturized, the complexity of the processing makes it difficult to predict exactly what the final devices look like, and modeling of physical processes becomes more challenging as well. In addition, microscopic variations in structure due simply to the probabilistic nature of atomic processes require statistical (not just deterministic) predictions. These factors combine to make adequate simulation and "right the first time" manufacture difficult.
Ballistic Transistor
Ballistic deflection transistors are electronic devices being developed for very high-speed integrated circuits. Instead of switching the flow of many electrons using gates, as is done in field-effect transistors, they steer individual electrons using electromagnetic forces. Free-flowing electrons are forced around a wedge-shaped obstacle (the 'deflector') onto one of two paths, corresponding to a logical '1' or '0'. Initially impelled by the circuit's electric field, electrons proceed on their respective paths via this electromagnetic deflection. The 'ballistic' title reflects the way an individual electron traverses the transistor material: a two-dimensional electron gas, acting as a thin sheet of semiconductor.
The Ballistic Deflection Transistor (BDT) should produce far less heat and run far faster than standard transistors because it does not start and stop the flow of its electrons the way conventional designs do. It resembles a roadway intersection, except in the middle of the intersection sits a triangular block. From the "south" an electron is fired, as it approaches the crossroads, it passes through an electrical field that pushes the electron slightly east or west. When the electron reaches the middle of the intersection, it bounces off one side of the triangle block and is deflected straight along either the east or west roads. In this way, if the electron current travels along the east road, it may be counted as a zero, and as a one if it travels down the west road.
A traditional transistor registers a "one" as a collection of electrons on a capacitor and a "zero" when those electrons are removed. Moving electrons on and off the capacitor is akin to filling and emptying a bucket of water. The drawback of this method is that it takes time to fill and empty the bucket, and that refill time limits the speed of the transistor: the transistors in today's laptops run at perhaps two gigahertz, meaning two billion refills every second. A second drawback is that the energy discarded on every emptying is dissipated as heat.
The BDT design should also be able to resist much of the electrical noise present in all electronic devices because the noise would only be present in the electrical "steering" field, and calculations show the variations of the noise would cancel themselves out as the electron passes through.
The BDT is "ballistic" because it is made from a sheet of semiconductor material called a "2D electron gas," which allows the electrons to travel without hitting impurities, which would impede the transistor's performance.
Carbon Nanotube FET
A carbon nanotube field-effect transistor (CNTFET) is a field-effect transistor that uses a single carbon nanotube, or an array of carbon nanotubes, as the channel material in place of the bulk silicon of the traditional MOSFET structure. Since the first demonstration in 1998, there have been major developments in CNTFETs.
According to Moore's law, the dimensions of individual devices in an integrated circuit have been decreased by a factor of approximately two every two years. This scaling down of devices has been the driving force in technological advances since late 20th century. However, as noted by ITRS 2009 edition, further scaling down has faced serious limits related to fabrication technology and device performances as the critical dimension shrunk down to sub-22 nm range.[2] The limits involve electron tunneling through short channels and thin insulator films, the associated leakage currents, passive power dissipation, short channel effects, and variations in device structure and doping.[3] These limits can be overcome to some extent and facilitate further scaling down of device dimensions by modifying the channel material in the traditional bulk MOSFET structure with a single carbon nanotube or an array of carbon nanotubes.
Key Advantages
- Better control over channel formation
- Better threshold voltage
- Better subthreshold slope
- High mobility
- High current density
- High transconductance
3-D Transistor