www.7x24exchange.org
SPRING 2006
STS Communication (ADVANCED)
• New Wavestar communications package
• Triple redundant logic and power supplies
• Redundant LCD screen with waveform capture
• Industry’s first biometric security system

Branch Circuit Monitoring System (NEW KITS & VOLTAGE VERSION)
• Kits for any existing distribution cabinets
• Track, report and manage current, voltage and power measures to server level
• Manage current, voltage and power in server cabinets
• Communicate through Modbus or SNMP protocol and/or local monitor

PDU Performance (IMPROVED)
• Re-engineered rigidized doors
• Vision panels and improved locks
• Hinged interior panels
• Fits BCMS with no alterations
• Flexible custom designs

someone has to lead

Power Distribution, Inc.
4200 Oakleys Court
Richmond, VA 23223
800.225.4838
804.737.1703 fax
e-mail: [email protected]
web site: www.pdicorp.com
2006 SPRING CONFERENCE HIGHLIGHTS
Spring 2006 Directors & Officers

CHAIRMAN OF THE BOARD
Robert J. Cassiliano, Business Information Services, Inc., (201) 672-0630

PRESIDENT
David Sjogren, Strategic Facilities Inc., (973) 875-7701

VICE PRESIDENT-PROGRAMS
John Oyhagaray, First Data Corp./Western Union, (201) 263-5653

DIRECTOR
Roy L. Chapman, American Express, (602) 766-6502

CHAPTER REPRESENTATIVE
William Leedecke, Vanguard Group, (610) 669-5707

VENDOR REPRESENTATIVE
Douglas H. Sandberg, ASCO Power Technologies, (973) 966-2079

ADMINISTRATIVE DIRECTOR
Kathleen A. Dolci, (646) 486-3818 x103

MEMBERSHIP & EDUCATION
Tara Oehlmann, Ed.M., (646) 486-3818 x104

CONFERENCES
Brandon A. Dolci, (646) 486-3818 x108
The Spring Conference, themed “End-to-End
Reliability: The Business Connection”, will be
held June 4-7 at the JW Marriott Orlando,
Grande Lakes in Orlando, FL. Program
highlights include a Keynote Address by
Brad Boston, Senior Vice President and Chief
Information Officer for Cisco, entitled
"Cisco Systems Data Center Migration”; a
keynote by John Sammans, Principal of
Technology Planning for Vanguard Group,
entitled "The Data Center Value Proposition"
and a Keynote Address by Paul Perez, Vice
President of Storage, Networking and
Infrastructure for Hewlett Packard, entitled
“IT Future Technologies & Direction”.
The Tuesday Evening Vendor Sponsored
event will be a night at Universal Studios
“City Walk” where 7x24 guests will have
dinner in Bob Marley’s and Pat O’Brien’s.
Marley’s is an authentic representation of the
Kingston site, which was Bob’s home and
now serves as a museum and the worldwide
headquarters of the Bob Marley Foundation.
Pat O’Brien’s is an exact replica of the famed
New Orleans watering hole where it’s Mardi
Gras 365 days a year. The venues are
connected so you can visit, eat and
experience both establishments. After dinner
it’s off to the Terminator 2 exhibit in
3-D which ends with a live show customized
for 7x24 Exchange attendees that will have
you saying “Hasta la vista, baby”. So let your
dreadlocks down and beware of flying
beaded necklaces as we present a night at
Universal Studios Orlando.
FOR SPRING CONFERENCE PROGRAM AND REGISTRATION INFORMATION PLEASE VISIT 7X24EXCHANGE.ORG OR CALL (646) 486-3818.
Special thanks to the following vendor organizations for making this event possible:
ABB, American Power Conversion, Cummins, Danaher Power Solutions, DDC/MTU Power Generation, Eaton, Kohler Power Systems, Power Distribution Inc, Russelectric, SIEMENS, StarLine and Square D.
OPTICAL CABLING INFRASTRUCTURE IN DATA CENTERS
by Doug Coleman, Manager of Technology and Standards, Corning Cable Systems
Data center design and cabling infrastructure architecture have
evolved over the years as needs and technologies have changed.
Data center managers once relied on experience: which solutions
had worked in the past and, more importantly, which had not.
Today’s data center requires a more rigorous approach
from a planning viewpoint due to the faster pace at which the
data center is changing. In recent history, we used stacks of
servers and large tape carousels; now we see blade servers and
RAID systems, and the cabling infrastructure must be capable of
servicing these changing needs.
Fortunately, industry guidance has arrived now that the new
standard for data centers is available. This document, published
as Telecommunications Industry Association ANSI/TIA/EIA-942,
Telecommunications Infrastructure Standard for Data Centers,
lists requirements and provides recommendations for data center
design and construction. TIA-942 includes guidance in many
areas beyond structured wiring, e.g., grounding/bonding,
pathways and spaces, and redundancy, whereas TIA-568
addresses only structured wiring. As the
commercial building standard ANSI/TIA/EIA-568-B.1-2001,
Commercial Building Telecommunications Cabling Standard, has
done for commercial LANs, the new data center standard
provides a great tool for planning and designing the key
components that make up a data center. This article will look at
the different areas in the data center where optical cabling is
used and discuss best practices for implementing the optical
infrastructure.
Using the TIA-942 standard as a guide, the enterprise data
center is divided into functional areas as shown in Figure 1. For
more detail on the activities occurring in each area, or for
information on the parts of the data center not shown here, the
reader is encouraged to obtain a copy of the TIA-942 standard,
available from Global Engineering at http://global.ihs.com/.
We will focus here on the access flooring area since that is
where most of the optical cabling is located.
WHAT SOLUTION TO USE?
The most efficient optical infrastructure is one where all of the
components are pre-terminated in the factory. All connectors are
installed and tested in the factory, and packaged such that
components are not damaged during installation. The installer
unpacks the components, pulls the pre-connectorized cable
assembly into place, snaps in all the connectors, installs the
patch cords connecting to the end equipment, and the system is
up and running. This is the fastest installation method and
provides the best solution for turning up servers quickly and with
the least risk of not meeting the customer’s availability
expectations. The following paragraphs detail each of the
components of this solution.
WHAT FIBER TO USE?
The most common fiber type used in the data center remains
multimode fiber due to the continued cost benefits of multimode
electronics. Most data centers are using 50 micron multimode
fiber in the optical infrastructure along with a small amount of
single-mode fiber. The type of 50 micron fiber used depends on
the size of the data center and the applications currently running
or planned for the future. If the design plans include 10 Gb/s
data rates, then one of the laser-optimized 50 micron fibers is
used. The most common choice is 10 Gb/s 300 meter 50 micron
laser-optimized multimode fiber
(LOMMF), which has a minimum effective modal bandwidth
(minEMBc) of 2000 MHz•km at 850 nm and provides
bandwidth scalability from 10 Mb/s up to 10 Gb/s without
needing to change the optical cabling infrastructure. If a longer
reach is needed, then 10 Gb/s 550 meter fiber is available that
FIGURE 1. FUNCTIONAL AREAS OF THE ENTERPRISE DATA CENTER.
Responsible for your company’s critical power? You may need professional help.
When critical power rests on your shoulders, let the experts ease your burden. For more than 100 years, Square D® has been the nation’s leading electrical brand. Now, that expertise brings you the Critical Power Competency Center. One of the first of its kind in the industry.
Staffed by experts who have dedicated their careers to solving the most challenging power problems, the Critical Power Competency Center is a cross-functional team specializing in Electrical System Design, Electrical Product Design and Manufacturing, Power Management and Control Systems, and Project Management.
They know the most efficient way to create a power system with all the capabilities required for 24/7/365 reliability. And they combine best-in-class Square D products to create a system that will always deliver the power you need.
That kind of help will put your mind at ease. So turn reliability on. Get a free critical power information kit at www.criticalpowernow.com/kit.cfm.
Turn it on.
OPTICAL CABLING INFRASTRUCTURE IN DATA CENTERS (continued)
provides a minEMBc of 4700 MHz•km and allows 10 Gb/s
operation up to 550 meters (1800 ft.). This fiber is also
backwards compatible down to 10 Mb/s.
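For readers who want to turn the reach figures above into a quick planning check, here is a minimal Python sketch. It is illustrative only and not from the article: the 2000 MHz•km/300 m and 4700 MHz•km/550 m pairs are the article's figures, while the selection logic is an assumption about how one might automate the choice.

    # Illustrative sketch, not from the article: pick a multimode fiber grade
    # for a required 10 Gb/s link length. The 2000 MHz-km -> 300 m and
    # 4700 MHz-km -> 550 m pairs are the article's figures; the selection
    # logic itself is an assumption for planning purposes.

    # (min EMB in MHz-km, rated 10 Gb/s reach in m, description)
    FIBER_GRADES = [
        (2000, 300, "50 micron laser-optimized MMF, 300 m class"),
        (4700, 550, "50 micron laser-optimized MMF, 550 m class"),
    ]

    def pick_fiber(link_length_m: float) -> str:
        """Return the lowest-bandwidth grade that covers the link."""
        for emb, reach, name in FIBER_GRADES:
            if link_length_m <= reach:
                return name
        return "single-mode fiber (beyond laser-optimized MMF reach)"

    if __name__ == "__main__":
        for length in (90, 280, 420, 700):
            print(f"{length:4d} m -> {pick_fiber(length)}")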
Due to the continued high cost of single-mode electronics,
single-mode fiber is only used for the special links that require
its capabilities. This includes the OC loop entrance into the data
center and the core router connections. Also, IBM has
standardized on single-mode fiber for its FICON server-to-director
connections.
WHAT CABLE TO USE?
The best optical cabling solution for the data center environment
is pre-terminated cable assemblies and Plug & Play System pre-
terminated connector modules. The cable assemblies, commonly
referred to as optical trunks, consist of an optical cable with
MTP/MPO connectors on each end. The MTP/MPO connector is a
12-fiber push-pull connector that operates much like the SC
connector, only it connects 12 fibers at one time and is
standards compliant for Ethernet and Fibre Channel. Figure 2 shows a pre-terminated optical trunk and protective pulling grip.
The type of cable used in the optical trunk depends on the data
center environment and the fiber count. For lower fiber counts,
i.e., 12 and 24 fibers, the cable usually contains loose 250
micron or tight-buffered 900 micron optical fibers. For larger
fiber counts, ribbon cables provide the best design, since their
higher fiber density results in a smaller cable diameter. Ribbon
cables contain stacks of 12-fiber
ribbons in a large, single tube. As a result of using these
ribbons, a 96-fiber ribbon cable has an outside diameter of 0.54
inches. That is 48 channels of information in only 0.54 inches.
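A quick back-of-the-envelope calculation (derived from the 0.54 inch diameter quoted above, not stated in the article) shows how little pathway cross-section those 48 duplex channels occupy:

    import math

    # Back-of-the-envelope check, derived from the figures quoted above:
    # a 96-fiber ribbon trunk at 0.54 in outside diameter.
    fibers = 96
    od_in = 0.54                        # outside diameter, from the article
    channels = fibers // 2              # one duplex channel uses 2 fibers -> 48
    area_sq_in = math.pi * (od_in / 2) ** 2

    print(f"{channels} duplex channels")
    print(f"cable cross-section: {area_sq_in:.2f} sq in")
    print(f"{channels / area_sq_in:.0f} channels per sq in of pathway")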
Optical fiber provides the additional benefit of not suffering from
the crosstalk problems that plague copper cables that are placed
close to each other in a bundle. The smaller optical cables
improve pathway utilization and minimize cooling air obstruction
when used in under floor pathways. Use of overhead cable
pathways does not have the cooling air impediment problem,
but pathway fill is still important. Regardless of the fiber count,
distribution cables are used in the data center. “Distribution
cable” refers to the ruggedness classification of the cable as
specified in the indoor optical fiber premises cabling standard
ICEA S-83-596. This standard calls for a higher crush and
tensile performance than the interconnect cable class of cables
used by some cabling vendors. The “interconnect cabling”
classification refers to what is essentially patch cord cable, and
does not provide a robust enough cable solution for use in
overhead ladder racks or in basket trays under the access floor.
For applications where additional fiber protection is desired,
interlocking armored cable is used since the spirally wrapped
steel tape armor significantly increases the crush resistance of
the cable. Use this cable design when multiple trades such as
electricians, plumbers and others will be accessing the space
under the access floor tiles, since it mitigates damage to the
optical cable from accidental crushing. Figure 3 shows the construction
of an interlocking armored cable.
Regardless of the type of cable used, it must meet certain flame
ratings in order to be allowed in the data center. Since most
FIGURE 2. PRE-TERMINATED CABLE ASSEMBLY WITH MTP/MPO CONNECTORS AND PROTECTIVE PULLING GRIP.
FIGURE 3. INTERLOCKING ARMORED CABLE.
access floor areas utilize the space below the floor as a cooling
air supply plenum, plenum-rated cables as specified in the
National Electrical Code (NEC) may be required for use in this
space. Local building codes should be consulted to verify
compliance with the regulations in your area.
The connectorized ends of the optical trunk are shipped from the
factory installed in a protective covering that shields the
connectors from damage during transit and cable installation.
Pre-terminated Plug & Play System connector modules provide
the interface between the MTP/MPO connectors on the trunks and
the electronics ports. The module contains one or two MTP
adapters at the back of the module and simplex or duplex
adapters on the front of the module. LC, SC, MT-RJ, or ST
connector styles are available on the front, and an optical
assembly inside the module connects the front adapters to the
MTP adapter(s) on the rear of the module. The connector
requested on the front side usually depends on the connector
style in the electronics so that hybrid patch cords are not
needed. A hybrid patch cord would be a patch cord with, for
example, SC connectors on one end and LC connectors on the
other. The most common connector type in the data center is the
LC since it is now used on most new electronics. Figure 4 shows an example of a Plug & Play Systems module with 12 LC
duplex adapters on the front and two MTP adapters on the back,
giving 24 fibers of bandwidth in one snap-in package for instant
scalability.
WHAT CONNECTOR TO USE?
As stated before, the connector to use in the infrastructure
generally depends on the electronics. Usually you want the
infrastructure connector type to match the electronics connector
type in order to keep things simple. The LC connector is the most
popular connector used in the United States today. Figure 5 shows an LC connector and duplex adapter.
PATCH PANEL SELECTION
Within the Main Distribution Area (MDA), the Fiber Distribution
Frame (FDF) is established as the line of equipment racks or
equipment cabinets that houses the central termination point for
all fiber connections. Fiber patch panels are located in the FDF
and provide protection for the jumpers as well as cable strain-
relief for the cables coming from the other data center areas. The
FIGURE 5. LC CONNECTOR AND DUPLEX LC ADAPTER
FIGURE 4. PRE-TERMINATED PLUG & PLAY SYSTEMS MODULESHOWING 12 LC DUPLEX CONNECTIONS.
OPTICAL CABLING INFRASTRUCTURE IN DATA CENTERS (continued)
patch panel used is a 4U tall housing (4U equaling 4 x 1.75 or
7.00 inches tall) that in many cases can efficiently house up to
288 fiber connections. The housing has front and back doors to
protect the connections contained within. Figure 6 shows an
example of the type of patch panel used in the MDA.
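As a rough density check on the housing described above (derived arithmetic; only the 288-fiber and 4U figures come from the article):

    # Density check for the 4U MDA patch panel described above.
    fibers, rack_units = 288, 4            # figures from the article
    inches = rack_units * 1.75             # 1U = 1.75 in -> 7.00 in tall
    print(fibers // rack_units, "fibers per rack unit")                    # 72
    print(fibers // 2, "duplex ports in", inches, "inches of rack space")  # 144 in 7.0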
For the Zone Distribution Area, patch panel selection depends on
what space is available. If space in an equipment rack or
cabinet is available, then the 4U housing is used. If no rack
space is available, or if it is preferred that this area be located
under the access flooring, then a different type of housing is
used. One such housing is the Fiber Zone Box, shown in Figure 7, which houses up to 12 Plug & Play Systems connector
modules in a pivoting bulkhead panel for easy patch cord
access. Where lower fiber counts suffice, or where copper zone
connections are needed alongside fiber, this zone box can be
reconfigured to carry both connection types in the same
housing. Shown below is an example of a copper and fiber zone
distribution housing. This housing can also be used as the EDA
administration point.
The preferred installation for the ZDA is above the access floor in
a rack or cabinet. This is because administration is easier than
going under the floor to add services or change patch
assignments. The ZDA is an interconnect point and provides
connection between the MDA and the EDA. However, space may
not be available above the floor in a cabinet or rack, so sub-floor
placement may be the only option.
In the Equipment Distribution Area (EDA), space for patch panels
is usually at a premium, so 1U housings are generally used.
These provide fiber patch ports for up to 72 or 96 fibers in 1U of
space. Key to this patch panel is the ability to add capacity on
an incremental basis since most users do not need full capacity
from the start and therefore do not want to purchase the full 96-
fiber capacity up front. This capacity is most frequently added in
12- and 24-fiber increments, and is accomplished by
purchasing additional Plug & Play Systems modules. If the
installed trunks did not account for future growth, then additional
trunks are needed as well.
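The incremental-growth arithmetic above lends itself to a small planning helper. The sketch below is hypothetical: the 24-fiber module size and 96-fiber housing limit come from the article, while the required fiber count is an assumed example input.

    import math

    # Hypothetical planning helper. The 24-fiber module size and the 96-fiber
    # housing limit come from the article; the required fiber count below is
    # an assumed example input.
    MODULE_FIBERS = 24          # per snap-in module (12-fiber modules also exist)
    HOUSING_CAPACITY = 96       # max fibers in the 1U EDA housing

    def modules_needed(required_fibers: int) -> int:
        """Round up to whole modules so capacity is bought only as needed."""
        if required_fibers > HOUSING_CAPACITY:
            raise ValueError("requires more than one 1U housing")
        return math.ceil(required_fibers / MODULE_FIBERS)

    # Example: 20 duplex LC ports today = 40 fibers -> 2 modules installed,
    # leaving room for 2 more modules (48 fibers) in the same housing.
    print(modules_needed(40))   # -> 2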
PREFERRED ARCHITECTURE
Pre-wiring the data center with optical connectivity is the best
way to provide bandwidth where it is needed. Using optical
trunks with 12- or 24-fiber Plug & Play Systems modules
provides incremental bandwidth in an economical, easy-to-
install package that minimizes disruption to the space and
avoids pulling up a stack of floor tiles to pull a cable every time
another server is added. Using a zone architecture and providing
space for future growth, along with selecting the appropriate
optical fiber type, is the best way to ensure you will be able to
satisfy your customers’ needs for a long time with a reliable,
easy-to-scale infrastructure that is fast to implement.
FIGURE 7. ZONE HOUSING CONTAINING FIBER AND COPPER CONNECTIONS.
FIGURE 6. PATCH PANEL WITH JUMPER MANAGEMENT, 4U (7 INCHES) TALL.
How Do You Distribute Power in Your Data Center Cabinet?

With Sentry!

CDU Product Family: Basic, Metered, Smart & Switched

Basic CDU
> Reliable & Economical

Metered CDU
> Local Input Current Monitoring

Smart CDU
> Local Input Current Monitoring
> Supports External Temp. & Humidity Probes
> IP Monitoring of Power, Temp. & Humidity

Switched CDU
> Local Input Current Monitoring
> Supports External Temp. & Humidity Probes
> IP Monitoring of Power, Temp. & Humidity
> Remote Power Control for Each Outlet: ON / OFF / Reboot

Sentry Single or 3-Phase products with 100-120V, 208-240V or 230V AC voltages & 20A, 30A or 60A input currents. They provide power distribution solutions from high density applications to high density server clusters.

Solutions for the Data Center Equipment Cabinet

Server Technology, Inc.
1040 Sandhill Drive
Reno, NV 89521 USA
www.servertech.com
toll free +1.800.835.1515
tel +1.775.284.2000
fax +1.775.284.2065

©Server Technology, Inc. Sentry is a trademark of Server Technology, Inc.
SAVE THE DATES!

2006 Fall Conference
Hyatt Regency Scottsdale at Gainey Ranch
Scottsdale, AZ
November 12 - 15, 2006
THEME: TBD

2007 Spring Conference
Boca Raton Resort & Club
Boca Raton, FL
June 3 - 6, 2007
THEME: TBD

Visit www.7x24exchange.org or call 646-486-3818 for conference program details and to register.
A CASE STUDY IN HIGH DENSITY COOLING: A PRACTICAL APPLICATION FOR WATER IN THE DATA CENTER

Bartosz Ilkowski, Ph.D., Senior Research Technologist, Georgia Institute of Technology
Bret W. Lehman, PE, Senior Engineer, IBM Corporation
Stephen Peet, Manager – Tech Support, BellSouth Corporate Real Estate and Services
Stephen Battenfeld, Mechanical Department Head, Minick Engineering

Most data center owners today cite challenges in space utilization, power delivery, cooling, and even structural loading in their facilities. The real life case study of a computing cluster at Georgia Tech demonstrates a cost-effective, forward-looking solution for two of the above parametric challenges: space utilization and cooling. A water based rack door heat exchanger was deployed as the centerpiece of a cooling solution for an area where the power/cooling density was 10 times greater than the capabilities of the rest of the facility. The solution was pivotal in creating the showcase-caliber facility that was desired by the end user.

INTRODUCTION
It is a well known fact that the latest computing technology is
pushing the limits of today’s data centers in more ways than
one. Most end users cite challenges in space utilization, power
delivery, cooling, and even structural loading. Server form
factors have shrunk from the multi-EIA unit packages of
yesterday to the sleek blade form factor that allows as many as
84 servers to be packaged in a single rack. Consolidation of
applications from larger, legacy machines to the smaller, more
powerful blades creates both the ability and the desire to pack
more servers into existing data center spaces.
The real life case study of Georgia Tech’s Razor HPC cluster, at
the Institute’s Center for the Study of Systems Biology,
demonstrates a solution for two of the above parametric
challenges: space utilization and cooling. A water-cooled, rack-
level heat exchanger was deployed to help create a very high
density (300W/sqft) cooling solution within an existing facility
where significant cooling limitations existed. In effect, the rack
door heat exchanger solution allowed for the creation of an area
with cooling density 10 times greater than the capabilities of the
rest of the facility.
REQUIREMENTS
It was established by the end user that the computing cluster
would initially consist primarily of 1000 blade servers. In
addition, support configurations of storage, management, and
networking hardware were required to operate Razor. Table 1 provides a summary of the power and cooling requirements for
the compute cluster.
The original floor plan layout that was considered for the facility,
requiring approximately 1600 sq ft., is shown in Figure 1. The
layout, as depicted, is intended to show fully loaded racks of
blades (6 chassis per rack) in the 12 red racks. Support
infrastructure is depicted by the yellow racks. An alternate
method for deploying the blades across this square footage
would be to populate each rack half way (3 chassis per rack),
with twice as many red racks. These additional racks would
reside in the open spaces between the existing racks, as shown
in Figure 2.
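The layout arithmetic can be sanity-checked in a few lines. Note that the per-chassis blade count is an inference: the article states 84 servers per rack and 6 chassis per rack, which implies 14 blades per chassis.

    # Sanity check of the layout math. The 14-blade chassis size is an
    # inference; the article states 84 servers per rack and 6 chassis
    # per rack, which implies 14 blades per chassis.
    blades_per_chassis = 14                 # inferred (84 / 6)
    chassis_per_rack = 6                    # from the article (Figure 1)
    full_racks = 12                         # red racks in Figure 1

    per_rack = blades_per_chassis * chassis_per_rack    # 84, matches the article
    total = per_rack * full_racks                       # 1008, ~1000 blades
    print(per_rack, total)

    # The Figure 2 alternative halves the chassis per rack and doubles the racks:
    assert (blades_per_chassis * 3) * (full_racks * 2) == total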
TABLE 1. POWER AND COOLING SUMMARY
CHALLENGES
A number of user-imposed challenges forced formulation of a more
nimble implementation plan. First, the hosting environment for
the cluster was required to be of showcase quality, with
aesthetics being of the utmost importance. Tours were intended
for the area, and large areas of underutilized floor space were
deemed undesirable; so the floor area of the cluster was required
to be reduced to a bare minimum. Excessive noise and
discomfort from air movement were likewise required to be
reduced to a minimum. Finally, an extremely tight schedule for
design and build required that the facility be completed in
roughly 30 days.
SOLUTION
In order to meet the requirements of the mounting challenges,
the strategic decision to employ a rack door heat exchanger was
made. The device is a copper-tube, aluminum-fin, air-to-water
heat exchanger that replaces the rear panel of a computer rack.
Hot air from the server exhausts passes across the heat
exchanger coil, removing approximately 55% of the rack heat
load from the air stream before it enters the room. It is a
completely open system with no power or supplemental air
movers required. Its function significantly reduces the burden on
the room air conditioning system, cutting down on the capacity
of air conditioning that must be installed, as well as significantly
reducing the noise and discomfort associated with moving the
air that performs the cooling function. It was decided to implement
this technology only on the racks filled with high density blade
servers.
The first challenge the heat exchangers resolved was
underutilized floor space. By utilizing the heat exchangers, it
became possible to fully load six blade chassis per cabinet. In
this manner, the square footage required to house and cool the
cluster was reduced to an optimal 1000 sq ft. Removal of such
a large amount of heat from the room air stream significantly
reduced the amount of air movement necessary for the cooling
solution, thereby reducing noise and discomfort and mitigating
the second challenge. Finally, the facility had four surplus
20-ton air conditioning units which could provide exactly
the amount of sensible air side cooling required, with N+1
redundancy. This helped alleviate the final concern regarding
the implementation schedule. Figure 3 shows the final floor
layout, requiring only about 1000 sq ft. The blade racks in this
figure are shown in yellow, while the support hardware is shown
in blue racks.
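Since Table 1 is not reproduced here, the sizing sketch below uses an assumed total load; only the 55% water-side fraction and the 20-ton unit size come from the article (the actual installation used four units).

    import math

    KW_PER_TON = 3.517          # 1 ton of refrigeration is roughly 3.517 kW

    # Article figures: the rear-door exchangers absorb ~55% of the rack heat
    # and the room was served by 20-ton units with N+1 redundancy. The total
    # load below is an assumed placeholder; the real figure is in Table 1.
    total_load_kw = 300.0       # assumed: ~1000 blades at ~300 W each
    water_fraction = 0.55       # from the article
    unit_tons = 20.0            # from the article

    air_load_kw = total_load_kw * (1 - water_fraction)
    air_load_tons = air_load_kw / KW_PER_TON
    units = math.ceil(air_load_tons / unit_tons) + 1    # +1 for N+1 redundancy

    print(f"air-side load: {air_load_kw:.0f} kW ({air_load_tons:.1f} tons)")
    print(f"20-ton units incl. N+1: {units}")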
The entire high density cluster area was completely segregated
from the remainder of the data center below the raised floor.
This, along with the general layout of the key components of the
cooling solution, further optimized the cooling solution in two
ways. First of all, a very high static pressure was generated at
the perforated tile locations, shown in the form of three rows of
yellow and pink colored tiles at the bottom of Figure 3. Air was
directed below the raised floor in the direction indicated by the
blue arrows on the four air conditioning units shown at the top of
the figure. By partitioning the entire subfloor area, a dead-head
situation was created in the perforated tile area, thereby
FIGURE 2. ALTERNATE FLOOR PLAN LAYOUT
FIGURE 1. ORIGINAL FLOOR PLAN LAYOUT
maximizing static pressure and air flow rates. Secondly,
because the air conditioning units were located in such close
proximity to the rack exhausts, direct return of warm air to the
unit intakes was ensured to optimize unit efficiency. Finally, the
hot aisle-cold aisle principle was taken to the
extreme – a wall completely separating the warm and cold
sides of the cluster, shown as the thick dashed line in Figure 3,
guaranteed an absolute minimum of warm air recirculation, a
problem that plagues many modern-day data centers.
Table 2 presents a comparison of key parameters between the
original planned cooling solution and the hybrid solution that
was ultimately implemented. It is clear that the introduction of a
water-based rack option helped to create the desired showcase
facility, with minimal floor space and air movement. The
savings are quantified in the form of air conditioning hardware
savings and space savings (assuming that build-out of
additional raised floor space would have been required). A
fringe benefit of this solution was additional savings in the form
A CASE STUDY IN HIGH DENSITY COOLING: A PRACTICAL APPLICATION FOR WATER IN THE DATA CENTER (continued)
FIGURE 3. FINAL FLOOR PLAN LAYOUT
TABLE 2. COMPARISON OF KEY PARAMETERS
of operational costs. The overall efficiency of transferring heat
with water is higher, and the resulting annual savings, assuming
$0.08 per kilowatt-hour, are indicated as well.
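As a worked illustration of that operating-cost claim at the article's $0.08/kWh rate, here is the annual-savings arithmetic; the avoided-power figure is an assumed placeholder, since Table 2 is not reproduced here.

    # Worked illustration of the operating-cost claim at the article's rate.
    # The avoided-power figure is an assumed placeholder, since Table 2 is
    # not reproduced here.
    rate_usd_per_kwh = 0.08     # from the article
    avoided_power_kw = 25.0     # assumed: fan/compressor power not running
    hours_per_year = 24 * 365

    annual_savings_usd = avoided_power_kw * hours_per_year * rate_usd_per_kwh
    print(f"${annual_savings_usd:,.0f} per year")   # -> $17,520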
SUMMARY
Increasing heat densities and the desire to pack more computing
power into smaller spaces created a number of challenges for
deployment of a powerful supercomputer for Georgia Tech’s
Center for the Study of Systems Biology. The facility was
required to be of showcase quality, with fully utilized floor space,
as well as minimal discomfort from noise and air movement. A
hybrid cooling solution featuring a water-based rack heat
exchanger proved to be the most effective way to create an
optimal solution within the parameters given. The device is
capable of removing 55% of the heat load within a rack,
allowing for maximum packing density for the blades in the
cluster and an optimal floor space requirement of 1000 sq ft.
The total requirement for air conditioning was cut roughly in half,
minimizing cooling hardware and air moving requirements. This
solution will serve as an effective model for how end users can
achieve high density cooling solutions as they transition from
today’s data center facilities forward into future designs.
RAZOR FUN FACTS
At the time it was performance-tested in January of 2006, Razor
would have been rated the 41st fastest supercomputer in the
world, according to Top500.org. It was the world’s fastest
Ethernet-connected cluster.
FUTURE WORK
Plans are in place to gather electrical input data, tile airflow
data, and water temperature and flow data on the operating
cluster. This will allow for a complete energy balance to be
conducted, fully documenting the performance of all the
components in the cooling solution for Razor. A comprehensive,
documented report is planned for later in 2006.
2005 FALL CONFERENCE ATTENDEE LIST
ABB Inc.
Active Power
Aetna Inc.
AFCO Systems
AIG Technologies
AKF Engineers
Alber Corp.
Amdocs, Inc.
America Online
American Express
American Honda Motor Co., Inc.
American Power Conversion
AT&T
Austin Generator Service
Automated Logic Contracting Services
Bala Consulting Engineers
Barclays Capital
Barloworld
Barnett Consulting Engineers, Inc.
Barrett Woodyard & Associate
Bick Group
Black & Veatch
Blue Canopy Group, LLC
Blum Consulting Engineers Inc.
Boston Communications Group, Inc.
Business Information Services, Inc.
C&D Technologies
Callison Architecture
Care Factor
Carnegie Mellon University
Caterpillar
CCG Facilities Integration
Cendant Corp
CheckFree Corporation
Chevron Information Technology Co
ChoicePoint
Citigroup Technology
Civil Engineering Directorate
Colliers Turley Martin Tucker
Computer Sciences Corporation
ComRent International LLC
Connectivity Technologies
Constructors & Associates, Inc.
Convergent Systems Integrations, Inc.
Cosentini Associates
Countrywide Financial Services
Cox Enterprises
Cummins Power Generation
Cupertino Electric, Inc.
Cushman & Wakefield
Danaher Power Solutions
Data Aire Inc.
Data Power Monitoring Corp.
Dataglobe Canada, Inc.
Deloitte & Touche
Department of Defense
Depository Trust and Clearing Co.
DFW Consulting Group
Digital Realty Trust
DLB Assoc. Consulting Engineers PC
Douglas Battery Manufacturing Company
dRay Tech, Inc.
DSA Encore, LLP
Duke Power Company
Dunham Associates
DVL, Inc.
E5 Group, Inc.
East Penn Manufacturing Co.
Eaton Power Quality Corporation
Efiniti Technology, Inc.
Electrorack Enclosure Products
EMC Corporation
EMCOR Facilities Services
Emerson Network Power
Energy and Power Management
2005 FALL CONFERENCE ATTENDEE LIST (continued)
EnerSys Inc.
Engineered Computer Rooms
Environmental Systems Design
Expert Alliance
Exponent
Exxon Mobil
EYP Mission Critical Facilities
Facilities Engineering Assoc.
Federal Reserve Bank of Dallas
Federal Reserve Bank of NY
Fidelity Investments
First Data Corp/Western Union
Flagship Automation
Foley Inc.
Freescale
GE
GE Zenith Controls
Generac Power Systems
Generator Services Co., Inc.
Gensler
Gilbane Building Company
Glumac International
Goldman Sachs & Co.
H&R Property Management Ltd.
H.F. Lenz Co.
Hewlett-Packard Company
Highland Associates
Hitec Power Solutions, Inc
Holder Construction Company
Hood-Patterson & Dewar, Inc.
Horizon Project Advisors, LLC
Howard S. Wright Construction Co.
IBM Corporation
IBM Research
IBM Systems & Technology Group
IDC Architects
Innovative Research
Intel Corporation
Isothermal Systems Research, Inc.
ISTG/GFS
JDC Power Systems, Inc.
JE Dunn Construction Company
JJA, Inc.
Johnson Controls, Inc.
Jones Lang LaSalle
JP Morgan Chase
JT Packard
Kaiser Permanente
Kling
Knolls Atomic Power Labs
Kohler Power Systems
LayerZero Power Systems, Inc.
Little Diversified Architectural Consulting
Logicalis
Manomet Solutions
Mark G. Anderson Consulting
Master Card International
Mazzetti & Associates
McGough Companies - Construction
Medco
MGE UPS Systems
Michaud Cooley Erickson
Mission Critical Enterprises
Mission Critical West Inc.
Morgan Stanley
Morrison Hershfield Corporation
MTechnology, Inc.
New York Mercantile Exchange
Northam Realty Advisers
NOVA Corporation
NTT Facilities, Inc.
Orr Protection Systems, Inc.
Parsons Commercial Technology Group
Parsons Electric LLC
PermAlert ESP Inc.
Pershing
Peterson Power Systems, Inc.
Piller, Inc.
Power Concepts LLC
Power Distribution, Inc.
Power Management Concepts
Power Measurement USA
Power Service Concepts
Power System Solutions, LLC
Prince William County Econ Dev
Public Service Electric & Gas
R.E. Wall & Associates
Rittal Corporation
RMH Group, Inc.
Rosendin Electric, Inc.
RTKL Associates, Inc.
Russelectric
Ryan Companies US, Inc.
S&C Electric Co.
Safeway/IT/Data Center Facilities
Saft America, Inc.
Sanmina - SCI
Satcon Power Systems, Inc.
SECURA Insurance Company
Server Technology, Inc.
Shaw Cablesystems G.P.
Siemens
Sigma 7 design group
Sigma Six Solutions
SPL Integrated Solutions
Square D/Schneider Electric
State of California HHSDC
State Street Corporation
Staubach
Strategic Facilities, Inc.
Structure Tone, Inc.
Susquehanna International Group, LLP
Switch Communications Group
Syska Hennessy Group
TAS, Ltd.
Technical Innovation (TI)
Tecom, Inc.
The Hartford
The Turner Foundation
The World Bank
Tishman Speyer Properties
Tishman Technologies Corp.
Transtector Systems
Travelers
Triton Technology Systems, Inc.
Turner Construction Company
United Parcel Service
United Technologies Power
University of Alaska
University of Kansas Computing Services
Uptronix, Inc.
UTC Power
Vanderbilt University Med. Ctr.
Vanguard Group
VeriCenter
Veris Industries
Verizon Wireless
Wachovia Corporation
Walker Engineering, Inc.
Washington Mutual
WB Engineering and Consulting
WDM Inc.
WebGen Systems, Inc.
Wells Fargo Bank
Wells Global
Whiting-Turner Contracting Co.
Wick Fisher White Engineers
Wild Oats Markets, Inc.
Worldspan
Wright Line LLC
WWCOT
Zachry Construction Corporation
Member Advertising Rate Card
NEWSLINK OPPORTUNITIES
Following are the Editorial Guidelines for Newslink together with the Member Advertising Rate Card. Advertisers interested in placing an ad may fax the insertion order to 7x24 Exchange at 212.645.1147 or email to [email protected]. Questions? Please call Jeremy O’Rourke at 646.486.3818 x109.
Advertiser indemnifies 7x24 Exchange against losses or liabilities arising from this advertising. 7x24 Exchange assumes no liability whatsoever, except to the extent of a one time paid advertisement of the same specification, in the next or similar publication, if any proven or admitted errors or omissions have occurred. Payment is due upon receipt of the invoice. Interest shall be charged at 2% per month compounded to yield 26.82% per year on overdue accounts. Revisions to previously submitted ad copy are subject to additional charges. A charge of $30.00 will be levied for returned checks. In the event of a contract cancellation, the advertiser or agency agrees to repay 7x24 Exchange any discounts granted for multiple insertions less any discount applicable for the number of insertions completed in the contract. All cancellations must be received in writing prior to the advertising sales deadline. All premium positions are non-cancelable. Prices are net of agency commission.
NET BLACK AND WHITE RATES
Size              1X      2X      3X
Full Page         $1,500  $1,300  $1,100
2/3 Page          1,100   1,000   900
1/2 Page Island   900     800     700
1/2 Page          700     600     550
1/3 Page          600     550     500
1/4 Page          500     450     400
COVERS & PREMIUM POSITIONS – INCLUDES 4 COLOR
Size              1X      2X      3X
DPS               $5,000  $4,500  $4,000
2nd / 3rd Cover   2,500   2,200   2,000
4th Cover         3,500   2,750   2,500
[Diagram: ad size layouts, including Full Page, Full Page Bleed, 2/3 Vertical, 2/3 Horizontal, 1/2 Island, 1/2 Vertical, 1/2 Horizontal, 1/3 Vertical, 1/3 Horizontal, 1/4 Vertical, 1/4 Horizontal, 1/6 Vertical, 1/6 Horizontal, 1/8 Vertical and 1/8 Horizontal]
8 1/2” x 11” MECHANICAL REQUIREMENTS
Live Area: 7.5” x 10”
Trim Size: 8.5” x 11”
Bleed Size: 8.75” x 11.25”
Halftone Screen: 133 lines up to 150 lines

DPS MECHANICAL REQUIREMENTS
Live Area: 16” x 10”
Trim Size: 17” x 11”
Bleed Size: 17.25” x 11.25”
Halftone Screen: 133 lines up to 150 lines
COLOR RATES
Process Color (4/c) $900
PMS Colors (add per color) $600
Process Colors (add per color) $500
Revisions and Proofs: $50
Position Guarantee: 15% premium
*Non-Members add 40% to all rates
NON-BLEED AD DIMENSIONS
Size            Width   Length
Full Page       7.5”    10”
2/3 Horizontal  7.5”    6.5”
2/3 Vertical    5”      10”
1/2 Island      4.875”  7.5”
1/2 Horizontal  7.5”    4.875”
1/2 Vertical    3.625”  10”
1/3 Horizontal  7.5”    3.25”
1/3 Vertical    2.5”    10”
1/4 Horizontal  4.5”    3.25”
1/4 Vertical    3.25”   4.5”
EDITORIAL GUIDELINES FOR NEWSLINK
Manuscript specifications: Feature articles vary in length from 500 to 2,000 words. While Newslink accepts articles in a variety of formats, it prefers to receive materials on CD. All articles must be received by the deadline to be considered for a specific issue. Material submitted after the deadline will be considered for the following issue.
Bylines: All articles should include a brief (1-2 sentence) author biographical sketch at the end of the article that includes the author's name, title, affiliation, address, and phone number. Photos of authors are never used. Newslink does not pay authors for contributions.
Visuals: Authors are encouraged to submit photographs and charts, graphs, or other illustrations that will help readers understand the process being described, though submission does not guarantee that visuals will be used with the article. Submit all charts, graphs, and other artwork separately; do not incorporate them in the body of the article. Indicate caption material separately. Newslink reserves the right to publish submitted visuals.
Editorial procedures
All articles are reviewed for suitability. Accepted materials are then edited for grammar and to conform with Newslink's editorial style. All attempts are made to preserve the author's writing style; however, Newslink has the right to edit for style and clarity, to fit space allotments, and to make final selections on headlines, subheads, and graphic treatment. Manuscript submission implies author agreement with 7x24 Exchange's Editorial Policies.
Copyright
Newslink requires first serial rights for submitted articles. This means the author(s) grant Newslink the right to publish the article for the first time. We also request permission for electronic distribution on 7x24 Exchange's web site, www.7x24exchange.org.
Disclaimer
The responsibility for accuracy remains with the author. The opinions and information in bylined articles in this publication are those of the authors and do not necessarily reflect those of the Officers and Board of Directors of 7x24 Exchange.
CORPORATE LEADERSHIP PROGRAM SPONSORS
322 Eighth Avenue, Suite 501
New York, NY 10001
www.7x24exchange.org
Fall Conference 2005
End to End Reliability: Infrastructure: Hardware, Software & Support
SILVER MEMBERS
KEY MEMBERS
CONTRIBUTOR MEMBERS
Questions? Call 646.486.3818 or visit www.7x24exchange.org