Bangalore Datacenter Specification

Contents

Proximity Area
2. Building Specification
3. High Availability
Redundancy
Switchboard
Grounding
Datacenter PDU-A1 & Datacenter PDU-B1
HVAC Panel 1 & 2
AMF Panel
Air-conditioning
7. Fire detection and suppression System
Water Leak Detection System
Pest Control & Rodent Repellent System
8. Physical Security Specification
Access Control
10. Data Centre Network Specification
11. WAN
13. Inter DC/DR facility
14. Meet Me Room
15. MDA Room
16. Cloud Architecture
17. eNlight 360°
19. eNlight 360° Supports
20. Managed Security Services
Perimeter Firewall
23. Network intrusion detection – Anomaly guard, SRX and Snort
24. Network of scrubbing centers
25. Data center Operations control Specification
26. Data center Maintenance
1. Overview

This document provides brief information about the Rated III ESDS Bangalore Data Center. It will also help you understand how ESDS achieves and maintains maximum uptime to provide consistent service and impressive performance to all its customers.
1. Location

Located in STPI, Electronic City, Bangalore, the Bangalore Data Center resides on a plain spanning an area of 1.012 hectares. The building covers 0.19 hectare, and 0.0825 hectare is landscaped area.

a) Address of datacenter: ESDS Software Solution Pvt. Ltd., 76 & 77, Cyber Park, 6th Floor, Hosur Rd, Electronic City, Bengaluru, Karnataka 560100
b) Proximity Area

Proximity distances from the datacenter are as below:
• Kempegowda International Airport – 53 km
• Hosur Road – 2 km
• All emergency response services (hospital, fire department, police station) are within a 2 km range and can provide support within 15 minutes.
• The nearest Metro rail facility is located at a distance of 19 km, easily accessible by public and private transport, and a connection to Electronic City is underway.
2. Building Specification

The foundation of the entire building rests on a 50-meter-deep RCC plate, sufficient to sustain moderate seismic waves. The building is reinforced with RCC columns using high-quality M30 concrete with high loading capacity. As per industry best practices, the Data Centre is located on the 1st floor of the building, which has a load-bearing capacity of 1250 kg/sq. mtr. The walls have been constructed using a composition of vermiculite and mortar to increase insulation strength, thus improving efficiency. Every floor plate has ducts between them to facilitate safe and standardized installation and routing of network and power cables. The structure also has two large flood drains surrounding the periphery of the building to carry away flood water. Sumps and sinks have also been provisioned to recharge ground water levels and mitigate floods.
3. High Availability

High availability is a characteristic of a system which aims to ensure an agreed level of operational performance for a higher-than-normal period.

There are three principles of system design in high-availability engineering:
1. Elimination of single points of failure. This means adding redundancy to the system so that failure of a component does not mean failure of the entire system.
2. Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. High-availability engineering must provide for reliable crossover.
3. Detection of failures as they occur. If the two principles above are observed, a user may never see a failure, but the maintenance activity must.
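The reliable-crossover principle above can be sketched as a minimal failover loop. This is purely illustrative; the component and health-check names are hypothetical, not ESDS's actual systems:

```python
# Minimal sketch of redundant-pair failover (illustrative only).

def serve(request, primary, standby):
    """Try the primary component; cross over to the standby on failure."""
    for component in (primary, standby):
        try:
            return component(request)
        except RuntimeError:
            continue  # crossover: the next redundant component takes over
    raise RuntimeError("all redundant components failed")

def healthy(request):
    return f"served:{request}"

def failed(request):
    raise RuntimeError("component down")

print(serve("req-1", failed, healthy))  # the standby handles the request
```

The key point the sketch makes is that the crossover logic itself must be simple and reliable, since it is the one piece that cannot be made redundant by duplication.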
Salient Features:

The Data Centre area is logically divided into zones based on the level of security, as described below:

Zone A: the DC server room area, housing server racks, storage racks and networking equipment. The area of Zone A is approximately 7000 sq. ft.

Zone B: comprises the NOC room, SOC room, reception area, Help Desk area, call centre and staging room. This zone is approximately 8570 sq. ft.

Zone C: comprises rooms for power panels, the BMS Manager room, AHU, UPS, Telecom room, etc. This zone is approximately 3412 + 1500 = 4912 sq. ft. at Level 1 and Level G respectively.

The hosting racks, with varying power loads (3 kVA to 12 kVA), have been designed taking into consideration the optimum amount of cooling for equipment/servers. Thermal modeling techniques have been used to arrive at the placement of racks in the DC server room.
Design Parameters

The facilities have been divided into the following sections according to usage and reliability requirements:

ZONE A: Server racks, networking racks, structured cabling
ZONE B: NOC Room
ZONE C:
4. Rated III Certified

Datacenters are classified into different levels depending on the redundancy and concurrent maintainability of non-IT and IT system availability. The classification system provides the data center industry with a consistent method to compare typically unique, customized facilities based on expected site infrastructure performance and uptime. Furthermore, this enables companies to align their data center infrastructure investment with business goals specific to growth and technology strategies.

Summing up the features of the Rated III standard:
N+1 fault tolerance (N = dedicated operations + 1 as backup)
Security of data
5. Electrical System Specifications

Availability for distribution system:

The entire DC is fed with a 1600 kVA power load, with a scale-up option, through two incoming feeder lines: a primary feeder from the BESCOM Velankani station located in Phase 1 of Electronics City and a secondary feeder from the Malgudi station in Electronics City Phase 2.

The distribution system has been designed to meet Rated 3 requirements and has enough provision to scale up if required at a later stage. The system provides a dual-bus configuration in order to supply dual power feeds to each rack, thus minimizing downtime during maintenance operations. Dual feeders are also provided for the incoming feed from the main feeder.
Redundancy:

The power supply for each rack is fed from different power sources. The concept is based on N+1 redundancy, where N is the number of systems or main items of equipment required to maintain the specified operational requirements. This means that failure of any single such system or equipment item can be tolerated.
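The N+1 sizing rule above can be expressed as a small check: capacity must still meet the load after any single unit fails. The unit counts and ratings below are examples, not a statement of the actual load figures:

```python
# Sketch of the N+1 rule: the load must still be met with the largest
# single unit out of service. Values are illustrative examples.

def n_plus_1_ok(units_kva, load_kva):
    """True if the load is met even with the largest single unit failed."""
    remaining = sum(units_kva) - max(units_kva)
    return remaining >= load_kva

# e.g. two 750 kVA generators backing a 700 kVA critical load:
print(n_plus_1_ok([750, 750], 700))  # True: one unit alone carries the load
print(n_plus_1_ok([750, 750], 900))  # False: N+1 not satisfied at this load
```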
Switchboard

All switchboards have been designed and implemented to support non-linear loads, with a dedicated neutral grounding system provided through an isolation transformer to meet the standards (neutral sized at 1.7 times or 2x the phase/line conductors, as per IEEE 1100-1999 specifications). Panel boards are divided into two: one from the utility and the other from the generator. These panels have been installed separately in their respective zones.

On the panel boards, color-coded stickers are placed to identify generator power and utility power (blue indicates utility power and orange indicates UPS power).

Electrical switchboards are designed and installed for a 1.6 MVA capacity load with N+N redundancy.
The incoming electrical lines have primary and secondary Transient Voltage Surge Suppressors (TVSS) installed: the primary TVSS just after the main LT switchboard and the secondary just before the UPS. The primary is capable of handling very high transients (kilovolt range) caused by lightning strikes or HT surges, and the secondary takes care of whatever manages to pass through the primary TVSS (several hundred volts in range).

The main incoming supply from the utility is at 11 kV. This incoming supply is brought to an 11 kV SF6 circuit breaker and then fed to the transformer, where it is stepped down to a usable 415 V AC supply using a transformer of 750 kVA capacity.
From the transformer and the DG set, the electricity supply is channeled using high-quality armored cables placed in the main electrical trench. The main electrical trench is then split into two different paths for more redundancy, one path being the DG path and the second being the utility path.
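As a sanity check on the figures above, the full-load current of a three-phase transformer follows from I = S / (√3 × V). A minimal sketch for the 750 kVA, 11 kV to 415 V step-down described above:

```python
import math

# Full-load current check for the 750 kVA, 11 kV -> 415 V transformer:
# I = S / (sqrt(3) * V_line), with S in VA and V_line in volts.

def full_load_current(kva, volts_line):
    return kva * 1000 / (math.sqrt(3) * volts_line)

print(round(full_load_current(750, 11_000), 1))  # ~39.4 A on the 11 kV side
print(round(full_load_current(750, 415), 1))     # ~1043.4 A on the 415 V side
```

The roughly 1043 A secondary current explains why heavy armored cables and a dedicated electrical trench are needed on the low-voltage side.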
Grounding:

A single ground system with separate ground windows for power and data, conforming to international standards.
Utility Power: The main incoming supply from the transformer is brought into the Utility Panel, also called the main distribution panel. From the main utility panel, the APFC (Automatic Power Factor Correction) panel is charged. The APFC panel allows us to maintain a constant and steady power factor to minimize system losses. Two feeders from the utility panel are provided to Main ATS 1 & 2. Each switch has two incomings and one outgoing, feeding MLTP 1 & 2 on the first floor. From MLTP 1 & 2, the UPS input, HVAC and auxiliary loads are fed. The auxiliary panel outgoing is fed to the 30 kVA UPS, which acts as a backup supply for all non-critical loads; the output of this UPS is then given to the workstation panel, which caters to all desktop bay power supplies and emergency lights. Two feeders from MLTP 1 & 2 supply the UPS IP Source A and UPS IP Source B panels respectively. These panels act as incomers to the 200 kVA datacenter UPS systems.
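The sizing arithmetic behind an APFC panel like the one above is the standard reactive-power formula Q = P × (tan(acos(pf₁)) − tan(acos(pf₂))). The load and power-factor values below are illustrative assumptions, not measured site figures:

```python
import math

# Sketch of APFC capacitor-bank sizing: the bank must supply the kVAR
# difference between the initial and target power factors. The 500 kW
# load and the 0.80 -> 0.95 correction are example values only.

def required_kvar(load_kw, pf_initial, pf_target):
    tan_of = lambda pf: math.tan(math.acos(pf))
    return load_kw * (tan_of(pf_initial) - tan_of(pf_target))

# e.g. correcting a 500 kW load from 0.80 to 0.95 lagging:
print(round(required_kvar(500, 0.80, 0.95), 1))  # ~210.7 kVAR of capacitance
```

An automatic panel switches capacitor steps in and out to keep the measured power factor near the target as the load varies.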
DG Power: The outgoing power from each DG is fed into a Main ATS panel, with two panels for the two DGs. These panels serve to distribute the supply. The critical panels have one incoming and two outgoing mechanisms; hence, for the two ATS panels there are four outgoings. The outgoing from the critical panels is fed into ATS 1 & 2. One panel has two incomers, so technically two panels serve four incomers. Each panel has two outgoings as well.

The Data Centre has been designed with adequate backup from a two-generator set to take care of high availability. The generators have a capacity of 750 kVA x 2 nos. to supply the full load specifications.

The generators are rated for continuous operation, specially designed for continuous data center application.
UPS IP Panel A & UPS IP Panel B: These panels serve as inputs for 200 kVA UPS Systems A & B. There are two incomings to each panel: one from MLTP-1 and the other from MLTP-2.

UPS OP Panel A & UPS OP Panel B: The outputs from 200 kVA Server UPS A & B are fed into the above-named two panels respectively. These panels have one incomer each from every UPS and two outgoing feeders each, for a total of six outgoing feeders. These six outputs are then fed to the Data Center PDU panels.
Datacenter PDU-A1 & Datacenter PDU-B1: These two panels have incomers from UPS OP Panels A & B. The datacenter PDUs then have multiple outgoing feeders to feed the server racks. Server racks are fed using copper cables run below the raised floor. Each row has two copper cable systems; of the two, one is fed from Datacenter PDU 1 and the other from Datacenter PDU 2. The cable paths used are different as well.
HVAC Panel 1 & 2: These panels have one incoming each from MLTP 1 & 2, and the two panels are redundant to each other. There are 7 PACs operating as of today for 5060 sq. ft. of area. The 7 PACs are fed via ATS supplies, and the ATSs are fed by dual inputs from the two HVAC panels using different paths. So, one feeder from each of the MLTP 1 & 2 panels feeds an HVAC panel, and there are two HVAC panels.
AMF Panel: The Data Centre has an AMF panel connecting the DG and UPS such that automatic switchover takes place during a power failure. Automation is in place: the moment utility power fails, the DG quick-starts within 10 seconds and power reaches the data center within 20-30 seconds.
200 kVA UPS Systems: The UPS systems installed are of Reilo make and run in parallel operation mode. They are true online UPS systems which provide a constant and stable output to the datacenter panels. The UPS systems are supported by 80 batteries of 120 Ah (40 nos. x 2 banks). At full load, 15 minutes of backup is achieved. Cooling for the UPS room is provided by two installed PACs of 5 TR capacity, of Climaveneta make.
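A rough back-of-the-envelope sketch of how the quoted battery bank relates to the quoted runtime. The per-battery voltage, power factor, inverter efficiency and usable depth-of-discharge below are assumptions for illustration, not ESDS's published figures:

```python
# Rough battery-runtime estimate for 80 x 120 Ah batteries (40 x 2 banks)
# behind a 200 kVA UPS. Battery voltage (12 V blocks), power factor,
# efficiency and usable fraction are ASSUMED example values.

def runtime_minutes(n_batteries, ah, volts_per_battery,
                    load_kva, pf=0.9, efficiency=0.9, usable_fraction=0.5):
    stored_kwh = n_batteries * ah * volts_per_battery / 1000
    usable_kwh = stored_kwh * usable_fraction * efficiency
    load_kw = load_kva * pf
    return usable_kwh / load_kw * 60

# 80 batteries, 120 Ah each, assumed 12 V blocks, 200 kVA full load:
print(round(runtime_minutes(80, 120, 12, 200)))  # on the order of 15-20 min
```

With these assumptions the estimate lands in the same range as the 15-minute full-load backup quoted above; real runtime depends on battery chemistry and high-rate discharge derating.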
Furthermore, the supply is distributed using multiple distribution boards at different locations as per requirement.

NOTE: All installed equipment operates in N+1 redundancy.
6. Data Center Cooling Specifications

Since cooling is the most critical part of data center operations after electricity, the complete cooling system is designed per international standards to operate at maximum efficiency. We have seven PAC units installed in the datacenter responsible for primary cooling. The units are R407 coolant-based units with a capacity of 16 TR each. Each system is connected to a central BMS. Moreover, all the units communicate with each other, and logic has been defined for the units to operate alternately as per the cooling environment. Humidification happens using an internally connected water supply line.

The present data center is designed to handle high-density equipment loads easily and efficiently. Direct expansion (DX) based Precision Air Conditioner (PAC) units are employed in the data center facility; each PAC unit pumps pressurized air into the false-floor plenum to maintain the server inlet temperature within the proper range. Data center inside conditions are maintained at 20 ± 1 °C temperature and 50 ± 5% relative humidity (following current ASHRAE recommendations). PAC systems are designed in N+1 configuration to maintain cooling redundancy. A Cold Aisle Containment (CAC) approach is established in the data center; along with CAC, fillers in all racks are used to provide better CFM for each rack.

However, as mentioned earlier, we have the provision to provide separate water-based chiller systems as per client requirements, and cooling will be maintained in the same fashion as described for the PACs. The analysis of heat rejection and inlet air temperature is done on a daily basis using manual as well as auto-sensing methods. In the auto-sensing method, temperature monitoring sensors are placed at inlet and outlet locations, giving a picture of the heat generated. This is also integrated with the BMS, so that accurate monitoring and logs are maintained. A manual check is also done to cross-verify the same.
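The N+1 cooling headroom implied above (seven 16 TR PAC units, one allowed to fail) can be quantified with the standard conversion 1 TR ≈ 3.517 kW of heat rejection:

```python
# Sketch of N+1 cooling capacity: seven 16 TR PAC units, checked with
# one unit out of service. 1 TR (ton of refrigeration) ~ 3.517 kW.

TR_TO_KW = 3.517

def cooling_capacity_kw(total_units, tr_per_unit, units_failed=1):
    """Remaining heat-rejection capacity with the given units failed."""
    return (total_units - units_failed) * tr_per_unit * TR_TO_KW

# with one of the seven PACs down, six units still provide:
print(round(cooling_capacity_kw(7, 16), 1))  # ~337.6 kW of cooling
```

That remaining capacity is what must cover the IT heat load of the contained cold-aisle area for the N+1 claim to hold.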
Air-conditioning: Since Zone A is a critical area, a separate air conditioning system (precision air conditioning) has been exclusively installed to maintain the temperature requirements for Zone A. Zones B & C can share a common air conditioning system. The general requirements for the zones are as specified below:

Zone A: Zone A has been provided with precision air conditioning on a 24 x 7 operating basis, at least meeting Rated 3 architecture requirements and having enough provision to scale to the next level as may be required at a later stage. The units are able to switch on and off automatically and alternately for effective usage. The units are down-flow, air-cooled conditioning systems.

Zone B/C: Zones B/C are provided with split-type comfort air-cooled systems (at least meeting Rated 3 architecture requirements). The Help Desk, NOC and SOC areas have been provided with a separate air conditioning system so that the units can be switched off whenever required.
General Description of Equipment:

The equipment has been manufactured in compliance with ASHRAE/ISHRAE quality assurance standards and was factory tested prior to dispatch. The units were factory assembled and conform to the following:

Air filtration conforming to EU3 standards, with 50 mm thick disposable pleated cell filters fitted on the return air side of the evaporator coil and having a maximum efficiency of 30%.

Cabinet conforming to Class 1 BS 476 Part 6 & 7 standards.

Electric re-heater operating at black heat temperature, protected against airflow failure, with an overheat cutout.

Humidifier capable of adjustable capacity control ranging from 40%-100%. The steam cylinder has been constructed from high-temperature-resistant material and is suitable for operation on mains water without use of a break tank. The humidifier is equipped with an automatic water supply and flushing system.

Power panel capable of operating on a 420 V, 3-phase, 50 Hz electrical supply and of withstanding voltage variation of ±5%. A main isolator is provided, sized to meet the system's total power requirements. Within the panel, individual power loads are distributed equally across the three phases, and all individual wires are color-coded and numbered to facilitate ease of servicing.

Controls with automatic monitoring and control of the cooling, heating, humidification, dehumidification and air filtration functions are installed.
The server room has an emergency panic-latch door with an automatic alarm system. The DC has been provided with a fireproof cabinet to store on-site backup tapes taken daily, weekly, monthly and half-yearly. Walls and doors of the Data Centre are fire-rated to prevent any further spread of fire.

Hot aisle/cold aisle is a data center floor plan in which rows of cabinets are configured with air intakes facing the middle of the cold aisle. The cold aisles have perforated tiles through which the computer room air-conditioning (CRAC) units blow cold air up through the floor. The servers exhaust hot air out the back of the cabinets into the hot aisles. The hot air is then drawn into the CRAC units to be cooled and redistributed through the cold aisles.
7. Fire detection and suppression System

The fire detection and suppression systems have been installed considering all possible event occurrences. Fire detection is done using VESDA (Very Early Smoke Detection Apparatus) and fire detectors. The VESDA system is essential since it continuously monitors the data center air for any traces of smoke. This is achieved using light-reflection technology: air from various parts of the data center is drawn in and passed through a light-reflection zone, and any change in the output pattern of the reflection raises an alarm for smoke in the air. Along with this, detection is also done using the latest-technology detectors, mounted as per the standards and adhering to data center norms. Once both alarms, i.e. the fire detector alarm and the VESDA alarm, are activated and cross-zoned, a trigger is actuated for release of the suppressant. For fire suppression, NOVEC 1230 gas is used. This is the next-generation gas after FM-200; it is ozone-friendly and suppresses fire more quickly. Diffuser nozzles are used in the data center both above and below the floor. This is also monitored and supervised using the BMS.
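The cross-zoned release logic described above reduces to an AND of two independent detection systems. A simplified model of that rule, not the actual fire-alarm panel logic:

```python
# Sketch of cross-zoned suppression release: the NOVEC 1230 suppressant
# is released only when BOTH independent detection systems (VESDA and
# the spot fire detectors) are in alarm. Illustrative model only.

def release_suppressant(vesda_alarm: bool, detector_alarm: bool) -> bool:
    """Cross-zoning: require both independent alarms before release."""
    return vesda_alarm and detector_alarm

print(release_suppressant(True, False))  # False: single alarm, no release
print(release_suppressant(True, True))   # True: cross-zoned, release gas
```

Requiring both systems to agree trades a small delay for a large reduction in false discharges, which matters because a gas release evacuates the room and empties the cylinders.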
The entire Data Centre is divided into two major areas, critical and non-critical. The critical area consists of the server room (Zone A), and the non-critical areas consist of the other areas (Zones B, C).

An NFPA 2001 compliant fire suppression system is installed for Zone A: the Novec 1230 gas suppression system is in place. For the other areas, hand-held firefighting devices are installed at accessible locations; these are primarily CO2 gas-based fire extinguishers.

120 kg x 9 nos. cylinders are installed to protect Zone A (Data Center).
Water Leak Detection System:

Sensing cable is installed along the room perimeter, especially along the precision air conditioning units and under the PAC drain water line.
Pest Control & Rodent Repellent System:

A pest control system is provided for the entire Data Centre, and a rodent repellent system is provided mainly in areas where false flooring is present within the Data Centre. The electronic repellent system protects the entire volume of space under consideration, including above the false ceiling, below the false ceiling and below the false floor.
8. Physical Security Specification

For perimeter patrol we have security guards round the clock. At the main entrance, a security supervisor and a minimum of two security guards are stationed round the clock. For planned visits, one of the security guards accompanies the visitor and performs the internal security clearance.

The complete premises are covered by high compound walls with barbed-wire fencing over them (drawing as attached). The main entrance of the building is guarded by two reinforced gates. Between the gates there are physical security guards with visitor management for added security. Another layer is metal detectors: walk-through metal detectors followed by a layer of handheld metal detectors. Critical areas like the UPS room and panel room are protected by dual-authentication biometrics and RFID, as well as a mantrap before entering the datacenter. The Data Center is on the first floor, so visitors are escorted by a security guard from the entrance to the server room. Entry to the server room area is through the mantrap. All critical areas are under CCTV cross-surveillance, and the feeds are recorded and viewed in the high-security BMS room. Non-critical areas are guarded by CCTV surveillance.
Access Control:

A proximity card reader and proximity access control system, along with biometric systems, have been installed with software for monitoring the access of individual persons in the Data Centre. These are installed inside as well as outside the Level 2 premises.

Biometric authentication is deployed at the main access door of the server room area (Level 3). This device supports fingerprint scanning and numeric authentication.
Raised floor and Insulation:

Cement-filled raised floor panels with anti-static finish are installed on a bolted-stringer system in order to provide greater rigidity and stability for concentrated and rolling loads. This type of system is better for frequent panel movement. The UDL rating is 1250 kg/sq. mtr., so the floor can sustain high-density rack loads.

Insulation under the raised floor has been provided to prevent the condensation caused by down-flow conditioning within the DC area and network area. Perforated panels are provided for at least 10% of the total DC area and network area.

Materials such as ceiling grids, raised floor supports, etc. are electroplate galvanized. This avoids zinc whiskers and metallic contamination.
9. Structured cabling specifications

Structured cabling is the design and installation of a cabling system that will support multiple hardware systems and be suitable for today's needs and those of the future. With a correctly installed system, your requirements of today and of tomorrow will be catered for, and whatever hardware you choose to add will be supported.

Lines patched as data ports into a network switch require simple straight-through patch cables at each end to connect a computer. Voice patches to PBXs require an adapter at the remote end to translate the configuration.
10. Data Centre Network Specification:
The Data Centre Network is built for reliability, scalability and
performance.
This allows us to pursue the goal of highest uptime while providing the best performance and reliability, thus enhancing our customer experience.
11. WAN:
1) Peering with multiple carriers for internet connectivity via a stable and redundant platform, which allows us to provide constant Internet connectivity even during device/connection failure or during maintenance of the host ISP.
2) Our infrastructure supports provisioning of high bandwidth as required, to ensure that our customer environments can scale as needed.
3) High-speed connectivity through multiple providers to our datacenters within Bangalore and other geographies, to ensure provisioning of DC-DR and other business connectivity solutions from one provider.
4) IPv6 network availability.
12. Data Centre LAN:
1) Spine-leaf architecture with high-speed 100GE connectivity and certified structured cabling in place.
2) Loop-free and highly scalable EVPN-VXLAN technology.
3) Secured and isolated LAN and SAN environments.
4) eNlight cloud for automation of server/VM provisioning.
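A common sizing check for a spine-leaf fabric like the one above is the leaf oversubscription ratio: server-facing bandwidth versus uplink bandwidth to the spines. The port counts and speeds below are illustrative assumptions, not the actual ESDS build:

```python
# Illustrative oversubscription check for a spine-leaf fabric.
# Port counts and speeds are EXAMPLE values, not the deployed design.

def oversubscription_ratio(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """South-facing (server) bandwidth divided by spine-uplink bandwidth."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# e.g. a leaf with 48 x 25GE server ports and 4 x 100GE spine uplinks:
print(oversubscription_ratio(48, 25, 4, 100))  # 3.0 : 1
```

Lower ratios mean less contention between racks; a 100GE spine layer keeps this ratio small even as leaves are added.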
13. Inter DC/DR facility:
1) Reliable and dedicated P2P connectivity between multiple DC/DR sites.
2) Inter-DC connectivity using EVPN-VXLAN overlay routing technology, enabling Active-Active and Active-Passive DR solutions.
3) Storage-to-storage and VM-to-VM replication between multiple DC/DR sites.
4) The network is designed to deliver disaster recovery service without changing IP addresses.
14. Meet Me Room:
1) Provides secure and isolated space to manage ISP services for the data center network.
2) High-speed P2P, MPLS and ILL services available on demand from major telecom providers.
3) Different high-speed connectivity options between the MMR and MDA for internal and customer MPLS and other telecom connectivity to their setups.
15. MDA Room:
1) A separate MDA room to host critical equipment and ensure scalability of the various solutions opted for by customers.
2) Separate access, isolated from the equipment distribution area, to ensure easy troubleshooting while maintaining isolation of the customer environment.
16. Cloud Architecture:
1) Support for multiple types of cloud architecture (public, private, collocated, etc.) to ensure flexibility in the solutions offered to customers.
2) Highly scalable and reliable architecture, orchestrated by our patented eNlight 360° platform.
3) Easy deployment to ensure quick delivery of customer requirements.
4) Scalable compute and storage architecture, ensuring provision of highly scalable, reliable and flexible solutions to our customers.
17. eNlight 360°

The eNlight 360° cloud solution comes with a full-blown hybrid cloud orchestration layer along with a complete datacenter management suite, i.e. eMagic (DCIM) and the security scanner VTMScan, which makes it a unique offering in the market today. It is a next-generation hybrid cloud orchestration software suite and can be set up on your own premises, giving you the security of a private cloud and the scalability of a public cloud.

Features of eNlight 360°:
• From physical infra (bare metal) to virtual infra – manage everything
• On-premise DC management & cloud computing
• Auto discovery of physical/virtual infrastructure
• Auto scaling – patented technology
• Compute, storage and network management
• Multi-tenancy
19. eNlight 360° Supports
1) Gain competitive advantage with 24x7x365 support.
2) Peace of mind with quick responses to queries/tickets.
3) Support for evolving needs and solving business problems.
4) Certificate compliance.
5) The expertise and resources to deliver success.
6) Trained and skilled staff have been successfully providing managed support for 10 years.
21. Security:
1) Various isolated community cloud infrastructures adhering to the standards and compliances required by various market segments.
2) Micro-segmentation with a separate perimeter security layer.
22. Data center Network Security Specification

Perimeter Firewall
• ESDS has partnered with OEMs like CISCO and JUNIPER to install and manage a powerful perimeter firewall.
• ESDS perimeter firewalls allow only specific and known traffic inside the data center, keeping malicious elements out of the network.
• Perimeter firewalls work as NAT gateways for eNlight cloud and enterprise customers.
• Firewalls deliver 10 and 40 Gbps throughput.
• Perimeter firewalls work as a shield for the inside network against outside threats.
• In-line as well as off-line modes of deployment to handle any amount of traffic.
23. Network intrusion detection – Anomaly guard, SRX and Snort
• Multiple levels of intrusion detection systems on heterogeneous platforms.
• Near-real-time update of anomaly signatures in order to protect against recent threats.
• Integration with core devices in order to null-route traffic from malicious sources.
• Deep inspection of traffic against different criteria, with automated actions for rule-set violations.
24. Network of scrubbing centers

ESDS, along with its technology partners, has set up multi-level scrubbing farms to counter multi-gigabit attacks.
• Under normal circumstances, all customer traffic is routed cleanly from the ISP network to the ESDS DC and on to the customer's servers.
• When any server is under attack, ESDS detects malicious network packets and DDoS signature patterns and notifies the user.
• When a state of attack is declared, an ESDS NOC engineer, with the help of ISP partners, injects a route into the BGP router and diverts all traffic to the scrubbing farms.
• The scrubbing farms, built with state-of-the-art technology, analyze the network packets and their attack type and drop the malicious packets, stopping the malicious traffic from hitting the customer's servers.
• DDoS prevention without any major latency change.
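The diversion decision above can be sketched as a simple threshold rule: when observed traffic exceeds the baseline by some factor, the prefix is diverted to the scrubbing farm (in reality via a BGP route injected by the NOC). The threshold factor and traffic figures are illustrative assumptions:

```python
# Sketch of the traffic-diversion decision: divert a prefix to the
# scrubbing farm when traffic exceeds baseline by a factor. The actual
# mechanism is a BGP route injection; values here are EXAMPLES only.

def route_decision(observed_gbps, baseline_gbps, attack_factor=5.0):
    if observed_gbps >= baseline_gbps * attack_factor:
        return "divert-to-scrubbing"   # e.g. announce prefix toward scrubbers
    return "normal-path"               # clean traffic flows directly to DC

print(route_decision(2.0, 1.0))   # normal-path
print(route_decision(12.0, 1.0))  # divert-to-scrubbing
```

Once scrubbed, clean traffic is returned to the datacenter, so legitimate users see little latency change, which matches the last bullet above.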
25. Data center Operations control Specification

Separate Network Operations Center teams are available to take care of IT equipment control and electrical/mechanical system control. There are 10 engineers for IT equipment control and 6 qualified, trained engineers for BMS. Both teams work round the clock in shifts to ensure smooth functioning and operation of the complete setup. Similar manpower and setup are available at the Nasik datacenter.
26. Data center Maintenance

A preventive maintenance schedule is carried out on a daily, weekly, quarterly and half-yearly basis depending on the type of component under maintenance. Attached are the preventive maintenance reports. Our datacenter being Rated III certified, redundancy is maintained at every level of equipment and material. Moreover, apart from this, we maintain a minimum stock level of components depending on the failure possibility rate. Basic electrical components are kept highly available in the stores; the rest of the components have minimum availability.