High Performance Computing (HPC) and Activities of Computer Centre, IIT Kanpur
A presentation to the Board of Governors, IIT Kanpur
May 28, 2010
Amalendu Chandra
Head, Computer Centre
IIT Kanpur
Acknowledgments
Board of Governors, IIT Kanpur
CC Engineers, HPC Group
IWD Engineers
CC@IITK has a glorious history
The Centre was established in 1964 and was started in the Western Labs under the Department of Electrical Engineering.
It moved to its present building in 1969, when it was recognized as an independent department in the Institute.
The IBM-1620 was the first computer acquired by IIT Kanpur. Next was the IBM-7044 in 1966, followed by an IBM-1401.
IBM-7044
Several specialized computers such as the IBM-1800, PDP-1, etc. were added in subsequent years.
The next major upgrade was the addition of the DEC-1090 mainframe computer in 1979, which was the first time-sharing computer of IIT Kanpur and the first computer with terminals.
PDP-1
In 1989, CC purchased super-minicomputers of the HP 9000 series.
History of Computer Centre
In 1987, the first PC lab was set up, providing a DOS environment.
The Convex 220 was set up in 1990: CC got its mini-supercomputer.
The IBM SP2, the first parallel computer, was set up in 1999.
Email service was run under the ERNET project in the early 1990s and was subsequently moved to the Computer Centre around 1994.
In 1995, the Campus Network was upgraded to a 100 Mbps fiber backbone and a 10 Mbps UTP access network.
Convex-220
A 64 Kbps Internet link was set up in 1998, and today the bandwidth has increased to more than 1 Gbps. Linux clusters (SUN and HP) were set up in 2004-05.
Current Staff of Computer Centre
Principal Computer Engineer : 2 + 1 (on deputation)
Senior Computer Engineer : 3 + 3 (OA)
Computer Engineer : 1 + 1 (retiring soon)
Jr. Technical Superintendent : 3
Jr. Technician : 2
Facilities Provided by CC
The Computer Centre provides state-of-the-art Computing, E-mail, Internet and other facilities 24 hours a day and 365 days a year for more than 7500 users.
Major facilities provided by CC are:
• Computing hardware
• Application software
• Campus Network
• Email and Internet
• Linux and Windows Labs
• File Storage and Backup Services
• Hosting of IITK website
• Office Automation (under DD)
• Technical support via phone and email
• Maintenance of PCs and peripherals
Activities
• 4-5 hours of classes for UG and PG per day at CC
• Students work on computing and CAD assignments using CC Lab facilities
• User training / education / familiarization / help with compilation, coding, etc.
• Workshops and seminars
• Troubleshooting
• Research and related activity
• Maintenance of hardware and software
• Software installation and upgradation
• Development and installation of regularly used software (in-house)
• Mass user registration / authentication and login id distribution for support
• Support for
  – Alumni Office, DRPG Lists, Office Automation, OARS, Regular Courses Mailing, Conferences, Short Term Courses, Mass Internal Mailing and announcements, No Dues, supply and repair of PCs for administration and other sections, web hosting
• Security
• Mail, Domain Name Service, Networking, Software Downloads sites, Mirror site maintenance
Computing Facility
The Computer Centre has 3 4-CPU master nodes and 146 dual-CPU compute nodes in the cluster, amounting to 292 cores of computing power connected over a 1 Gbps LAN.
Applications - Gaussian, Linda, CHARMM, FEM, differential-equation solving, Molpro, parallel libraries, etc.
Environment - parallel as well as sequential on an open-source OS
A new facility of about 3000 cores of very fast compute nodes with a 40 Gbps Infiniband interconnect is in the pipeline.
Storage and Backup
Storage
One 33 TB HP StorageWorks EVA8000 Enterprise Virtual Array and one 6 TB SUN StorEdge 6120
File Service
One HP StorageWorks Clustered File System (PolyServe Symmetric Cluster File System) and one Virtual File Service (PolyServe Matrix Server)
Backup Service
One backup server for users' home directories and users' mail with HP MSL6000 DP
Backup policy: daily incremental backup, saved for one week; weekly incremental backup, saved for one month; and monthly full backup, saved for one year.
HP EVA 8000 Enterprise VA
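The backup rotation above (daily incrementals kept for a week, weekly incrementals for a month, monthly fulls for a year) can be sketched as a simple retention rule. This is an illustrative sketch only; the function name and the 30/365-day approximations of "one month"/"one year" are assumptions, not part of the actual HP Data Protector configuration.

```python
from datetime import date, timedelta

def keep_backup(kind: str, made_on: date, today: date) -> bool:
    """Return True if a backup should still be retained under the stated
    policy: daily incrementals for 7 days, weekly incrementals for ~30
    days, monthly full backups for ~365 days."""
    retention_days = {"daily": 7, "weekly": 30, "monthly": 365}
    age = (today - made_on).days
    return age <= retention_days[kind]

today = date(2010, 5, 28)
print(keep_backup("daily", today - timedelta(days=3), today))      # recent daily incremental -> True
print(keep_backup("daily", today - timedelta(days=10), today))     # daily older than a week -> False
print(keep_backup("monthly", today - timedelta(days=200), today))  # monthly full within a year -> True
```

A nightly cleanup job would apply such a rule to each tape or backup set and expire those for which it returns False.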
New Storage: 100 TB for HPC
Linux and Windows Environment
Linux Working Environment
• Three labs equipped with 143 latest-configuration PCs with Ubuntu and Fedora
• 9 computational servers
• Software installed for simulations and modelling, optimization, etc.

Windows Working Environment
• Domain Controller and License Server, SAMBA Server and Deployment Server
• Two labs equipped with 75 latest-configuration PCs
• Software installed in a wide range of categories
Email Setup
• Approximately 7500 mail boxes
• Quota ranging from 500 MB to 1500 MB
• Around 250,000 mails sent and received every day, out of which 90% of incoming mail is SPAM, which is filtered by a Barracuda SPAM firewall
• Both Linux Postfix and Microsoft Exchange platforms provided
• The setup is state-of-the-art enterprise class, providing high availability and fault tolerance.
Email Architecture
[Diagram: mail flow for iitk.ac.in and other served domains (iitkalumni.org, security.iitk.ac.in, antaragni.iitk.ac.in, techkriti.org, cse.iitk.ac.in, etc.). Unauthenticated SMTP traffic from the Internet passes through a spam-filter pair to a mail-hub pair, which delivers to the local mail store, the MS Exchange server and a local lists server; registered users access mail via authenticated SMTP/POP/IMAP clients, the Web Mail server (HTTP) and MS Exchange clients.]
Current Network Setup
Institute Gigabit LAN (Local Area Network) with more than 15000 nodes covering the Academic Area and Student Hostels
2 Core Switches, ~50 Distribution Switches, ~800 Access Switches
Gigabit Fiber Optic Backbone Network with more than 21 km of fiber laid in the Campus
Fully Managed Network
1 Gbps Internet Bandwidth from Airtel
Backup Bandwidth from Reliance and NKN
Overlay Wi-Fi Network in the Academic Area
~500 Access Points
Internet Application Servers for providing Web, Mail, Proxy, DNS and other Internet services to the users.
Network Architecture
[Diagram: redundant Internet links (Airtel and BSNL MUXes with Cisco 7200 routers) feed a Fortigate 3600A UTM pair in HA mode behind an L2 switch; Cisco 6500 core switches connect to Cisco 3750 distribution switches, Cisco 2960G access switches and Cisco 1131 wireless access points.]
Network: Immediate and Long Term Plans
Immediate Plans:
Expand the network to accommodate new buildings/facilities and increasing student strength
Provide 1 Gbps LAN in the residential area
Long Term Plans:
Build an All IP Network to provide integrated Voice-Video-Data network
Provide IP-based Voice/Video phones and Desktop Conferencing facility
Cyber Security
In view of the requirement to strengthen cyber security, the following steps have been taken in the recent past:
All the switches have been replaced with managed switches. This allows binding an IP address to every network port, which helps in tracing the machine/individual involved in any hacking.
A CCTV IP camera has been installed at the entrance of CC to record the entry and exit of users.
A Fortigate UTM (Unified Threat Management) device has been installed at the Internet Gateway to monitor and control the Internet traffic and allow tighter control on the traffic to Internet Application Servers for better security.
Measures to Improve Cyber Security
Block all the unused TCP/UDP ports for Internet Application Servers on the Internet Gateway Firewall.
Implement electronic access control based on identity in all CC labs. In addition, install CC cameras in all the labs.
Implement a DHCP-based IP address allocation policy. A machine should be able to use the network only if it has been allocated an IP address through DHCP.
Implement network authentication for both wired and wireless networks. Make provision for issuing temporary IDs for visitors.
Implement a Wireless Intrusion Detection and Prevention system.
Implement more secure authentication than the currently followed scheme.
Implement better security for system logs on the individual servers. This will help trace attacks.
Activities Underway
Windows and Linux Labs for more than 200 users in the New Core Building
Gigabit LAN connectivity in Residences
New Mail Storage (25 TB)
New UPS (900 KVA)
GPU servers and high-end workstations
New HPC facility
Why HPC ?
To solve complex problems in science and engineering
Higher resolution simulations for longer times
Sometimes experiments cannot be done! Computational experiments can be used to simulate extreme conditions
Vast expertise in Numerical Methods
[Diagram: HPC at the intersection of applications, hardware, systems, HPC support, basic science, visualization and numerical algorithms.]
More than 100 faculty members across various disciplines are involved in computing.
HPC Research @IITK
Multiscale, Adaptive Finite Element Methods using Domain Decomposition
Flow Past Bodies with Complex Geometry and Corners
Large Eddy Simulation of Turbulence
Vortex Dominated Flows and Heat Transfer
Flow Induced Vibrations
Analysis of Aircraft Structures
Pseudo-spectral Turbulence Simulations
Geo-seismic Prospecting
Enhanced Oil Recovery
Stress Analysis and Composite Materials
Virtual Reality
Computational Chemistry
Nanoblock Self Assembly
Molecular Simulation (Molecular Dynamics & Monte Carlo Methods)
Vibration and Control
Semiconductor Physics, Feynman Integrals
Thermal and Hydraulic Turbomachinery
Numerical Weather Prediction
Statistical Thermodynamics
Geometric Optimization of Large Organic Systems
Turbulence Modelling through RANS
Neural Networks
Impurities in Anti-Ferro Magnets
Electronic Structure Calculations
Aggregation and Etching
Quantum Simulations
Raman Scattering
Spin Fluctuation in Quantum Magnets
Robotics
Multi-Body Dynamics
Thin Film Dynamics
Optical / EM Field Calculations
Parallel Spectral Element Methods
Computer Aided Tomography
Nuclear Magnetic Resonance
Awards and Honours
S.S. Bhatnagar Prize
Fellowship of Academies: FNA; FASc; FNAE
Research Fellowships: J.C. Bose; Swarnajayanti; Raja Ramanna
Status of Central HPC Facilities at Academic Institutions
• IISc Bangalore : 8192-processor IBM Blue Gene (17 TF)
• IIT Bombay : 380 nodes, Xeon dual core (partly on Infiniband)
• IIT Madras : 256 nodes, Xeon dual core (partly on Infiniband)
• JNCASR : 128 nodes, Xeon dual core (Infiniband)
• Univ. of Hyderabad : P690 SMP server (32 processors x 4)
• IISER Pune : 64 nodes, Xeon quad core (Infiniband)
• IIT Hyderabad : 64 nodes, Xeon quad core (Infiniband) (6 TF)
• IIT Kanpur : 144 nodes, AMD Opteron single core, 5-year-old hardware (< 1 TF)
Our goal is to …
• be the best in the country and one of the best in the world in HPC
• carry out cutting-edge research on computational science and engineering
• develop large parallel software for research applications
• collaborate within IITK and with neighbouring academic institutions
• have training programs for students and scientists.
The New HPC Setup
[Diagram: the main cluster of 260 nodes (dual-processor, Nehalem quad core) and smaller test clusters are connected over a 40 Gbps Infiniband network to HPC servers, 100 TB of disk storage, Nehalem quad-core/GPU nodes and a visualization lab with high-end graphics workstations.]

System Integration
[Diagram: the 260-node Linux cluster (master, management and compute nodes) connects through an Infiniband switch layer and Gigabit switches to multi-node servers, the smaller test clusters, the 100 TB disk storage and the IITK network.]
New HPC Facility at IITK
The integrated facility will have a total of 372 nodes and a projected delivered performance of ~30 TF.
It should be the best HPC facility among all academic institutes in the country.
Second best among Government organizations (C-DAC has got a 38 TF cluster)
We might break into top 500 globally!
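The ~30 TF figure can be sanity-checked against theoretical peak with a back-of-the-envelope calculation. The per-node configuration (2 sockets x 4 Nehalem cores), a 2.93 GHz clock and 4 double-precision flops per core per cycle are assumptions for illustration; the slides state only the node count and the projected delivered figure.

```python
# Rough peak-performance estimate for a 372-node cluster of
# dual-socket, quad-core Nehalem nodes (all figures assumed).
NODES = 372
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4
FLOPS_PER_CYCLE = 4       # SSE: 2 adds + 2 multiplies per cycle, double precision
CLOCK_HZ = 2.93e9         # assumed clock speed

peak_flops = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET * FLOPS_PER_CYCLE * CLOCK_HZ
peak_tflops = peak_flops / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} TF")   # ~34.9 TF under these assumptions

# The projected delivered figure of ~30 TF would then imply
delivered_tflops = 30.0
print(f"Implied efficiency: {delivered_tflops / peak_tflops * 100:.0f}%")
```

Under these assumptions the delivered/peak ratio comes out near 86%, a plausible Linpack efficiency for an Infiniband-connected cluster of this class.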
List of work associated with the HPC set-up
1. Layout of the proposed area for the HPC facility
2. PAC requirements for the facility
3. AC (non-PAC) requirements
4. UPS requirements
5. Total power requirements
6. UPS/battery/control panel rooms
7. How to provide the required power from mains/DG set
8. Civil work in the proposed HPC area
9. Civil work in the UPS/battery rooms
10. Electrical work/laying of lines and panels
11. Fire safety issues
12. Building Management System (BMS)
CC and IWD components of HPC-related work
1. HPC Systems : CC
2. UPS (700 KW) : CC
3. PAC (630 kW) : CC
4. Fire safety and BMS : CC
5. AC (non-PAC) : IWD
6. Substation (1.5 MW) : IWD
7. DG set (1.5 MW) : IWD
8. Electrical equipment/distribution : IWD
9. Civil work*/laying of electrical lines : IWD
*Flooring work in main HPC area will be done by PAC vendor
A HUB for Collaborative Research
[Diagram: the HPC Centre at IITK as a hub serving neighbouring institutions: BHU, Allahabad Univ., HCRI Allahabad, Delhi Univ., SGPIMS Lucknow, CDRI Lucknow, AMU Aligarh, JNU, MNNIT Allahabad, NIIT Allahabad, Lucknow Univ., and Kanpur Univ. + HBTI.]
Training and Workshops
• Visitors program
• Summer schools/workshops
• International/national conferences
• HPC users meeting
• Academic Program at IIT Kanpur
  – Masters in Computational Science and Engineering
  – Doctoral Program
Computer Centre as both Service and Academic Centre
Immediate need for HPC: four Project Scientists (DST, advertisement made); System Administrator, Secretary (Institute)
For General CC jobs: Engineers, Technical staff
The number of users has gone up, while the number of CC personnel has gone down.
Immediate need of more office space for HPC and CC staff. Also, a seminar room, a visitors' room and test labs should be in place for this transformation to occur.
Need more space, more manpower. A proposal with more details has been sent to the Space Committee.