OSD Exec, 17 February 2009
CISL Organizational Structure

FY 2008 totals: $37.8M, 174 staff

• Laboratory Directorate: Al Kellie, Associate Director of NCAR (6 employees)
• Laboratory Administration & Outreach Services: Janice Kauvar, Administrator (6 employees)
• NWSC Project Office: Krista Laursen, Director
  (Directorate, Administration, and NWSC Project Office together: $1.5M, 13 staff)
• Technology Development: Rich Loft, Director (6 employees; division total $6.9M, 38 staff)
  – Visualization & Enabling Technologies: Don Middleton (16 employees)
  – Computer Science: Henry Tufo (12 employees)
  – Earth System Modeling Infrastructure: Cecelia DeLuca (5 employees)
• IMAGe: Doug Nychka, Director (6 employees; division total $4.6M, 26 staff)
  – Computational Mathematics: Doug Nychka (5 employees)
  – Geophysical Turbulence (Turbulence Numerics Team): Annick Pouquet (5 employees)
  – Data Assimilation Research: Jeff Anderson (5 employees)
  – Geophysical Statistics Project: Steve Sain (5 employees)
• Operations & Services: Tom Bettge, Director (6 employees; division total $24.8M, 97 staff)
  – Network Engineering & Telecommunications: Marla Meehl (25 employees)
  – High-end Services: Gene Harano (19 employees)
  – Data Support: Steven Worley (9 employees)
  – Enterprise Services: Aaron Andersen (38 employees)

Notes: includes all NSF Base funds budgeted except indirect; includes UCAR Communications Pool & WEG funds; does not include Computing SPER; staff counts include indirect.
Data Support Section (DSS)
• What we do, in a nutshell
  – Curate and steward the Research Data Archives (RDA)
  – Provide access to the RDA and aid users and projects with data issues
• Qualified staff: all with MS or greater degrees in meteorology or oceanography
• Recap of the AMS BoF presentation
  – Many new data assets: JRA-25, TIGGE, ERA-Interim, etc.
  – Accurate data discovery, built on metadata standards
  – A strong, full project portfolio for the coming years
• One metric chart as an example
DSS Nuggets and Challenges
• A few nuggets from under the hood
  – Greatly improved data access, systematic across all 650 datasets in the RDA
  – Widespread use of databases has improved efficiency: metadata, user registration, user access, archiving
  – RDA data management adheres to long-lived archive principles, a strong position for the future that represents NSF very well
• Challenges
  – Maintain the balance between adding valuable content (research datasets) and further improving access; both are beneficial
  – Minimize security-firewall impacts on open data access
  – Govern the manager's (my) expectations; do not overwhelm the staff
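The database-backed efficiency noted above (metadata, user registration, user access, archiving) can be illustrated with a minimal sketch. The schema, dataset ID, and user here are invented for the example; they are not the RDA's actual tables.

```python
import sqlite3

# Minimal sketch of a database-backed archive catalog; the schema and
# identifiers are hypothetical, not the RDA's actual tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE datasets (
        ds_id TEXT PRIMARY KEY,   -- e.g. a dataset accession number
        title TEXT NOT NULL
    );
    CREATE TABLE users (
        email TEXT PRIMARY KEY,   -- user registration
        name  TEXT NOT NULL
    );
    CREATE TABLE accesses (       -- one row per user access/download
        email TEXT REFERENCES users(email),
        ds_id TEXT REFERENCES datasets(ds_id)
    );
""")
conn.execute("INSERT INTO datasets VALUES ('ds000.0', 'Example reanalysis')")
conn.execute("INSERT INTO users VALUES ('user@example.edu', 'A. Researcher')")
conn.execute("INSERT INTO accesses VALUES ('user@example.edu', 'ds000.0')")

# A usage metric: accesses per dataset, the kind of figure a metric chart
# would summarize.
rows = conn.execute(
    "SELECT ds_id, COUNT(*) FROM accesses GROUP BY ds_id"
).fetchall()
print(rows)  # [('ds000.0', 1)]
```

Keeping registration, access logging, and metadata in one store is what makes systematic access improvements across all 650 datasets tractable.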
Enterprise Services Section (ESS): High-Level Overview

• Computer Production Group (CPG)
  – 7x24 computer operations
  – First-level customer support
• Distributed Systems Group (DSG)
  – IT infrastructure that makes everything else work
• Infrastructure Support Group (ISG)
  – Computer-room infrastructure: cooling, electrical
  – Co-location oversight (new task)
• Workstation Support Team (WsST)
  – User support for workstation-related issues
• Web Engineering Group (WEG)
  – Software infrastructure for the UCAR/NCAR web presence
• Database Support Team (DBST)
  – Computer allocations, metrics, reporting, decision support
Enterprise Services Section (ESS): Challenges

• Recruiting staff and planning for succession
  – Gap between the baby-boomer population and early-career staff
  – A Unix-heavy environment that is not taught in CS & MIS curricula
• Computer-room infrastructure capacity
  – Co-location rooms
  – At any given time, equipment in proposals and similar plans may outstrip capacity
• People data is complex and disorganized
  – Groups, visitors, affiliates, computer users
  – Authorization
• Overall complexity of the computing environment and diversity of demands
  – Developers, systems administrators, administrative staff
High-End Services Section (HSS): Services

• High Performance Computing
  – Flagship: 127-node (4,064-processor) IBM POWER6, ~80 TFLOPS peak
  – Smaller Linux clusters, general and dedicated
  – General and priority computing resources (seasonal campaigns, AMPS, model development, capability/capacity runs)
• Data Archive
  – 6.8 petabytes total data, 4.6 petabytes unique data, 52 million files
  – Long-term data preservation
  – New HPSS system for TeraGrid use and evaluation as a future archive system
• Data Analysis and Visualization
  – VAPOR (Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers): design, development, customer support, outreach & training
  – DAV customer support to both the NCAR & TeraGrid communities
  – DAV resources: DA cluster with visualization nodes, multi-TB shared filesystem for data sharing within the HPC environment
• Consulting Services
  – Effective usage of high-performance computing resources
  – Program debugging
  – Porting assistance
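The headline numbers above can be sanity-checked with back-of-the-envelope arithmetic. The clock speed and FLOPs-per-cycle figures are assumptions (typical published POWER6 values), not stated on the slide.

```python
# Check of the "~80 TFLOPS peak" figure, assuming a 4.7 GHz POWER6 clock
# and 4 floating-point operations per cycle (two fused multiply-add pipes).
cores = 4064
clock_hz = 4.7e9
flops_per_cycle = 4
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")  # ~76, consistent with "~80 TFLOPS peak"

# Average archive file size: 6.8 PB spread over 52 million files.
avg_mb = 6.8e15 / 52e6 / 1e6
print(f"{avg_mb:.0f} MB average file size")
```

The ~130 MB average file size is one reason tape-based archives like HPSS work well here: relatively few, relatively large files rather than billions of tiny ones.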
High-End Services Section (HSS): Challenges

• High Performance Computing
  – Maintaining compatible levels of software and firmware
  – System upgrades
  – Debug tools
• Data Archive
  – Managing growth
  – Cost of doing business
  – Technology migration
  – Data integrity/availability
• Data Analysis and Visualization
  – VAPOR: enhancements and outreach to additional scientific communities
  – DAV customer support outreach to additional scientific communities
  – Integration of DAV services with CISL, TeraGrid, and other external data services
• Consulting Services (tailoring methods for:)
  – Petascale computing (scaling parallel programs)
  – New modeling paradigms such as data assimilation
  – Improving efficiency on distributed clusters
• Balancing security risks with usability
NETS Strategies
• Ensure that network cabling across all UCAR campuses can support Gigabit Ethernet (1000 Mb/s) to all UCAR workspaces by the end of FY2009
• Renew one-third of all UCAR network equipment each year by replacement or upgrade
• Renew the UCAR LAN cabling plant every ten years by replacement or upgrade
• Renew the Voice-over-IP phones every five years
• Research and deploy advanced services such as local-area and metropolitan-area wireless, unified communications, and optical Wavelength Division Multiplexing (WDM)
• Manage and operate regional networks and related infrastructure: the Front Range GigaPoP (FRGP), UCAR Point of Presence (UPoP), Bi-State Optical Network (BiSON), and Boulder Point of Presence (BPoP)
• Design, implement, and maintain NSC networking
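The one-third-per-year renewal strategy above implies a three-year refresh cycle per device. A minimal sketch of the scheduling logic, with an invented inventory (the device names and install years are illustrative only):

```python
# Illustration of the one-third-per-year renewal cadence: on a three-year
# cycle, equipment installed in year Y comes due again in year Y + 3.
# Inventory contents are invented for the example.
inventory = {
    "switch-ml-101": 2006,
    "router-fl-1":   2007,
    "switch-cg-2":   2008,
}

def due_for_renewal(inventory, year, cycle_years=3):
    """Return the equipment whose age has reached the renewal cycle."""
    return sorted(name for name, installed in inventory.items()
                  if year - installed >= cycle_years)

print(due_for_renewal(inventory, 2009))  # ['switch-ml-101']
```

Staggering install years evenly keeps the annual renewal cost flat at roughly one-third of the fleet's replacement value.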
NETS Challenges
• Next-generation network technologies such as those being explored by the NSF Global Environment for Network Innovations (GENI) project, NSF's emerging cyberinfrastructure initiatives, and others
• Dynamic optical network switching
• Wireless and sensor networks
• Large scale regional network aggregation – super aggregation
• Security
High-End System Procurement Processes
• Procurement-specific Scientific & Technical Advisory Panels
  – Advise CISL on main requirements and user/application representation
  – Aid in the development of detailed technical requirements
  – Internal/external web presence and process archiving
• Competitive best-value RFP process
  – Partnership with the UCAR Contracts office
  – Technical evaluation criteria & spreadsheets
  – LTD and benchmark evaluations and intercomparisons
  – "BAFO" (best and final offer) process & final evaluation
  – Subcontract negotiations & CISL, NCAR, UCAR approvals & NSF approval
• Delivery & installation oversight
• Acceptance testing & evaluation against the subcontract
• Subcontract monitoring (with CISL HSS, ReSET)
  – Technical requirements
  – Performance, reliability, and service-delivery metrics
• Maintain industry awareness & technical competence
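The "technical evaluation criteria & spreadsheets" step above amounts to weighted best-value scoring of vendor proposals. A minimal sketch of the mechanism; the criteria, weights, and vendor scores are invented for illustration and are not CISL's actual evaluation factors:

```python
# Hypothetical weighted best-value evaluation: each proposal is scored
# per criterion on a 0-10 scale, then combined by criterion weight.
weights = {"performance": 0.4, "reliability": 0.3, "cost": 0.2, "support": 0.1}

def best_value_score(scores, weights):
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

proposals = {
    "vendor_a": {"performance": 9, "reliability": 7, "cost": 6, "support": 8},
    "vendor_b": {"performance": 7, "reliability": 9, "cost": 8, "support": 7},
}
ranked = sorted(proposals,
                key=lambda v: best_value_score(proposals[v], weights),
                reverse=True)
for v in ranked:
    print(v, round(best_value_score(proposals[v], weights), 2))
```

In a real best-value process the weights are fixed in the RFP before proposals arrive, so the ranking cannot be tuned after the fact.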
Recent & Future High-End System Procurements
Timeline chart spanning 1998-2012, with a "today" marker in early 2009. Each procurement shows an RFP/evaluation/subcontract phase followed by high-end system deployment/production:

• ARCS ($36.5M): blackforest, bluesky, bluevista
• Linux clusters ($1.8M): lightning, pegasus, thunder
• BG/L ($1.7M): frost
• ICESS ($15.0M): blueice, bluefire
• AMSTAR ($5.2M)
• NSC Computer + Mass Storage ($30M+?): future procurement(s) and future high-end system(s)
Management of Computing Resources
• Manage allocations processes
  – Universities (CHAP)
  – CSL (CSLAP)
  – NCAR (NCAR Executive Committee)
• Work with management and users to deliver resources based on allocations
  – Use cut-offs, queue priorities, and allocation groupings
• Resolve allocation/charging problems; set up special projects/campaigns
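The cut-offs and queue priorities mentioned above can be sketched as a toy scheduling policy: projects that exhaust their allocation drop to the lowest priority rather than being blocked outright. The project names and allocation figures are invented for the example.

```python
# Toy sketch of allocation-aware queue priority; numbers are invented.
projects = {
    # project: (allocated core-hours, used core-hours)
    "univ_climate": (100_000,  45_000),
    "csl_seasonal": (200_000, 210_000),
}

def queue_priority(allocated, used):
    """Higher value = better priority. A project over its allocation
    falls to priority 0 (a cut-off) but is not rejected outright,
    so scientific work can continue when the machine is idle."""
    if used >= allocated:
        return 0
    return 1 + (allocated - used) / allocated  # more headroom, more priority

for name, (alloc, used) in projects.items():
    print(name, queue_priority(alloc, used))
```

A policy like this keeps the machines busy while still honoring the allocation decisions made by the CHAP, CSLAP, and NCAR Executive Committee processes.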
Resource Management Challenges
• Incentives for users to manage their mass-storage growth, so we do not have to resort to quotas or some other system that does not respect scientific priorities
• Keeping the supercomputers busy throughout the year, while providing reasonable turnaround for all users
• Turning away university researchers without NSF funding