
Report on HEPiX Fall 2010
A personal view

Thomas Finnern (DESY/IT Systems and Operations)
Peter van der Reest (DESY/IT Information Fabrics)

HEPiX Fall 2010, Ithaca, NY, November 1st - 5th, 2010 @ Cornell University


Event Outline

> Chuck Boeheim: Introduction

> Michel Jouvin: Welcome

> Site Reports

> Technical Topics (Tracks)

• Virtualisation

• Grids and Clouds

• Storage and Filesystems

• Security and Networking

• Extra Sessions: IPv6, Lustre, Printing

• Miscellaneous Talks

• Benchmarking

• Data centres and Monitoring

• Operating Systems & Applications

> Sandy Philpott: Wrap Up

> 4 DESY Talks

• Thomas Finnern: BIRD: Batch Infrastructure Resource at DESY

• Peter van der Reest: DESY Site Report

• Patrick Fuhrmann: First results from the WLCG NFS4.1 Demonstrator

• Wolfgang Friebel: Rapid web application design for silicon detector measurements


People and Miscellaneous

> Psychology at Cornell

> 59 attendees from 12 countries and 24 different sites.

> Alpine

> Frank Schlünzen

• Studied @ Cornell

James Maas: Power Napping

“Siestas revisited”

No longer do we take catnaps; instead, we take power naps. Unlike “dozing off,” which is presumably involuntary and thus a sign of laziness, power napping is deliberate and thus a sign of responsibility.


Meeting Attendance


Observations by Thomas Finnern

> 4.5 Days in 45 Minutes!

> Virtualisation

• Grid, Desktop, Server

> Batch

• Future of Sun Grid Engine

> Operating Systems

• Future of Solaris

• Scientific Linux Schedule

> Security and Networking


Site Reports Summary

> More Data

> More "Green Computing"

> More Consolidation

> More Standardisation?

> ITIL (IT Infrastructure Library) + ISO 20000 growing

> Clouds begin to fly


Site Reports (1)

> LEPP Site Report BOUGIE, Devin

• Energy Recovery Linac (ERL)

• 148-node n1ge Solaris batch farm

• 200-core n1ge 6.0 Linux

• VMs under KVM

• Move VMs to iSCSI

• LPR printing to CUPS server

• HA with Red Hat Cluster Suite

> CERN site report Dr. MEINHARD, Helge

• IT Reorganisation 2010

• September: personnel reshuffle

• ITIL V3 Service Management:

One Service Desk: one number to ring, one place to go, 24/7 coverage, automation of all known processes

• Oracle VM for non-mission-critical services

• Mac support

No managed platform; no iPhone or iPad

• SLC no longer for laptops

• Batch jobs on VMs

• Parallel cluster

Intel NetEffect instead of InfiniBand

• Obligatory web-based security course (with test)

• EVO -> Vidyo?: "Problems not technical"

> Fermilab Site Report Dr. CHADWICK, Keith

• Business as usual

• Stats only done when powered on -> "Uptime 100%"

• ITIL: 4 topics done: Incident, …

> INFN-T1 site report CHIERICI, Andrea

• Worker Nodes on Demand (WNoDeS)

Local submission, virtual interactive pool, cloud computing, grid computing; up to 2000 VMs

> CC-IN2P3 Site report Mr. OLIVERO, Philippe

• BQS -> Grid Engine

> ASGC site report LEE, Felix

• LHC Tier 1

• Cloud activities

HEP, BioResearch, …; OpenNebula: Xen + KVM, 100+ blades

> NDGF Site Report WADENSTEIN, Erik Mattias

• Distributed Tier 1

• New Denmark

• Norwegian soon

> DESY Site Report VAN DER REEST, Peter

• Lots of new collaborations

• Few VOs (OOO) to many VOs (O)


Site Reports (2)

> RAL Site Report BLY, Martin

• Less money next time …

• Resistance against new email addresses: N.N.@xxx

• Quattor (5200 cores, 700+ machines)

> SLAC Site Report WACHSMANN, Alf

• Re-defined itself: HEP -> photon science

• No ITIL, but service portfolio/catalogue + some SLAs

• New SunCat group for science and IT for admin

• Oracle: Solaris storage servers to RHEL

> Jefferson Lab Site Report PHILPOTT, Sandy

• VMware 4.1 with vCenter

• 7 hosts / 16/8 TB iSCSI

• Identity management / single sign-on

• Web -> Drupal

> Site Report GSI SCHÖN, Walter

• FAIR

> LAL + GRIF Site Report JOUVIN, Michel

• StratusLab (OpenNebula)

• Quattor

> Saclay (IRFU) site report MICOUT, Pierrick

• …

> Prague Institute of Physics Site Report, KUNDRAT, Jan

• Started 2002

• Power down due to multiple 100 A fuses blowing

• Take care about multi-phase power and correct phase connections

> KISTI - GSDC site report BONNAUD, Christophe

• KISTI Supercomputing Center

• Korea Institute of Science and Technology Information

• VMware / vCenter

> BNL Site Report WONG, Tony

• Overheated CPUs with thermal grease on heatsink (DELL assembly bug)

• DOE-mandated security protocol

UNIX Centralisation Project

> NERSC/PDSF Status report - A Year of Changes SAKREJDA, Iwona

• Large-scale computing system

• Franklin: 9532 nodes, 38128 cores

• Hopper: Cray XT, 150k cores, 2 PB disk

• Clusters Carver + PDSF

• Magellan "Cloud" testbed, 700+ cores

• 3 days downtime for retiring old hardware

• Sun Grid Engine: good scaling and performance


Session 2: Virtualisation

> Report from the Virtualisation Working Group CASS, Tony (EVO)

• Image Generation Policy

• Image Exchange

• Image Expiry/Revocation

• Image Contextualisation

• Multiple Hypervisor Support

> A scheme for defining and deploying trusted Virtual Machines to Grid Sites using Configuration Management Systems YAO, Yushu

• Lawrence Berkeley National Lab. (LBNL)

• Trust VM creation recipes, not images

• Trusted base images + VO recipe + site recipe

• Puppet as CMS

> cvmfs - a caching filesystem for software distribution Mr. COLLIER, Ian Peter

• Problems with VO software distribution

NFS & Co., installation privileges

> cvmfs (cont.)

• A client-server filesystem

• Implemented as a FUSE module

• Uses only outgoing HTTP connections (see the sketch below)

• Tested at PIC and RAL

• Outperforms NFS and AFS

• Scalable, rpm/yum installable

• WN + Squid set up in ½ day

• Still experimental, but …
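The bullets above boil down to: each worker node runs a FUSE client that fetches software over plain outgoing HTTP, usually through a site Squid. As a rough illustration only (not from the talk), the following Python sketch probes whether a repository manifest is reachable over HTTP via such a proxy; the stratum host, repository name and Squid address are assumptions, not the PIC/RAL setup.

# Minimal sketch (assumption-laden): check that a CernVM-FS repository is
# reachable over plain outgoing HTTP, optionally through a site Squid.
# Host, repository and proxy names are placeholders, not the talk's setup.
import urllib.request

STRATUM_URL = "http://cvmfs-stratum.example.org"   # hypothetical stratum server
REPOSITORY = "experiment.example.org"              # hypothetical repository name
SQUID_PROXY = "http://squid.example.org:3128"      # hypothetical site Squid

def repository_reachable(via_proxy=True):
    # cvmfs publishes a small manifest (".cvmfspublished") at the repository
    # root; fetching it over HTTP is a cheap reachability check.
    url = "%s/cvmfs/%s/.cvmfspublished" % (STRATUM_URL, REPOSITORY)
    handlers = [urllib.request.ProxyHandler({"http": SQUID_PROXY})] if via_proxy else []
    opener = urllib.request.build_opener(*handlers)
    try:
        with opener.open(url, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("repository reachable:", repository_reachable())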

> Virtualization at CERN-IT: Overview and Service Consolidation MEINHARD, Helge

• Ease IT management and processes

Service Consolidation (Hyper-V + MS SCVMM); Batch Virtualisation (OpenNebula or ISF); Self-Service Kiosk (Hyper-V); Testbeds (Xen)

• Long-term: all virtual

• Short-term: physical = virtual

• No commodity hardware

• Servers (8 cores + 48 GByte) in groups of 8

• Virtual "small disk servers"

> CERN's image distribution system for the internal cloud WARTEL, Romain

• List of endorsers

• Catalogue of VM images (VMIs)

• BitTorrent transfer


Session 9: Grids and Clouds / Session 10: Virtualisation

> ATLAS Analysis on ARC WADENSTEIN, Erik Mattias

• NDGF: ARC/ATLAS

> Access Grid via Web CALZOLARI, Federico

• Portal: L-Grid

• Java applet: VOMS + MyProxy

> VOMS/VOMRS Convergence CECCANTI, Andrea

• EMI

> CloudCRV - Cluster Deployment and Configuration Automation on the Cloud YAO, Yushu

• "Analogy with fridge"

• CloudCRV: cloud/batch configuration system

• Cloud generation by scripting, not image distribution

> Status update of the CERN Virtual Infrastructure BELL, Tim

• MS Hyper-V

Delegation, migration, checkpointing, PowerShell

• CVI in numbers

680 machines (Windows/Linux), 170 hypervisors in 6 top-level groups

• 5 dedicated (known people), one self-service

• Web and SOAP interfaces

• Update/patch on disk (when shut down)

• Standard installation

• Pyramid disk servers or DELL blades

Machine migration within minutes or seconds

• No Linux hypervisor: paravirtualised drivers by Microsoft (GPL)

• Improved disk I/O

• Grown dramatically, new user communities, Linux with Hyper-V OK


Session 15: Grids and Clouds

> FermiCloud - Current Status CHADWICK, Keith

• Need for more computing

• Replace old hardware

• Various clouds

• VMware, Oracle VM, Xen (SL5) -> KVM (SL6)

• Evaluation: Eucalyptus (+--), Nimbus (+-), OpenNebula (++-)

• Contextualisation

• Security (Fermi network enclaves: Science, General, Jail)

• Even InfiniBand and MPI

• Testing EC2 REST: useful? secure?

• Best: OpenNebula or Eucalyptus

• CERN: why not OpenStack?

> The CERN internal cloud infrastructure: a status report SCHWICKERATH, Ulrich

• First, and for now only, use case: batch

• Xen -> KVM

• Centrally managed

• IaaS

• ISF: Infrastructure Sharing Facility (Platform Computing)

• ONE: OpenNebula

• Batch lifetime < 24 h

• Peak 16000 VMs (ONE), 10000 (ISF)

• Small subset running in production (16 -> 96)

> StratusLab, mixing grid and clouds JOUVIN, Michel

• Vital IaaS features …

• Benefits: deployment, reliability/robustness, customisation

• http://stratuslab.eu

• Grid on top of cloud

• Availability: Nov 9th 2010

• OpenNebula, ttylinux, Ubuntu, CentOS, manual/Quattor

• Elasticity: users expect infinite resources but the machine room is limited; hide queuing from users searching for resources?

• Public clouds: Amazon, Flexiscale, ElasticHosts, GoGrid

• Reluctance to run user-defined images

> Magellan at NERSC: A Testbed to Explore Cloud Computing for Science SAKREJDA, Iwona

• Clouds as a DOE project for midrange scientific computing

What are the unique needs and features of a science cloud? What applications can efficiently run on a cloud?

• Benchmarking cloud technologies (Hadoop, Eucalyptus) and platforms (Amazon EC2, Azure)

Can scientific applications use a data-as-a-service or software-as-a-service model? What are the security implications of user-controlled cloud images? Is it practical to deploy a single logical cloud across multiple DOE sites? What is the cost and energy efficiency of clouds?

• Eucalyptus-Firefox-Apple plugin


Session 7: Storage and File Systems

> Current storage status and plans at IN2P3 BRINETTE, Pierre-Emmanuel

• …

> Storage at FNAL: State and Outlook CRAWFORD, Matt

• …

> BNL storage experiences RIND, Ofer

• …

> First results from the WLCG NFS4.1 Demonstrator FUHRMANN, Patrick

• The Solution

> Progress Report 4.2010 for HEPiX Storage Working Group MASLENNIKOV, Andrei

• OpenAFS(/OSD), GPFS, Lustre, dCache, NFS4, Xrootd, Hadoop, etc.

• Testbed @ KIT

• Use and test cases

> CASTOR development status and deployment experience at CERN JANYST, Lukasz

• 11.5 PByte / 555 M files this year

• HSM

• "Heavy Ion Test" peak: 7 GB/s or 250 TB/d

> High Performance Storage Pools for LHC JANYST, Lukasz

• …


Session 8: Security and Networking

> New network architecture at IN2P3-CC CESSIEUX, Guillaume

• 10/20/40/60 G backbones/interconnects

• Cisco Nexus 7018 as core device

• Switch to new network takes 5 h with 4 people

> Plans for a Single Kerberos Service at CERN JANYST, Lukasz

• 2 domains at CERN

• Unix Heimdal replaced by SSO Windows KDC

• arc, acrontab, batch clients

• Batch: no key-driven tokens

• Changes: TGT, forwarding, …

• August 2010 to May 2011

> Update on computer security WARTEL, Romain

• Attacks increasing, money-driven

• Botnets, rootkits (in kernel memory)

• New rootkits: filesystem, hypervisor, …

• Impact on Grid: large number of hosts, HA, high bandwidth, shares

• Reasons: compromised users, web apps, missing patches

• Help: common security policies, …

> IPv6 @ INFN PRELZ, Francesco

• Reasons: IPsec, mobility and anycast apps, …?

• Security: external protection, …

• SixXS IPv6 cool stuff

• Dual stack: IPv6 available, but adoption moving very slowly

• DHCPv6 looks complicated

> HEP and IPv6 KELSEY, David

• Status: USA DOE until 2012/2014


IPv6

With the shortage of available IPv4 addresses, and mandates for some sites to migrate to IPv6, HEPiX will set up an IPv6 discussion group.

We will likely form an IPv6 working group as well. This will certainly be a focus topic before and during the next meeting.

• Analysis of applications, security, available tools (see the sketch below)

• Setup of distributed testbed

• Timetable for upgrades, implementation, deployment

• Include an effort/resource requirements analysis

HEPiX point of contact is David Kelsey from RAL.
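A first, trivial step in the application analysis listed above is checking which services already resolve and accept connections over IPv6 as well as IPv4. The Python sketch below is only an illustration of such a dual-stack probe; the host and port are placeholders, not systems from the report.

# Minimal sketch of a dual-stack readiness probe: resolve a host for one
# address family and try to open a TCP connection. Hostname and port are
# placeholders.
import socket

def reachable(host, port, family):
    # getaddrinfo lists candidate addresses for the requested family; an
    # empty result or a failed connect means the service is not reachable
    # over that protocol from this client.
    try:
        candidates = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False
    for af, socktype, proto, _canonname, addr in candidates:
        try:
            with socket.socket(af, socktype, proto) as sock:
                sock.settimeout(5)
                sock.connect(addr)
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    host, port = "www.example.org", 80   # placeholder service
    print("IPv4:", reachable(host, port, socket.AF_INET))
    print("IPv6:", reachable(host, port, socket.AF_INET6))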


Session X: BOF Lustre Consortium

Following Oracle's spring announcement that its planned Lustre support will be limited to Solaris, not Linux, and to Oracle-qualified vendor/hardware platforms, non-profit organisations in North America and Europe are forming to address installations outside of Oracle's support model.

Lustre sites are encouraged to join the appropriate consortium and to take part in the upcoming meeting at SuperComputing if attending.

We will set up a mailing list for interested members.

HEPiX points of contact

• GSI – Walter Schoen

• Fermilab – Matt Crawford


Session P: Exchange on Printing

> LPRng, CUPS, …

> MFP, HP, Ricoh, …

> Security, Config, …

> …


Session 11 + 14: Miscellaneous

> Digital Library and Conferencing update SMITH, Tim

• Digital Library

Spires (48 %), arXiv (39 %), Google (7 %), ADS; CDSware became Invenio (in 2006); INSPIRE (CERN, DESY, Fermilab, SLAC)

• Data from Spires, technology from Invenio

• Parallelisation and dispatching of intensive tasks

• Data mining

• Indico (INtegrated DIgital COnference)

New interface: from REST to AJAX, mod_python to mod_wsgi; URL mapping (e.g. shortcuts); room booking

• Map of rooms (Google API); collaboration tools as plugin

• Integration: EVO, H.323, ...; recording / webcast

Chat rooms; 100 known instances in 33 countries; http://indico-software.org; drag and drop

> Update on the CERN Search Engine BELL, Tim

• Unstructured info search, compared to INSPIRE

• Enterprise search

Protected documents / ACLs / roles; find valid docs and also IT service docs, no spam; ranking with little keywording is difficult; manual intervention for essential events; TWiki search not as good as central search; Google better because of cross-site and user data; Google price ~ number of docs; Google search on protected pages did not work; add docs from Indico, EDMS, SharePoint, Drupal, …; migrate to FAST Search in 2010

> Rapid web application design for silicon detector measurements FRIEBEL, Wolfgang

• Rapid design for all languages

• Favourite: Moose (OO with Perl 6)

• SL6 add-on: from EPEL?, tweaked rpms, OpenAFS, Revisor, IceWM, network drivers?, MS paravirtualised drivers, up-to-date add-on patches


Session 3: Benchmarking / Session 5: Data Center and Monitoring

> Measurement of HS06 on Intel Westmere and AMD Magny-Cours processors MICHELOTTO, Michele

• Now available

• 6/12 and 4/8 cores

• Hyperthreading good until Nmax – X1-4

• CERN plans for HT, but no double performance? +30 %?

> ASSETS at LEPP - our FLOSS Inventory and Monitoring server PULVER, James

• OCSNG

Software and machine state (GLPI); monitoring (Zenoss)


Session 12: Datacenter and Monitoring

> BIRD: Batch Infrastructure Resource at DESY FINNERN, Thomas

• Discussion on Oracle and SourceForge (to be continued)

> CERN IT Facility Planning and Procurement BARRING, Olof

• Planning, tech specs, validation of bids

Procurement of servers + storage; hardware expertise; asset management

• Procurement issues: early failures (infant mortality); degraded warranty service; low-margin bidders can go bankrupt; "installation = removal" due to power constraints; "strong competitive tender" problems

> JLab HPC Upgrades - GPU and Lustre Experiences PHILPOTT, Sandy

• CPU vs. GPU comparison: 532 GPUs = 200000 cores, 600 TFlops single precision

• CUDA language (high level, ~C, NVIDIA only)

• OpenCL language (low level, ~ASM)

Lattice QCD is memory-bandwidth intensive; one quad-GPU node outperforms 1 rack of conventional CPUs; GPU cost is 10x lower, as is electrical use!

• Lustre: 300 TB on commodity hardware; problem description; plans; NVIDIA vs. AMD (with OpenCL)

> Lessons learnt from Large LSF scalability tests SCHWICKERATH, Ulrich

• LSF: < 20000 hosts, < 500000 jobs

• 3500 hosts, 25k cores, 80 queues, …

> Quattor Update COLLIER, Ian Peter

• Migration to Quattor started May 2009

• Everything is fine, but it took more effort

• Now saving 1/3 to 1/2 FTE with 700+ WNs and disk servers

• Plans for Quattor extensions/improvements/templates

> CC-IN2P3 Infrastructure Improvements OLIVERO, Philippe

• New computing room

• Electrical redundancy

> CERN Computer Centre Status and Proposed Upgrade SALTER, Wayne

• Partly successor of Tony Cass

• Designed 1970, refurbished 2000

• 2.5 MW available, 3.5 MW needed

• Separate rooms: backup, telecom WAN, CC, …


Session 13: Operating Systems and Applications

> Update on Windows 7 at CERN & Remote Desktop Gateway BELL, Tim

• Windows 7: NICE management framework; started 31st March 2010; default 32-bit, 64-bit on request with suitable hardware; 2 GB memory, 60 GB disk, 2 GHz CPU (4 GB for 64-bit); handling of insufficient hardware, Vista, XP, …

• Remote Desktop Gateway: for home work in case of a pandemic or other reasons; RDP over HTTPS to TSG

> Deployment of Exchange 2010 mail platform GRZYWACZEWSKI, Pawel

• 18000 mailboxes

• 18000 mailing lists

• Microsoft Exchange (old 95 %, new 5 %)

• Renew mailbox hosting infrastructure

• Modernise webmail

• Adapt to ever-growing mail usage

• New mail setup: JBODs, 2 CCs, edge servers, Tivoli, …

• New features in Exchange 2010

• Oct. 2010 pilot

> distcc at CERN KELEMEN, Peter

• Compiling and assembling (see the sketch below)

• Compilation unit, object file, client, server(s)

• Google's next-gen?

• Icecream? Buggy …
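The model behind these bullets: a thin client preprocesses locally and ships compilation units to remote compile servers, which return object files. The Python sketch below merely illustrates how a build step could be farmed out via distcc; the host list and file names are placeholders and not CERN's configuration.

# Minimal sketch (placeholder hosts/files): let distcc distribute the
# compile step while preprocessing and linking stay local.
import os
import subprocess

def distributed_compile(source, obj, hosts="localhost server1/4 server2/4"):
    env = dict(os.environ, DISTCC_HOSTS=hosts)   # where compile jobs may run
    # distcc wraps the real compiler call; only the compilation unit is
    # shipped to the listed servers, the object file comes back locally.
    subprocess.run(["distcc", "gcc", "-c", source, "-o", obj],
                   env=env, check=True)

if __name__ == "__main__":
    distributed_compile("example.c", "example.o")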

> Update on the anti-spam system at CERN GRZYWACZEWSKI, Pawel

• 180 billion spam mails per day worldwide

• 1 million mails per day @ CERN

• MS Forefront Protection 2010

• 94 % rejected, 1 % tagged as spam, 5 % OK

• Rejects (blacklist): 94 % source, 4 % protocol, 2 % content (worked out in the sketch below)
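Scaled to the quoted 1 million mails per day at CERN, the percentages above translate into the following back-of-the-envelope daily volumes (a check added here, not part of the talk):

# Back-of-the-envelope check of the quoted anti-spam percentages at
# 1 million mails per day.
DAILY_MAIL = 1_000_000

rejected = 0.94 * DAILY_MAIL        # blocked outright (blacklist)
tagged_spam = 0.01 * DAILY_MAIL     # delivered but tagged as spam
delivered_ok = 0.05 * DAILY_MAIL    # clean mail

# Breakdown of the rejected share by detection stage
by_source = 0.94 * rejected
by_protocol = 0.04 * rejected
by_content = 0.02 * rejected

print("rejected: %.0f/day (source %.0f, protocol %.0f, content %.0f)"
      % (rejected, by_source, by_protocol, by_content))
print("tagged as spam: %.0f/day, delivered clean: %.0f/day"
      % (tagged_spam, delivered_ok))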

> New Tools Used by the S.L. Team DAWSON, Troy

• Bodhi (or scripts?) <-> Koji DB

• Koji (yum + Mock) <-> rpms -> mash (signs, tags, …) -> repo

• Repo -> Pungi / Revisor -> distribution

• Spacewalk (email + packages) -> errata

• Community

• SL6 pre-alpha

> Scientific Linux Status Report and Plenary Discussion DAWSON, Troy

• "Joke"

• Stats from ftp.scientificlinux.org

• SL 5.5: May 2010

• SL6 preparation

• Spacewalk investigation

• SL4 security/fastbug

• SL5 security/fastbug

• SL 5.6: Spring 2011?

• SL 6.0: Troy says April 2011 (Conny: December 2010)

• Plenary discussion:

SL 3.9 obsolete; "Extra Packages for Enterprise Linux (EPEL)"


Session 16: Board Summary and Meeting Wrap-Up

> Successful

> The meeting contained most of the ongoing HEPiX tracks

> No single dominating track

> Still a strong interest in virtualisation/clouds and storage

> Lustre Consortium (Walter Schön + Matt Crawford)

> IPv6

• USA

• Working group?


Upcoming Meetings

> Spring 2011 planned for:

• GSI, May 2-6, organizer Walter Schoen

• Continue ongoing HEPiX tracks

Focus/update on Lustre and IPv6

• Possible LCG workshop immediately after HEPiX

Who would attend a meeting on Friday afternoon and Saturday?

> Fall 2011: HEPiX 20th anniversary

• Planned for TRIUMF, organizer Steve McDonald

To be confirmed after approval by the TRIUMF directors


Last Slide

> Thank you for listening