RAL Site Report
CASTOR Face-to-Face Meeting, September 2014
Rob Appleyard, Shaun de Witt, Juan Sierra
Contents
• Operations
• 2.1.14 – our experience at RAL
• Future plans for CASTOR at RAL
• Ceph (Shaun)
• DB report (Juan)
RAL
• Load issues (lots of xroot transfers -> wait I/O on disk servers) – crashing
• Draining
• Rebalancing
• DB deadlocks (Juan)
• SL6 – shift to full Quattor configuration
• 2013 disk deployment – 57*110TB nodes
  – Current numbers
• Elasticsearch – scaled up
Operations
2013-14
• Mostly smooth operation
• Deployed 2013 disk buy into production
  – 54*120TB RAID 6 nodes
• Change of leadership
  – Rob now CASTOR service manager, taking over from Matt
  – Matt leading DB team
CASTOR 2.1.14
Issues from the upgrade
• Very long DB upgrade time for ATLAS
  – Scripts not optimised for a large disk-only instance
• ALICE xroot package not available for SL5
• We feel a lot of these are caused by differences in usage patterns between RAL and CERN
• Rebalancing – see later
• Upgrade led us to find lots of crap in our DBs
  – Tables present in inappropriate DBs, bad constraints, etc.
Feedback from production
• New admin commands work well (modifydiskserver etc.)
• Read-only mode very useful
  – Free space on RO disk servers still reported as available for use
• Some lack of documentation
  – E.g. we didn't know about modifydbconfig
  – deletediskcopy as a replacement for cleanlostfiles
    • In fact we have our own home-rolled one which could be submitted as a contrib
    • cleanlostfiles [diskcopyid|diskserver[:filesystem]]
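As a usage illustration of that synopsis (the disk server and mount point are borrowed from the draining example later in these slides; the diskcopy id is made up):

    cleanlostfiles 123456789                                        # single diskcopy (illustrative id)
    cleanlostfiles gdss515.gridpp.rl.ac.uk                          # whole disk server
    cleanlostfiles gdss515.gridpp.rl.ac.uk:/exportstage/castor3/    # single filesystem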
Feedback from production (2)
• We needed a way to spot unmigrated files on D0T1 nodes
  – printmigrationstatus doesn't tell you about failed migrations
  – Home-rolled 'printcanbemigr' script created for this use case
• LHCb wanted an HTTP endpoint – implementing a test WebDAV interface for CASTOR (see the sketch after this slide)
  – Graduate project
  – Interested to hear how xroot is going to do this…
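A minimal smoke test of a WebDAV endpoint of this kind might look like the sketch below; the hostname and path are placeholders (the prototype's real URL is not named in these slides), and the production service would likely want X.509/VOMS proxy authentication:

    # Placeholder endpoint, not the actual RAL prototype URL.
    curl -k --cert "$X509_USER_PROXY" --key "$X509_USER_PROXY" \
         -X PROPFIND -H "Depth: 1" \
         https://webdav-test.example.ac.uk/castor/lhcb/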
SRM
• Stream of 'SRM DB duplicate' problems
  – Easy clean-up
  – But they are disruptive
    • Duplicate users
    • Duplicate files (more common, less problematic)
• Hotfix applied to the SRM DB to deal with clients who put double slashes in their filenames
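For illustration only, a double-slash normalisation of the sort the hotfix performs could look like the following; the table and column names (srm_requests, surl) are invented for this sketch and are not the real SRM schema, and the actual fix was delivered as a DB patch:

    # Hypothetical sqlplus session -- schema/connection names are placeholders.
    sqlplus -s srm_admin@srmdb <<'EOF'
    -- Collapse runs of slashes in the path part, leaving the '://' after the scheme intact.
    UPDATE srm_requests
       SET surl = REGEXP_REPLACE(surl, '([^:/])/{2,}', '\1/')
     WHERE REGEXP_LIKE(surl, '[^:/]//');
    COMMIT;
    EOF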
Xroot
• High load on disk servers tends to produce high wait I/O – 50 concurrent xroot transfers…
Xroot (2)
– Experimentation with transfer counts in diskmanager to optimise the number of allowed transfers for each node
– Currently Shaun is the single xroot expert, but we're trying to fix that
– Xroot manager daemon seems leaky…
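The diskmanager tuning keys themselves aren't reproduced here, but a couple of generic checks can quantify the load picture described above (assuming the default xrootd port 1095; adjust for the instance):

    # Count established xroot connections on a disk server (1095 = default xrootd port).
    netstat -tn | awk '$4 ~ /:1095$/ && $6 == "ESTABLISHED"' | wc -l

    # Watch per-device utilisation and wait I/O, 5-second samples.
    iostat -x 5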
Draining
• Problem with draining svcclasses with > 1 copy
  – 'Patched' thanks to CERN
• Overall better, but consistency of draining still a problem
  – Draining a whole server causes problems (TM crashes, DoS vs user requests)
  – Draining a single filesystem seems better
  – But frequently needs kicking (many remaining files, draining still running, but nothing happening)
  – Also seems to be better on servers with 10Gb networking
Draining example
More on Draining
Every 10.0s: draindiskserver -q                                          Tue Sep 16 16:06:03 2014

DiskServer               MountPoint             Created               TFiles  TSize    RFiles  RSize    Done  Failed  RunTime    Progress  ETC       Status
gdss515.gridpp.rl.ac.uk  /exportstage/castor3/  16-Sep-2014 14:07:25  10498   2558GiB  8542    1983GiB  1956  0       1h58mn38s  22.5 %    6h49mn6s  RUNNING
TOTAL                                           16-Sep-2014 16:06:03  10498   2558GiB  8542    1983GiB  1956  0                  22.5 %
[root@lcgcstg01 ~]# listtransfers -p -x -r d2ddest:d2dsrc
                                            ------------ TOTAL ------------  ----------- D2DDEST -----------  ----------- D2DSRC ------------
DISKPOOL         NBSLOTS  NBUTPEND  NBTPEND  NBSPEND  NBTRUN  NBSRUN  NBTPEND  NBSPEND  NBTRUN  NBSRUN  NBTPEND  NBSPEND  NBTRUN  NBSRUN
atlasStripInput  282000   2150      3266     117560   3046    62180   2287     68610    0       0       979      48950    21      1050
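For reference, the two captures above can be reproduced with something like the following; the 10-second interval matches the 'Every 10.0s:' header, and the listtransfers flags are copied from the prompt above:

    watch -n 10 'draindiskserver -q'
    listtransfers -p -x -r d2ddest:d2dsrc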
Rebalancing
• Too heavyweight
• Causes problems…
  – DoS to users
  – Unexplained TM crashes
  – Too large a queue
• Consider…
  – Do you really need to rebalance Disk0 svcclasses?
  – Move it into a controllable daemon
• We tried to tune it using the 'Sensitivity' parameter
  – All-or-nothing behaviour
• We have this turned off for all instances
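A sketch of tuning this via modifydbconfig (the command is mentioned earlier in these slides); the argument order, the 'Rebalancing'/'Sensitivity' class and key spelling, and the value shown are all assumptions, so check the documentation before copying:

    # Assumed syntax: modifydbconfig <class> <key> <value>.
    # Raising the sensitivity threshold is one way to effectively stop rebalancing
    # kicking in; the value here is illustrative, not what RAL actually set.
    modifydbconfig Rebalancing Sensitivity 100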
Future plans for CASTOR at RAL
SL6
• Bruno working on this
  – We plan to shift all our headnodes over to SL6 this autumn
  – Full Quattorisation of all headnodes (no more Puppet)
    • We've wanted to do this for a long time
    • 2 config management systems is one too many
  – Disk servers to follow
Log Analysis
• Elasticsearch logging system edging toward production
  – Lack of suitable hardware for the search cluster
  – …but the system works well for now on old worker nodes
  – ~5TB of logging information currently stored in Elasticsearch
  – Looking to scale out to other Tier 1 applications
  – Differing log formats cause problems – better than DLF, but xroot and gridftp are still problematic
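As an example of the kind of ad-hoc query this enables (the index pattern, field names and local port are assumptions for this sketch, not RAL's actual mapping):

    # Pull the five most recent xrootd/gridftp error documents (Elasticsearch 1.x style).
    curl -s 'http://localhost:9200/castor-*/_search?pretty' -d '{
      "query": { "query_string": { "query": "(daemon:xrootd OR daemon:gridftp) AND ERROR" } },
      "sort": [ { "@timestamp": { "order": "desc" } } ],
      "size": 5
    }'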
2015 and beyond…
• Can we get 2.1.15 before LHC startup?
• Ceph – the future of RAL disk-only?
  – Test instance under development – Bruno working on this
  – Shaun will now be telling you more…