7/24/2019 Connecteddata Center Deployment 82
1/180
Connected Backup
Data Center Deployment
Version 8.2
Document information
Connected Backup Data Center Deployment
Connected Backup Version 8.2
Printed: May 7, 2007
Printed in USA
Iron Mountain Support Information
800.675.5971
U.S. +1 508 808 7629
E.U. +49 6102 8828855
Copyright
Copyright 2007 Iron Mountain Incorporated. All rights reserved.
Trademarks
Iron Mountain, the design of the mountain, Connected, Connected DataProtector, Connected EmailOptimizer, DataBundler,
MyRoam, Delta Block, and SendOnce are trademarks or registered trademarks of Iron Mountain Incorporated. All other
trademarks and registered trademarks are the property of their respective owners.
Confidentiality
CONFIDENTIAL AND PROPRIETARY INFORMATION OF IRON MOUNTAIN. The information set forth herein represents
the confidential and proprietary information of Iron Mountain. Such information shall only be used for the express purpose authorized
by Iron Mountain and shall not be published, communicated, disclosed or divulged to any person, firm, corporation or legal entity,
directly or indirectly, or to any third person without the prior written consent of Iron Mountain.
Disclaimer
While Iron Mountain has made every effort to ensure the accuracy and completeness of this document, it assumes no responsibility
for the consequences to users of any errors that may be contained herein. The information in this document is subject to change without
notice and should not be considered a commitment by Iron Mountain Incorporated. Some software products marketed by Iron
Mountain Incorporated and its distributors contain proprietary software components of other software vendors.
Iron Mountain Incorporated
745 Atlantic Avenue
Boston, MA 02111
1-800-899-IRON
www.ironmountain.com
Iron Mountain Incorporated ConnectedBackup Data Center Deployment 3
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
About this manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Related documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Related manuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Other Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Typographical conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Graphical conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Contact Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Technical support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Iron Mountain Website . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
Part I: About the Data Center
Chapter 1: Data Center Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Services overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
BackupServer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
IndexServer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
ReplicationServer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
PoolServer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
HSMServer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
Compactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
DCAlerter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 2: Hierarchical Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
About HSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Migration and purge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
Tape Groups and Tape Account Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Tape Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Multiple tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
Permanent expansion library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
Chapter 3: Compactor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
Compactor and Data Center configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
How Compactor operates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
File expiration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Part II: Data Center Installation
Chapter 4: Sizing Your Data Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
Sizing overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
Sizing estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
Network bandwidth requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
TABLE OF CONTENTS
Chapter 5: Preparing for Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49
Preinstallation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Evaluating configuration and license options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Data Center server requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
Storage solutions requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
Network requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
Security requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Installing and configuring Microsoft software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
Support Center and Account Management Website preparation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
Enabling the MyRoam feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71
Chapter 6: Installing the Data Center Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73
Installing the software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .74
Verifying successful installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Troubleshooting Data Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .79
Installing Support Center and Account Management Website . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81
Configuration tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82
Chapter 7: Integrating the Data Center with Enterprise Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83
Enterprise directory overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .84
Integration process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Preparing for integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Configuring your firewall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Enabling Support Center access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Map data fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Verifying successful enterprise directory integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Part III: Management
Chapter 8: Managing the Data Center with DCMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
DCMC overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
DCMC user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Chapter 9: Installing Management Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105
Tools overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Installing the DataBundler Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Installing the Data Center Toolkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
Chapter 10: Event Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
Event Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
Event Log Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
Trace Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Part IV: Maintenance
Chapter 11: Introduction to Data Center Maintenance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
Maintenance task overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
Chapter 12: Daily Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
General morning tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120
General tasks for the morning and afternoon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
Tasks for Data Centers using tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Chapter 13: Weekly Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .127
Verify results of the Weekly Automatic Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128
Backup tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130
Check for available disk space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .131
General tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132
Tasks for Data Centers using tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
Chapter 14: Monthly Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
Database maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .138
Account management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
Evaluate current Data Center capacity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
Verify firmware is current . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146
Check software licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
The Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150
Clean library and tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152
Record Keeping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
Part V: Appendices
Appendix A: Data Center Specification Worksheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Software versions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .158
Data Center server information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160
Appendix B: Data Center Installation Worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Installation worksheets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .162
Appendix C: Maintenance Checklists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Performing daily maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Performing Weekly Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .168
Performing Monthly Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .170
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
Section Description
Chapter 12: Daily Maintenance Contains procedures that describe how to perform the daily maintenance tasks.
Chapter 13: Weekly Maintenance Contains procedures that describe how to perform the weekly maintenance tasks.
Chapter 14: Monthly Maintenance Contains procedures that describe how to perform the monthly maintenance tasks.
Appendix A: Data Center Specification Worksheet Contains a worksheet where you can record important information about your Data Center configuration.
Appendix B: Data Center Installation Worksheets Contains a worksheet that you can use to record information that you need when installing the Data Center software.
Appendix C: Maintenance Checklists Contains worksheets that you can use to track the maintenance procedures.
Related documentation
Related manuals
The following manuals provide additional information about the Connected Backup product:
Other Resources
The following resources provide additional information about the Connected Backup family of products:
Online help All applications provide procedural and conceptual information in an online help system. Click the Help
link, Help button, or ? button to open the online Help.
The Resource Center The Resource Center is a knowledge base of information that all Connected Backup customers
can access. It includes procedures and information not contained in the product manuals or online help systems. The
Resource Center is located at https://resourcecenter.connected.com/
Manual Description
ConnectedBackup/PC Product Overview This manual provides an overview of the features in the
Connected Backup product.
ConnectedBackup Upgrading from Pre-8.0 Versions This manual describes how to upgrade a legacy Data Center and
Agent to version 8.2. The manual also describes new and changed
features in version 8.2.
ConnectedBackup Agent Deployment This manual describes how to download, install, and configure
Agents.
Connected Backup/PC Agent Quick Start This short document provides users with a quick reference for
backup and file retrieval procedures.
Connected Backup/PC Account Management Website Development This manual describes how to customize the Account
Management Website.
Conventions
Typographical conventions
This manual uses the following typographical conventions:
Graphical conventions
This manual uses the following graphical conventions:
Convention Description
Bold text Indicates one of the following:
A control in an application's user interface
A registry key
Important information
Italic text Indicates one of the following:
The title of a manual or publication
New terminology
A variable for which you supply a value
Monospaced text Indicates one of the following:
file name
folder name
code examples
system messages
Monospaced bold text Indicates system commands that you enter.
Convention Description
[Note icon] Indicates additional information that may be of interest to the reader.
[Caution icon] Indicates cautions that, if ignored, can result in damage to software or hardware.
Contact Information
Technical support
Use the following to contact technical support:
Email: [email protected]
Telephone: 800.675.5971
U.S. +1 508.808.7629
E.U. +49 6102 8828855
Iron Mountain Website
Access the Iron Mountain Incorporated Website at the following URL:
www.ironmountain.com
PART I: ABOUT THE DATA CENTER
Chapter 1: Data Center Services
Chapter 2: Hierarchical Storage Manager
Chapter 3: Compactor
1
DATA CENTER SERVICES
About this chapter
This chapter describes the services that the Data Center uses, in the following topics:
To learn about... Refer to:
The services that the Data Center uses Services overview, on page 16
What BackupServer does BackupServer, on page 17
What IndexServer does IndexServer, on page 18
What ReplicationServer does ReplicationServer, on page 19
What PoolServer does PoolServer, on page 20
What HSMServer does HSMServer, on page 21
What Compactor does Compactor, on page 22
What DCAlerter does DCAlerter, on page 23
Services overview
Types of services
The Data Center uses the following services to perform the necessary tasks of running the Data Center server:
BackupServer: used for data backup and retrieval.
IndexServer: used to index file and archive set information to databases.
ReplicationServer: used for replication between servers in a mirrored pair configuration.
PoolServer: used to maintain the shared pool that the SendOnce technology uses.
HSMServer: used to copy archive sets to archive storage devices and purge migrated sets from disk when needed.
Compactor: used to remove old data from the Data Center.
DCAlerter: used for Data Center event notification.
IndexServer
IndexServer is the Data Center service that indexes file and archive set information to database tables.
How IndexServer works
As users or servers back up archive sets to the Data Center server, information about each file within the archive set must be
stored in the Directory database. The IndexServer writes this information to the Directory database once the archive set is
fully written to the Data Center server from the Agent. When the indexing process is finished, the archive set is queued for
replication to the mirrored server, if a mirrored configuration is used.
If the Data Center is mirrored or clustered, the IndexServer writes information to the database for all archive sets that have
been replicated from the mirrored server.
Management
IndexServer starts automatically with Windows Server. Status and statistics for IndexServer are found in DCMC. To view
the service in the DCMC, expand the Data Center server name and click IndexServer.
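The indexing flow described above can be sketched as follows. This is a minimal illustration only: the table names, columns, and function are assumptions for the sketch, not the actual Connected Backup schema.

```python
import sqlite3

def index_archive_set(db, set_id, files, mirrored=True):
    """Record each file of a fully received archive set in a toy
    'Directory database', then queue the set for replication if the
    Data Center runs as a mirrored pair (names are illustrative)."""
    db.execute("CREATE TABLE IF NOT EXISTS directory (set_id, file_name, size)")
    db.execute("CREATE TABLE IF NOT EXISTS replication_queue (set_id)")
    # Indexing happens only after the whole set has arrived from the Agent.
    db.executemany(
        "INSERT INTO directory VALUES (?, ?, ?)",
        [(set_id, name, size) for name, size in files],
    )
    if mirrored:
        # Queued for the mirror only once indexing is finished.
        db.execute("INSERT INTO replication_queue VALUES (?)", (set_id,))
    db.commit()

db = sqlite3.connect(":memory:")
index_archive_set(db, 42, [("report.doc", 12_288), ("notes.txt", 900)])
print(db.execute("SELECT COUNT(*) FROM directory").fetchone()[0])  # 2
```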
ReplicationServer
How ReplicationServer works
The ReplicationServer service replicates the following content between the servers in a mirrored pair:
Archive sets
Database table rows
Agent configurations
The ReplicationServer service only runs on mirrored and clustered configurations.
After an archive set is backed up by the Agent to the Data Center server and indexed to the database, it is put into a queue to
be replicated to the mirror. The archive set is replicated as a whole to the mirror rather than bit by bit as it is backed up by
the Agent.
Most, but not all, of the database table rows in the schema are replicated between the servers in a mirrored pair. When a row
is either inserted, deleted, or modified, it is queued for replication between the mirrored servers.
When you use Support Center to create files to be downloaded to Agents, the files created must be replicated between the
mirrored servers. ReplicationServer queues both the Agent configuration files and the corresponding database table rows for
replication to the mirror. For file downloads to Agents to succeed, the files and their database rows must exist on both
servers: an Agent can connect to either Data Center server, depending on which one it is configured to contact first.
Therefore, Agent configuration files must be available on all servers in the Data Center.
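The queueing behavior described above can be modeled with a small sketch. The class and item shapes are invented for illustration; the real ReplicationServer internals are not documented here.

```python
from collections import deque

class ReplicationQueue:
    """Toy model of what gets queued for the mirror server: whole archive
    sets, individual database row changes, and Agent configuration files."""
    def __init__(self):
        self.pending = deque()

    def queue_archive_set(self, set_id):
        # Archive sets replicate as a whole after indexing, not bit by bit.
        self.pending.append(("archive_set", set_id))

    def queue_row_change(self, table, op, row_id):
        # Fired when a replicated table row is inserted, deleted, or modified.
        self.pending.append(("row", table, op, row_id))

    def queue_agent_config(self, config_file):
        # Both the file and its database rows must reach every server,
        # because an Agent may connect to either member of the pair.
        self.pending.append(("config", config_file))
        self.queue_row_change("agent_configs", "insert", config_file)

q = ReplicationQueue()
q.queue_archive_set(42)
q.queue_agent_config("profile_100.ini")
print(len(q.pending))  # 3
```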
Management
ReplicationServer starts automatically with Windows Server. Archive sets and database entries are replicated continuously
when ReplicationServer is running. If it becomes necessary to pause or stop replication, you can pause or stop the service in
the DCMC. You can view the status and progress of the replication service in the DCMC by expanding the Data Center server
name and clicking ReplicationServer.
HSMServer
How HSMServer works
HSMServer is the Data Center service that processes the copying of archive sets between the local server's disk and the
archive storage device. The HSMServer contains the following components:
HSMClient
BackupHSM
HSMPurge
HSMClient is invoked by BackupServer to pass archive set copy requests to the BackupHSM service. The HSMClient
monitors the processing of the requests and mediates between BackupServer (the Windows service) and BackupHSM.
BackupHSM handles the operations for archive storage devices. HSMServer supports tape libraries and EMC Centera
archive storage devices.
It is not recommended that you pause the BackupHSM service. When BackupHSM is paused you cannot cancel requests or
view the status in DCMC. You can unmount a tape manually from a tape library while BackupHSM is paused.
The library audits its contents and then BackupHSM audits the library. If it is necessary to stop HSM activities, stopping
BackupHSM alerts the service to complete the current request and then stop.
HSMPurge migrates (copies) archive sets from disk to the archive storage device and, when necessary, purges
(deletes) archive sets from disk to free disk space.
When the end user wants to retrieve files, BackupServer sends a request to HSMClient to retrieve the appropriate archive
sets. BackupHSM copies the archive sets from the archive storage device back onto the server's disk, where BackupServer
can process them.
Additional information
Refer to Chapter 2: Hierarchical Storage Manager, beginning on page 25, for more information about HSM.
Compactor
The Compactor service removes old data from the Data Center. Compactor checks for synchronization between mirrored
servers, applies expiration rules to backed-up data, and deletes data that has expired. The goal of Compactor is to speed
up the Retrieve process and to reduce the amount of data stored long term on the Data Center.
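The logic just described (check mirror synchronization, then expire old data) might be sketched like this. The retention rule, field names, and synchronization check are hypothetical simplifications; the actual expiration rules are covered in Chapter 3.

```python
from datetime import date, timedelta

def compact(archive_sets, retention_days, today, mirrors_in_sync):
    """Sketch of one Compactor pass: refuse to expire anything while the
    mirrored servers disagree, then keep only sets newer than an assumed
    age-based retention window."""
    if not mirrors_in_sync:
        return archive_sets  # never delete data while servers are out of sync
    cutoff = today - timedelta(days=retention_days)
    return [s for s in archive_sets if s["backed_up"] >= cutoff]

sets = [
    {"id": 1, "backed_up": date(2007, 1, 2)},   # older than 90 days: expired
    {"id": 2, "backed_up": date(2007, 4, 30)},  # inside the window: kept
]
kept = compact(sets, retention_days=90, today=date(2007, 5, 7), mirrors_in_sync=True)
print([s["id"] for s in kept])  # [2]
```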
Additional information
Refer to Chapter 3: Compactor, beginning on page 35, for additional details about the Compactor process.
DCAlerter
DCAlerter notifies designated individuals when specific events occur on the Data Center. DCAlerter monitors the Data
Center event logs for specific event IDs configured for notification. When an event ID is logged that has been configured for
notification, DCAlerter sends an email message to the designated individuals.
Refer to Chapter 10: Event Logging, beginning on page 111, for additional information about Data Center event logs.
Configuring email for DCAlerter
You can specify your SMTP mail host and an administrator email address for DCAlerter during Data Center Setup. If the
SMTP mail host information is not entered during Data Center Setup, the feature is not activated. Data Center Setup installs
a default set of events for notifications. You can modify the installed settings using DCMC.
Refer to DCMC Help for a procedure to modify the installed settings.
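The notification flow described above can be sketched as follows. The event IDs, helper name, and addresses here are hypothetical; the real watched events are configured through DCMC.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical watched event IDs; the real set is configured in DCMC.
WATCHED_EVENT_IDS = {1001, 1002}

def alert_on_event(event_id, description, mail_host, admin_addr):
    """Send mail through the configured SMTP host when a watched
    event ID appears in the Data Center event log."""
    if event_id not in WATCHED_EVENT_IDS:
        return False  # not configured for notification; do nothing
    msg = EmailMessage()
    msg["Subject"] = "Data Center event %d" % event_id
    msg["From"] = "dcalerter@example.com"
    msg["To"] = admin_addr
    msg.set_content(description)
    with smtplib.SMTP(mail_host) as smtp:
        smtp.send_message(msg)
    return True
```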
2
HIERARCHICAL STORAGE MANAGER
About this chapter
This chapter contains the following topics:
To learn about... Refer to:
How migration and purge works Migration and purge, on page 27
Tape Groups and Tape Account Groups Tape Groups and Tape Account Groups, on page 28
Tape sets Tape Sets, on page 29
Multiple tape libraries Multiple tape libraries, on page 32
Permanent expansion libraries Permanent expansion library, on page 33
About HSM
Description
Over time, the Agents on many computers perform many backups, and the number of archive sets on the Data Center server's disk grows. When free space on the disk drops below a preconfigured threshold, BackupServer requests Hierarchical Storage Manager (HSM) to migrate archive sets from disk to the archive storage device, if one is installed. If no archive storage device is installed, archive sets are kept only on the Data Center servers' disks.
The Compactor service, discussed in Chapter 3: Compactor, beginning on page 35, removes old data and recycles disk space
as needed.
Archive storage devices
The Connected Backup application supports the following types of archive storage devices:
Tape libraries (SCSI and DAS).
EMC Centera.
Visit the Resource Center for an updated list of hardware solutions that are currently supported.
Migration and purge
Migration
If your Data Center is configured with HSM, the HSMPurge service migrates archive sets from disk to an archive storage
device when free disk space is reduced to a preset threshold. Upon reaching another free disk space threshold, the migrated archive sets are purged from disk, freeing disk space for newer backups. If your Data Center is configured to use multiple
disk volumes, the migration and purge processes begin or end when thresholds are reached across all volumes. You can see
the process graphically through the Data Center Management Console (DCMC).
If the Data Center contains unmigrated archive sets and the free space drops below a specified percentage of disk space,
HSMPurge begins migrating the archive sets, while keeping the original archive sets on disk.
Purge
As archive sets are continually backed up to the server and occupy more disk space, free disk space continues to drop. When
free disk space drops to a second specified percentage, HSMPurge starts purging migrated archive sets from disk. The
purging continues until free disk space grows to a third specified percentage. You can specify the disk space percentages for
the migration and purge processes in the DCMC.
Archive sets are not immediately purged from disk after migration to the archive storage device. This keeps as many archive sets as possible available on disk for file retrieval requests.
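The interplay of the thresholds above can be sketched as a simple state check. The percentage values here are placeholders, not product defaults; the real percentages are configured in DCMC.

```python
def hsm_action(free_pct, migrate_below=30.0, purge_below=15.0):
    """Decide which HSM activity applies, given free disk space as a
    percentage of the volume. The threshold values are illustrative."""
    if free_pct <= purge_below:
        # Second threshold reached: purge already-migrated archive sets
        # until free space grows back to the third (higher) threshold.
        return "purge"
    if free_pct <= migrate_below:
        # First threshold reached: copy unmigrated archive sets to the
        # archive storage device, keeping the originals on disk.
        return "migrate"
    return "idle"
```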
Tape Groups and Tape Account Groups
Tape Groups provide a method of keeping data from different communities on separate tapes. A community is the basic
organizational unit for accounts on the Data Center server. You might find Tape Groups useful if you have a community
whose data you want to keep on separate tapes in the tape library.
Tape Group 0
Tape Group 0 (zero) is the default Tape Group created by Data Center Setup. The default community is assigned to Tape
Group 0. Unless specified in Support Center, all new communities are also assigned to Tape Group 0.
Tape Account Groups
Tape Account Groups provide a way for HSM to group accounts together for assignment to tape. Tape Account Groups are
groupings of accounts within a Tape Group. The purpose of Tape Account Groups is to fully utilize tape space. Tape Account
Groups have a predetermined maximum number of accounts and quantity of data. HSM creates a new Tape Account Group when the current Tape Account Group's limits are reached.
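A minimal model of that grouping behavior follows. The limit values are hypothetical; the real maximums are internal to HSM.

```python
def assign_to_group(groups, account_id, account_bytes,
                    max_accounts=500, max_bytes=10**11):
    """Add an account to the first Tape Account Group with room; create
    a new group when every existing group has reached its account-count
    or data-volume limit. Limit values here are illustrative only."""
    for group in groups:
        if (len(group["accounts"]) < max_accounts
                and group["bytes"] + account_bytes <= max_bytes):
            group["accounts"].append(account_id)
            group["bytes"] += account_bytes
            return group
    group = {"accounts": [account_id], "bytes": account_bytes}
    groups.append(group)
    return group
```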
Tape Sets
Overview
Whether you are running a standalone or a mirrored Data Center configuration, there is a risk of losing backed-up data due to various failures, such as:
Disk failure on a standalone Data Center or on one of the servers of a mirrored pair.
Loss of a tape cartridge.
Total system loss due to fire or similar disaster.
The amount of risk decreases in a mirrored server environment, where all backed-up data is stored redundantly on two
identical Data Centers, so that if one Data Center of a mirrored pair experiences technical problems, data is still available on
its mirror.
Unlike a mirrored pair, a standalone Data Center only stores a single copy of data on disk or archive storage device (if
applicable). In the event of hardware or software malfunction, service outage, a fire, or similar disaster, backed-up data, both
on disk and on the archive storage device, will be completely lost if no extra protective measures have been taken. To take
such protective measures, you can configure your Data Center to use one or more additional tape sets (refer to Chapter 6:
Installing the Data Center Software, beginning on page 73, for installation information). During the migration process, HSM copies data from disk to tapes that belong to the tape sets. These tape sets are referred to as the Primary and Secondary Tape Sets.
The Primary and Secondary Tape Sets serve different functions within the Data Center. Therefore, the methods by which you
create them differ as well.
Use DCMC to configure your Data Center to use Tape Sets. Refer to the DCMC Help for configuration procedures.
Primary Tape Set
There is only one Primary Tape Set in the tape library. Tapes that belong to the Primary Tape Set remain permanently in the
library to ensure prompt recovery of archive sets at the end user's request. The main purpose of the Primary Tape Set is to optimize the recovery process for end users if they must retrieve some or all of their data. To maximize the speed and efficiency of file retrieval, data for each individual account is kept together in a Tape Account Group (refer to Tape Groups and Tape Account Groups, on page 28, for more information).
To enable maximum amounts of data to accumulate on disk before each migration, data is migrated to tape infrequently.
When the Data Center disk space usage parameters have been reached, HSMPurge migrates data to the Primary Tape Set
with the goal of consolidating data for each account. For an account to be assigned to a particular tape, the amount of data
that is already on that tape must be under a specific threshold. Imposing a data threshold provides space for future migrations
for accounts that have already been assigned to the tape. Therefore, when an end user initiates a retrieve, the requested data
is quickly located on the tape to which an account is assigned and copied back to disk. Data is migrated from the Data Center
disk to the Primary Tape Set as needed, based on DCMC settings. If the archive disk is properly sized, migration should occur
once a week.
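The assignment rule above can be sketched as follows. The fill threshold is an assumed figure; the real value is internal to HSM.

```python
def pick_tape(tapes, assigned_label=None, fill_threshold=0.8):
    """Primary Tape Set assignment sketch: an account already assigned
    to a tape stays there; otherwise pick the first tape filled below
    the threshold, leaving headroom for the account's future
    migrations. The 80% threshold is illustrative."""
    if assigned_label is not None:
        return assigned_label
    for tape in tapes:
        if tape["used"] / tape["capacity"] < fill_threshold:
            return tape["label"]
    return None  # no eligible tape; HSM would need a fresh one
```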
Secondary Tape Sets
The Data Center software offers a feature that provides redundant protection of backed up data in both a standalone server
and mirrored environment. This includes creating additional copies of archive sets, referred to as Secondary Tape Sets, and
taking them off-site as needed.
The purpose of Secondary Tape Sets is to create and maintain a valid copy of all backed-up data in restorable form so that,
if a major data loss occurs at the Data Center, archive sets are still recoverable using disaster recovery tools and procedures. Therefore, instead of consolidating data for each account on a particular tape, HSM migrates archive sets to the Secondary
Tape Set tapes as quickly as possible. This feature is available to HSM configurations only.
You can use Secondary Tape Sets without a Primary Tape Set. In this situation, the Data Center functions primarily as a disk-only configuration. A scheduled job runs to request HSM to create the Secondary Tape Sets. For this type of configuration, the tape library needs enough slots to hold blank tapes to accommodate one or two days' worth of archives only.
Types of Secondary Tape Sets
There are two kinds of Secondary Tape Sets:
The SendOnce account tape set stores a backup of the SendOnce account (you can create only one copy of a SendOnce
account tape set). This tape set usually remains on-site and is especially helpful in a standalone Data Center configuration, enabling fast recovery of backed-up data lost due to a bad tape or a disk failure. When the SendOnce
account tape set tape becomes full, you can remove it from the library and store it on the shelf at the same location.
Off-site Secondary Tape Sets contain a complete copy of archive sets (with the exception of the SendOnce account) and
are intended for off-site storage. Depending on your organization's needs, you can configure the system to create one or
more off-site Secondary Tape Sets. For maximum data protection, tapes in these tape sets are filled and removed from
the library as often as possible. After the tapes are removed from the library, they must be stored in a safe location,
preferably in a different building. Therefore, in the event of a full-system crash, the most recent user data would still be
available on the off-site Secondary Tape Set tapes.
Deciding to use Secondary Tape Sets
To decide whether or not to use Secondary Tape Sets, you should consider the following:
The amount of risk involved in your Data Center operations.
If you are running a standalone Data Center, the risk of losing some or all of your backed up data is much higher than
in a mirrored environment. If you run a mirrored Data Center, data is still at risk if one of the mirrors is completely
destroyed.
The advantages and disadvantages of this setup and how it can affect your Data Center operations.
The primary advantage of having Secondary Tape Sets is the high degree of protection they provide against loss or
damage of backup data. It is particularly valuable in a standalone server environment, where the risk of losing data due
to a disk or tape failure is especially high. In the event of an entire system crash, the off-site tapes from the Secondary
Tape Set remain the only source of end-user data, which would otherwise be lost forever.
Although a mirrored server configuration provides an extra degree of data protection against all possible failures by storing
data redundantly at the two identical Data Centers, Secondary Tape Sets are still very helpful in the following situations:
You must quickly restore archive sets that are lost or damaged due to a tape failure.
One of the servers in a mirrored pair is completely destroyed, and you must quickly move backed-up data to a new
mirror.
The primary disadvantage of using Secondary Tape Sets is the increased cost of media (you must provide additional tapes to maintain this setup) and operational maintenance. Your decision is therefore a trade-off of cost against the level of risk you are ready to accept.
Taking Secondary Tape Set tapes off site
To minimize the vulnerability of data in case of disk failure, fire, or other disaster, two schedules have been defined for the
Secondary Tape Sets: the migration schedule and the extraction schedule.
If you host your own Data Center and would like details about removing Secondary Tape Set tapes from a tape library using
the DCMC, refer to Remove Secondary Tapes, on page 125.
Migration schedule
Frequency of data migration to the Secondary Tape Set is determined by the migration schedule. In a single server
environment, the risk of losing data due to disk failure is much higher than in a mirrored server configuration. To reduce this
risk, data must be migrated to the Secondary Tape Set as frequently as possible. Instead of being demand-driven, migration
is scheduled to run daily or several times per day using the daily automatic procedure. The greater the frequency of migration,
the less the data loss if the disk were to fail. Migration to the Secondary Tape Set can also be performed with the DCMC.
To ensure data safety in case of fire or other disaster that might result in loss of the entire Data Center, Secondary Tape Set
tapes must be removed from the library and taken off-site as often as possible. The extraction schedule defines how often the
Secondary Tape Set tapes are removed from the library.
Extraction schedule
The frequency of tape extraction is determined by the following factors:
The amount of data that the Data Center receives daily (if the Data Center has a large user community, tape removal
should be performed more frequently)
The number of blank tapes that the user provides to support the Secondary Tape Set configuration
You can set the extraction interval to less than, equal to, or greater than a day. You should remove Secondary Tape Set tapes
from the library every other day or as soon as the tape gets full (waiting until the tape gets full reduces the cost of media, but
increases the risk of losing backed-up data due to complete disk loss). After the tapes are removed, they should be stored in
a safe location, preferably in a different building. Then, in the event of a full system crash, the most recent data can still be
retrieved from the off-site Secondary Tape Sets.
Multiple tape libraries
Using multiple tape libraries
The Data Center can run with two tape libraries attached to each server. You might use multiple tape libraries for any of the following situations:
You have an existing tape library and would like to replace it by transitioning to a new tape library (for example, if you
are replacing an older tape library with one that uses newer technology).
You want to keep your existing tape library, but you must use an additional reliever library temporarily until you can
free up tape space on the original library.
You want to permanently use multiple tape libraries to expand your total available tape capacity.
Each of these situations poses its own unique considerations and procedures. For information on installing two tape libraries
on your Data Center or adding a second tape library, visit the Resource Center.
Transition to a new tape library
If you want to replace your original tape library with a new one, you must make the transition over a period of time during
which you copy the data from the old library to the new library. A likely example of this situation is if you are replacing an
older tape library with one that uses newer technology.
When you replace a library, your goal is to stop using the old tape library, start using the new library, and copy the data from
the tapes in the old library to the tapes in the new library. Visit the Resource Center for a procedure to transition to a new
library.
Temporary reliever library
There might be times when you must use an additional tape library for temporary extended storage until Compactor is able
to free sufficient space in your original library. Your original library would remain your permanent library, and the additional
temporary library would remain in use only for as long as needed.
In this situation, you would simply connect the additional library and let the Compactor service run until it has freed up
enough tape space to warrant removing the additional library.
CAUTION
The previous process requires you to transfer tapes back and forth between tape libraries.
Therefore, the two libraries must be of compatible tape and barcode technologies.
Permanent expansion library
Using a permanent expansion library
If you are using multiple tape libraries because you want to permanently expand your available tape capacity, then you must
plan to keep the multiple libraries in use for an indefinite amount of time. Unlike the previous situations, your goal in this situation is not to work toward using only one library again. Instead, your goal is to continually use the multiple libraries as
efficiently as possible. Doing so means balancing tape utilization among all libraries in use.
To balance tape utilization, you should understand the following concepts:
How tape utilization works in HSM
How to balance tape utilization across multiple libraries
How to work with libraries of different technologies
Understanding tape utilization in HSM
When HSM migrates data to tape, it accesses the tapes in the alphabetical and numerical order of their labels. Regardless of
where or when the tapes are inserted, HSM looks for the next tape labeled alphabetically (or numerically) when the previous
tape is full.
For example, assume you have multiple tape libraries with 100 tapes that are labeled ABK001, ABK002,..., ABK100 (you
could have inserted these tapes at any time, in any order, or in any library). When ABK001 is full, HSM then migrates data
to ABK002. When ABK002 is full, HSM migrates data to ABK003, and so forth. It does not matter which library the tapes
are in.
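In code, the selection rule above is simply an ordering over labels, independent of which library holds each tape:

```python
def next_tape(labels, current):
    """HSM-style tape selection: when the current tape fills, pick the
    next label in alphanumeric order, wherever that tape physically
    sits. Returns None when no later tape is available."""
    later = sorted(label for label in labels if label > current)
    return later[0] if later else None
```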
If you have more than one Tape Group, you can split the tapes for the Tape Group between the two libraries. This is not a
concern if the libraries and tapes are of the same technology. The same holds true for Tape Account Groups. It is not a concern
if a Tape Account Group is split across two libraries. For more information on Tape Groups and Tape Account Groups, refer to Tape Groups and Tape Account Groups, on page 28.
How HSM determines tape capacity
The driver installed for the tape drive determines a tape's capacity at the time the tape is loaded. HSM requests this information from the drive shortly after each tape is loaded and then stores this information in an internal database.
Balancing tape utilization
To balance the workload across tape libraries, you should insert the tapes into the tape libraries so that their labels span the
libraries evenly.
For example, assume you have two libraries, each with a 50-tape capacity (a total of 100 tapes). Assume the barcode labels
that you attached to the tapes are ABK001, ABK002,..., ABK100. When you insert the tapes into the two libraries, you should
insert ABK001 into the first library, ABK002 into the second library, ABK003 into the first library, ABK004 into the second
library, and so forth. Then, when one tape is full and HSM accesses the next tape, it alternates between each tape library.
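The round-robin placement in the example reduces to a few lines:

```python
def interleave(labels, n_libraries=2):
    """Assign sorted tape labels to libraries round-robin, so that
    consecutive labels alternate between libraries as described above.
    Returns a mapping from label to library index (0-based)."""
    return {label: i % n_libraries
            for i, label in enumerate(sorted(labels))}
```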
Working with libraries of different technologies
Balancing tape utilization is easy if you use libraries that are of compatible tape and barcode technologies because you can
simply move tapes between the libraries to get the order that yields optimum load balancing. However, this process is not as
easy if you use libraries of different tape and barcode technologies because you cannot simply move tapes between such
libraries.
If you use libraries of different tape and barcode technologies, you must prepare in advance of setting up the new tapes. When
you order barcode labels for new tapes, order labels in the same series as your other libraries. For example, if one
library uses ABK001-ABK200, order labels with ABK001-ABK200 for the additional library. That way you can attach the
barcodes, alternating numbers for each library. For example, use the ABK001 label for the first library, the ABK002 label
for the second library, the ABK003 label for the first library, the ABK004 label for the second library, and so forth. Then
HSM alternates libraries when migrating data to a new tape.
3
COMPACTOR
About this chapter
This chapter describes the Compactor service. It contains the following topics:
To learn about... Refer to:
How different configurations use Compactor Compactor and Data Center configurations, on page 36
How Compactor operates How Compactor operates, on page 37
How file expiration affects Compactor File expiration, on page 40
Compactor and Data Center configurations
The Compactor service
As a Data Center service, Compactor runs automatically and continuously based on Data Center activity. Compactor has several purposes:
Reduce the overall storage requirement for the Data Center.
Improve Agent file retrieval performance.
Limit the number of tapes needed for account recovery.
Free tape and disk space by removing expired data.
Reduce the size of the databases.
Improve data integrity.
Compactor configurations
Compactor runs on all Data Center configurations but runs differently on a mirrored configuration than it does on a
standalone Data Center server. It also works differently with HSM as opposed to a disk-only configuration.
Compactor in mirrored Data Centers
For mirrored Data Centers, the Compactor service runs on both servers but only one of the servers in the pair controls the
workload of the compaction process. This server is referred to as the primary server. If you are running a clustered Data
Center, there is one primary server for every mirrored pair in the cluster. For example, a clustered Data Center with three
mirrored pairs has three primary servers. You can check the status of the primary server(s) in the Compactor view of DCMC.
Administration of Compactor
For assistance in administering Compactor, use DCMC to:
Start, stop, or pause the Compactor service
Specify startup parameters
Monitor Compactor progress for the current session
View recyclable tapes for reuse or removal from the library
Monitor disk space
Monitor Compactor progress for the past 90 days
You can access DCMC Help from DCMC for more information on these topics.
How Compactor operates
Compactor Tasks
The Compactor service removes older, unnecessary data from the Data Center. To accomplish this task, it does the following:
1. Checks for necessary disk space (HSM configurations only).
2. Selects accounts or a Tape Account Group.
3. Performs a system analysis and repair.
4. Marks files as expired.
5. Repackages archive sets.
6. Deletes expired archive sets and database entries.
7. Migrates new archive sets to tape.
8. Informs the Agent of changes.
Check disk space (HSM configurations only)
Before Compactor begins processing accounts, it checks for necessary disk space on all servers where HSM is installed. It
compares the DiskCache value in the Windows registry to the sum of free disk space on the archive partitions and the amount
of space taken up by customer archive sets. If there is available space, the compaction process proceeds. If there is not enough
available space, Compactor writes an error message to the Application log and then stops. A certain amount of disk space is
necessary because all archive sets for an account must be on disk for Compactor to process the account. Compactor also
checks for available disk space before each account is processed.
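A sketch of that pre-compaction check follows. The direction of the comparison is inferred from the description above; the real check is internal to Compactor.

```python
def can_compact(disk_cache_bytes, free_bytes, archive_set_bytes):
    """Compaction needs all of an account's archive sets on disk, so
    the configured DiskCache value is compared against free space on
    the archive partitions plus the space the customer archive sets
    already occupy."""
    return free_bytes + archive_set_bytes >= disk_cache_bytes
```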
Disk-only Data Centers
On disk-only Data Centers, all of the account's archive sets are already on disk; therefore, the disk cache check is not
necessary. If the free disk space on a disk-only Data Center server drops below 10% of the total disk space, Compactor
attempts to compact all accounts on the server to free up disk space.
Select Account or a Tape Account Group
Compactor must determine which accounts to work on per session. For a Data Center using a tape library, Compactor selects
the oldest Tape Account Group that has not been compacted in a set number of days. The default number of days is 15, but
you can adjust this number in DCMC. For more information about Tape Account Groups refer to Tape Groups and Tape
Account Groups, on page 28.
If a Data Center does not use a tape library for the Primary Tape Set (if it is disk-only or uses Centera), Compactor begins
working on accounts that have not been compacted in a set number of days. The default number of days is 15, but you can
adjust this number in DCMC.
Normally, Compactor runs continuously, but you can start the Compactor service by specifying an account, tape, or Tape Account Group. You can also run compaction on canceled accounts only. Refer to DCMC Help for additional details about
using switches to start Compactor.
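The default selection policy can be modeled as follows (15-day default, adjustable in DCMC):

```python
import datetime

def select_group(groups, today, min_age_days=15):
    """Pick the Tape Account Group with the oldest last-compaction
    date, provided it has not been compacted within min_age_days.
    Returns None when every group was compacted recently."""
    cutoff = today - datetime.timedelta(days=min_age_days)
    eligible = [g for g in groups if g["last_compacted"] <= cutoff]
    if not eligible:
        return None
    return min(eligible, key=lambda g: g["last_compacted"])
```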
Perform system analysis and repair
On mirrored and clustered Data Centers, Compactor begins processing an account by first locking the account from all other
processes. Compactor checks for synchronicity of the account's archive sets and database information on the local server and
then between the two servers of a mirrored pair. If inconsistencies exist, Compactor tries to correct them. If the account still
has inconsistencies, the Compactor service marks any files it cannot retrieve for deletion and requests that the Data Center send a notification to the Agent to resend those files.
Mark files as expired
Compactor uses rules created during Data Center installation to expire files. These rules include how long a canceled account's data are kept, how long files deleted from the Agent computer are kept, how long files excluded from the Agent
backup list are kept, and how many versions of a file are kept and for how long. For more specific information on the
expiration rules, refer to File expiration, on page 40.
You can view and change the expiration rules within DCMC. Setting any of the values to -1 turns off the rule. Compactor
runs through every version of every file for the selected account and marks files as expired if a rule applies. Because the
expiration process is run on an account approximately every 90 days, there are times when there are more versions of a file
available than the rules would imply.
In configurations using HSM, when the expiration process is complete, archive sets are copied from tape to disk. Archive sets for accounts that are canceled and ready to be compacted are not copied to disk. These accounts are processed first.
Repackage archive sets
After files have been marked as expired, Compactor can determine which files to delete and which archive sets to repackage
for efficiency.
If a failure to retrieve the archive set from tape or disk occurs, Compactor attempts to retrieve the archive set from the server's
mirror. When working with files in an archive set, Compactor either copies or rebases the file. Rebasing takes the original
base of a file (the first backed-up version) and combines it with its deltas (subsequent changes to backed-up files) to create
a new base. The expired base and deltas are no longer needed and are deleted. Compactor copies files to new archive sets
when a file is not expired but is in an archive set with other files that require rebasing or deletion. After the repackaging
process, Compactor performs additional checks of data integrity on the new archive sets.
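A toy model of the rebase step follows; `apply` is a stand-in for the real delta-block reconstruction, which the text does not specify.

```python
def rebase(base, deltas, keep_from):
    """Fold the expired deltas (those before index keep_from) into the
    base to form a new base, then keep only the later deltas. In this
    toy model a delta is simply the full content of the next version."""
    def apply(content, delta):
        # Stand-in for combining a base with a delta; here each delta
        # fully describes the resulting version.
        return delta
    new_base = base
    for delta in deltas[:keep_from]:
        new_base = apply(new_base, delta)
    return new_base, deltas[keep_from:]
```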
Delete archive sets and database entries
After all archive sets have been repackaged, Compactor deletes all of the old archive sets from disk. During this process it
also deletes the appropriate database rows for these files and archive sets.
Compactor does not delete archive sets from tape but does delete information regarding the archive sets' location on tape from the database. This action renders the archive sets irretrievable and the tape space expired. When this step is complete,
the account is unlocked, allowing access to all processes.
Migrate new archive sets to tape
In configurations using HSM, Compactor migrates new archive sets to the archive storage device. If using a tape library,
Compactor checks to see if there are four blank tapes in the library before beginning migration. Four tapes are recommended
because Tape Account Groups use four tapes by default. If four blank tapes are not available in the library, Compactor writes
an error message to the DCMaint log and the service is paused.
Notify the Agent about changes
When archive sets have been repackaged or deleted, the BackupServer service must notify the Agent of the change. The next
time the Agent connects to the Data Center server, its file list is updated with the new information from the compaction
process. Files that have been deleted by the Compactor service are no longer restorable by the Agent. Therefore, the Agent
must update its list of files available for retrieval.
Once all new archive sets are migrated, the process begins again with the check for available disk cache and selection of the next account or Tape Account Group to be compacted.
File expiration
File expiration process
To reuse disk and archive storage space, the Data Center deletes old data during the Compactor process using a procedure called expiration. During setup, you are asked to establish parameters that define when data is old and can be deleted. The file expiration rules are set to reasonable defaults by Data Center Setup, so you can safely accept the defaults if you are not sure of the parameters you need. Entering -1 for any of the values turns off the expiration rule.
On a disk-only configuration, file expiration rules keep the server from running out of disk storage. On a server using HSM,
file expiration rules are only used to minimize growth of data in storage; the disk is kept at an acceptable free space level by
data migration. Consequently, you should monitor a disk-only configuration closely in the weeks after startup, and decrease
the file expiration rules if disk space is being filled too quickly. On both disk-only and HSM configurations, if space is tightly
limited, more aggressive file expiration rules are necessary. File expiration rules are changed using DCMC.
Expiration rules and default settings
The rules and their default settings are detailed as follows:
Canceled specifies the minimum number of days after an account is canceled until its backed-up data is deleted. The default number of days until deletion is 60.
Deleted specifies the minimum number of days that a file is retained after it has been deleted from the Agent that backed it up. If a file is backed up and later deleted, it is normally retrievable via the Agent. However, if the file has been expired and compacted from the Data Center, it cannot be retrieved. The default value is 90 days for disk-only configurations and 180 days for HSM configurations.
Excluded specifies the number of days that a file is retained after the end user has excluded it from the backup list on the Agent. If a file is backed up and then later excluded from the Agent backup list, it is expired and deleted the next time Compactor runs on the account. The default value is zero days for both disk-only and HSM configurations.
RecentVersions and OldVersions are used together to specify the number of versions of a file that are retained. For example, if RecentVersions = 9 (versions) and OldVersions = 30 (days), then old versions of a file are deleted if they are more than 30 days old or if 9 more recent versions exist. The most recent backed-up version of a file is not expired by these parameters. The default value for RecentVersions is 10 versions for disk-only configurations and 20 versions for HSM configurations. The default value for OldVersions is 45 days for disk-only configurations and 90 days for HSM configurations.
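Read literally, the RecentVersions and OldVersions rules can be sketched as follows. This is an illustrative model only, not the Compactor implementation; the function name and the "or" reading of the rule are taken directly from the wording above.

```python
from datetime import date, timedelta

def versions_to_expire(backup_dates, recent_versions, old_versions_days, today):
    """Model of the RecentVersions/OldVersions rules described above.

    backup_dates: dates of each backed-up version of one file, newest first.
    A version expires when it is more than old_versions_days old OR when
    at least recent_versions newer versions exist.  The most recent
    version is never expired by these rules.
    """
    cutoff = today - timedelta(days=old_versions_days)
    expired = []
    for index, backed_up in enumerate(backup_dates):
        if index == 0:
            continue  # the newest version is always kept
        too_old = backed_up < cutoff            # older than OldVersions days
        crowded_out = index >= recent_versions  # enough newer versions exist
        if too_old or crowded_out:
            expired.append(backed_up)
    return expired

# With RecentVersions = 9 and OldVersions = 30, versions older than 30 days
# expire even though fewer than 9 newer versions exist:
gone = versions_to_expire(
    [date(2007, 4, 30), date(2007, 4, 1), date(2007, 3, 1), date(2007, 2, 1)],
    recent_versions=9, old_versions_days=30, today=date(2007, 5, 1))
```

Here `gone` contains the March 1 and February 1 versions; the two versions from April are retained.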
Rule exceptions
It is possible for data to remain on the Data Center longer than the expiration rules imply. For example, on January 1st an end user deletes a file from their computer that has been backed up to the Data Center, and then performs a subsequent backup. If Compactor is set to process accounts no more often than every 90 days, the next time Compactor processes this account might be April 15th. In this example, the expiration rule for deleted files is 90 days. When Compactor processes this account, it marks the file for deletion from the Data Center. The file has then lived on the Data Center for more than 90 days after its deletion from the end-user computer. This is because the expiration rule values and the number of days between Compactor runs for an account or Tape Account Group are minimum values. Data can remain on the Data Center longer than these values indicate.
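The date arithmetic behind this example is easy to verify; the dates below are the hypothetical ones used in the example.

```python
from datetime import date

deleted_on = date(2007, 1, 1)       # file deleted from the end-user computer
compactor_run = date(2007, 4, 15)   # next Compactor pass on this account
deleted_rule_days = 90              # "Deleted" expiration rule

# The file is only removed when Compactor runs, so it outlives the rule:
days_on_data_center = (compactor_run - deleted_on).days
print(days_on_data_center)  # 104 days, 14 days past the 90-day minimum
```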
PART II: DATA CENTER INSTALLATION
Chapter 4: Sizing Your Data Center
Chapter 5: Preparing for Installation
Chapter 6: Installing the Data Center Software
Chapter 7: Integrating the Data Center with Enterprise Directory
4
SIZING YOUR DATA CENTER
About this chapter
This chapter provides information that you can use as a guide to determine the hardware requirements for your Data Center.
Precise determination of these requirements is contingent upon a number of variables, most of which are dependent on the specifics of the end-user population, number of servers, and Agent configuration choices. Even if you are licensed for 5,000 users or 500 servers, you still might want to deploy hardware that would serve 10,000 users or 1,000 servers so you can easily scale the Data Center to handle more users if needed in the future.
This chapter contains the following topics:
To learn about... Refer to:
How sizing affects your Data Center Sizing overview, on page 44
Minimum sizing estimates for the Data Center Sizing estimates, on page 46
Requirements for network bandwidth Network bandwidth requirements, on page 48
Sizing overview
Planning your deployment
When sizing your Data Center hardware, plan for your total number of users or servers when completely deployed, as opposed to only the number you plan to deploy initially. If your goal is to have more than 10,000 users or 100 servers backing up to the Data Center, contact Iron Mountain Digital for assistance in sizing your Data Center hardware.
The information in this chapter provides guidelines for different hardware configurations including disk-only, disk with an
attached tape library, and disk with an attached EMC Centera. Disk-only Data Centers represent configurations using
Network Attached Storage (NAS), Storage Area Network (SAN) and directly attached disk storage. If you are configuring a
mirrored Data Center, each server must conform to the same guidelines.
Assumptions for PC accounts
The information in this chapter is based on the following assumptions about PC accounts:
First backup of each account is 1 GB compressed data, on average. This number does not include common files taking
advantage of SendOnce technology.
Size per month of compressed backup data per end user is 125 MB.
Number of files in first backup is 100,000, on average.
Number of delta files backed up monthly is 8,000 per end user.
Average total account size is 2 GB of compressed data.
Number of days archive sets remain on disk is 5 (HSM only).
Tape capacity is 100 GB (HSM only).
Effects of backed-up data from PC accounts
The number of files backed up, along with the size of the files backed up by your end users, has a large impact on the sizing of the Data Center servers. The size of the files backed up influences the amount of storage space needed for archive sets. The number of files backed up influences the amount of storage space needed for SQL databases, database transaction logs, and, if running a standalone configuration, the database backups.
The sizing charts in this chapter were created with the assumption that end users would be backing up an average of 4,000
files per month with an average total compressed size of 150 MB per month. When sizing your Data Center it is important
to take these parameters into account. If your end users tend to create large files, the size of the Data Center servers must
reflect this activity.
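As a sanity check against the PC-account assumptions above, a back-of-the-envelope calculator can be sketched. This is illustrative only; the function name and the use of the 2 GB "average total account size" figure as a cap (reflecting expiration) are assumptions, not part of the product.

```python
def pc_archive_estimate_gb(accounts, months_deployed):
    """Rough compressed-archive storage estimate for PC accounts, using
    the assumptions above: 1 GB first backup, 125 MB per user per month,
    with expiration holding the average account near 2 GB total."""
    per_account_gb = min(1.0 + 0.125 * months_deployed, 2.0)
    return accounts * per_account_gb

# 1,000 PC accounts after a year settle at the 2 GB-per-account average:
print(pc_archive_estimate_gb(1000, 12))  # 2000.0 GB
```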
Assumptions for server accounts
The information in this chapter is based on the following assumptions about server accounts:
First backup of each server account is 10 GB compressed, on average. This number does not include common files taking advantage of SendOnce technology.
Size per month of compressed backup data per server account is 1 GB.
Number of files in first backup is 100,000, on average.
Number of delta files backed up monthly is 10,000 per server account.
Average total account size is 0.9 GB of compressed data.
Number of days archive sets remain on disk is 5 (HSM only).
Tape capacity is 15 GB (HSM only).
Effects of backed-up data from server accounts
The number of files backed up, along with the size of the files that server accounts back up, has a large impact on the sizing of the Data Center servers. The size of the files backed up influences the amount of storage space needed for archive sets. The number of files backed up influences the amount of storage space needed for SQL databases, database transaction logs, and, if running a standalone configuration, the database backups.
The sizing charts in this chapter were created with the assumption that servers would be backing up an average of 4,000 files
per month with an average total compressed size of 150 MB per month. When sizing your Data Center it is important to take
these parameters into account. If the servers store large files, the size of the Data Center servers must reflect this activity.
Additional resources
If your Data Center has variables not accounted for in this chapter, contact Iron Mountain Digital for an individualized sizing
estimation.
Visit the Resource Center for a table that compares each configuration type against each end user range overall.
Sizing estimates
You can estimate the sizing for a Data Center based on the number of accounts.
Sizing estimates for accounts
The following table provides sizing estimates for a Data Center based on the data and the number of PC and server accounts.
Application disk space requirements
The following table provides the minimum application disk space requirements for sizing a Data Center.
Note
If you are configuring a mirrored Data Center, each server must conform to the same minimum guidelines.
The requirements, by volume/folder:
Archive set:
PC accounts: 2 TB per 1000 accounts for disk-only configurations; 200 GB per 1000 accounts for Data Centers with a Centera.
Server accounts: 2 TB per 200 accounts for disk-only configurations; 200 GB per 200 accounts for Data Centers with a Centera.
Database:
PC accounts: 50 GB per 1000 accounts.
Server accounts: 50 GB per 200 accounts.
Centera:
PC accounts: 2 TB per 1000 accounts.
Server accounts: 2 TB per 200 accounts.
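The per-volume figures above scale linearly with account count, so a minimal space estimate can be computed as follows. This is a sketch; the function name and the linear scaling are assumptions, and the separate Centera capacity row is not included.

```python
def minimum_space_gb(pc_accounts, server_accounts, centera=False):
    """Minimum archive-set and database space, per the figures above.

    Archive sets: 2 TB per 1000 PC accounts (200 GB with a Centera),
    and 2 TB per 200 server accounts (200 GB with a Centera).
    Databases: 50 GB per 1000 PC accounts, 50 GB per 200 server accounts.
    """
    archive_unit = 200.0 if centera else 2000.0   # GB per sizing unit
    archive = (pc_accounts / 1000.0) * archive_unit \
            + (server_accounts / 200.0) * archive_unit
    database = (pc_accounts / 1000.0) * 50.0 + (server_accounts / 200.0) * 50.0
    return archive, database

# Disk-only example: 2,000 PC accounts plus 400 server accounts.
archive, db = minimum_space_gb(2000, 400)
print(archive, db)  # 8000.0 GB of archive space, 200.0 GB of database space
```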
Application volume/folder requirements:
Operating system volume: Windows 2000 Server, 8 GB; Windows 2003 Server, 10 GB.
SQL Server application volume/folder: SQL Server 2000 SP4, 1 GB.
Data Center application volume/folder: 900 MB, and 130 MB per additional English Agent File Set; 2 GB, and 1.5 GB per additional International Agent File Set.
Account Management volume/folder: 8 GB.
Hardware requirements
The following table provides minimum hardware requirements for sizing a Data Center.
Hardware requirements:
Processors: Server class with dual 2 GHz processors, or a single 3 GHz dual-core processor.
Memory: 4 GB parity or ECC.
Ethernet adapter: 1 gigabit per second.
Network bandwidth requirements
When considering the network load involved with running your own Data Center, you must focus on the number of Agents
you are deploying. There must be sufficient network bandwidth available for the Agents to communicate with the Data
Center server.
If the Data Center is configured as a mirrored pair, each Agent must have access to both of the servers in case the Agent
cannot access its primary server. Additionally, sufficient network bandwidth must be available for the mirrored servers to
communicate with each other.
Network requirements table
The following table lists the recommended network requirements.
Network requirements, by network element:
Network bandwidth between the client and each Data Center: 1 megabit/second, based on 5 to 6 MB of compressed data per user per day.
IP address: One per Data Center.
Network bandwidth between the two Data Centers (if mirrored configuration): 1 megabit/second, based on 5 to 6 MB of compressed data per user per day.
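The 1 megabit/second figure can be cross-checked: 5 to 6 MB of compressed data per user per day is a very low sustained rate, even when concentrated in a backup window. The sketch below assumes an 8-hour window and a decimal megabyte; the function name is illustrative.

```python
def sustained_bits_per_second(users, mb_per_user_per_day=6.0,
                              backup_window_hours=8.0):
    """Average upstream rate if each user sends mb_per_user_per_day MB of
    compressed data within the backup window (1 MB = 8,000,000 bits)."""
    bits_per_day = users * mb_per_user_per_day * 8_000_000
    return bits_per_day / (backup_window_hours * 3600.0)

# 500 users backing up in an 8-hour window stay under 1 megabit/second:
rate = sustained_bits_per_second(500)
print(round(rate))  # 833333 bits per second
```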
5
PREPARING FOR INSTALLATION
Configure your servers
Before you can install the Data Center, make sure your servers are prepared and configured correctly for the installation. This
chapter explains how to prepare your server for a Data Center installation.
About this chapter
This chapter contains the following topics:
To learn about... Refer to:
Tasks you need to complete before installing the Data Center software: Preinstallation tasks, on page 50
Information about configurations and licensing options: Evaluating configuration and license options, on page 51
The requirements for installing the Data Center server software: Data Center server requirements, on page 52
The requirements for storage solutions: Storage solutions requirements, on page 55
The requirements for network connections: Network requirements, on page 59
The requirements for security: Security requirements, on page 60
How to install the required Microsoft software on the Data Center server: Installing and configuring Microsoft software, on page 61
How to prepare Support Center and the Account Management Website with MyRoam application for use: Support Center and Account Management Website preparation, on page 68
Preinstallation tasks
Preparing for installation
To prepare your Data Center for installation, you should do the following:
Evaluate the appropriate configuration and licensing for your organization.
Review Data Center server requirements.
Review storage solutions requirements.
Review the network requirements.
Review security requirements for your Data Center.
Install and configure Microsoft software.
Prepare the Support Center and Account Management Website server(s) for installation.
The specific tasks within each of these steps depend on your Data Center configuration. For example, a standalone Data
Center has a different configuration from a mirrored Data Center, and a disk-only Data Center has a different configuration
from a Data Center with Hierarchical Storage Manager (HSM) installed.
Use the Data Center Installation Worksheets in Appendix B to organize your Data Center information. Having this information available can make the Data Center installation easier.
Note
The Data Center supports the use of the Account Management Website with MyRoam application for
Connected Backup Agents. If you are using Legacy Agents only, you can ignore all references to the
MyRoam application.
Evaluating configuration and license options
Estimate your needs
Prior to installation, your organization must evaluate deployment options and select the configuration and licensing agreement most appropriate to your information backup requirements. As part of this process, it is important to determine the following:
Whether you will have a standalone Data Center or one configured with mirrored pair(s).
Your licensing needs.
The features that you need to deploy.
Standalone Data Center or mirrored pair
The most important decision when preparing to deploy the Data Center is whether you are going to use a standalone Data
Center server or a mirrored pair of servers. There is only one server in a standalone Data Center while a mirrored
configuration has two servers that mirror each other. You can also deploy a clustered Data Center. A clustered Data Center
is similar to a mirrored configuration except that it has more than one mirrored pair. For more information about Data Center configurations, refer to the ConnectedBackup/PC Product Overview manual.
The decision to use a standalone Data Center or mirrored pairs depends on the anticipated size of your deployment and the
hardware you have available. Generally speaking, a mirrored configuration requires two of everything that you need for a
standalone Data Center. Of course, the benefit is in having redundant data and server availability during maintenance
downtime or in the event of a disaster.
Whichever configuration you select, you should have an additional server to function as a Web server for Support Center
and, optionally, the MyRoam application.
Licensing
Every Data Center must be licensed. You can purchase licenses for PC accounts only, server accounts only, or both. You
should obtain a permanent license before installing the Data Center software. However, if you do not have a license at the
time of installation, the Setup program creates a temporary license that expires in thirty days.
Your Data Center's license enables optional features your organization has chosen to implement. It also tracks the number of
active users and servers on the Data Center and warns you when the license use is nearing the contracted number.
If at any time you would like to change the features in use at your Data Center or increase the number of end users serviced,
you can contact Support to obtain a new license. For further information about obtaining a new Data Center license, refer to
Chapter 14: Monthly Maintenance.
Data Center server requirements
Sizing your Data Center
You should select the hardware for each Data Center server with care. For current information on how to select an appropriate processor, amount of memory, RAID storage, and other hardware, refer to Chapter 4: Sizing Your Data Center, or request an individualized assessment from your Connected representative.
Production-quality Data Center servers should be dedicated to this application and not shared with other applications.
Configuring these servers as primary or secondary Microsoft Windows domain controllers places an additional performance burden on them and is not normally recommended, unless the Data Center servers are the only ones using the domain.
Software requirements
To install the Data Center software, your servers must have the following software installed:
Component Requirement
Operating system: Any of the following Microsoft Windows Server U.S. English operating systems:
Windows 2003 Standard and Enterprise Editions with Service Pack 2, and ODBC version 3.0 or later.
Windows 2003 Server Standard and Enterprise Editions R2, and ODBC version 3.0 or later.
Windows 2000 Server with Service Pack (SP) 4 and all security hotfixes, and ODBC version 3.0 or later.
Microsoft .NET Framework version 1.1 installed on the Support Center server.
Important: Windows Small Business Server 2003 is not a supported operating system for the Data Center server.
Note: On the Web server where Support Center is installed, you must set the
Locale in Regional Options to English (United