
Page 1: Hands on MapR -- Viadea

Hands On MapR
CLI only, no GUI ☺

Viadea Zhu
http://weibo.com/viadea

March 2012

Page 2: Hands on MapR -- Viadea

Agenda
• MapR Architecture

• Cluster Management

• Volume

• Mirror

• Schedule

• Snapshot

• NFS

• Managing Data

• Users and Groups

  • Troubleshooting and Performance Tuning

Page 3: Hands on MapR -- Viadea

MapR Architecture
• Basic Services
  – CLDB
  – FileServer
  – JobTracker
  – TaskTracker
  – ZooKeeper
  – NFS
  – WebServer

• Warden
A process called the warden runs on all nodes to manage, monitor, and report on the other services on each node. The warden will not start any services unless ZooKeeper is reachable and more than half of the configured ZooKeeper nodes are live.
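A quick sanity check after starting services (a minimal sketch, assuming the qstatus and status init-script subcommands are available in this MapR version; exact output varies):

/etc/init.d/mapr-zookeeper qstatus    # verify the ZooKeeper quorum is up
/etc/init.d/mapr-warden status        # verify the warden is running on this node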

Page 4: Hands on MapR -- Viadea

Cluster Management
• Bring up cluster:

1. Start ZooKeeper on all nodes where it is installed, by issuing the following command:

/etc/init.d/mapr-zookeeper start

2. On one of the CLDB nodes and the node running the mapr-webserver service, start the warden:

/etc/init.d/mapr-warden start

Page 5: Hands on MapR -- Viadea

Cluster Management
• Stop cluster (1):

1. Determine which nodes are running the NFS gateway.

[root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp
id                   service                                                          hostname  health  ip
4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker   mdw       2       172.28.4.250,10.32.190.66,172.28.8.250,172.28.12.250
3528082726925061986  tasktracker,fileserver,nfs,hoststats                             sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1
5521777324064226112  fileserver,tasktracker,nfs,hoststats                             sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3
3482126520576246764  fileserver,tasktracker,nfs,hoststats                             sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5
4667932985226440135  fileserver,tasktracker,nfs,hoststats                             sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7

Page 6: Hands on MapR -- Viadea

Cluster Management
• Stop cluster (2):

2. Determine which nodes are running the CLDB.

[root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==cldb]" -columns id,h,hn,svc,rp
id                   service                                                          hostname  health  ip
4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker   mdw       2       172.28.4.250,10.32.190.66,172.28.8.250,172.28.12.250

Page 7: Hands on MapR -- Viadea

Cluster Management
• Stop cluster (3):

3. List all non-CLDB nodes.

[root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc!=cldb]" -columns id,h,hn,svc,rp
id                   service                                hostname  health  ip
3528082726925061986  tasktracker,fileserver,nfs,hoststats   sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1
5521777324064226112  fileserver,tasktracker,nfs,hoststats   sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3
3482126520576246764  fileserver,tasktracker,nfs,hoststats   sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5
4667932985226440135  fileserver,tasktracker,nfs,hoststats   sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7

Page 8: Hands on MapR -- Viadea

Cluster Management
• Stop cluster (4):

4. Shut down all NFS instances.
/opt/mapr/bin/maprcli node services -nfs stop -nodes mdw sdw1 sdw3 sdw5 sdw7

5. SSH into each CLDB node and stop the warden.
/etc/init.d/mapr-warden stop

6. SSH into each of the remaining nodes and stop the warden.
/etc/init.d/mapr-warden stop

7. Stop ZooKeeper on the ZooKeeper node(s).
/etc/init.d/mapr-zookeeper stop

Page 9: Hands on MapR -- Viadea

Cluster Management
• Restart WebServer:
/opt/mapr/adminuiapp/webserver stop
/opt/mapr/adminuiapp/webserver start

• Restart services (e.g. TaskTracker):
maprcli node services -nodes mdw -tasktracker stop
maprcli node services -nodes mdw -tasktracker start

• Grant full permission to the chosen administrator OS user:
/opt/mapr/bin/maprcli acl edit -type cluster -user <user>:fc

Page 10: Hands on MapR -- Viadea

Cluster Management
• Alarm Email
maprcli alarm config save -values "AE_ALARM_AEQUOTA_EXCEEDED,1,[email protected]"
maprcli alarm config save -values "NODE_ALARM_CORE_PRESENT,1,[email protected]"

• List Alarms
[gpadmin@mdw]$ maprcli alarm list -type cluster
alarm state  description                                              entity   alarm name                             alarm state change time
1            One or more licenses is about to expire within 28 days   CLUSTER  CLUSTER_ALARM_LICENSE_NEAR_EXPIRATION  1330171978541

[gpadmin@mdw]$ maprcli alarm list -type node
alarm state  description                                                                               entity  alarm name                    alarm state change time
1            Can not determine if service: cldb is running. Check logs at: /opt/mapr/logs/cldb.log     sdw1    NODE_ALARM_SERVICE_CLDB_DOWN  1324274386763
1            Node has core file(s)                                                                     mdw     NODE_ALARM_CORE_PRESENT       1330145172579

Page 11: Hands on MapR -- Viadea

Cluster Management
• List Nodes
maprcli node list -columns id,h,hn,br,da,dtotal,dused,davail,fs-heartbeat
maprcli node list -columns id,br,fs-heartbeat,jt-heartbeat

• Remove Nodes
Take sdw5 for example:

1. Stop warden on sdw5:

/etc/init.d/mapr-warden stop

2. Remove on CLDB node:

maprcli node remove -nodes sdw5 -zkconnect sdw1:5181

Page 12: Hands on MapR -- Viadea

Cluster Management
• Reformat a node
Take sdw5 for example:

1. Stop warden:

/etc/init.d/mapr-warden stop

2. Remove the disktab file:

rm /opt/mapr/conf/disktab

3. Create a text file /tmp/disks.txt that lists all the disks and partitions to format for use by Greenplum HD EE.

[root@sdw5 ~]# cat /tmp/disks.txt

/data2/hdpee/storagefile

4. Use disksetup to re-format the disks:

/opt/mapr/server/disksetup -F /tmp/disks.txt

5. Start the Warden:

/etc/init.d/mapr-warden start

Page 13: Hands on MapR -- Viadea

Cluster Management
• Add a new node
/opt/mapr/server/configure.sh -C mdw -Z sdw1 -N ViadeaCluster

/opt/mapr/server/disksetup -F /tmp/disks.txt

/etc/init.d/mapr-warden start

Page 14: Hands on MapR -- Viadea

Volume
• Turn off compression
[root@mdw ~]# hadoop mfs -ls|grep var

drwxrwxrwx Z - root root 1 2011-12-19 13:52 268435456 /var

[root@mdw ~]# hadoop mfs -setcompression off /var

[root@mdw ~]# hadoop mfs -ls|grep var

drwxrwxrwx U - root root 1 2011-12-19 13:52 268435456 /var

• Create volume
maprcli volume create -name viadeavol -path /viadeavol -quota 1G -advisoryquota 200M

maprcli volume create -name viadeavol.mirror -source viadeavol@viadeacluster -path /viadeavol_mirror -type 1

Page 15: Hands on MapR -- Viadea

Volume
• List Volumes
maprcli volume list -columns volumeid,volumetype,volumename,mountdir,mounted,aename,quota,used,totalused,actualreplication,rackpath

• Viewing volume properties
maprcli volume info -name viadeavol

maprcli volume info -output terse -name viadeavol

• Modify volume
maprcli volume modify -name viadeavol.mirror -source viadeavol

Page 16: Hands on MapR -- Viadea

Volume
• Mount/Unmount Volume
maprcli volume unmount -name viadeavol

maprcli volume mount -name viadeavol

• Remove volume
maprcli volume remove -name testvol

• Setting default volume topology
maprcli config save -values "{\"cldb.default.volume.topology\":\"/default-rack\"}"
maprcli config save -values "{\"cldb.default.volume.topology\":\"/\"}"

Page 17: Hands on MapR -- Viadea

Volume
• CLDB only topology (1)
1. Planning: CLDB only nodes: mdw, sdw1
   Other nodes: sdw3, sdw5, sdw7

2. Checking node ids:
maprcli node list -columns id,hostname,"topo(rack)"

3. Move CLDB nodes to topology "/cldbonly":
maprcli node move -serverids 4277269757083023248,3528082726925061986 -topology /cldbonly

4. Move the CLDB volume to topology "/cldbonly":
maprcli volume move -name mapr.cldb.internal -topology /cldbonly

Page 18: Hands on MapR -- Viadea

Volume
• CLDB only topology (2)
5. Move non-CLDB nodes to topology "/noncldb":
maprcli node move -serverids 5521777324064226112,3482126520576246764,4667932985226440135 -topology /noncldb

6. Move non-CLDB volumes to topology "/noncldb":
maprcli volume move -name mapr.var -topology /noncldb
maprcli volume move -name viadeavol -topology /noncldb
maprcli volume move -name mapr.hbase -topology /noncldb
maprcli volume move -name mapr.jobtracker.volume -topology /noncldb
maprcli volume move -name mapr.cluster.root -topology /noncldb

Page 19: Hands on MapR -- Viadea

Mirror
• Local/Remote mirror
maprcli volume create -name viadeavol_mirror1 -source viadeavol@viadeacluster -path /viadeavol_mirror1 -type 1
maprcli volume create -name viadeavol_mirror2 -source viadeavol@viadeacluster -path /viadeavol_mirror2 -type 1

• Mirror Link
maprcli volume link create -volume viadeavol -type mirror -path /maprfs::mirror::viadeavol

Page 20: Hands on MapR -- Viadea

Mirror
• Sync Mirrors using "push"
[root@mdw ~]# maprcli volume mirror push -name viadeavol

Starting mirroring of volume viadeavol_mirror2

Starting mirroring of volume viadeavol_mirror1

Mirroring complete for volume viadeavol_mirror1

Mirroring complete for volume viadeavol_mirror2

Successfully completed mirror push to all local mirrors of volume viadeavol

• Sync Mirror using "start"
[root@mdw ~]# maprcli volume mirror start -full false -name viadeavol_mirror1
messages
Started mirror operation for volume(s) 'viadeavol_mirror1'

Page 21: Hands on MapR -- Viadea

Mirror
• Stop mirror sync
[gpadmin@mdw viadea]$ maprcli volume mirror stop -name viadeavol_mirror1
messages
Stopped mirror operation for 'viadeavol_mirror1'

http://answers.mapr.com/questions/1773/about-stopping-mirror

Answer:

• Both mirror push and mirror start work the same way ... the destination of the mirror pulls the data. The difference is that mirror push is synchronous and the command will wait until the mirroring is complete, while mirror start is asynchronous and only kicks off the mirroring and returns immediately without waiting.

• mirror stop works in both situations.
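Since mirror start returns immediately, progress has to be checked separately; a minimal sketch (assuming mirror status fields are included in the volume info output of this MapR version):

maprcli volume info -name viadeavol_mirror1 | grep -i mirror    # shows the mirror source and percent-complete fields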

Page 22: Hands on MapR -- Viadea

Schedule
• Create Schedule
maprcli schedule create -schedule '{"name":"Schedule-1","rules":[{"frequency":"once","retain":"1w","time":13,"date":"12/5/2010"}]}'

• List Schedules
[root@mdw binary]# maprcli schedule list -output verbose

id name inuse rules

1 Critical data 0 ...

2 Important data 0 ...

3 Normal data 1 ...

4 mirror_sync 1 ...

5 Schedule-1 0 ...

Page 23: Hands on MapR -- Viadea

Schedule
• Remove Schedule
maprcli schedule remove -id 5

• Modify Schedule
maprcli schedule modify -id 0 -name Newname -rules '[{"frequency":"weekly","date":"sun","time":7,"retain":"2w"},{"frequency":"daily","time":14,"retain":"1w"}]'

Page 24: Hands on MapR -- Viadea

Snapshot
• View snapshots of one volume
[gpadmin@mdw viadea]$ hadoop fs -ls /viadeavol_mirror2/.snapshot

Found 5 items

drwxrwxrwx - root root 7 2012-02-24 18:58 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.24-Feb-2012-22-35-51
drwxrwxrwx - root root 8 2012-02-24 22:32 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-01-48-25
drwxrwxrwx - root root 10 2012-02-25 10:44 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-12-05-43
drwxrwxrwx - root root 9 2012-02-24 23:00 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-11-09-49
drwxrwxrwx - root root 0 1970-01-01 08:00 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.24-Feb-2012-22-26-18

Page 25: Hands on MapR -- Viadea

Snapshot
• Create snapshot
maprcli volume snapshot create -snapshotname test-snapshot -volume viadeavol

• List snapshots
maprcli volume snapshot list -volume viadeavol

• Remove snapshot
maprcli volume snapshot remove -snapshotname test-snapshotc3 -volume viadeavol

• Preserve snapshot
maprcli volume snapshot preserve -snapshots 256000083

Page 26: Hands on MapR -- Viadea

NFS
• Mount
1. List the NFS shares exported on the server:
[gpadmin@smdw ~]$ /usr/sbin/showmount -e mdw

Export list for mdw:

/mapr *

/mapr/ViadeaCluster *

2. As root, create the mount point directory on smdw:
mkdir /mapr

3. Mount on smdw:
mount mdw:/mapr /mapr

4. Add an entry to /etc/fstab on smdw:
mdw:/mapr /mapr nfs rw 0 0

Page 27: Hands on MapR -- Viadea

NFS
• Setting ChunkSize and Compression for a volume
[root@smdw viadeavol]# more .dfs_attributes

# lines beginning with # are treated as comments

Compression=true

ChunkSize=268435456

[root@smdw viadeavol]# hadoop mfs -setchunksize 13107000 /viadeavol

setchunksize: chunksize should be a multiple of 64K

[root@smdw viadeavol]# hadoop mfs -setchunksize 13107200 /viadeavol
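The 64K rule explains both results above: 13107000 / 65536 ≈ 199.997, which is not a whole number, while 13107200 = 200 * 65536, so only the second value is accepted.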

Page 28: Hands on MapR -- Viadea

NFS
• Setting file extensions that are excluded from compression
maprcli config save -values {"mapr.fs.nocompression":"bz2,gz,tgz,tbz2,zip,z,Z,mp3,jpg,jpeg,mpg,mpeg,avi,gif,png"}

[gpadmin@mdw viadea]$ maprcli config load -keys mapr.fs.nocompression

mapr.fs.nocompression

bz2,gz,tgz,tbz2,zip,z,Z,mp3,jpg,jpeg,mpg,mpeg,avi,gif,png

Page 29: Hands on MapR -- Viadea

Managing Data
• Dump and Restore Volumes
1. Full dump:
maprcli volume dump create -e endstate -dumpfile fulldump1 -name viadeavol

2. Make changes to viadeavol

3. Incremental dump:
maprcli volume dump create -s endstate -e endstate2 -name viadeavol -dumpfile incrdump1

4. Full restore:
maprcli volume dump restore -name viadeavol_restore -dumpfile fulldump1 -n

6. Mount viadeavol_restore

7. Incremental restore:
maprcli volume dump restore -name viadeavol_restore -dumpfile incrdump1

Page 30: Hands on MapR -- Viadea

Managing Data
• List disk information
[root@mdw]# /opt/mapr/server/mrconfig disk list

ListDisks resp: status 0 count=1

guid 01C7E418-ACC6-4F15-D202-0141CCEE4E00

size 20480MB

ListDisks /data/hdpee/storagefile

DG 0: Single SingleDisk50218 Online

DG 1: Concat Concat12 Online

SP 0: name SP1, Online, size 9874 MB, free 9379 MB, path /data/hdpee/storagefile

[root@mdw]# /opt/mapr/server/mrconfig sp list

ListSPs resp: status 0:1

No. of SPs (1), totalsize 9874 MB, totalfree 9379 MB

SP 0: name SP1, Online, size 9874 MB, free 9379 MB, path /data/hdpee/storagefile

Page 31: Hands on MapR -- Viadea

Users and Groups
• List entity usage
[root@mdw]# maprcli entity list
DiskUsage  EntityQuota  EntityType  EntityName  VolumeCount  EntityAdvisoryquota  EntityId  EntityEmail
0          0            0           gpadmin     0            0                    500       [email protected]
212        0            0           root        19           0                    0         [email protected]
0          1048576      0           viadea      1            0                    666       [email protected]

Page 32: Hands on MapR -- Viadea

Users and Groups
• Cluster Permissions
login (including cv): Log in to the Greenplum HD EE Control System, use the API and command-line interface, read access on cluster and volumes
ss: Start/stop services
cv: Create volumes
a: Admin access
fc: Full control (administrative access and permission to change the cluster ACL)

Page 33: Hands on MapR -- Viadea

Users and Groups
• Volume Permissions
dump: Dump the volume
restore: Mirror or restore the volume
m: Modify volume properties, create and delete snapshots
d: Delete a volume
fc: Full control (admin access and permission to change the volume ACL)

Page 34: Hands on MapR -- Viadea

Users and Groups
• List ACLs
[root@mdw conf]# maprcli acl show -type cluster

Principal Allowed actions

User root [login, ss, cv, a, fc]

User gpadmin [login, ss, cv, a, fc]

[root@mdw conf]# maprcli acl show -type volume -name viadeavol -user root

Principal Allowed actions

User root [dump, restore, m, d, fc]

Page 35: Hands on MapR -- Viadea

Users and Groups
• Modify ACL for a user
maprcli acl edit -type cluster -user viadea:cv

maprcli acl edit -type cluster -user viadea:a

maprcli acl edit -type volume -name viadeavol -user viadea:m

• Modify ACL for a whole cluster or volume
maprcli acl set -type volume -name test-volume -user jsmith:dump,restore,m rjones:fc

• Setting volume quota
maprcli volume modify -name viadeavol -quota 2G

• Setting entity quota
maprcli entity modify -type 0 -name viadea -quota 1T

Page 36: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Small Job (1)
mapred-site.xml:
<property>
  <name>mapred.fairscheduler.smalljob.schedule.enable</name>
  <value>true</value>
  <description>Enable small job fast scheduling inside fair scheduler.
  TaskTrackers should reserve a slot called ephemeral slot which
  is used for smalljob if cluster is busy.
  </description>
</property>

Page 37: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Small Job (2)
<!-- Small job definition. If a job does not satisfy any of the following limits
it is not considered a small job and will be moved out of the small job pool.
-->
<property>
  <name>mapred.fairscheduler.smalljob.max.maps</name>
  <value>10</value>
  <description>Small job definition. Max number of maps allowed in small job.</description>
</property>
<property>
  <name>mapred.fairscheduler.smalljob.max.reducers</name>
  <value>10</value>
  <description>Small job definition. Max number of reducers allowed in small job.</description>
</property>

Page 38: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Small Job (3)
<property>
  <name>mapred.fairscheduler.smalljob.max.inputsize</name>
  <value>10737418240</value>
  <description>Small job definition. Max input size in bytes allowed for a small job.
  Default is 10GB.
  </description>
</property>
<property>
  <name>mapred.fairscheduler.smalljob.max.reducer.inputsize</name>
  <value>1073741824</value>
  <description>Small job definition.
  Max estimated input size for a reducer allowed in small job.
  Default is 1GB per reducer.
  </description>
</property>

Page 39: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Small Job (4)
<property>
  <name>mapred.cluster.ephemeral.tasks.memory.limit.mb</name>
  <value>200</value>
  <description>Small job definition. Max memory in mbytes reserved for an ephemeral slot.
  Default is 200mb. This value must be the same on JobTracker and TaskTracker nodes.
  </description>
</property>

Page 40: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Memory for Greenplum HD EE Services
/opt/mapr/conf/warden.conf
service.command.tt.heapsize.percent=2   # The percentage of heap space reserved for the TaskTracker.
service.command.tt.heapsize.max=325     # The maximum heap space that can be used by the TaskTracker.
service.command.tt.heapsize.min=64      # The minimum heap space for use by the TaskTracker.

[gpadmin@mdw viadea]$ cat /opt/mapr/conf/warden.conf|grep size|grep percent

service.command.jt.heapsize.percent=10

service.command.tt.heapsize.percent=2

service.command.hbmaster.heapsize.percent=4

service.command.hbregion.heapsize.percent=25

service.command.cldb.heapsize.percent=8

service.command.mfs.heapsize.percent=20

service.command.webserver.heapsize.percent=3

service.command.os.heapsize.percent=3
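As an illustration of how the three TaskTracker settings above interact (assuming the warden takes the given percentage of the node's physical memory and clamps it between heapsize.min and heapsize.max): on a node with 32 GB of RAM, 2% is roughly 655 MB, which is then capped by service.command.tt.heapsize.max=325, so the TaskTracker heap would be 325 MB.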

Page 41: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Memory for MapReduce
/opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml
<property>
  <name>mapreduce.tasktracker.reserved.physicalmemory.mb</name>
  <value></value>
  <description>Maximum physical memory the tasktracker should reserve for mapreduce tasks.
  If tasks use more than the limit, the task using maximum memory will be killed.
  Expert only: Set this value iff the tasktracker should use a certain amount of memory
  for mapreduce tasks. In MapR Distro the warden figures this number based
  on the services configured on a node.
  Setting mapreduce.tasktracker.reserved.physicalmemory.mb to -1 will disable
  physical memory accounting and task management.
  </description>
</property>

Page 42: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Memory for MapReduce
Map tasks memory
Map tasks use memory mainly in two ways:
The application consumes memory to run the map function.
The MapReduce framework uses an intermediate buffer to hold serialized (key, value) pairs (io.sort.mb).

/opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml
io.sort.mb
Buffer used to hold map outputs in memory before writing final map outputs.
Setting this value very low may cause spills. By default, if left empty, the value is set to 50% of the heapsize for map.
If the average input to a map is "MapIn" bytes, then typically io.sort.mb should be set to '1.25 times MapIn' bytes.
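A worked example of the sizing rule above (illustrative numbers only): if each map task typically reads a 256 MB chunk, io.sort.mb ≈ 1.25 * 256 MB = 320 MB.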

Page 43: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Memory for MapReduce
Reduce tasks memory
mapred.reduce.child.java.opts
Java opts for the reduce tasks. The default heapsize (-Xmx) is determined by the memory reserved for mapreduce at the tasktracker.
A reduce task is given more memory than a map task.
Default memory for a reduce task = (Total memory reserved for mapreduce) * (2*#reduceslots / (#mapslots + 2*#reduceslots))
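Plugging illustrative numbers into the formula above: with 8 GB reserved for MapReduce on a TaskTracker, 10 map slots and 5 reduce slots, the reduce-side share is 8 GB * (2*5 / (10 + 2*5)) = 8 GB * 10/20 = 4 GB.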

Page 44: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Tasks number (1)
Map slots should be based on how many map tasks can fit in memory, and reduce slots should be based on the number of CPUs.

mapred.tasktracker.map.tasks.maximum: (CPUS > 2) ? (CPUS * 0.75) : 1 (at least one map slot, up to 0.75 times the number of CPUs)
mapred.tasktracker.reduce.tasks.maximum: (CPUS > 2) ? (CPUS * 0.50) : 1 (at least one reduce slot, up to 0.50 times the number of CPUs)

Variables in the formulas:
CPUS - number of CPUs present on the node
DISKS - number of disks present on the node
MEM - memory reserved for MapReduce tasks
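Evaluating the default expressions above on an 8-CPU node (illustrative arithmetic only): map slots = 8 * 0.75 = 6, reduce slots = 8 * 0.50 = 4.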

Page 45: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Tasks number (2)
mapreduce.tasktracker.prefetch.maptasks
How many map tasks should be scheduled in advance on a tasktracker.
Given as a fraction of map slots. Default is 1.0, which means the number of overscheduled tasks = total map slots on the TT.
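For example (illustrative only): with 6 map slots on a TaskTracker, the default of 1.0 allows 6 map tasks to be prefetched, while setting the value to 0.5 would limit prefetching to 0.5 * 6 = 3 tasks.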

Page 46: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• Final & Important: What needs to be collected?

/opt/mapr/support/tools/mapr-support-collect.sh -n support-output.txt

[root@mdw collect]# ls -altr /opt/mapr/support/collect/support-output.txt.tar
-rw-r--r-- 1 root root 27607040 Mar 1 22:34 /opt/mapr/support/collect/support-output.txt.tar

Page 47: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• What is in the support dump file?

1. "cluster" directory
2. A directory for each node

[root@mdw support-output.txt]# ls -altr
total 32
drwxr-xr-x 3 root root 4096 Mar 1 22:19 cluster
drwxr-xr-x 8 root root 4096 Mar 1 22:24 .
drwxr-xr-x 5 root root 4096 Mar 1 22:33 172.28.4.1
drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.8.7
drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.8.3
drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.4.5
drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.4.250
drwxr-xr-x 4 root root 4096 Mar 1 22:36 ..

Page 48: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• What is in the "cluster" directory?

[root@mdw cluster]# cat cluster.txt|grep Output

Output of /opt/mapr/bin/maprcli node list -json

Output of /opt/mapr/bin/maprcli node topo -json

Output of /opt/mapr/bin/maprcli node heatmap -view status -json

Output of /opt/mapr/bin/maprcli volume list -json

Output of /opt/mapr/bin/maprcli dump zkinfo -json

Output of /opt/mapr/bin/maprcli config load -json

Output of /opt/mapr/bin/maprcli alarm list -json

(…)

Page 49: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• What is in the "node" directory? (1)
"conf" subdirectory: roles, all conf files, disk info, and the output of some other OS commands.

"logs" subdirectory: all logs, /var/log/messages, some MapR status logs.
[root@mdw logs]# cat mfsState.txt|grep Output
Output of /opt/mapr/server/mrconfig -p 5660 info threads
Output of /opt/mapr/server/mrconfig -p 5660 info containers resync local
Output of /opt/mapr/bin/maprcli trace dump -port 5660
Output of /opt/mapr/bin/maprcli dump fileserverworkinfo -fileserverip 172.28.4.1

"pam.d" subdirectory

Page 50: Hands on MapR -- Viadea

Troubleshooting & Performance Tuning

• What is in the "node" directory? (2)
MapRBuildVersion

redhat-release

secure.log

sysinfo.txt: output of some OS commands
[gpadmin@mdw 172.28.4.1]$ cat sysinfo.txt|grep Output

Output of lscpu

Output of ifconfig -a

Output of uname -a

Output of netstat -an

Output of netstat -rn

Output of hostname

Output of cat /etc/hostname

(…)