
Spark Summit EU talk by Steve Loughran


Page 1: Spark Summit EU talk by Steve Loughran

Apache Spark and Object Stores —What you need to know
Steve Loughran   [email protected]   @steveloughran
October 2016

Page 2: Spark Summit EU talk by Steve Loughran

Steve Loughran, Hadoop committer, PMC member, …

Chris Nauroth, Apache Hadoop committer & PMC, ASF member

Rajesh Balamohan, Tez committer, PMC member

Page 3: Spark Summit EU talk by Steve Loughran

[Diagram: Elastic ETL — inbound data from external sources landing in HDFS as ORC, Parquet datasets]

Page 4: Spark Summit EU talk by Steve Loughran

[Diagram: notebooks using a library to work with external datasets]

Page 5: Spark Summit EU talk by Steve Loughran

Streaming

Page 6: Spark Summit EU talk by Steve Loughran

A Filesystem: Directories, Files → Data

[Diagram: directory tree with /work/pending/part-00 and part-01, each made of data blocks, alongside /work/complete]

rename("/work/pending/part-01", "/work/complete")
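Commit protocols lean on that rename being a fast, atomic metadata operation. A minimal sketch of the same call through Hadoop's FileSystem API (illustrative only, using the paths from the diagram against whatever the default filesystem is):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// on HDFS this is an O(1) metadata update, however large part-01 is
val fs = FileSystem.get(new Configuration())
val committed: Boolean = fs.rename(
  new Path("/work/pending/part-01"),
  new Path("/work/complete"))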

Page 7: Spark Summit EU talk by Steve Loughran

Object Store: hash(name) → blob

[Diagram: object data spread across storage shards s01–s04]

hash("/work/pending/part-01") → ["s02", "s03", "s04"]

copy("/work/pending/part-01", "/work/complete/part01")
delete("/work/pending/part-01")

hash("/work/pending/part-00") → ["s01", "s02", "s04"]

Page 8: Spark Summit EU talk by Steve Loughran

REST APIs

[Diagram: the same shards, s01–s04, driven by HTTP verbs]

HEAD /work/complete/part-01

PUT /work/complete/part01
x-amz-copy-source: /work/pending/part-01

DELETE /work/pending/part-01

PUT /work/pending/part-01
... DATA ...

GET /work/pending/part-01
Content-Length: 1-8192

GET /?prefix=/work&delimiter=/

Page 9: Spark Summit EU talk by Steve Loughran

Often: Eventually Consistent

[Diagram: after DELETE /work/pending/part-00, subsequent GET /work/pending/part-00 requests can still return 200 from shards that have not yet applied the delete]

Page 10: Spark Summit EU talk by Steve Loughran

org.apache.hadoop.fs.FileSystem

hdfs   s3a   wasb   adl   swift   gs
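Every connector implements the same org.apache.hadoop.fs.FileSystem contract, so Spark picks the implementation purely from the URL scheme. A minimal sketch (the bucket, account, and namenode names are made-up placeholders):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

val conf = new Configuration()
// same API, different implementations selected by scheme
val hdfs = FileSystem.get(new URI("hdfs://namenode:8020/"), conf)
val s3a  = FileSystem.get(new URI("s3a://my-bucket/"), conf)
val wasb = FileSystem.get(new URI("wasb://container@account.blob.core.windows.net/"), conf)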

Page 11: Spark Summit EU talk by Steve Loughran

History of Object Storage Support (2006–2016)

s3://     "inode on S3"
s3://     Amazon EMR S3
s3n://    "Native" S3
swift://  OpenStack
s3a://    replaces s3n
wasb://   Azure WASB
s3a://    stabilize
oss://    Aliyun
gs://     Google Cloud
adl://    Azure Data Lake
s3a://    speed and consistency

Page 12: Spark Summit EU talk by Steve Loughran

Cloud Storage Connectors

Azure WASB
● Strongly consistent
● Good performance
● Well-tested on applications (incl. HBase)

Azure ADL
● Strongly consistent
● Tuned for big data analytics workloads

Amazon Web Services S3A
● Eventually consistent; consistency work in progress by Hortonworks
● Performance improvements in progress
● Active development in Apache

Amazon Web Services EMRFS
● Proprietary connector used in EMR
● Optional strong consistency for a cost

Google Cloud Platform GCS
● Multiple configurable consistency policies
● Currently Google open source
● Good performance
● Could improve test coverage

Page 13: Spark Summit EU talk by Steve Loughran

Four Challenges

1. Classpath
2. Credentials
3. Code
4. Commitment

Let's look at S3 and Azure

Page 14: Spark Summit EU talk by Steve Loughran

Use S3A to work with S3 (EMR: use Amazon's s3://)

Page 15: Spark Summit EU talk by Steve Loughran

Classpath: fix "No FileSystem for scheme: s3a"

hadoop-aws-2.7.x.jar
aws-java-sdk-1.7.4.jar
joda-time-2.9.3.jar
(jackson-*-2.6.5.jar)

See SPARK-7481

Get Spark with Hadoop 2.7+ JARs
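One way (an illustration only, not the deck's prescription) to put those JARs on the driver and executor classpaths is the spark.jars property in spark-default.conf; the paths below are placeholders:

spark.jars /path/to/hadoop-aws-2.7.x.jar,/path/to/aws-java-sdk-1.7.4.jar,/path/to/joda-time-2.9.3.jar

Bundling the same dependencies into your application assembly works just as well.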

Page 16: Spark Summit EU talk by Steve Loughran

Credentials

core-site.xml or spark-default.conf:

spark.hadoop.fs.s3a.access.key MY_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key MY_SECRET_KEY

spark-submit automatically propagates environment variables:

export AWS_ACCESS_KEY=MY_ACCESS_KEY
export AWS_SECRET_KEY=MY_SECRET_KEY

NEVER: share, check in to SCM, paste in bug reports…
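If you would rather keep keys out of files altogether, a small sketch of wiring them in at runtime from the same environment variables (the property names are the standard s3a ones; the rest is illustrative and assumes a SparkSession called spark):

// pick the keys up from the environment instead of hard-coding them
val hc = spark.sparkContext.hadoopConfiguration
hc.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY"))
hc.set("fs.s3a.secret.key", sys.env("AWS_SECRET_KEY"))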

Page 17: Spark Summit EU talk by Steve Loughran

Authentication Failure: 403

com.amazonaws.services.s3.model.AmazonS3Exception:
The request signature we calculated does not match the signature you provided.
Check your key and signing method.

1. Check joda-time.jar & JVM version
2. Credentials wrong
3. Credentials not propagating
4. Local system clock (more likely on VMs)

Page 18: Spark Summit EU talk by Steve Loughran

Code: Basic IO

// Read in a public dataset
val lines = sc.textFile("s3a://landsat-pds/scene_list.gz")
val lineCount = lines.count()

// generate and write data
val numbers = sc.parallelize(1 to 10000)
numbers.saveAsTextFile("s3a://hwdev-stevel-demo/counts")

All you need is the URL

Page 19: Spark Summit EU talk by Steve Loughran

Code: just use the URL of the object store

val csvdata = spark.read.options(Map(
    "header" -> "true",
    "inferSchema" -> "true",
    "mode" -> "FAILFAST"))
  .csv("s3a://landsat-pds/scene_list.gz")

...read time O(distance)

Page 20: Spark Summit EU talk by Steve Loughran

DataFrames

val landsat = "s3a://stevel-demo/landsat"
csvdata.write.parquet(landsat)

val landsatOrc = "s3a://stevel-demo/landsatOrc"
csvdata.write.orc(landsatOrc)

val df = spark.read.parquet(landsat)
val orcDf = spark.read.orc(landsatOrc)

Page 21: Spark Summit EU talk by Steve Loughran

Finding dirty data with Spark SQL

val sqlDF = spark.sql(
  "SELECT id, acquisitionDate, cloudCover" +
  s" FROM parquet.`${landsat}`")

val negativeClouds = sqlDF.filter("cloudCover < 0")
negativeClouds.show()

* filter columns and data early (see the sketch below)
* whether/when to cache()?
* copy popular data to HDFS
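A sketch of what "filter early" can look like on the same DataFrame (illustrative; whether cache() pays for itself depends entirely on reuse):

// project and filter before anything expensive; cache only if the result is reused
val badRows = sqlDF
  .select("id", "acquisitionDate", "cloudCover")
  .filter("cloudCover < 0")
  .cache()        // worthwhile only if badRows is used more than once
badRows.count()   // first action materializes the cache
badRows.show()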

Page 22: Spark Summit EU talk by Steve Loughran

spark-default.conf

spark.sql.parquet.filterPushdown true
spark.sql.parquet.mergeSchema false
spark.hadoop.parquet.enable.summary-metadata false

spark.sql.orc.filterPushdown true
spark.sql.orc.splits.include.file.footer true
spark.sql.orc.cache.stripe.details.size 10000

spark.sql.hive.metastorePartitionPruning true

Page 23: Spark Summit EU talk by Steve Loughran

Notebooks? Classpath & Credentials

Page 24: Spark Summit EU talk by Steve Loughran

The Commitment Problem

⬢ rename() used for the atomic commitment transaction
⬢ time to copy() + delete() proportional to data * files
⬢ S3: 6+ MB/s
⬢ Azure: a lot faster, usually
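As a rough illustration using the slide's own figure: at ~6 MB/s, "renaming" 10 GB of committed output means copying for about 10,240 MB / 6 MB/s ≈ 28 minutes, and only then deleting the originals.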

spark.speculation false
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.hadoop.mapreduce.fileoutputcommitter.cleanup.skipped true

Page 25: Spark Summit EU talk by Steve Loughran


What about Direct Output Committers?

Page 26: Spark Summit EU talk by Steve Loughran

Recent S3A Performance (Hadoop 2.8, HDP 2.5, CDH 5.9 (?))

// forward seek by skipping stream
spark.hadoop.fs.s3a.readahead.range 157810688

// faster backward seek for ORC and Parquet input
spark.hadoop.fs.s3a.experimental.input.fadvise random

// PUT blocks in separate threads
spark.hadoop.fs.s3a.fast.output.enabled true
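One caveat: fadvise random suits the seek-heavy reads that ORC and Parquet do, but it can slow down whole-file sequential reads (such as the .gz CSV earlier), so it is best set for columnar-input jobs rather than globally.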

Page 27: Spark Summit EU talk by Steve Loughran

Azure Storage: wasb://

A full substitute for HDFS

Page 28: Spark Summit EU talk by Steve Loughran

Classpath: fix "No FileSystem for scheme: wasb"

wasb:// : consistent, with very fast rename (hence: commits)

hadoop-azure-2.7.x.jar
azure-storage-2.2.0.jar
+ (jackson-core, httpcomponents, hadoop-common)

Page 29: Spark Summit EU talk by Steve Loughran

Credentials: core-site.xml / spark-default.conf

<property>
  <name>fs.azure.account.key.example.blob.core.windows.net</name>
  <value>0c0d44ac83ad7f94b0997b36e6e9a25b49a1394c</value>
</property>

spark.hadoop.fs.azure.account.key.example.blob.core.windows.net 0c0d44ac83ad7f94b0997b36e6e9a25b49a1394c

wasb://CONTAINER@example.blob.core.windows.net

Page 30: Spark Summit EU talk by Steve Loughran

Example: Azure Storage and Streaming

import org.apache.spark.streaming.{Seconds, StreamingContext}

val streaming = new StreamingContext(sparkConf, Seconds(10))
// container name is a placeholder; the original slide's value was elided
val azure = "wasb://CONTAINER@example.blob.core.windows.net/in"
val lines = streaming.textFileStream(azure)
val matches = lines.map(line => {
  println(line)
  line
})
matches.print()
streaming.start()
streaming.awaitTermination()

* PUT into the streaming directory
* keep the dir clean
* size window for slow scans
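Worth remembering about textFileStream(): it only sees files that appear after the stream starts, and it rediscovers them by re-listing the directory every batch; against an object store that listing is a paged HTTP request, which is why directory hygiene and window size matter.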

Page 31: Spark Summit EU talk by Steve Loughran

Not Covered

⬢ Partitioning / directory layout
⬢ Infrastructure throttling
⬢ Optimal path names
⬢ Error handling
⬢ Metrics

Page 32: Spark Summit EU talk by Steve Loughran

Summary

⬢ Object stores look just like any other URL
⬢ …but do need classpath and configuration
⬢ Issues: performance, commitment
⬢ Use Hadoop 2.7+ JARs
⬢ Tune to reduce I/O
⬢ Keep those credentials secret!

Page 33: Spark Summit EU talk by Steve Loughran
Page 34: Spark Summit EU talk by Steve Loughran

Backup Slides

Page 35: Spark Summit EU talk by Steve Loughran

Dependencies in Hadoop 2.8

S3A:
hadoop-aws-2.8.x.jar
aws-java-sdk-core-1.10.6.jar
aws-java-sdk-kms-1.10.6.jar
aws-java-sdk-s3-1.10.6.jar
joda-time-2.9.3.jar
(jackson-*-2.6.5.jar)

Azure:
hadoop-azure-2.8.x.jar
azure-storage-4.2.0.jar

Page 36: Spark Summit EU talk by Steve Loughran

S3 Server-Side Encryption

⬢ Encryption of data at rest in S3
⬢ Supports the SSE-S3 option: each object encrypted by a unique key using the AES-256 cipher (config sketch below)
⬢ Now covered in S3A automated test suites
⬢ Support for additional options under development (SSE-KMS and SSE-C)
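For reference, a single-setting sketch of turning SSE-S3 on for S3A, in the deck's spark-default.conf style (property name as documented for the Hadoop 2.8 S3A connector):

spark.hadoop.fs.s3a.server-side-encryption-algorithm AES256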

Page 37: Spark Summit EU talk by Steve Loughran

Advanced authentication

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    com.amazonaws.auth.InstanceProfileCredentialsProvider,
    org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
  </value>
</property>

+ encrypted credentials in JCEKS files on HDFS
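A sketch of that JCEKS route (the keystore path, host, and aliases below are illustrative, not from the deck): create a keystore with the hadoop credential tool, then point jobs at it.

hadoop credential create fs.s3a.access.key -provider jceks://hdfs@namenode:8020/user/spark/s3.jceks
hadoop credential create fs.s3a.secret.key -provider jceks://hdfs@namenode:8020/user/spark/s3.jceks

spark.hadoop.hadoop.security.credential.provider.path jceks://hdfs@namenode:8020/user/spark/s3.jceks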

Page 38: Spark Summit EU talk by Steve Loughran


What Next? Performance and integration

Page 39: Spark Summit EU talk by Steve Loughran

Next Steps for all Object Stores

⬢ Output committers
  – Logical commit operation decoupled from rename (non-atomic and costly in object stores)
⬢ Object store abstraction layer
  – Avoid impedance mismatch with the FileSystem API
  – Provide specific APIs for better integration with object stores: saving, listing, copying
⬢ Ongoing performance improvement
⬢ Consistency