Scaling ElasticSearch at Synthesio


ELASTICSEARCH @SYNTHESIO

FRED DE VILLAMIL, DIRECTOR OF INFRASTRUCTURE @FDEVILLAMIL FEBRUARY 2016

BACKGROUND

•  FRED DE VILLAMIL, 37 YEARS OLD, DIRECTOR OF INFRASTRUCTURE @SYNTHESIO

•  LINUX / (FREE)BSD SINCE 1996
•  OPEN SOURCE CONTRIBUTOR SINCE 1998
•  RUNS ELASTICSEARCH IN PRODUCTION SINCE 0.17.6

ABOUT SYNTHESIO

•  Synthesio is the leading social intelligence tool for social media monitoring & social analytics.

•  Synthesio crawls the Web for relevant data, enriches it with sentiment analysis and demographics to build social analytics dashboards.

ELASTICSEARCH @SYNTHESIO, JANUARY 2016

•  3 clusters, 116 physical servers, 271TB storage, 7.25TB RAM

•  60B indexed documents, 160TB data
•  1.5B new documents each month: mix of Web pages, forums and social media posts

DEC. 2014: THE MYSQL NIGHTMARE

•  Cross cluster queries on 3 massive Galera clusters
•  Up to 50M rows fetched from a massive 4B rows reference table
•  Then a cross cluster join on a 20TB, 35B records monolithic MySQL database
•  Poor performance, frequent timeouts

JAN. 2015: CLIPPING REVOLUTION

•  1 global index, 512 shards, 5B documents
•  1,000 new documents / second
•  47 servers running ElasticSearch 1.3.2 then 1.3.9
•  Capacity: 37TB, 24TB data, 2.62TB RAM

CLUSTER TYPOLOGY

•  2 HTTP NODES: VIRTUAL MACHINES, 4 CORE, 8GB RAM EACH
•  3 MASTER NODES: VIRTUAL MACHINES, 4 CORE, 8GB RAM
•  42 DATA NODES: PHYSICAL SERVERS, 6 CORE XEON E5-1650 V2, 3*900GB SSD IN RAID 0, 64GB RAM

CLIPPING REVOLUTION DATA MODEL

•  ROUTING ON A MONTHLY BASIS

•  EACH CRAWLED DOCUMENT IS INDEXED WITH NESTED DASHBOARD IDS.

•  QUERIES ON TIME PERIOD + DASHBOARD ID

{ "document": { "dashboards": [ { "dashboard_id": 1 }, { "dashboard_id": 2 } ] } }
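
A query on a time period plus a dashboard ID then looks like the following sketch (ElasticSearch 1.x filtered query + nested filter; the index name, field names and routing value are illustrative assumptions):

curl -XGET 'http://localhost:9200/documents/_search?routing=2015-01' -d '{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            { "range": { "published_at": { "gte": "2015-01-01", "lt": "2015-02-01" } } },
            { "nested": {
                "path": "dashboards",
                "filter": { "term": { "dashboards.dashboard_id": 1 } }
            } }
          ]
        }
      }
    }
  }
}'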

PROBLEMS

•  TOO MANY SHARDS (WAS MEANT TO BE A WEEKLY ROUTING)
•  500GB TO 900GB SHARDS (!!!) GROWING AFTER THE MONTH IS OVER. 3 HOURS FOR A REALLOCATION
•  GARBAGE COLLECTOR NIGHTMARE, CONSTANTLY FLAPPING CLUSTER
•  A ROLLING RESTART TAKES 3 FULL DAYS (IF WE’RE LUCKY)

MORE PROBLEMS

•  IMMUTABLE, MONOLITHIC MAPPING: NEW FEATURE BLOCKED UNTIL WE FIX IT

•  IMPOSSIBLE TO DELETE A DASHBOARD WITHOUT REINDEXING A WHOLE MONTH

•  20% DELETED DOCUMENTS WASTING 3TB

IMMUTABLE MAPPING AND DELETED DATA

•  SEGMENTS: IMMUTABLE FILES USED BY LUCENE TO WRITE ITS DATA. UP TO 2,500 / SHARD (!!!)
•  NO REAL DELETE: DELETED DOCUMENTS ONLY GET A DELETED FLAG
•  ELASTICSEARCH _OPTIMIZE: MERGES A SHARD’S SEGMENTS INTO 1 AND PURGES DELETED DOCS (EXAMPLE BELOW)
•  BUT: REQUIRES 150% OF THE SHARD SIZE ON DISK
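
The merge is triggered with the 1.x _optimize API (index name is illustrative):

curl -XPOST 'http://localhost:9200/documents-2015.01/_optimize?max_num_segments=1'

# lighter variant: only rewrite the segments that contain deleted documents
curl -XPOST 'http://localhost:9200/documents-2015.01/_optimize?only_expunge_deletes=true'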

MERGE AND DELETE

SOLUTIONS

•  STORAGE MOVE FROM MMAPFS TO NIOFS
•  GARBAGE COLLECTOR SWITCH FROM CMS TO G1GC
•  FORCED EXPIRE ON THE FIELDDATA CACHE

MMAPFS VS NIOFS

•  MMAPFS: MAPS LUCENE FILES INTO VIRTUAL MEMORY USING MMAP. NEEDS AS MUCH VIRTUAL MEMORY AS THE FILES BEING MAPPED
•  NIOFS: APPLIES A SHARED LOCK ON LUCENE FILES AND RELIES ON THE FILE SYSTEM CACHE
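
The store type can be set node-wide in elasticsearch.yml or per index at creation time; a minimal sketch of the latter (index name is illustrative):

curl -XPUT 'http://localhost:9200/documents-2015.02' -d '{
  "settings": { "index.store.type": "niofs" }
}'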

CMS VS G1GC

•  CMS: SHARED CPU TIME WITH THE APPLICATION. “STOPS THE WORLD” WHEN THERE IS TOO MUCH MEMORY TO CLEAN, UNTIL IT THROWS AN OUTOFMEMORYERROR

•  G1GC: SHORTER, MORE FREQUENT PAUSES. WON’T PAUSE A NODE LONG ENOUGH FOR IT TO LEAVE THE CLUSTER

G1GC OPTIONS

JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"

JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"

JAVA_OPTS="$JAVA_OPTS -XX:GCPauseIntervalMillis=1000"

JAVA_OPTS="$JAVA_OPTS -XX:InitiatingHeapOccupancyPercent=35"

FIELD DATA CACHE EXPIRE

•  FORCES ELASTICSEARCH TO PERIODICALLY EMPTY ITS INTERNAL FIELDDATA CACHE
•  OVERLAPS THE GARBAGE COLLECTOR’S JOB
•  PERFORMANCE ISSUES WITH FREQUENTLY ACCESSED DATA
•  ELASTIC SAYS NEVER DO THIS!!! BUT IT FIXED OUR BIGGER PROBLEM

FIELDDATA OPTIONS

fielddata:
  breaker:
    limit: 80%
  cache:
    size: 30%
    expire: 1m
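
Fielddata pressure can be checked per node and per field with the _cat API before and after tuning:

curl 'http://localhost:9200/_cat/fielddata?v'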

DEC. 2015: BLACKHOLE, THE WELL-NAMED

BLACKHOLE

•  36 INDEXES, 50B DOCUMENTS
•  1.5B NEW DOCUMENTS EACH MONTH
•  72 SERVERS RUNNING ELASTICSEARCH 1.7.5
•  CAPACITY: 209TB, 120TB DATA, 4.5TB RAM
•  QUERIES ON THE WHOLE DATASET

CLUSTER TYPOLOGY

72 4 CORE XEON D-1520, 64GB RAM SERVERS
•  2 HTTP NODES
•  3 MASTER NODES
•  67 DATA NODES: 4*800GB SSD IN RAID 0

INDEXING

•  A GO PROGRAM MERGES 3 GALERA CLUSTERS AND 1 ES CLUSTER INTO A KAFKA QUEUE: 30,000 DOCUMENTS / SECOND
•  8 GO INDEXERS MAP THE INDEX / DATA NODE DISTRIBUTION AND PUSH THE DATA DIRECTLY TO THE RIGHT DATA NODE: 60,000 DOCUMENTS / SECOND FOR 3 WEEKS, WITH 120,000 / SECOND PEAKS
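
Pushing straight to the right data node goes through the standard _bulk endpoint; a minimal sketch (hostname, index, type and fields are illustrative assumptions):

curl -XPOST 'http://datanode01.escluster01.production.int:9200/_bulk' -d '
{ "index": { "_index": "documents-2015.12", "_type": "document", "_id": "4242" } }
{ "title": "...", "published_at": "2015-12-01" }
'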

INDEXING PROCESS

PROBLEMS

•  THE XEON D CPU CAN’T COPE WITH OUR INDEXING PACE
•  THE 50TB KAFKA QUEUE IS TOO SLOW

PROBLEM: WE’RE CPU BOUND

POST INDEXING PROBLEMS

•  QUERIES ON THE WHOLE DATASET WORK, FILTERED QUERIES CRASH HALF OF THE CLUSTER
•  BIGGEST QUERIES ARE SLOW AS HELL

SOLUTIONS

•  ELASTICSEARCH CACHES THE RESULTS OF FILTERED QUERIES: SET _CACHE TO FALSE (SKETCH BELOW)
•  UPGRADE TO 1.7.5: THE FILTERED QUERY CACHE HAS A MEMORY LEAK IN 1.7.4
•  DIVIDE THE GLOBAL QUERIES PER INDEX AND RUN THEM IN PARALLEL
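
Disabling the cache happens on the filter itself; a minimal 1.x sketch (index and field names are illustrative):

curl -XGET 'http://localhost:9200/documents-2015.12/_search' -d '{
  "query": {
    "filtered": {
      "filter": { "terms": { "dashboard_id": [1, 2], "_cache": false } }
    }
  }
}'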

JAN. 2016: BLINK

•  8200 indexes, 10B documents
•  25,000 new documents / second
•  70 physical servers running ElasticSearch 1.7.4, then 1.7.5
•  Capacity: 60TB, 50TB data, 2.62TB RAM

CLUSTERS TYPOLOGY

2 CLUSTERS RUNNING 4 CORE XEON D-1520, 64GB RAM SERVERS
•  2 HTTP NODES
•  3 MASTER NODES
•  30 DATA NODES: 2*480GB SSD IN RAID 0

NEW PRODUCT DESIGN

•  1 INDEX / DASHBOARD, 1 SHARD / 5 MILLION DOCS
•  VERSIONED MAPPING: MAPPING_ID__DASHBOARD_ID (SKETCH BELOW)
•  MULTIPLE MAPPING VERSIONS OF A DASHBOARD IN PARALLEL
•  MAPPING UPGRADE AND REINDEX WITHOUT INTERRUPTION
•  BALDUR FOR DASHBOARDS ROUTING
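
A minimal sketch of the versioned index naming, assuming mapping version 3 of dashboard 4242 (shard and replica counts are illustrative):

curl -XPUT 'http://localhost:9200/3__4242' -d '{
  "settings": { "number_of_shards": 1, "number_of_replicas": 1 }
}'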

BALDUR

BALDUR IN A NUTSHELL

1. THE API SERVER SENDS AN ELASTICSEARCH QUERY
2. BALDUR INTERCEPTS THE QUERY AND GETS THE DASHBOARD CLUSTER ID AND ACTIVE MAPPING VERSION
3. BALDUR ROUTES THE QUERY TO THE CLUSTER HOSTING THE DASHBOARD DATA

ADDING A MAPPING VERSION

•  THE INDEXER CREATES A NEW NEW_MAPPING_ID__DASHBOARD_ID INDEX
•  THE INDEXER ADDS A LINE IN BALDUR’S DATABASE WITH THE DASHBOARD AND MAPPING IDS
•  THE INDEXER INDEXES INTO BOTH MAPPING_ID__DASHBOARD_ID AND NEW_MAPPING_ID__DASHBOARD_ID
•  WHEN NEW_MAPPING_ID__DASHBOARD_ID HAS CAUGHT UP, BALDUR SWITCHES THE ACTIVE MAPPING
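
As a shell sketch of the flow, assuming dashboard 4242 moving from mapping version 3 to 4 (names hypothetical; Baldur's database update happens out of band):

# create the new versioned index
curl -XPUT 'http://localhost:9200/4__4242'
# dual-write: the indexer now pushes every new document to both 3__4242 and 4__4242
# once 4__4242 has caught up, the active mapping version is flipped in Baldur's database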

PROBLEMS

•  EACH DATA NODE HOSTS 1000S OF LUCENE SEGMENTS
•  75% OF THE HEAP IS USED FOR SEGMENTS MANAGEMENT
•  WE CREATE MORE SEGMENTS THAN WE’RE ABLE TO OPTIMIZE
•  5000 DOCS, RANDOM READ BASED BULK INDEXING PUTS MYSQL ON ITS KNEES
•  FREQUENT CONFIGURATION UPDATES DURING THE RAMP UP ARE COMPLICATED

TOO MANY LUCENE SEGMENTS

•  CONTINUOUS OPTIMISATION SCRIPTS
•  CONTINUOUS OLD INDEXES CLEANUP

CUT MYSQL A LITTLE SLACK

•  FETCH THE DOCUMENTS FROM BLACKHOLE IN BATCHES OF 5000
•  IF SOME DOCUMENTS ARE MISSING, FETCH THEM FROM MYSQL
•  RESULT: 99.9% OF DOCUMENTS EXTRACTED FROM BLACKHOLE, THROUGHPUT * 5
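
Batch fetching by ID goes through the _mget endpoint; a minimal sketch (host, index, type and IDs are illustrative assumptions):

curl -XGET 'http://blackhole01.production.int:9200/documents/document/_mget' -d '{
  "ids": ["4242", "4243", "4244"]
}'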

REPLACE MYSQL WITH BLACKHOLE

30 MINUTES FULL RESTART

$fqdn: curl -XPUT 'http://esmaster01.escluster02.production.int:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable": "none" } }'
$fqdn: ansible-playbook -i production -e rack_id=<rack_id> -l datanodes rolling.yml
$fqdn: curl -XPUT 'http://esmaster01.escluster02.production.int:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable": "all" } }'

WAIT FOR THE GREEN LIGHT, PROCESS THE OTHER RACK ID.

RACK AWARENESS

RACK AWARENESS IN A NUTSHELL

1. DEFINE 2 VIRTUAL RACK IDS
2. ASSIGN EACH DATA NODE A RACK
3. ENABLE RACK AWARENESS (SETTINGS SKETCH BELOW)
4. PRIMARY SHARDS PICK UP A SIDE, REPLICAS PICK UP THE OTHER ONE
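
A minimal settings sketch (the rack_id attribute name and rack values are conventional, not mandatory):

# in elasticsearch.yml on each data node
node.rack_id: rack_one

# then enable forced awareness on the attribute
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id",
    "cluster.routing.allocation.awareness.force.rack_id.values": "rack_one,rack_two"
  }
}'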

ADDING NEW DOCUMENTS TO A DASHBOARD

WHERE DO YOU GO, MY LOVELY?

•  PROBLEM: HOW DO WE KNOW WHICH DASHBOARDS A NEW DOCUMENT FITS IN?
•  STOP RELYING ON MYSQL AND SPHINX
•  50M NEW DOCUMENTS TO PROCESS A DAY

SOLUTION: PERCOLATION

•  REVERSE DIRECTORY SYSTEM
•  WE STORE QUERIES, NOT DOCUMENTS
•  FOR EACH NEW DOCUMENT, WE MATCH THE DOCUMENT AGAINST OUR QUERIES
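
In ElasticSearch 1.x, stored queries live under the reserved .percolator type; a minimal sketch (index name, query and document are illustrative):

# register a dashboard's query
curl -XPUT 'http://localhost:9200/percolate/.percolator/dashboard-4242' -d '{
  "query": { "match": { "body": "coffee" } }
}'

# match a new document against every stored query
curl -XGET 'http://localhost:9200/percolate/document/_percolate' -d '{
  "doc": { "body": "a post about coffee" }
}'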

PERCOLATION ISSUES

•  IT TRIES TO MATCH EVERY STORED QUERY
•  SO FAR, WE HAVE 35,000 STORED QUERIES
•  RAW USE: 1,750,000,000,000 MATCHES A DAY
•  CPU GREEDY

SOLUTIONS

•  ROUTING WITH THE DASHBOARD AND DOCUMENT LANGUAGES FIRST
•  FILTERING AGAINST THE QUERIES SECOND (SKETCH BELOW)
•  RESULT: UP TO 100,000 QUERIES / SECOND
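
Both tricks map onto the 1.x percolate API, which accepts a routing value and a filter on the stored queries' metadata; a sketch assuming percolator queries are stored with a language field:

curl -XGET 'http://localhost:9200/percolate/document/_percolate?routing=en' -d '{
  "doc": { "body": "a post about coffee", "lang": "en" },
  "filter": { "term": { "language": "en" } }
}'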

QUESTIONS?

@FDEVILLAMIL @SYNTHESIO
