Scaling Elasticsearch at Synthesio


ELASTICSEARCH @SYNTHESIO

FRED DE VILLAMIL, DIRECTOR OF INFRASTRUCTURE @FDEVILLAMIL

BACKGROUND

• FRED DE VILLAMIL, 38 YEARS OLD, DIRECTOR OF INFRASTRUCTURE @SYNTHESIO

• LINUX / (FREE)BSD SINCE 1996

• OPEN SOURCE CONTRIBUTOR SINCE 1998

• HAS RUN ELASTICSEARCH IN PRODUCTION SINCE 0.17.6

ABOUT SYNTHESIO

• Synthesio is the leading social intelligence tool for social media monitoring & social analytics.

• Synthesio crawls the Web for relevant data, enriches it with sentiment analysis and demographics to build social analytics dashboards.

ELASTICSEARCH @SYNTHESIO, SEPTEMBER 2016

• 5 clusters, 163 physical servers, 400TB storage, 10.2TB RAM

• 75B indexed documents, 200TB data

• 1.8B indexed documents each month: mix of Web pages, forums and social media posts

POWERING 13000 DASHBOARDS WITH ELASTICSEARCH

DEC. 2014: THE MYSQL NIGHTMARE

• Cross-cluster queries on 3 massive Galera clusters

• Up to 50M rows fetched from a massive 4B rows reference table

• Then a cross-cluster join on a 20TB, 35B-record monolithic MySQL database

• Poor performance, frequent timeouts

JAN. 2015: CLIPPING REVOLUTION

• 1 global index, 512 shards, 5B documents

• 1000 new documents / second

• 47 servers running Elasticsearch 1.3.2, then 1.3.9

• Capacity: 37TB storage, 24TB data, 2.62TB RAM

CLUSTER TYPOLOGY

• 2 QUERY NODES: VIRTUAL MACHINES, 4 CORE, 8GB RAM EACH

• 3 MASTER NODES: VIRTUAL MACHINES, 4 CORE, 8GB RAM

• 42 DATA NODES: PHYSICAL SERVERS, 6 CORE XEON E5-1650 V2, 3*900GB SSD IN RAID 0, 64GB RAM

CLIPPING REVOLUTION DATA MODEL

• ROUTING ON A MONTHLY BASIS

• EACH CRAWLED DOCUMENT IS INDEXED WITH NESTED DASHBOARD IDS.

• QUERIES ON TIME PERIOD + DASHBOARD ID

{ "document": { "dashboards": { "dashboard_id": 1, "dashboard_id": 2 } } }

PROBLEMS

• TOO MANY SHARDS (ROUTING WAS MEANT TO BE WEEKLY)

• 500GB TO 900GB SHARDS (!!!) GROWING AFTER THE MONTH IS OVER. 3 HOURS FOR A REALLOCATION

• A ROLLING RESTART TAKES 3 FULL DAYS (IF WE’RE LUCKY)

• GARBAGE COLLECTOR NIGHTMARE, CONSTANTLY FLAPPING CLUSTER

MMAPFS VS NIOFS

• MMAPFS: MAPS LUCENE FILES INTO VIRTUAL MEMORY USING MMAP. NEEDS AS MUCH MEMORY AS THE FILES BEING MAPPED

• NIOFS: APPLIES A SHARED LOCK ON LUCENE FILES AND RELIES ON THE FILE SYSTEM CACHE
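Switching the store type is a one-line setting; a minimal elasticsearch.yml sketch (1.x syntax):

# read Lucene files through NIO and the file system cache instead of mmap
index.store.type: niofs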

CMS VS G1GC

• CMS: SHARES CPU TIME WITH THE APPLICATION. "STOPS THE WORLD" WHEN THERE IS TOO MUCH MEMORY TO CLEAN, UNTIL IT THROWS AN OUTOFMEMORYERROR

• G1GC: SHORTER, MORE FREQUENT PAUSES. WON'T FREEZE A NODE FOR SO LONG THAT IT LEAVES THE CLUSTER

G1GC OPTIONS

MaxGCPauseMillis=200: ALLOWS LONGER GARBAGE COLLECTION PAUSES…

GCPauseIntervalMillis=1000: …BUT LESS FREQUENT ONES

InitiatingHeapOccupancyPercent=35: STARTS COLLECTING WHEN THE HEAP IS 35% USED
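These map to standard G1 flags. A sketch of the JVM options (passed for example via ES_JAVA_OPTS or elasticsearch.in.sh); the UseG1GC flag is implied above and added here:

# enable G1 instead of the CMS collector
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:GCPauseIntervalMillis=1000
-XX:InitiatingHeapOccupancyPercent=35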

FIELD DATA CACHE EXPIRE

• FORCES ELASTICSEARCH TO PERIODICALLY EMPTY ITS INTERNAL FIELDDATA CACHE

• OVERLAPS WITH THE GARBAGE COLLECTOR'S JOB

• PERFORMANCE ISSUES WITH FREQUENTLY ACCESSED DATA

• USE OF FIELDDATA CIRCUIT BREAKERS TO STOP GREEDY QUERIES

• ELASTIC SAYS NEVER DO THIS!!! BUT IT FIXED OUR BIGGEST PROBLEM
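A minimal elasticsearch.yml sketch of the settings involved; the values are illustrative, not the ones used in production, and the breaker key changed name across 1.x minor versions:

# bound the fielddata cache and force it to expire (discouraged by Elastic)
indices.fielddata.cache.size: 30%
indices.fielddata.cache.expire: 10m
# circuit breaker rejecting queries that would load too much fielddata
indices.breaker.fielddata.limit: 60%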

MORE PROBLEMS

• IMMUTABLE, MONOLITHIC MAPPING: NEW FEATURES BLOCKED UNTIL WE FIX IT

• IMPOSSIBLE TO DELETE A DASHBOARD WITHOUT REINDEXING A WHOLE MONTH

• 20% DELETED DOCUMENTS WASTING 3TB

IMMUTABLE MAPPING AND DELETED DATA

• SEGMENTS: IMMUTABLE FILES USED BY LUCENE TO WRITE ITS DATA. UP TO 2500 PER SHARD (!!!)

• NO REAL DELETE: UPDATED AND DELETED DOCUMENTS ONLY GET A DELETED FLAG

• ELASTICSEARCH _OPTIMIZE: MERGES A SHARD'S SEGMENTS INTO 1 AND PURGES DELETED DOCS

• BUT: REQUIRES 150% OF THE SHARD SIZE ON DISK
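A sketch of the calls, with an illustrative index name; on shards this size the merge runs for hours and needs the disk headroom mentioned above:

# merge every shard of the index down to a single segment and purge deleted docs
curl -XPOST 'http://localhost:9200/some_index/_optimize?max_num_segments=1'

# lighter variant: only rewrite the segments with many deleted documents
curl -XPOST 'http://localhost:9200/some_index/_optimize?only_expunge_deletes=true'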

MERGE AND DELETE

JAN. 2015: BLINK

• 13200 indexes, 12B documents

• 5500 new documents / second

• 3 clusters, 75 physical servers running Elasticsearch 1.7.5

• Capacity: 187.5TB storage, 48TB data, 4.7TB RAM

CLUSTERS TYPOLOGY

4 CORE XEON D-1520, 64GB RAM SERVERS

• 2 QUERY NODES

• 3 MASTER NODES

• 20 DATA NODES: 4*800GB SSD IN RAID 0

NEW PRODUCT DESIGN

• 1 INDEX PER DASHBOARD, 1 SHARD PER 5 MILLION DOCS

• VERSIONED MAPPING: MAPPING_ID__DASHBOARD_ID

• MULTIPLE MAPPING VERSIONS OF A DASHBOARD IN PARALLEL

• MAPPING UPGRADE AND REINDEX WITHOUT INTERRUPTION

• BALDUR FOR DASHBOARD ROUTING

BALDUR

BALDUR IN A NUTSHELL

1. THE API SERVER SENDS AN ELASTICSEARCH QUERY

2. BALDUR INTERCEPTS THE QUERY AND GETS THE DASHBOARD'S CLUSTER ID AND ACTIVE MAPPING VERSION

3. BALDUR ROUTES THE QUERY TO THE CLUSTER HOSTING THE DASHBOARD DATA

ADDING A MAPPING VERSION

• THE INDEXER CREATES A NEW NEW_MAPPING_ID__DASHBOARD_ID INDEX

• THE INDEXER ADDS A LINE IN BALDUR’S DATABASE WITH THE DASHBOARD AND MAPPING IDS

• THE INDEXER WRITES TO BOTH MAPPING_ID__DASHBOARD_ID AND NEW_MAPPING_ID__DASHBOARD_ID

• WHEN NEW_MAPPING_ID__DASHBOARD_ID HAS CAUGHT UP, BALDUR SWITCHES THE ACTIVE MAPPING

TOO MANY LUCENE SEGMENTS

• EACH DATA NODE HOSTS 1000S OF LUCENE SEGMENTS

• 75% OF THE HEAP IS USED FOR SEGMENT MANAGEMENT

• WE CREATE MORE SEGMENTS THAN WE’RE ABLE TO OPTIMIZE

• CONTINUOUS OPTIMIZATION SCRIPTS, STARTING WITH THE INDEXES THAT HAVE THE MOST DELETED DOCS

• CONTINUOUS CLEANUP OF OLD INDEXES

MYSQL CAN'T KEEP UP

• BULK INDEXING BY BATCHES OF 5000 DOCS, BASED ON RANDOM READS, BRINGS MYSQL TO ITS KNEES

• FETCH THE DOCUMENTS FROM BLACKHOLE IN BATCHES OF 5000

• IF SOME DOCUMENTS ARE MISSING, FETCH THEM FROM MYSQL

• RESULT: 99.9% OF DOCUMENTS EXTRACTED FROM BLACKHOLE, THROUGHPUT ×5

REPLACE MYSQL WITH BLACKHOLE

RACK AWARENESS IN A NUTSHELL

1. DEFINE 2 VIRTUAL RACK IDS

2. ASSIGN EACH DATA NODE A RACK

3. ENABLE RACK AWARENESS (SEE THE SKETCH BELOW)

4. PRIMARY SHARDS PICK ONE SIDE, REPLICAS PICK THE OTHER ONE
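A minimal elasticsearch.yml sketch of that setup; the attribute name and rack values are illustrative:

# on half of the data nodes
node.rack_id: rack_one
# on the other half
node.rack_id: rack_two

# on every node: never allocate a primary and its replica on the same rack_id
cluster.routing.allocation.awareness.attributes: rack_id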

FULL CLUSTER RESTART IN 20 MINUTES

• CONFIGURATION TUNING REQUIRES LOTS OF RESTARTS

• RELY ON RACK AWARENESS TO RESTART HALF THE CLUSTER AT ONCE

• BLOCK SHARD ALLOCATION DURING THE SERVICE RESTART (SEE THE SKETCH BELOW)

• GET GREEN

• REPEAT
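A sketch of the allocation toggle around each half-cluster restart (1.x/2.x cluster settings API):

# before stopping the nodes of one rack
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'

# restart Elasticsearch on those nodes, wait for them to join, then re-enable allocation
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "all" } }'

# once the cluster is green again, repeat with the other rack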

ADDING NEW DOCUMENTS TO A DASHBOARD

WHERE DO YOU GO, MY LOVELY?

• PROBLEM: HOW DO WE KNOW WHICH DASHBOARDS A NEW DOCUMENT BELONGS TO?

• STOP RELYING ON MYSQL AND SPHINX

• 50M NEW DOCUMENTS TO PROCESS A DAY

SOLUTION: PERCOLATION

• REVERSE DIRECTORY SYSTEM

• WE STORE QUERIES, NOT DOCUMENTS

• FOR EACH NEW DOCUMENT, WE MATCH THE DOCUMENT AGAINST OUR QUERIES
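With the 1.x/2.x percolator this means one registered query per dashboard and a _percolate call per new document. A minimal sketch with illustrative index, type and field names:

# register a dashboard's query under the special .percolator type
curl -XPUT 'http://localhost:9200/percolators/.percolator/dashboard_42' -d '
{ "query": { "match": { "content": "some brand name" } } }'

# match a freshly crawled document against every registered query
curl -XGET 'http://localhost:9200/percolators/document/_percolate' -d '
{ "doc": { "content": "a post mentioning some brand name" } }'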

PERCOLATION ISSUES

• IT TRIES TO MATCH EVERY STORED QUERY

• SO FAR, WE HAVE 35000 STORED QUERIES

• RAW USAGE: 1,750,000,000,000 MATCHES A DAY

• CPU GREEDY

SOLUTIONS

• ROUTING WITH THE DASHBOARD AND DOCUMENT LANGUAGES

• FILTER AGAINST THE QUERY ONLY AS A SECOND STEP

• RESULT: UP TO 100,000 QUERIES / SECOND

GENERATING DASHBOARDS WITH 3 YEARS OF DATA

BLACKHOLE V1

• 36 INDEXES, 40B DOCUMENTS

• 1.5B NEW DOCUMENTS EACH MONTH

• 72 SERVERS RUNNING ELASTICSEARCH 2.3

• CAPACITY: 209TB STORAGE, 120TB DATA, 4.5TB RAM

• QUERIES ON THE WHOLE DATASET

CLUSTER TYPOLOGY

75 SERVERS: 4 CORE XEON D-1520, 64GB RAM EACH

• 4 HTTP NODES

• 3 MASTER NODES

• 68 DATA NODES: 4*800GB SSD IN RAID 0

BEFORE ELASTICSEARCH

• RUN QUERIES AGAINST A SPHINX CLUSTER TO GET THE RIGHT DOCUMENT IDS

• FETCH THE DOCUMENTS FROM A GALERA CLUSTER AND THE METADATA FROM ANOTHER GALERA CLUSTER

• MERGE AND DISPLAY THE DOCUMENTS

SPHINX NIGHTMARE

• CAN'T SCALE HORIZONTALLY: LIMITED TO 14 MONTHS OF DATA

• ONE COMPLEX QUERY AND THE WHOLE CLUSTER REACHES A LOAD OF 400

• HEAVY LOAD ON MYSQL

INDEXING PROCESS

INDEXING

• A GO PROGRAM MERGES 3 GALERA CLUSTERS AND 1 ES CLUSTER INTO A KAFKA QUEUE: 30,000 DOCUMENTS / SECOND

• 8 GO INDEXERS MAP THE INDEX / DATA NODE DISTRIBUTION AND PUSH THE DATA DIRECTLY TO THE RIGHT DATA NODE: 60,000 DOCUMENTS / SECOND FOR 3 WEEKS, WITH 200,000 / SECOND PEAKS

PROBLEM: WE’RE CPU BOUND

KAFKA IS TOO SLOW

• THE 72TB KAFKA QUEUE IS TOO SLOW: ONLY 10,000 DOCUMENTS / SECOND / PARTITION, BECAUSE OF SPINNING DISKS

MASSIVE QUERIES CRASH HALF THE CLUSTER

• ELASTICSEARCH CACHES THE RESULT OF FILTERED QUERIES: SET _CACHE TO FALSE.

• UPGRADE TO 1.7.5: THE FILTERED QUERIES CACHE HAS A MEMORY LEAK IN 1.7.4
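Disabling the cache is a per-filter flag in the 1.x DSL; a minimal sketch with illustrative field names:

{
  "query": {
    "filtered": {
      "filter": {
        "terms": {
          "dashboard_id": [ 1, 2, 3 ],
          "_cache": false
        }
      }
    }
  }
}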

BIGGEST QUERIES ARE SLOW AS HELL

• DIVIDE THE GLOBAL QUERIES PER INDEX AND RUN THEM IN PARALLEL

• PROCESS THE RESULTS POST-QUERY AT THE API LEVEL

REINDEXING 40 BILLION DOCS IN 5 DAYS

UPGRADING A MAPPING, ON A LIVE CLUSTER

• CAN’T CHANGE A FIELD TYPE

• CAN’T UPDATE ANALYZERS LIVE

• CAN’T REORGANIZE THE MAPPING WITH EXISTING DATA

CLUSTER DESIGN

• 36 MONTHS DATA, 40 BILLION DOCUMENTS

• 70 DATA NODES, 3 MASTERS

• 1 INDEX PER DAY, 12 SHARDS, 1 REPLICA
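Daily indexes with a fixed shard count are easy to enforce with an index template; a minimal sketch with an illustrative template name and index pattern:

curl -XPUT 'http://localhost:9200/_template/blackhole_daily' -d '
{
  "template": "blackhole-*",
  "settings": {
    "number_of_shards": 12,
    "number_of_replicas": 1
  }
}'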

REINDEXING

• USE LOGSTASH TO READ, TRANSFORM AND WRITE EXISTING DATA ON EACH DATA NODE

• EACH DATA NODE WRITES TO A DESIGNATED PEER NODE IN THE OTHER HALF OF THE CLUSTER

• 5000 DOCUMENTS SCROLL, 10 INDEXING WORKERS

• EACH SCROLL RUNS AGAINST A FULL DAY OF DATA

LOGSTASH CONFIGURATION

input {
  elasticsearch {
    hosts => [ "local elasticsearch node" ]
    index => "index to read from"
    size => 5000
    scroll => "20m" # 5 minutes initially
    docinfo => true
    query => '{ "query": { "range": { "date": { "gte": "2015-07-23T10:00:00.000+01:00", "lte": "2015-07-23T11:00:00.000+01:00" } } } }'
  }
}

output {
  elasticsearch {
    host => "remote elasticsearch node"
    index => "index to write to"
    protocol => "http"
    index_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
    workers => 10
  }
  stdout {
    codec => rubydebug # because removing the @timestamp field makes logstash crash
  }
}

filter {
  mutate {
    rename => { "some field" => "some other field" }
    rename => { "another field" => "somewhere else" }
    remove_field => [ "something", "something else", "another field", "some field", "@timestamp", "@version" ]
  }
}

ES CONFIGURATION CHANGES

indices:
  memory:
    index_buffer_size: 50% # instead of 10%
  store:
    throttle:
      max_bytes_per_sec: "2gb"

index:
  store:
    throttle:
      type: none # as fast as your SSDs can go
  translog:
    disable_flush: true
  refresh_interval: -1 # instead of 1s

PROBLEMS

• MISSING DOCUMENTS WHEN A SCROLL LOSES ITS SEARCH CONTEXT

• SOMETIMES, THE INDEXING NODES CRASH

• LOGSTASH DOES NOT LIKE NETWORK ISSUES

• NEED TO REPLAY A FULL DAY TO CATCH UP WITH THE DATA

SOLUTIONS

• RUN HOURLY QUERIES INSTEAD OF A FULL DAY AT ONCE

• WRITE A SMALL ORCHESTRATOR

• INTRODUCING YOKO AND MOULINETTE

YOKO, THE REINDEXING ORCHESTRATOR

• SMALL PYTHON DAEMON BACKED BY A MYSQL DATABASE THAT STORES, FOR EACH REINDEXING JOB:

• INDEX FROM

• INDEX TO

• LOGSTASH QUERY

• STATUS: TODO, PROCESSING, DONE, COMPLETE, FAILED

YOKO, THE REINDEXING ORCHESTRATOR

• CREATES THE DAILY INDEXES.

• FOR EACH "DONE" INDEX, COMPARES THE DOCUMENT COUNT WITH THE INITIAL INDEX BY RUNNING THE LOGSTASH QUERY.

• MOVES EACH "DONE" LINE TO "COMPLETE" IF THE COUNTS MATCH, OR TO "FAILED" OTHERWISE.

• DELETES EACH MONTHLY INDEX WHEN EVERY DAY OF THE MONTH IS "COMPLETE".

MOULINETTE, THE REINDEXING SCRIPT

• SMALL BASH SCRIPT THAT QUERIES YOKO

• GENERATES THE LOGSTASH.CONF FILE FROM YOKO DATA

• RUNS LOGSTASH

• SWITCHES THE YOKO LINE TO "DONE" WHEN FINISHED

PROBLEMS

• LOGSTASH FIELD TRANSFORMS ARE SLOW

• SHOULD RUN INDEXING ON FEWER NODES

• SOMETIMES LOGSTASH HANGS AND NEEDS TO BE FORCE-KILLED

• YOKO SHOULD DETECT THIS AND RAISE AN ERROR

UPGRADING FROM 1.7 TO 2.3

BEFORE UPGRADING

• CHECK YOUR PLUGINS CAN RUN ON 2.X

• CHECK YOUR MAPPINGS ARE 2.X COMPLIANT

• CHECK FOR CONFIGURATION DEPRECATION

SEEMS EASY?

1. SHUT DOWN THE CLUSTER

2. UPGRADE ES TO 2.3

3. UPGRADE THE PLUGINS

4. START THE WHOLE CLUSTER, MASTERS FIRST

UNSUPPORTED PLUGINS AND ANALYZERS

• CAN’T UPDATE AN ANALYZER ON AN OPEN INDEX

• CLOSE ALL INDEXES

• APPLY A TEMPORARY DUMMY ANALYZER

• REOPEN INDEXES

DUMMY KOREAN ANALYZER

"ANALYZER": { "KR_ANALYZER": { "TOKENIZER": "STANDARD", "FILTER": [ "CJK_WIDTH", "LOWERCASE", "CJK_BIGRAM", "ENGLISH_STOP" ] } }

QUESTIONS?
