Business Insight 2014 - Microsoft's new BI and database platform - Erling Skaale, Microsoft




Welcome, and Microsoft has a touchscreen :-D

Erling Skaale, TSP, Microsoft

[email protected]

Today's menu: in-memory tables (who and when, what and how), the updateable clustered columnstore, and hybrid and the cloud.

Drive Real-Time Business with Real-Time Insights

Over 100x query speed and significant data compression with In-Memory ColumnStore

Up to 30x faster transaction processing with In-Memory OLTP

Greater performance with In-Memory Analysis Services; billions of rows per second with PowerPivot In-Memory for Excel.
Faster insights: IN-MEMORY ANALYTICS. Faster queries: IN-MEMORY DW. Faster transactions: IN-MEMORY OLTP.

Now let's take a closer look at how we can impact your business with our in-memory technology. We are the only provider to date that can speed transactions as well as queries and insights with in-memory technology optimized for each workload: OLTP, data warehousing, and analytics.

With our new in-memory OLTP engine in SQL Server 2014, we have customers that have seen up to 30x faster transaction processing. I am not talking about query speed, but actual transaction write speed, up to 30x faster. I know many of you might be thinking: well, Oracle and other database vendors are talking about 100x. What they are talking about is query speed, not transactional speed. We are the only vendor that delivers an in-memory engine designed to increase OLTP transaction performance.

There's also a built-in in-memory columnstore for data warehousing workloads to speed queries. We were already benchmarking over 100x performance gains with many customers on the SQL Server 2012 release of the in-memory columnstore. With SQL Server 2014 the in-memory columnstore gets even better; we will talk about that in just a few minutes. Again, we can also increase query speed by over 100x.

Finally, we offer business users the ability to analyze data and data models much faster with built-in in-memory capabilities for Excel through PowerPivot and Analysis Services. The benefit is that you can analyze billions of rows of data per second in Excel, meaning a business user can analyze data of nearly any size with the tools they are most familiar with.

This is what we mean when we say driving real-time business with real-time insights. We can significantly speed your transaction business tied to your revenue stream. We can massively speed the process to analyze both real-time transaction data, along with historical and third party data from IT or business users. This is why we are already seeing in-memory technologies transforming the way businesses run.

2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION. 4/7/2014

In-Memory for Increased Throughput & Speed
Key benefits: optimized table structures; no locks or latches, with 100% data durability; up to 30x transactional performance gains; 16x faster transactions.
"To describe Hekaton in two words, it's wicked fast." (Rick Kutschera, Bwin)

Before / After

Greater throughput with no locks or latches: OLTP stored procedures, 30x faster transactions, stored procedures natively compiled in memory.

535x faster transactions

Now let's talk about how we increase both transactional speed and throughput by removing contention in the database. Many of you might be thinking: I could pin tables to memory in previous versions of SQL Server, how is this any different or better? The speed gains you have been hearing me talk about, from SBI Liquidity and Ferranti, are all compared to previous versions of SQL Server paging tables to memory. So why the massive speed gains? The key reason is that the table structures are now optimized to run in memory; there is no more paging of tables to memory, period. And there are no more locks and latches, which removes contention in the database. This is how we can achieve transactional performance increases of up to 30x.

In addition to speed, we can also improve throughput because our engineering team came up with an algorithm to remove locks and latches without compromising durability. This means massive reduction of contention in the database, which leads to increased throughput as well as speed.

Bwin is an ISV in the online-gaming industry, and for them, transactions equate to revenue. With our unique in-memory OLTP design point of optimized table structures and no locks and latches, they were able to improve transaction speed by 16x and increase player capacity by 20x on the same hardware. Because contention is significantly reduced, they were also able to cut player response times from 50 milliseconds to 3 milliseconds. In terms of the business value that SQL Server 2014's in-memory OLTP technology provided Bwin, this meant increased revenue, a significantly improved customer experience, and a greater number of customers on the same infrastructure! This is why we feel in-memory technology is transformational: it's because of the significant impact it can have on your business.


Business Transformation with In-Memory
Business challenge: a utility ISV wanted to increase the processing power of its MECOMS solution to handle a smart-meter data flood.
Key benefits: optimized utility rates based on consumption; new smart appliances tie to the smart meter for cost-effective energy utilization; database write speed increased 100%.

Before / With In-Memory OLTP & Windows Azure HDInsight

5 million measurements per year (mechanical meters) vs. 500 million measurements per day (mechanical & smart meters)

HDInsight: non-relational data. SQL Server 2014 In-Memory: 200M rows of relational data in 15 minutes.

Now let's show you how in-memory can be transformational to a business. Ferranti Computer Systems designs software for utility companies that is helping to revolutionize how electricity is consumed and sold. In-memory technology is key to this transformation.

So why do I say revolutionize how electricity is consumed and sold? Imagine a world where your home appliances, dishwashers, washing machines, dryers, automatically knew when electricity was cheapest in your city and would turn on only during those hours. Imagine a world with no more power outages because electricity was used so efficiently. That's great for consumers, and it's also good for the environment.

This is what Ferranti software is working with utility companies to create. Utility companies are using smart meter data to help improve efficiency and control consumption. The world of mechanical meters to measure consumption is going away in favor of smart meters that continually feed power consumption and quality data back to the utility companies. But with these smart meters you have a new problem: massive amounts of data being created that need to be quickly harnessed to set rates. This is where in-memory comes into play. Microsoft's in-memory OLTP solution not only speeds transactions but also removes contention to massively increase write speeds. In this case, Ferranti needed to quickly write more than 200 million rows of relational data to a database, and with SQL Server 2014 in-memory OLTP technology removing database contention, they were able to do this in less than 15 minutes.

And because our in-memory technology is built in, they could use additional Microsoft data platform services in parallel, like the Windows Azure HDInsight service, to tackle non-relational big data and help them compute utility rates. This new optimized electricity consumption model based on smart meters is just the first step. It has opened the door to smart appliances that communicate with smart meters to determine the most cost-effective times to run. Significantly increasing write speeds and removing contention in the database were key in helping Ferranti solve this big data problem.

SQL Server integration: same manageability, administration & development experience; integrated queries & transactions; integrated HA and backup/restore.
Main-memory optimized: optimized for in-memory data; indexes (hash and range) exist only in memory; no buffer pool; stream-based storage for durability.

In-Memory OLTP architecture
High concurrency: multi-version optimistic concurrency control with full ACID support; the core engine uses lock-free algorithms; no lock manager, latches, or spinlocks.
T-SQL compiled to machine code: T-SQL is compiled to machine code via a C code generator and the Visual C compiler; invoking a procedure is just a DLL entry point; aggressive optimizations at compile time.
Drivers (hardware trends and business): steadily declining memory prices, NVRAM; many-core processors; stalling CPU clock rates; TCO.
Benefits: hybrid engine and integrated experience; high-performance data operations; frictionless scale-up; efficient business-logic processing.


Main-memory optimized: optimized for in-memory data; indexes (hash and range) exist only in memory; no buffer pool; stream-based storage for durability. Hardware trends: steadily declining memory prices, NVRAM.
Table constructs: fixed schema (no ALTER TABLE; you must drop, re-create, and reload); no LOB data types; row size limited to 8,060 bytes; no constraint support (primary key only); no IDENTITY or computed columns, no CLR, etc.
Data and table size considerations: size of a table = (row size * number of rows); size of a hash index = (bucket_count * 8 bytes); maximum size for SCHEMA_AND_DATA = 512 GB.
I/O for durability: SCHEMA_ONLY vs. SCHEMA_AND_DATA; memory-optimized filegroup; data and delta files; transaction log; database recovery.

Memory-Optimized Tables: Design Considerations
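The design considerations above can be sketched in T-SQL. This is a minimal, hypothetical example (the database, filegroup, path, and table names are all made up) of adding a memory-optimized filegroup and creating a durable memory-optimized table in SQL Server 2014:

```sql
-- A filegroup with CONTAINS MEMORY_OPTIMIZED_DATA must exist before
-- any memory-optimized table can be created in the database.
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_imoltp CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'SalesDB_imoltp_dir', FILENAME = 'C:\Data\SalesDB_imoltp_dir')
    TO FILEGROUP SalesDB_imoltp;
GO

-- Fixed schema, no LOB types, row size limited to 8,060 bytes,
-- primary key required for a durable table.
CREATE TABLE dbo.Orders (
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON,
        DURABILITY = SCHEMA_AND_DATA);  -- SCHEMA_ONLY would persist schema but not data
```

With SCHEMA_AND_DATA, data is made durable through the transaction log plus the data and delta files in the memory-optimized filegroup, which is what the I/O-for-durability bullet above refers to.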


Memory-optimized table vs. normal table


Indexing Memory-Optimized Tables

Memory-optimized tables support two types of indexes:

Non-clustered hash index: a memory-optimized index; it does not support inequality operators or sort-order (ordered-scan) operations.

Non-clustered (range) index: also a memory-optimized index (the slide's earlier point applies: hash and range indexes exist only in memory); it fully supports ordered scans and inequality operators.

The bucket_count index parameter on a non-clustered hash index dictates the size of the hash table allocated for the index. This parameter needs to be set carefully, as it affects the performance of the memory-optimized table. Too high a bucket count leads to larger memory utilization and longer scans; too low a bucket count leads to performance degradation on lookups and inserts. Microsoft recommends that the bucket_count be twice the maximum number of unique index keys.
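As a sketch of that sizing guidance (hypothetical table and numbers): with roughly 500,000 unique keys expected, twice the unique-key count gives a bucket_count of 1,000,000, costing about 8 MB for the hash table per the bucket_count * 8 bytes formula above:

```sql
-- Expecting ~500,000 unique SessionId values; bucket_count = 2x unique keys.
-- Hash table memory: 1,000,000 buckets * 8 bytes = ~8 MB.
-- SQL Server rounds bucket_count up to the next power of two internally.
CREATE TABLE dbo.Sessions (
    SessionId BIGINT    NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT       NOT NULL,
    StartedAt DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Undersizing shows up as long hash chains (slow lookups and inserts); oversizing wastes memory and slows full scans, which is why the 2x rule is the recommended middle ground.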

There are a few stipulations for creating indexes on memory-optimized tables:

Only 8 indexes are allowed on a memory-optimized table.

Indexes cannot be added to an existing memory-optimized table; instead, the table has to be dropped and re-created with the new index.

A primary key is a requirement for memory-optimized tables.

All indexes are covering, which means they include all columns in a table.

Indexes reference the (hashed) row directly, rather than referencing the Primary Key.

Statistics Maintenance on Memory-Optimized Tables
Statistics behave slightly differently on in-memory tables compared to disk-based tables; there are a few things to remember here:

Statistics are not updated automatically; it is recommended to set up a regular manual statistics update operation on your memory-optimized tables.

sp_updatestats always runs a statistics update, unlike on disk-based tables, where sp_updatestats only updates statistics if there have been modifications since the last run.

Statistics updates must always be specified as FULLSCAN rather than SAMPLED. Index-key statistics are created when the table is empty, so it is recommended to always update statistics after ETL.

Natively compiled stored procedures need to be re-created when statistics are updated, because execution plans for natively compiled stored procedures consider statistics only once, when the procedures are created.
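A sketch of the maintenance step described above, against a hypothetical dbo.Orders memory-optimized table; in SQL Server 2014, statistics on memory-optimized tables are updated with FULLSCAN together with NORECOMPUTE:

```sql
-- Run after ETL: index-key statistics were created when the table was empty.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN, NORECOMPUTE;

-- Then drop and re-create any natively compiled procedures that touch
-- dbo.Orders, since their plans captured statistics only at creation time.
```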

Natively Compiled Stored Procedures - Migration


Accessing Memory-Optimized Tables
Natively compiled stored procedures: access only memory-optimized tables; maximum performance; limited T-SQL surface area. When to use: OLTP-style operations; optimizing performance-critical business logic.

Interpreted T-SQL access: access both memory- and disk-based tables; less performant; virtually the full T-SQL surface area. When to use: ad hoc queries; reporting-style queries; speeding up app migration.
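A minimal sketch of a natively compiled procedure over a hypothetical memory-optimized dbo.Orders table; in SQL Server 2014 the NATIVE_COMPILATION, SCHEMABINDING, and EXECUTE AS clauses, plus the ATOMIC block with its isolation level and language options, are required:

```sql
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderId INT, @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Compiled to machine code; invoking it is just a DLL entry point.
    INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate)
    VALUES (@OrderId, @CustomerId, SYSDATETIME());
END;
```

The same table remains reachable through interpreted T-SQL for ad hoc and reporting-style queries; only the performance-critical path needs native compilation.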


Clustered Columnstore Index

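The updateable clustered columnstore in SQL Server 2014 is created with ordinary DDL; a sketch on a hypothetical fact table:

```sql
-- Replaces the table's rowstore: the columnstore becomes the primary storage.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

-- Unlike the read-only nonclustered columnstore in SQL Server 2012,
-- the table remains fully updateable afterwards:
INSERT INTO dbo.FactSales (SaleId, ProductId, Amount)
VALUES (1, 42, 9.99);
```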

Cloud Backup (SQL Server 2014)
What's being delivered: an agent that manages and automates SQL Server backup policy.
Main benefit: large-scale management, with no need to manage backup policy yourself. Context-aware (e.g. workload/throttling); minimal knobs (control the retention period); manage the whole instance or particular DBs.
Leverages Backup to Azure: inherently off-site; geo-redundant; minimal storage costs; zero hardware management.
Example:
EXEC smart_admin.sp_set_db_backup
    @database_name='TestDB',
    @storage_url=,
    @retention_days=30,
    @credential_name='MyCredential',
    @enable_backup=1


Hybrid Cloud Solutions: fast disaster recovery (low RTO); easy to deploy and manage; cloud bursting; greater global reach; better isolation of internal assets. Scenarios: simplified cloud backup, cloud disaster recovery, extending on-premises apps.

Reduce CAPEX and OPEX with Cloud DR
[Diagram] The on-premises network hosts the primary replica and a synchronous-commit secondary replica; an asynchronous-commit secondary replica runs in Windows Azure, with a domain controller, reached over a VPN tunnel (Windows Azure Virtual Network). The Azure secondary can run backups and BI reports.
Backup: manual or automatic, at an instance level with point-in-time restore; measures DB usage patterns to set backup frequency.


An event can cause on-premises SQL Server to become unavailable: temporarily (e.g. a gateway failure) or permanently (e.g. flooding).

Why Do We Need Cloud DR for SQL Server?
A disaster recovery site is expensive: site rent plus maintenance, hardware, and ops.

