George Lin

How CE Change Should Impact SQL Queries By Examples Part 1: Basic Concepts

Updated: Nov 16, 2021


Introduction


Microsoft introduced the first major redesign of the SQL Server Query Optimizer's cardinality estimation process in SQL Server 2014; the previous design dated back to SQL Server 7.0. The goal is to improve the accuracy, consistency, and supportability of key areas within the cardinality estimation process, ultimately improving average query execution plan quality and associated workload performance.

This document provides an overview of the primary changes made to the cardinality estimator functionality, covering how to enable and disable the new cardinality estimator behavior, and showing how to troubleshoot plan-quality regressions if and when they occur.
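For reference, here is a minimal sketch of the documented ways to switch between the two CE models; the database and table names below are placeholders:

    -- Enable the new CE via the database compatibility level (120 = SQL Server 2014):
    ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 120;

    -- Revert to the legacy CE while keeping the new compatibility level (SQL Server 2016+):
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Per-query: trace flag 9481 forces the legacy CE under compatibility level 120 or higher.
    SELECT COUNT(*) FROM dbo.MyTable OPTION (QUERYTRACEON 9481);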


The Role of CE in Query Optimization


The SQL Server Query Optimizer is a cost-based optimizer. For a specific query, there might be multiple ways to execute it, and it is the Query Optimizer's job to find an efficient physical execution plan that fulfills the query request. The Query Optimizer attempts this by assigning estimated costs to the various query execution plan alternatives and then choosing the one with the lowest estimated cost.

How are operator costs determined? One key factor is the estimated number of rows that will be processed by each operator within a query execution plan, and that is where Cardinality Estimation (CE) comes into play. In addition to row count estimation, the cardinality estimator component is also responsible for providing information on:

  • The distribution of values.

  • Distinct value counts.

  • Duplicate counts as input for parent operator estimation calculations.

Accurate CE Comes from Accurate and Sufficient Statistics


CE relies on the input from statistics associated with objects referenced in the query. Statistics objects used for estimation can be associated with an index or they can exist independently. Most statistics objects are generated automatically by the query optimization process, but they can also be created manually during the troubleshooting process.

A statistics object has the following three main components (a script to inspect them follows this list):

  • The header: provides information such as the last time statistics were updated and the number of sampled rows.

  • The density vector: measures the uniqueness of a column (or set of columns); lower density values mean higher uniqueness.

  • The histogram: represents a column’s data distribution and the frequency of occurrence of distinct values. Histograms are limited to 200 contiguous steps representing noteworthy boundary values.
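As a minimal sketch, assuming the AdventureWorks sample database (the table and index names are illustrative), DBCC SHOW_STATISTICS exposes all three components:

    -- Show the header, density vector, and histogram in one call:
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID');

    -- Or return one component at a time:
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID') WITH STAT_HEADER;
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID') WITH DENSITY_VECTOR;
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID') WITH HISTOGRAM;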

The cardinality estimator component (CE) mainly uses the density vector and histogram information to calculate estimates when this information exists. In the absence of supporting statistics or constraints, the CE process provides estimates using heuristics based on the supplied filter and join predicates. Naturally, the results from using heuristics are much less accurate.

Basically, the SQL Server query optimizer uses CE to answer questions like the ones below (illustrative queries follow the list):

  • How many rows are needed to satisfy a single filter predicate, or multiple filter predicates? For example, queries with WHERE or HAVING clauses.

  • How many rows are required to satisfy a join predicate between two tables? For example, queries with JOIN conditions in the FROM clause.

  • How many distinct values are expected from a specific column or set of columns? For example, queries using the DISTINCT keyword or GROUP BY clauses.
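For instance, each of the following queries (the table names are hypothetical) poses one of those questions to the CE:

    -- Filter predicate selectivity (WHERE clause):
    SELECT * FROM dbo.Orders WHERE Status = 'Open';

    -- Join predicate cardinality (JOIN condition in the FROM clause):
    SELECT o.OrderID, c.Name
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

    -- Distinct value counts (GROUP BY clause):
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY CustomerID;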

Coding With CE Limits In Mind


There are some cases where it is very difficult for SQL Server to come up with an accurate cardinality estimate, and the resulting inaccurate cost calculations may cause suboptimal query plans. Developers should avoid the constructs listed below in queries; alternative query formulations can often improve query performance considerably (a rewrite example follows the list):

  • Queries with predicates that use comparison operators between different columns of the same table.

  • Queries with predicates that use operators, and any one of the following are true:

    • There are no statistics on the columns involved on either side of the operators.

    • The distribution of values in the statistics is not uniform, but the query seeks a highly selective value set. This situation can be especially true if the operator is anything other than the equality (=) operator.

    • The predicate uses the not equal to (!=) comparison operator or the NOT logical operator.

  • Queries that use any of the SQL Server built-in functions or a scalar-valued, user-defined function whose argument is not a constant value.

  • Queries that involve joining columns through arithmetic or string concatenation operators.

  • Queries that compare variables whose values are not known when the query is compiled and optimized.
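To illustrate the built-in-function case, here is a hypothetical rewrite (Sales.Orders is a placeholder table). Applying a function to the column hides the histogram from the CE, while an equivalent range predicate does not:

    -- Hard to estimate: the function on the column prevents histogram use.
    SELECT * FROM Sales.Orders WHERE YEAR(OrderDate) = 2021;

    -- Equivalent range predicate the CE can estimate from the histogram:
    SELECT * FROM Sales.Orders
    WHERE OrderDate >= '20210101' AND OrderDate < '20220101';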

The Performance Impact of Inaccurate CE Results

The SQL engine starts calculating cardinality estimates at the leaf level of a query execution plan and works all the way up to the plan root. Descendant operators provide estimates to their parents. Non-leaf query execution plan operators then make use of the descendant operators' row estimates and apply additional estimation activities as required (such as filtering).

The individual operator cost models receive the estimates as input. The estimates are a major factor in deciding which physical operator algorithms and plan shapes (such as join orders) are chosen. They also determine the final query plan that executes. Given these critical plan choices, when the cardinality estimation process contains a significantly skewed assumption, this can lead to an inefficient plan choice. This can, in turn, result in degraded performance.

Underestimating rows can lead to (a minimal demo follows this list):

  • Memory spills to disk, where not enough memory was requested for sort or hash operations.

  • Selection of a serial plan when parallelism would have been more optimal.

  • Inappropriate join strategies.

  • Inefficient index selection and navigation strategies.
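As a minimal sketch of an underestimate, assuming a pre-SQL Server 2019 compatibility level (table variables carry no statistics, so their scans are estimated at one row):

    -- Table variables have no statistics; the scan below is estimated at 1 row.
    DECLARE @ids TABLE (object_id INT NOT NULL);
    INSERT INTO @ids (object_id) SELECT object_id FROM sys.objects;

    -- In the actual plan, the scan of @ids shows Estimated Rows = 1,
    -- while Actual Rows equals however many rows were inserted above.
    SELECT * FROM @ids;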

Conversely, overestimating rows can lead to:

  • Selection of a parallel plan when a serial plan might be more optimal.

  • Inappropriate join strategy selection.

  • Inefficient index navigation strategies (scan versus seek).

  • Inflated memory grants.

  • Wasted memory and unnecessarily throttled concurrency.

Improving the accuracy of row estimates can improve the quality of the query execution plan and, as a result, improve the performance of the query.


How to Validate CE


The only way is to validate cardinality estimates against actual row counts in an actual query execution plan. The estimated query execution plan does not include run-time statistics; therefore, techniques like pulling the estimated plan from cache via sys.dm_exec_query_plan will only show estimated rows per operator, and it is often not practical or accurate to use the estimated plan to directly diagnose large skews between estimated and actual rows.

Unlike an estimated plan, the actual query execution plan parses and executes the query being analyzed. In general, CE skews can be determined by looking at the actual execution plan, which reflects final row counts versus estimates. An exception is the inner side of a Nested Loops Join operation: you must multiply the estimated number of executions by the per-iteration estimate to get the total estimated rows for the inner-side operator.

The actual plan can be captured using the following methods (examples follow the list):

  • SET STATISTICS XML.

  • SET STATISTICS PROFILE (deprecated, but not removed from SQL Server).

  • Graphical Showplan (using the “Include Actual Execution Plan” option).

  • The query_post_execution_showplan XEvent (use with caution as it has significant performance overhead).

  • The “Showplan Statistics Profile” and “Showplan XML Statistics Profile” SQL Trace events (again, use with caution as these events have significant performance overhead).

  • sys.dm_exec_query_profiles dynamic management view. This view allows for real-time query execution progress monitoring. The query returns the estimated rows per operator and the actual rows returned by the operator at the time the DMV was queried. Information in the DMV is available while the target query is executing. Once the target query’s execution is complete, the information is no longer available.
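As a minimal sketch, the first statement below captures the actual plan for a single query, and the second polls sys.dm_exec_query_profiles from another session (the session_id value is a placeholder):

    -- Capture the actual plan for one query in the current session:
    SET STATISTICS XML ON;
    SELECT COUNT(*) FROM sys.objects;   -- the query being analyzed
    SET STATISTICS XML OFF;

    -- Watch estimated vs. actual rows per operator while a query runs:
    SELECT node_id, physical_operator_name, estimate_row_count, row_count
    FROM sys.dm_exec_query_profiles
    WHERE session_id = 55;              -- replace 55 with the target session_id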

Problematic CE Skews


Depending on the level of workload complexity and data distribution variation, cardinality estimates for query plan operators may sometimes not be 100% accurate, but this doesn't necessarily indicate an actionable CE problem. While looking at the difference between estimated and actual row counts, consider the following overarching factors:

  • Does the row estimate skew result in excessive resource consumption? For example, spills to disk because of underestimates of rows or wasteful reservation of memory caused by row overestimates.

  • Does the row estimate skew coincide with specific query performance problems (e.g., longer execution time than expected)?

If the answer is “yes” to either of the questions above, then further investigation of the cardinality estimate skew may be warranted.


Correlate Performance Degradation with CE Upgrade


When moving to the new CE, some query execution plans will remain the same and some will change. Neither condition inherently suggests an issue. The following diagram can help categorize which query execution plan changes might justify further action related to the new CE:

[Diagram: categorizing query execution plan changes after moving to the new CE]
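While comparing, it helps to run the same query under both CE models. Here is a sketch using the documented query-level switches (the query itself is just a placeholder):

    -- Force the legacy (7.0) CE for one query (SQL Server 2016 SP1 and later):
    SELECT COUNT(*) FROM sys.objects
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

    -- Force the new CE for one query, even under an older compatibility level:
    SELECT COUNT(*) FROM sys.objects
    OPTION (USE HINT ('FORCE_DEFAULT_CARDINALITY_ESTIMATION'));

    -- On SQL Server 2014, before USE HINT existed, trace flags serve the same purpose:
    SELECT COUNT(*) FROM sys.objects OPTION (QUERYTRACEON 9481);  -- legacy CE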

