
Uploaded IQP lab and fixed formatting in TempDB

Seems the platform auto-formats width. If users opened the lab as-is, it would be too narrow and not expand properly.
pmasl 6 years ago
parent
commit
22f919c837

+ 7 - 2
Sessions/Winter-Ready-2019/Lab-Containers.md

@@ -1,8 +1,13 @@
+---
+title: "SQL Server Containers Lab"
+date: "11/21/2018"
+author: Vin Yu
+---
 # SQL Server Containers Lab
-This is built for Ready 2018 July
 
 ## Pre Lab
 1. Install docker engine by running the following:
+
 ```
 sudo yum install -y yum-utils device-mapper-persistent-data lvm2
 
@@ -11,7 +16,7 @@ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/dock
 sudo yum install http://mirror.centos.org/centos/7/extras/x86_64/Packages/pigz-2.3.3-1.el7.centos.x86_64.rpm
 
 sudo yum install docker-ce
- ```
+```
 
 check status of docker engine:
 ```

+ 316 - 12
Sessions/Winter-Ready-2019/Lab-IQP.md

@@ -25,7 +25,7 @@ The following are requirements to run this lab:
 
 ## Lab
 
-### Batch Mode Memory Grant Feedback (MGF)
+### Exercise 1 - Batch Mode Memory Grant Feedback (MGF)
 
 Queries may spill to disk or take too much memory based on poor cardinality estimates. MGF will adjust memory grants based on execution feedback, and remove spills to improve concurrency for repeating queries. In SQL Server 2017, MGF was only available for Batch Mode (which meant Columnstore had to be in use). In SQL Server 2019 and Azure SQL DB, MGF was extended to also work in Row Mode, which means it's available for all queries running on the SQL Server Database Engine.
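While repeating a query, the grant adjustments can be watched live with a standard DMV; a minimal sketch (not part of the lab script):

```sql
-- Sketch: compare requested vs. granted vs. actually used memory for
-- currently executing queries while re-running the workload. With MGF,
-- granted_memory_kb should converge toward max_used_memory_kb.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       max_used_memory_kb,
       ideal_memory_kb
FROM sys.dm_exec_query_memory_grants;
```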
 
 
@@ -103,7 +103,9 @@ Queries may spill to disk or take too much memory based on poor cardinality esti
 
 Note that different parameter values may also require different query plans in order to remain optimal. This type of query is defined as **parameter-sensitive**. For parameter-sensitive plans (PSP), MGF will disable itself on a query if it has unstable memory requirements over a few executions.
 
-### Table Variable (TV) Deferred Compilation
+---
+
+### Exercise 2 - Table Variable (TV) Deferred Compilation
 
 Table Variables are suitable for small intermediate result sets, usually no more than a few hundred rows. However, if these constructs have more rows, the legacy behavior of handling a TV is prone to performance issues.
 
@@ -129,11 +131,11 @@ Starting with SQL Server 2019, the behavior is that the compilation of a stateme
     GO
     ```
 
-4. For the next steps, looking at the query execution plan is needed. Click on **Include Actual Plan** or press CTRL+M.
+3. For the next steps, looking at the query execution plan is needed. Click on **Include Actual Plan** or press CTRL+M.
 
     ![Include Actual Plan](./media/ActualPlan.png "Include Actual Plan")
 
-5. Execute the command below in the query window: 
+4. Execute the command below in the query window: 
 
     > **Note:**
    > This should take between 1 and 5 minutes.
@@ -160,9 +162,9 @@ Starting with SQL Server 2019, the behavior is that the compilation of a stateme
     GO
     ```
 
-6. Observe the shape of the query execution plan, that it is a serial plan, and that Nested Loops Joins were chosen given the estimated low number of rows.
+5. Observe the shape of the query execution plan, that it is a serial plan, and that Nested Loops Joins were chosen given the estimated low number of rows.
 
-7. Click on the **Table Scan** operator in the query execution plan, and hover your mouse over the operator. Observe:
+6. Click on the **Table Scan** operator in the query execution plan, and hover your mouse over the operator. Observe:
    - The ***Actual Number of Rows*** is 750000.
    - The ***Estimated Number of Rows*** is 1. 
    This indicates the legacy behavior of misusing a TV, with the huge estimation skew.
@@ -170,7 +172,7 @@ Starting with SQL Server 2019, the behavior is that the compilation of a stateme
    ![Table Variable legacy behavior](./media/TV_Legacy.png "Table Variable legacy behavior") 
 
-8. Setup the database to ensure the latest database compatibility level is set, by running the commands below in the query window:
+7. Set up the database to ensure the latest database compatibility level is set, by running the commands below in the query window:
 
    > **Note:**
    > This ensures the database engine behavior related to Table Variables is mapped to SQL Server 2019.
@@ -184,17 +186,319 @@ Starting with SQL Server 2019, the behavior is that the compilation of a stateme
     GO
     ```
 
-9. Execute the same command as step 5. 
+8. Execute the same command as step 4. 
 
-10. Observe the shape of the query execution plan now, that it is a parallel plan, and that a single Hash Joins was chosen given the estimated high number of rows.
+9. Observe the shape of the query execution plan now, that it is a parallel plan, and that a single Hash Join was chosen given the estimated high number of rows.
 
-11. Click on the **Table Scan** operator in the query execution plan, and hover your mouse over the operator. Observe:
+10. Click on the **Table Scan** operator in the query execution plan, and hover your mouse over the operator. Observe:
    - The ***Actual Number of Rows*** is 750000.
    - The ***Estimated Number of Rows*** is 750000. 
    This indicates the new behavior of TV deferred compilation, with no estimation skew and a better query execution plan, which also executed much faster (~20 seconds).
 
    ![Table Variable deferred compilation](./media/TV_New.png "Table Variable deferred compilation") 
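Before SQL Server 2019 (or below compatibility level 150), a common mitigation for this estimation skew was a statement-level recompile; a minimal sketch, not part of the lab script:

```sql
-- Legacy workaround: OPTION (RECOMPILE) lets the optimizer see the table
-- variable's actual row count at compile time, at the cost of a compile
-- on every execution.
DECLARE @t TABLE (id INT PRIMARY KEY);

INSERT INTO @t (id)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

SELECT COUNT(*)
FROM @t
OPTION (RECOMPILE);  -- the estimate now reflects the real cardinality
```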
 
 
-### Batch Mode on Rowstore
+---
+
+### Exercise 3 - Batch Mode on Rowstore
+
+The query optimizer has (until now) considered batch-mode processing only for queries that involve at least one table with a Columnstore index. SQL Server 2019 is introducing **Batch Mode on Rowstore**, which means that use of columnstore indexes is not a condition to use batch mode processing anymore.    
+
+However, Batch Mode is especially useful when processing large numbers of rows, as in analytical queries, which means users won't see Batch Mode used on every query. A rough initial check involves table sizes, operators used, and estimated cardinalities in the input query. Additional checkpoints are used as the optimizer discovers new, cheaper plans for the query, and if plans do not make significant use of batch mode, the optimizer will stop exploring batch mode alternatives.
+
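Where batch mode on rowstore is not wanted, it can be controlled explicitly; a sketch of the documented switches (assuming SQL Server 2019, not part of the lab steps):

```sql
-- Disable batch mode on rowstore for the whole database...
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

-- ...and re-allow it for an individual query via a hint:
SELECT COUNT(*)
FROM dbo.LINEITEM
OPTION (USE HINT('ALLOW_BATCH_MODE'));
```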
+1. Open SSMS and connect to the SQL Server 2019 instance (default instance). Click on **New Query** or press CTRL+N.
+
+    ![New Query](./media/new_query.png "New Query") 
+
+2. Set up the database to ensure the latest database compatibility level is set, by running the commands below in the query window:
+
+    > **Note:**
+    > This ensures the database engine behavior is mapped to SQL Server 2019.
+
+    ```sql
+    USE master;
+    GO
+
+    ALTER DATABASE [tpch10g-btree] 
+    SET COMPATIBILITY_LEVEL = 150;
+    GO
+
+    USE [tpch10g-btree];
+    GO
+    
+    ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
+    GO
+    ```
+
+3. For the next steps, looking at the query execution plan is needed. Click on **Include Actual Plan** or press CTRL+M.
+
+    ![Include Actual Plan](./media/ActualPlan.png "Include Actual Plan") 
+
+4. Execute the command below in the query window: 
+
+    > **Note:**
+    > The hint forces row mode and should take about 1 to 2 minutes.
+
+    ```sql
+    SELECT L_RETURNFLAG,
+        L_LINESTATUS,
+        SUM(L_QUANTITY) AS SUM_QTY,
+        SUM(L_EXTENDEDPRICE) AS SUM_BASE_PRICE,
+        COUNT(*) AS COUNT_ORDER
+    FROM LINEITEM
+    WHERE L_SHIPDATE <= dateadd(dd, -73, '1998-12-01')
+    GROUP BY L_RETURNFLAG,
+            L_LINESTATUS
+    ORDER BY L_RETURNFLAG,
+            L_LINESTATUS
+    OPTION (RECOMPILE, USE HINT('DISALLOW_BATCH_MODE'));
+    ```
+
+5. Observe the query execution plan and note there is no columnstore in use.     
+    Click on the **Clustered Index Seek** operator in the query execution plan, and hover your mouse over the operator. Observe:
+    - The ***Actual Number of Rows*** is over 59 Million.
+    - The ***Estimated Number of Rows*** is the same. This indicates no estimation skews.
+    - The ***Actual Execution Mode*** and ***Estimated Execution Mode*** show "Row", with ***Storage*** being "RowStore".
+
+    ![Batch Mode on Rowstore disabled](./media/BMRS_Row.png "Batch Mode on Rowstore disabled")  
+
+6. Execute the command below in the query window: 
+
+    ```sql
+    SELECT L_RETURNFLAG,
+        L_LINESTATUS,
+        SUM(L_QUANTITY) AS SUM_QTY,
+        SUM(L_EXTENDEDPRICE) AS SUM_BASE_PRICE,
+        COUNT(*) AS COUNT_ORDER
+    FROM LINEITEM
+    WHERE L_SHIPDATE <= dateadd(dd, -73, '1998-12-01')
+    GROUP BY L_RETURNFLAG,
+            L_LINESTATUS
+    ORDER BY L_RETURNFLAG,
+            L_LINESTATUS
+    OPTION (RECOMPILE);
+    ```
+
+7. Observe the query execution plan and note there is still no columnstore in use.    
+    Click on the **Clustered Index Seek** operator in the query execution plan, and hover your mouse over the operator. Observe:
+    - The ***Actual Number of Rows*** and ***Estimated Number of Rows*** remain the same as before. Over 59 Million rows.
+    - The ***Actual Execution Mode*** and ***Estimated Execution Mode*** show "Batch", with ***Storage*** still being "RowStore". 
+    This indicates the new behavior of allowing eligible queries to execute in Batch Mode over Rowstore, whereas up to SQL Server 2017 this was only allowed over Columnstore. It also executed much faster (~3 seconds).
+
+    ![Batch Mode on Rowstore](./media/BMRS_Batch.png "Batch Mode on Rowstore") 
+
+---
+
+### Exercise 4 - Scalar UDF Inlining 
+
+User-Defined Functions that are implemented in Transact-SQL and return a single data value are referred to as T-SQL Scalar User-Defined Functions. T-SQL UDFs are an elegant way to achieve code reuse and modularity across SQL queries, and help in building up complex logic without requiring expertise in writing complex SQL queries.
+
+However, Scalar UDFs can introduce performance issues in workloads. Here are a few reasons why:
+
+- **Iterative invocation**: Invoked once per qualifying row. Repeated context switching – and even worse for UDFs that execute SQL queries in their definition
+- **Lack of costing**: Scalar operators are not costed (realistically).
+- **Interpreted execution**: Each statement itself is compiled, and the compiled plan is cached. No cross-statement optimizations are carried out.
+- **Serial execution**: SQL Server does not allow intra-query parallelism in queries that invoke UDFs.
+
+In SQL Server 2019, the ability to inline Scalar UDFs means we can enable the benefits of UDFs without the performance penalty, for queries that invoke scalar UDFs where UDF execution is the main bottleneck. Using query rewriting techniques, UDFs are transformed into equivalent relational expressions that are “inlined” into the calling query with which the query optimizer can work to find more efficient plans.
+
+> **Note:**
+> Not all T-SQL constructs are inlineable, such as when the UDF is:
+> - Invoking any intrinsic function that is either time-dependent (such as `GETDATE()`) or has side effects (such as `NEWSEQUENTIALID()`)
+> - Referencing table variables or table-valued parameters
+> - Referencing scalar UDF call in its `GROUP BY` clause
+> - Natively compiled (interop is supported)
+> - Used in a computed column or a check constraint definition
+> - References user-defined types
+> - Used in a partition function
+
+> **Important:**
+> If a scalar UDF is inlineable, it does not imply that it will always be inlined. SQL Server will decide (on a per-query, per-UDF basis) whether to inline a UDF or not.
+
+1. Open SSMS and connect to the SQL Server 2019 instance (default instance). Click on **New Query** or press CTRL+N.
+
+    ![New Query](./media/new_query.png "New Query") 
+
+2. Set up the database to ensure the latest database compatibility level is set, by running the commands below in the query window:
+
+    > **Note:**
+    > This ensures the database engine behavior is mapped to SQL Server 2019.
+
+    ```sql
+    USE master;
+    GO
+
+    ALTER DATABASE [tpch10g-btree] 
+    SET COMPATIBILITY_LEVEL = 150;
+    GO
+
+    USE [tpch10g-btree];
+    GO
+    
+    ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
+    GO
+    ```
+
+3. Create a UDF that does data access, by running the commands below in the query window:
+
+    ```sql
+    CREATE OR ALTER FUNCTION dbo.CalcAvgQuantityByPartKey
+        (@PartKey INT)
+    RETURNS INT
+    AS
+    BEGIN
+            DECLARE @Quantity INT
+
+            SELECT @Quantity = AVG([L_Quantity])
+            FROM [dbo].[lineitem]
+            WHERE [L_PartKey] = @PartKey
+
+            RETURN (@Quantity)
+    END
+    GO
+    ```
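Whether the engine considers this UDF inlineable can be checked via the `is_inlineable` column that SQL Server 2019 adds to `sys.sql_modules`; a quick check, not part of the lab steps:

```sql
-- is_inlineable = 1 means the UDF meets the inlining requirements;
-- the optimizer still decides per query whether to actually inline it.
SELECT OBJECT_NAME(object_id) AS udf_name,
       is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.CalcAvgQuantityByPartKey');
```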
+
+4. For the next steps, looking at the query execution plan is needed. Click on **Include Actual Plan** or press CTRL+M.
+
+    ![Include Actual Plan](./media/ActualPlan.png "Include Actual Plan") 
+
+5. Execute the command below in the query window: 
+
+    > **Note:**
+    > The hint forcibly disables UDF inlining and should take between 20 and 30 seconds.
+
+    ```sql
+    SELECT TOP 1000
+        L_OrderKey,
+        L_PartKey,
+        L_SuppKey,
+        L_ExtendedPrice,
+        dbo.CalcAvgQuantityByPartKey(L_PartKey)
+    FROM dbo.lineitem
+    WHERE L_Quantity > 44
+    ORDER BY L_Tax DESC
+    OPTION (RECOMPILE,USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));
+    ```
+
+6. Observe the query execution plan shape and note:
+    - The overall elapsed time.
+    - The time spent on each operator. 
+    - The fact that the plan executed in serial mode.
+    - The **Compute Scalar** operator obfuscates the logic inside, and the estimated cost is low, as evidenced by the estimated cost of zero percent as it relates to the entire plan.
+
+    ![Scalar UDF not inlined](./media/UDF_NotInlined.png "Scalar UDF not inlined") 
+
+7. Now execute the command below in the query window: 
+
+    ```sql
+    SELECT TOP 1000
+        L_OrderKey,
+        L_PartKey,
+        L_SuppKey,
+        L_ExtendedPrice,
+        dbo.CalcAvgQuantityByPartKey(L_PartKey)
+    FROM dbo.lineitem
+    WHERE L_Quantity > 44
+    ORDER BY L_Tax DESC
+    OPTION (RECOMPILE);
+    ```
+
+8. Observe the query execution plan shape and note:
+    - The overall elapsed time dropped.
+        > **Note:**
+        > The metrics you observed are for a first-time execution only, and because the **RECOMPILE** hint is used.
+        > Removing the hint **RECOMPILE** and executing the same statements multiple times should yield lower execution times, while maintaining the relative performance difference. 
+
+    - The plan has inlined all the logic that was obfuscated by the UDF in the previous plan. 
+    - The fact that the plan executed in parallel.
+    - The Database Engine was able to identify a potentially missing index with a higher projected impact, precisely because it was able to inline the UDF.
+    - The inlined scalar UDF allowed us to see there is a **SORT** operator that is spilling. MGF can resolve this after a few executions if the hint **RECOMPILE** wasn't used.
+
+    ![Scalar UDF inlined](./media/UDF_Inlined.png "Scalar UDF inlined") 
+
+---
+
+### Exercise 5 - Approximate QP
+
+Obtaining row counts serves numerous dashboard-type scenarios. When these queries are executed against big data sets with many distinct values (for example, distinct orders counts over a time period) – and many concurrent users, this may introduce performance issues, increased resource usage such as memory, and blocking.
+
+For some of these scenarios, approximate data is good enough. For example, data scientists exploring big data sets and analyzing trends need to understand data distributions quickly, but exact values are not paramount.
+
+SQL Server 2019 introduces the ability to do approximate `COUNT DISTINCT` operations for big data scenarios, with the benefit of high performance and a (very) low memory footprint.
+
+> **Important:**
+> Approximate QP does NOT target banking applications or any scenario where an exact value is required! 
+
+1. Open SSMS and connect to the SQL Server 2019 instance (default instance). Click on **New Query** or press CTRL+N.
+
+    ![New Query](./media/new_query.png "New Query") 
+
+2. Set up the database to ensure the latest database compatibility level is set, by running the commands below in the query window:
+
+    ```sql
+    USE master;
+    GO
+
+    ALTER DATABASE [tpch10g-btree] 
+    SET COMPATIBILITY_LEVEL = 150;
+    GO
+
+    USE [tpch10g-btree];
+    GO
+    
+    ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
+    GO
+    ```
+
+3. Now execute the commands below in the query window: 
+
+    ```sql
+    DBCC DROPCLEANBUFFERS;
+    GO
+    SELECT COUNT(DISTINCT [L_OrderKey])
+    FROM [dbo].[lineitem];
+    GO
+
+    DBCC DROPCLEANBUFFERS;
+    GO
+    SELECT APPROX_COUNT_DISTINCT([L_OrderKey])
+    FROM [dbo].[lineitem];
+    GO
+    ```
+
+4. Observe the query execution plan shape and note:
+    - The plans look exactly the same.
+    - Execution time is very similar.
+
+    ![Approximate Count Distinct](./media/ApproxCount.png "Approximate Count Distinct") 
+
+
+5. Right-click *Query 1* execution plan root node - the **SELECT** - and click on **Properties**.     
+    In the ***Properties*** window, expand **MemoryGrantInfo**. Note that:
+    - The property ***GrantedMemory*** is almost 3 GB. 
+    - The property ***MaxUsedMemory*** is almost 700 MB.
+
+    ![Normal Count Distinct - Properties](./media/CD_Properties.png "Normal Count Distinct - Properties") 
+
+6. Now right-click *Query 2* execution plan root node - the **SELECT** - and click on **Properties**.     
+    In the ***Properties*** window, expand **MemoryGrantInfo**. Note that:
+    - The property ***GrantedMemory*** is just under 40 MB. 
+    - The property ***MaxUsedMemory*** is about 8 MB.
+
+    ![Approximate Count Distinct - Properties](./media/ACD_Properties.png "Approximate Count Distinct - Properties") 
+
+    > **Important:**
+    > This means that for scenarios where an approximate count is enough, the much lower memory footprint allows these types of queries to be executed often, with fewer concerns about concurrency and memory resource bottlenecks.
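The same trade-off applies to grouped, dashboard-style queries; for instance, a sketch using this lab's schema (not part of the lab steps):

```sql
-- Approximate distinct order counts per return flag: same low-memory,
-- high-concurrency behavior as the single aggregate above.
SELECT L_RETURNFLAG,
       APPROX_COUNT_DISTINCT(L_ORDERKEY) AS approx_orders
FROM dbo.lineitem
GROUP BY L_RETURNFLAG;
```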
+
+7. If you execute the queries again without the `DBCC DROPCLEANBUFFERS` command, the pages are now cached, so the queries run much faster, but the observations above still hold.
+
+    ```sql
+    SELECT COUNT(DISTINCT [L_OrderKey])
+    FROM [dbo].[lineitem];
+    GO
+
+    SELECT APPROX_COUNT_DISTINCT([L_OrderKey])
+    FROM [dbo].[lineitem];
+    GO
+    ```
 
 
-TBA
+    ![Approximate Count Distinct with warm cache](./media/ApproxCount_Warm.png "Approximate Count Distinct with warm cache")

+ 30 - 77
Sessions/Winter-Ready-2019/Lab-Memory-OptimizedTempDB.md

@@ -5,24 +5,14 @@ author: Pam Lahoud
 ---
 # SQL Server 2019 Memory-Optimized TempDB Lab
 
-1.  Connect to the Windows VM in Azure using the information provided in
-    the LODS portal instructions. If you are performing this lab in your
-    own environment, create a VM with a minimum of 32 cores and SQL
-    Server 2019 CTP 2.3.
+1.  Connect to the Windows VM in Azure using the information provided in the LODS portal instructions. If you are performing this lab in your own environment, create a VM with a minimum of 32 cores and SQL Server 2019 CTP 2.3.
 
-2.  Open SQL Server Management Studio (SSMS) and connect to the
-    **HkTempDBTestVM\\SQL2019\_CTP23** instance. Verify that the
-    AdventureWorks database exists. If it does not, perform the
-    following steps:
+2.  Open SQL Server Management Studio (SSMS) and connect to the **HkTempDBTestVM\\SQL2019\_CTP23** instance. Verify that the **AdventureWorks** database exists. If it does not, perform the following steps:
 
-    1.  Open and execute the script
-        `C:\Labs\Memory-OptimizedTempDB\00-AdventureWorks\_Setup.sql`.
-        This will restore the AdventureWorks database and make
-        configuration changes needed for the rest of the lab.
+    1.  Open and execute the script `C:\Labs\Memory-OptimizedTempDB\00-AdventureWorks\_Setup.sql`.    
+        This will restore the AdventureWorks database and make configuration changes needed for the rest of the lab.
 
-    2.  Open and execute the script
-        `C:\Labs\Memory-OptimizedTempDB\01-SalesAnalysis\_Optimized.sql`
-        to create the **SalesAnalysis\_Optimized** stored procedure.
+    2.  Open and execute the script `C:\Labs\Memory-OptimizedTempDB\01-SalesAnalysis\_Optimized.sql` to create the **SalesAnalysis\_Optimized** stored procedure.
 
 3.  Verify that there are no startup trace flags set:
 
@@ -35,22 +25,17 @@ author: Pam Lahoud
 
 
     4.  Click the ***Startup Parameters*** tab.
 
-    5.  Verify that there are no lines that being with "-T" in the list
-        of existing parameters:    
+    5.  Verify that there are no lines that begin with "-T" in the list of existing parameters:    
 
         ![Startup Parameters No Flag](./media/StartupParametersNoFlag.png "Startup Parameters No Flag")
 
         If any exist, highlight them and click "Remove".
 
-    6.  Click "OK" to close the *Properties* window, then click "OK" on
-        the Warning box that pops up.
+    6.  Click "OK" to close the *Properties* window, then click "OK" on the Warning box that pops up.
 
-    7.  Restart the SQL Server service by right-clicking on "SQL Server
-        (SQL2019\_CTP23)" and choosing **Restart**.
+    7.  Restart the SQL Server service by right-clicking on "SQL Server (SQL2019\_CTP23)" and choosing **Restart**.
 
-4.  Browse to the `C:\Labs\Memory-OptimizedTempDB` folder and
-    double-click the file `SQL2019_CTP23_PerfMon.htm` file to open it in
-    Internet Explorer.    
+4.  Browse to the `C:\Labs\Memory-OptimizedTempDB` folder and double-click the `SQL2019_CTP23_PerfMon.htm` file to open it in Internet Explorer.    
     
     - Once the file opens, right-click anywhere in the white area of the window to clear the existing data.
     - You will receive a prompt warning *this action will erase the data in the graph*.    
@@ -63,44 +48,24 @@ author: Pam Lahoud
     - `02-get summary of current waits.sql`
     - `02-get object info from page resource sql 2019.sql`
 
-    These will be used to monitor the server while the workload 
-    is running.
+    These will be used to monitor the server while the workload is running.
 
 6.  Start the workload:
 
-    1.  Open a Command Prompt and browse to
-        `C:\Labs\Memory-OptimizedTempDB`
+    1.  Open a Command Prompt and browse to `C:\Labs\Memory-OptimizedTempDB`
 
-    2.  Go back to the Internet Explorer window that has the Performance
-        Monitor collector open and click the play button (green arrow) to start the
-        collection.
+    2.  Go back to the Internet Explorer window that has the Performance Monitor collector open and click the play button (green arrow) to start the collection.
 
-    3.  From the Command Prompt window, execute
-        `SQL2019_CTP21_Run_SalesAnalysis_Optimized.bat` by typing or
-        pasting the file name and hitting **Enter**.
+    3.  From the Command Prompt window, execute `SQL2019_CTP21_Run_SalesAnalysis_Optimized.bat` by typing or pasting the file name and hitting **Enter**.
 
-7.  While the workload is running, watch the counters in Performance
-    Monitor. You should see **Batch Requests/sec** around 500 and
-    there should be **Page Latch** waits throughout the workload.    
+7.  While the workload is running, watch the counters in Performance Monitor. You should see **Batch Requests/sec** around 500 and there should be **Page Latch** waits throughout the workload.    
     
-    You can then go to SSMS and run the various scripts to monitor the workload.   You should see several sessions waiting on `PAGELATCH`, and when using
-    the `02-get object info from page resource sql 2019.sql` you should
-    see the sessions are waiting on pages that belong to TempDB system
-    tables, most often `sysschobjs`.    
-    This is TempDB metadata contention and is the scenario that this 
-    SQL Server 2019 improvement is targeted to correct.    
+    You can then go to SSMS and run the various scripts to monitor the workload. You should see several sessions waiting on `PAGELATCH`, and when using the `02-get object info from page resource sql 2019.sql` script you should see the sessions are waiting on pages that belong to TempDB system tables, most often `sysschobjs`.    
+    This is the TempDB metadata contention that this SQL Server 2019 improvement is designed to correct.    
     
-    Feel free to run the workload a few more times to examine
-    the different scripts and performance counters. Each time you run
-    it, the runtime will be reported in the Command Prompt window. It
-    should take about 1 minute to run each time.
-
-8.  Once you have finished examining the contention, make sure the
-    Command Prompt scripts are complete and pause the Performance
-    Monitor collection. We'll use the same Performance Monitor window
-    for the next part of the lab, so it's a good idea to have at least
-    one collection of the workload on the screen when you pause it in
-    order to compare before and after the change.
+    Feel free to run the workload a few more times to examine the different scripts and performance counters. Each time you run it, the runtime will be reported in the Command Prompt window. It should take about 1 minute to run each time.
+
+8.  Once you have finished examining the contention, make sure the Command Prompt scripts are complete and pause the Performance Monitor collection. We'll use the same Performance Monitor window for the next part of the lab, so it's a good idea to have at least one collection of the workload on the screen when you pause it in order to compare before and after the change.
 
 9.  Turn on Memory-Optimized TempDB:
 
@@ -111,35 +76,23 @@ author: Pam Lahoud
     5. In the "Specify a startup parameter:" box, type "-T3895" and click the "Add" button.
     6. The "Existing parameters:" box should now look like this:
     
-        ![Startup Parameters With Flag](./media/StartupParametersWithFlag.png "Startup Parameters With Flag")
+       ![Startup Parameters With Flag](./media/StartupParametersWithFlag.png "Startup Parameters With Flag")
+
     7. Click "OK" to close the *Properties* window, then click "OK" on the Warning box that pops up.
-    8. Restart the SQL Server service by right-clicking on "SQL Serve (SQL2019\_CTP23)" and choosing **Restart**.
+    8. Restart the SQL Server service by right-clicking on "SQL Server (SQL2019\_CTP23)" and choosing **Restart**.
 
-10. Go back to the Performance Monitor collector and click play to start
-    the collection.
+10. Go back to the Performance Monitor collector and click play to start the collection.
 
 11. Start the workload the same way you did in Step 6.
 
-12. Again, watch the Performance Monitor counters. You should see **Batch Requests/sec**
-    higher this time, around 600, and there should
-    be no Page Latch waits.   
+12. Again, watch the Performance Monitor counters. You should see **Batch Requests/sec** higher this time, around 600, and there should be no Page Latch waits.   
 
     > **Note:**
-    > You may see a small bump of Page Latch waits the first
-    time you run the workload after the restart. This should disappear
-    the second time you run it.
-    
-    Running the scripts from step 6 during the workload should show that
-    no sessions are waiting for any resources. Again, feel free to run
-    the workload multiple times. It should run faster this time, around
-    52 seconds vs. 1 minute.
+    > You may see a small bump of Page Latch waits the first time you run the workload after the restart. This should disappear the second time you run it.
+
+    Running the scripts from step 5 during the workload should show that no sessions are waiting for any resources. Again, feel free to run the workload multiple times. It should run faster this time, around 52 seconds vs. 1 minute.
 
     > **Note:**
-    > The amount of improvement you will see on a real-world
-    workload will depend on how much contention is seen and the size of
-    the SQL Server (i.e. how many cores and how much memory). Small
-    servers without a high level of concurrency will not see much of an
-    improvement, if any at all. This improvement is designed to provide
-    greater scalability, so while a single run won't get much faster,
-    you should be able to run a lot more concurrent threads without
-    increasing the runtime of each batch.
+    > The amount of improvement you will see on a real-world workload will depend on how much contention is seen and the size of the SQL Server (i.e. how many cores and how much memory).     
+    > Small servers without a high level of concurrency will not see much of an improvement, if any at all.     
+    > This improvement is designed to provide greater scalability, so while a single run won't get much faster, you should be able to run a lot more concurrent threads without increasing the runtime of each batch.
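This lab targets CTP 2.3, where the feature is enabled with trace flag 3895. In later SQL Server 2019 builds the documented switch is a server-scoped configuration instead; a sketch, in case you run the lab on a newer build:

```sql
-- Enable memory-optimized TempDB metadata (SQL Server 2019 RTM syntax);
-- requires a service restart to take effect.
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verify after the restart: returns 1 when the feature is active.
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_enabled;
```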

BIN
Sessions/Winter-Ready-2019/media/ACD_Properties.png


BIN
Sessions/Winter-Ready-2019/media/ApproxCount.png


BIN
Sessions/Winter-Ready-2019/media/ApproxCount_Warm.png


BIN
Sessions/Winter-Ready-2019/media/BMRS_Batch.png


BIN
Sessions/Winter-Ready-2019/media/BMRS_Row.png


BIN
Sessions/Winter-Ready-2019/media/CD_Properties.png


BIN
Sessions/Winter-Ready-2019/media/UDF_Inlined.png


BIN
Sessions/Winter-Ready-2019/media/UDF_NotInlined.png