Channel: SQL Memory Archives - SQL Authority with Pinal Dave

SQL SERVER – AWE (Address Windowing Extensions) Explained in Simple Words


A junior DBA recently asked me, “What is AWE?” For those who do not know what AWE is or where it is located, it can be found in the SQL Server Level Properties. AWE is properly explained in BOL, so here we will just go through a simple explanation.


The Address Windowing Extensions API is commonly known as AWE. AWE is used by SQL Server when it has to support very large amounts of physical memory. The AWE feature is only available in the Enterprise, Standard, and Developer editions of the 32-bit version of SQL Server.

Microsoft Windows 2000/2003 Server supports a maximum of 64 GB of memory. A 32-bit SQL Server installation on Windows 2000/2003 can address a maximum of 3 GB of memory by default; by enabling the AWE feature, it can use the additional physical memory of the server to improve performance. In simple words, AWE provides memory management functions that let Windows make more than 3 GB of memory available to a standard 32-bit application.
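As a sketch, the option is typically enabled via sp_configure (note this is an illustration, not the full checklist; among other things, the service account also needs the Lock Pages in Memory privilege):

```sql
-- Sketch: enable the AWE option on a 32-bit instance (advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- A restart of the SQL Server service is required for AWE to take effect
```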

There are several other modifications that need to be made before the AWE option can be used. Please refer to the SQL Server BOL topic Using AWE for additional details.

Reference : Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Queries Waiting for Memory Allocation to Execute


In one of my recent projects, I was asked to create a report of queries that were waiting for memory allocation. The reason was that we were doubtful whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait on a memory grant will not appear in the result set of the following query.

SELECT TEXT, query_plan, requested_memory_kb,
granted_memory_kb, used_memory_kb, wait_order
FROM sys.dm_exec_query_memory_grants MG
CROSS APPLY sys.dm_exec_sql_text(MG.sql_handle)
CROSS APPLY sys.dm_exec_query_plan(MG.plan_handle)

Please note that wait_order gives the order in which queries are waiting on memory to execute. This is a very important script, and I suggest that you keep it in your permanent list of queries. If you ever notice that your queries are running slow and suspect that memory is the culprit, do run this query. If there are lots of rows in the result, please try to optimize the queries or increase the memory capacity.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Minimum Maximum Memory – Server Memory Options


I was recently reading about SQL Server Memory Options over here. While reading, one line really caught my attention: the minimum value allowed for the max server memory option.

The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB).

This was very interesting to me, as I was not familiar with this detail. In reality, I would never set my max server memory to 16 MB; it would be outright suicide for the server given the capabilities of current systems.

If you try to set this to lower than 16 MB, SQL Server will automatically make it 16 MB and will not accept a lower number.
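For reference, here is a sketch of changing the setting with sp_configure (the 4096 MB value is purely illustrative; size it for your own server):

```sql
-- Sketch: set min/max server memory (values are illustrative)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 0;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```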

This information was new to me. How about you?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

 



SQL SERVER – Plan Cache and Data Cache in Memory


I get the following question almost every time I go for consultations or training, and I often end up providing these scripts to my clients and attendees. So instead of writing a new blog post each time, in this single post I am going to cover both scripts and link to the original blog posts where I explained them.

Plan Cache in Memory

USE AdventureWorks
GO
SELECT [text], cp.size_in_bytes, plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cp.cacheobjtype = N'Compiled Plan'
ORDER BY cp.size_in_bytes DESC
GO

Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script

Data Cache in Memory

USE AdventureWorks
GO
SELECT COUNT(*) AS cached_pages_count,
    name AS BaseTableName, IndexName,
    IndexTypeDesc
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    SELECT s_obj.name, s_obj.index_id,
        s_obj.allocation_unit_id, s_obj.OBJECT_ID,
        i.name IndexName, i.type_desc IndexTypeDesc
    FROM
    (
        SELECT OBJECT_NAME(OBJECT_ID) AS name,
            index_id, allocation_unit_id, OBJECT_ID
        FROM sys.allocation_units AS au
        INNER JOIN sys.partitions AS p
            ON au.container_id = p.hobt_id
            AND (au.TYPE = 1 OR au.TYPE = 3)
        UNION ALL
        SELECT OBJECT_NAME(OBJECT_ID) AS name,
            index_id, allocation_unit_id, OBJECT_ID
        FROM sys.allocation_units AS au
        INNER JOIN sys.partitions AS p
            ON au.container_id = p.partition_id
            AND au.TYPE = 2
    ) AS s_obj
    LEFT JOIN sys.indexes i
        ON i.index_id = s_obj.index_id
        AND i.OBJECT_ID = s_obj.OBJECT_ID
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID()
GROUP BY name, index_id, IndexName, IndexTypeDesc
ORDER BY cached_pages_count DESC;
GO

Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count

Reference: Pinal Dave (http://blog.SQLAuthority.com)



SQL SERVER – SSMS: Memory Consumption Report


The next in line in this series of reports is the “Memory Consumption” report from SQL Server Management Studio. In my humble opinion this report is a goldmine, and an under-respected one. When I was consulted for performance tuning exercises by customers in the past, one question got repeated and echoed every now and then: “My SQL Server is eating away my RAM and it is not releasing it back even in non-peak hours.” I always smile when this question comes up. SQL Server, or for that matter any database system, is a highly memory-oriented process. Once it has taken memory for some reason, it is not going to release it, because it assumes it will require that memory again at a later point in time. So instead of depending on the OS to allocate memory repeatedly, it grabs memory and never releases it, even when it is not required in the interim.

That brings us to the question: what is my SQL Server using this memory for? Well, if you search the internet you will be amazed by the plethora of scripts; it is overwhelming how thoroughly people have covered this subject. But this hidden gem inside SQL Server Management Studio is never talked about. So in this blog post, let me take a tour of what this report contains and how one should read its sections.

This report can be launched by going to the Server node in SQL Server Management Studio (SSMS), then right click > Reports > Standard Reports > Memory Consumption.


The report has multiple sections which we would discuss one by one.

Memory Related Counters


These three values can give us a rough indication of memory pressure on SQL Server Instance. These three values are retrieved from SQL Server Memory counters.

SELECT OBJECT_NAME,
    counter_name,
    CONVERT(VARCHAR(10), cntr_value) AS cntr_value
FROM sys.dm_os_performance_counters
WHERE (OBJECT_NAME LIKE '%Manager%')
AND (counter_name = 'Memory Grants Pending'
    OR counter_name = 'Memory Grants Outstanding'
    OR counter_name = 'Page life expectancy')

As per the perfmon counters help, the “Memory Grants Outstanding” counter shows the current number of processes that have successfully acquired a workspace memory grant, whereas the “Memory Grants Pending” counter shows the current number of processes waiting for a workspace memory grant. Page life expectancy is defined as the “number of seconds a page will stay in the buffer pool without references.”

Top Memory Consuming Components

This section of the report shows the various memory consumers (called clerks) in a pie chart, based on the amount of memory consumed by each one of them. In most situations, SQLBUFFERPOOL would be the biggest consumer of memory. This output is taken from the sys.dm_os_memory_clerks DMV, which is one of the key DMVs for monitoring SQL Server memory performance. We can use sys.dm_os_memory_clerks to identify where exactly SQL Server’s memory is being consumed.
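As a quick sketch, a query like the following lists the top clerks by memory consumed (the pages_kb column applies to SQL Server 2012 and later; older versions split it into single_pages_kb and multi_pages_kb):

```sql
-- Sketch: top memory clerks by size (SQL Server 2012 and later)
SELECT TOP (10)
    [type] AS clerk_type,
    SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;
```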


Buffer Pages Distribution (# Pages)

This particular section of the report shows the state of buffer pages. Behind the scenes it uses DBCC MEMORYSTATUS to get the distribution of buffers in various states. The buffer distribution can be one of the following: ‘Stolen’, ‘Free’, ‘Cached’, ‘Dirty’, ‘Kept’, ‘I/O’, ‘Latched’ or ‘Other’. Interestingly, if we run DBCC MEMORYSTATUS ourselves, we may not see all these states. This is because the memory status output format has been changing constantly; see KB 271624 for SQL 2000 and KB 907877 for SQL 2005.


Memory Changes Over Time (Last 7 Days)

This section of the report shows data from the default trace. One of the events captured by the default trace is “Server Memory Change” (event ID 81). Behind the scenes, this section reads the default trace, looks for event ID 81, and adds a filter (datediff(dd,StartTime,getdate()) < 7) to display the last 7 days of records. My laptop doesn’t have much load, which is why we don’t see any memory change. Another reason for no data, as quoted in the report text, could be that the default trace is disabled.
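A rough equivalent of what the report does can be sketched by reading the default trace directly (the exact filters the report applies may differ slightly):

```sql
-- Sketch: read "Server Memory Change" (event ID 81) from the default trace
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT StartTime, EventClass
FROM fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 81
AND DATEDIFF(dd, StartTime, GETDATE()) < 7;
```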


I am sure that in your production or active development boxes these values will not be zero.

Memory Usage By Components

At the bottom, there is a table which shows the memory used by each component. This is taken from the same DMV used for “Top Memory Consuming Components”. The chart earlier shows the top 5% of consumers by name, and the rest are shown as others. It’s important to note that in SQL Server 2014 this table always shows MEMORYCLERK_XTP, which is used by the In-Memory OLTP engine (even if it’s not a top consumer).


Here is a little description of the various columns:

Allocated Memory: Amount of memory allocated to sqlservr.exe
Virtual Memory (Reserved): Memory reserved in the Virtual Address Space (VAS)
Virtual Memory (Committed): Memory committed in the Virtual Address Space. Once memory is committed in VAS, it has physical storage behind it (RAM or page file)
AWE Memory Allocated: Amount of memory locked in physical memory and not paged out by the operating system
Shared Memory (Reserved): Amount of shared memory that is reserved
Shared Memory (Committed): Amount of shared memory that is committed

To understand reserve and committed, I always quote this. Imagine that you need to fly to Mumbai on a certain date and you book a flight ticket. This is called reservation. There’s nothing there yet, but nobody else can claim that seat either. If you release your reservation the place can be given to someone else. Committing is actually grabbing the physical seat on the day of travel.

Hope this gives you a fair idea of the various memory consumers. As I mentioned before, this is one of those hidden gem reports that never gets seen. From this report one can easily learn about a currently running system and which components are using SQL Server memory.

I would be curious to know: in any of your systems, is there any component other than the buffer pool or SOSNode among the top memory consumers?

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report


At conferences and speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked: “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, not much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a handle on all the innovations and new features introduced in each new version. So today’s blog is about a report added to SSMS which gives a high-level view of this new feature.

This report is available only from SQL Server 2014 onwards, because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio do not show the report in the list.

If we try to launch the report on a database which does not have an In-Memory filegroup defined, we see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup.


Here is the query used to identify whether a database has memory-optimized file group or not.

SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add the filegroup using the command below, we see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO


The report is still empty because we have not defined any memory-optimized table in the database; the total allocated size is shown as 0 MB. Now, let’s add a folder location to the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as the above script is executed, both the table and the index are created. If we run the report again, we see something like the below.


Notice that the table memory is zero but the index is using memory. This is because a hash index needs memory to manage the buckets it creates, so even if the table is empty, the index consumes memory. More about the internals of how in-memory indexes and tables work is reserved for future posts. Now, use the below script to populate the table with 10,000 rows.

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
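To see how the buckets of the hash index are being used as rows arrive, we can also query sys.dm_db_xtp_hash_index_stats (a sketch, run in the context of the MemoryOptimizedDB database created above):

```sql
-- Sketch: bucket usage of hash indexes in the current database
SELECT OBJECT_NAME(hs.object_id) AS table_name,
    hs.total_bucket_count,
    hs.empty_bucket_count,
    hs.avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats hs;
```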

Here is the same report after inserting 10,000 rows into our in-memory table.


There are three sections in the report.

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO

This report can help a DBA identify which in-memory objects are taking a lot of memory, which can serve as a pointer when designing a solution. I am sure that in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Beginning In-Memory OLTP with Sample Example


In-Memory OLTP is a wonderful new feature introduced in SQL Server 2014. My friend Balmukund Lakhani has written an amazing series on the A-Z of In-Memory on his blog. All serious learners should study it for a deep understanding of the subject. I will try to cover a few of the concepts in simpler words, and you will often find me referring to Balmukund’s site on this subject.

Why do we need In-Memory?

Here is the paragraph from Balmukund’s blog (published with approval):

Looking at the market trends of tumbling cost of RAM (USD/MB) and performance implication of reading data from memory vs disk, its evident that people would love to keep the data in memory. With this evolution in hardware industry, softwares have to be evolved and modified so that they can take advantage and scale as much as possible. On the other hand, businesses also don’t want to compromise the durability of data – restart would clear RAM, but data should be back in the same state as it was before the failure. To meet hardware trends and durability requirements, SQL Server 2014 has introduced In-Memory OLTP which would solve them in a unique manner.

Before we start on the subject, let us see a few of the reasons why you would want to go for high-performance memory-optimized OLTP operations.

  • It naturally integrates with SQL Server relational database
  • It supports Full ACID properties
  • It helps with non-blocking multi-version optimistic concurrency control, in other words, no locks or latches

Well, let us start with a working example. In this example, we will learn a few things – please pay attention to the details.

  1. We will create a database with a filegroup which will contain memory-optimized data
  2. We will create a table with the memory_optimized option enabled
  3. We will create a stored procedure which is natively compiled

The procedure of our test is very simple. We will create two stored procedures: 1) a regular stored procedure and 2) a natively compiled one. We will compare the performance of both SPs and see which one performs better.

Let’s Start!

Step 1: Create a database which creates a file group containing memory_optimized_data

CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

Step 2: Create two different tables 1) Regular table and 2) Memory Optimized table

USE InMemory
GO
-- Create a Simple Table
CREATE TABLE DummyTable (ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL)
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON)
GO

Step 3: Create two stored procedures 1) Regular SP and 2) Natively Compiled SP

Stored Procedure – Simple Insert
-- Simple procedure to insert 100,000 rows
CREATE PROCEDURE Simple_Insert_test
AS
BEGIN
SET NOCOUNT ON
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO DummyTable VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [Simple_Insert in sec]
END
GO

Stored Procedure – InMemory Insert
-- Inserting the same 100,000 rows using the InMemory table
CREATE PROCEDURE ImMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [InMemory_Insert in sec]
END
GO

Step 4: Compare the performance of two SPs

Both stored procedures measure and print the time taken to execute them. Let us execute them and measure the time.

-- Running the test for Insert
EXEC Simple_Insert_test
GO
EXEC ImMemory_Insert_test
GO

Here is the time taken by Simple Insert: 12 seconds

Here is the time taken by the InMemory insert: nearly 0 seconds (less than 1 second)


Step 5: Clean up!

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO

Analysis of Result

It is very clear that In-Memory OLTP improves the performance of queries and stored procedures. To implement In-Memory OLTP there are a few steps the user has to follow with regard to filegroup and table creation. However, the end result is much better with the In-Memory OLTP setup.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Location of Natively Compiled Stored Procedure and Naming Convention


Yesterday I wrote about SQL SERVER – Beginning In-Memory OLTP with Sample Example. One of the questions I received right after I published the blog post was: why do I call the stored procedure a natively compiled stored procedure when the entire code is in T-SQL? Indeed a very good question. The answer is very simple: we call it a natively compiled stored procedure because, as soon as we execute the CREATE statement, the compiler converts the interpreted T-SQL, query plans, and expressions into native code.

You can execute the following query in your SSMS and find out the location of the natively compiled stored procedure.

SELECT name,
description
FROM   sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'
GO

To see this DMV in action execute the code from this blog post on your SQL Server.

-- Create database
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

-- Create table
USE InMemory
GO
-- Create a Memeory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON)
GO

-- Create stored procedure
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE ImMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [InMemory_Insert in sec]
END
GO

Now let us execute our script as described.


Now we can see in our result that there are two different DLL files. In the image above I have explained the various parts of the DLL file name.

As per the image, our database ID is 11, and if we check, it matches the database we created a few seconds ago. Similarly, the object ID can be found in the name as well.
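If you want to verify the mapping yourself, the database ID and object ID embedded in the DLL name can be cross-checked with a simple query (a sketch, assuming the InMemory database and table from the script above):

```sql
-- Sketch: IDs that appear in the natively compiled DLL file names
USE InMemory
GO
SELECT DB_ID() AS database_id,
    OBJECT_ID('dbo.DummyTable_Mem') AS object_id;
```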


If we open the folder where these objects were created, we see two sets of files: one for the stored procedure and one for the table.


My friend Balmukund explains this concept very well on his blog over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Knowing Which Database is Consuming My Memory


I have been fortunate enough to attend conferences around the world, and it is always refreshing to see how people come up with some really common yet never-answered questions from time to time. The classic question I have been asked ever since I started working with databases and SQL Server is: why does SQL Server take all the memory and not return it? Even more interesting is the question: can I know how much memory my databases are using?

I always tell them that memory is a big topic and that we need to use a number of commands, like DBCC MEMORYSTATUS, to understand the internal workings. A much more interesting approach is to find out which pages are in the buffer pool for our various databases. This can be obtained using DMVs as shown below:

--List the Number of pages in the buffer pool by database and page type
SELECT DB_NAME(database_id),
page_type,
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id, page_type
ORDER BY number_pages DESC
GO
--List the number of pages in the buffer pool by database
SELECT DB_NAME(database_id),
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id
ORDER BY database_id
GO


As you can see in the above output, we can see the number of data pages and index pages that are loaded into SQL Server memory.
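Since each buffer page is 8 KB, a small variation converts the page counts into megabytes per database (a sketch):

```sql
-- Sketch: buffer pool usage in MB per database (8 KB per page)
SELECT DB_NAME(database_id) AS database_name,
    COUNT(page_id) * 8 / 1024.0 AS buffer_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id
ORDER BY buffer_mb DESC
GO
```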

A small variation of the above query is to scan the buffer pool based on the type of pages loaded into memory. Below is a typical query fired against the same DMV.

--List the number of pages in the buffer pool by page type
SELECT page_type, COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY page_type
ORDER BY number_pages DESC
GO
--List the number of dirty pages in the buffer pool
SELECT COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GO


In the above query, I have also shown the dirty pages that are in memory and are yet to be flushed out.

This DMV is super useful when you have a number of databases running on your server and want to find out which one is consuming the server’s memory. Do let me know your thoughts and what output you are seeing in your environment. Is there anything strange that you find? Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Error: Msg 701, Level 17, State 103. There is insufficient system memory in resource


Talking about and exploring In-Memory topics in SQL Server 2014 has been interesting to me. When I wrote the blog post explaining that a table variable is not just an in-memory structure, one of my course listeners (SQL Server 2014 Administration New Features) pinged me on Twitter to ask whether In-Memory OLTP was really in-memory: wouldn’t SQL Server swap the data out to the page file under memory pressure? I told them that the concept of In-Memory is that the data always resides in memory, which is the reason for the feature name “In-Memory OLTP”. Let us see how we can fix errors related to insufficient system memory.


SQL SERVER – How to Setup Delayed Durability for SQL Server 2014?


Yesterday we discussed the basics of Delayed Durability in SQL Server 2014.

There are three methods to set up this feature in SQL Server. Let us see each of them in detail.

Method 1: Database Level

You can enable, disable or force delayed durability at the database level. Here is how you can do it.

USE [master]
GO
-- Enable Delayed Durability for the database
ALTER DATABASE [AdventureWorks2014] SET DELAYED_DURABILITY = ALLOWED
GO

If you want your change to take effect immediately, you can additionally use the WITH NO_WAIT option.

USE [master]
GO
-- Enable Delayed Durability for the database, effective immediately
ALTER DATABASE [AdventureWorks2014] SET DELAYED_DURABILITY = ALLOWED WITH NO_WAIT
GO

Currently there are three different options for SET DELAYED_DURABILITY.

  • Disabled: This is the default setting, corresponding to full transaction durability.
  • Allowed: This option allows each transaction to decide its delayed durability. Once this is enabled, each transaction’s durability is based on the transaction-level settings, which we will see later in this post.
  • Forced: This option forces each transaction to follow this feature.

There is one more thing we need to understand before we go further. Setting SET DELAYED_DURABILITY = ALLOWED does not mean that every transaction will follow delayed durability. Allowed simply enables the capability of the database to work with transactions that request delayed durability. If you want every transaction to follow delayed durability, you have to execute the following statement.

USE [master]
GO
-- Force Delayed Durability for the database
ALTER DATABASE [AdventureWorks2014] SET DELAYED_DURABILITY = FORCED
GO

You can disable delayed durability by executing the following statement.

USE [master]
GO
-- Disable Delayed Durability for the database
ALTER DATABASE [AdventureWorks2014] SET DELAYED_DURABILITY = DISABLED
GO

You can also change these values from SSMS as displayed in the image below.


Method 2: Transaction Level

Now that we have enabled delayed durability at the database level, we can use transaction-level settings. Remember, if you have not enabled it at the database level, specifying transaction-level durability will have no impact. You can specify transaction-level durability on the commit statement as follows.

COMMIT TRANSACTION nameoftransaction WITH (DELAYED_DURABILITY = ON);
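For context, here is a minimal sketch of a full transaction that opts in to delayed durability at commit time (dbo.MyTable is a hypothetical table used only for illustration):

BEGIN TRANSACTION LazyTran
    -- dbo.MyTable is a placeholder; any writable table works
    INSERT INTO dbo.MyTable (Col1) VALUES (1);
COMMIT TRANSACTION LazyTran WITH (DELAYED_DURABILITY = ON);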

Method 3: Natively Compiled Stored Procedure

You can use similar settings for natively compiled stored procedures as well. Here is an example of the syntax.

CREATE PROCEDURE <procedureName> …
 WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
 AS BEGIN ATOMIC WITH
 (
 DELAYED_DURABILITY = ON,
 ...
 )
 END
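To make the skeleton concrete, here is a hypothetical, self-contained example. The procedure, table, and column names are invented for illustration, and it assumes a database with a memory-optimized filegroup and a memory-optimized table dbo.OrderLog:

CREATE PROCEDURE dbo.usp_InsertOrderLog
    @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
    DELAYED_DURABILITY = ON,
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    INSERT INTO dbo.OrderLog (OrderId) VALUES (@OrderId);
END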

Well, that’s it for today.

Reference: Pinal Dave (http://blog.sqlauthority.com)

The post SQL SERVER – How to Setup Delayed Durability for SQL Server 2014? appeared first on Journey to SQL Authority with Pinal Dave.

SQL SERVER – Delayed Durability, or The Story of How to Speed Up Autotests From 11 to 2.5 Minutes


This is one of the most interesting stories, written by my friend Syrovatchenko Sergey. He is an expert on SQL Server and works at Devart. Just like me, he shares a passion for wait stats and the new features of SQL Server. In this blog post he talks about one of the most interesting features, Delayed Durability. I strongly encourage you to find some time during your day to read this blog post and discover more about this topic.


[Image: the TMetric time-tracking web service]

I’ve recently started helping with a new project, TMetric, which is being developed as a free web service for tracking working hours. The technology stack was originally selected to be Microsoft, with SQL Server 2014 as the data repository. One of the first tasks assigned to me was to study the opportunity to accelerate the auto-tests.

Before I got into gear, the project had existed for a long time and had gathered a fair number of tests (at that time around 1,300 auto-tests). On a build machine with an SSD, tests ran for 4-5 minutes, and on an HDD for as long as 11-12 minutes. The whole team could be equipped with SSDs, but that would not solve the essence of the problem, especially since they were soon planning to expand the functionality and the number of tests would become even greater.

All tests were grouped, and before running each group, the old data was purged from the database. Previously, purging was performed by recreating the database, but this approach proved to be very slow in practice. It is much faster to simply clean all the tables of data and reset the IDENTITY values to zero, so that future inserts form correct test data. So, my starting point was a script with that approach:

EXEC sys.sp_msforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL' 
DELETE FROM [dbo].[Project] 
DBCC CHECKIDENT('[dbo].[Project]', RESEED, 0) 
DBCC CHECKIDENT('[dbo].[Project]', RESEED) 
DELETE FROM [dbo].[RecentWorkTask] 
... 
EXEC sys.sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL' 

An idea came to mind to use dynamic SQL to generate the query: if a table is referenced by foreign keys, use the DELETE operation as before; otherwise, delete the data with minimal logging using the TRUNCATE command.

As a result, the query for data deletion will look as follows:

DECLARE @SQL NVARCHAR(MAX)
      , @FK_TurnOff NVARCHAR(MAX)
      , @FK_TurnOn NVARCHAR(MAX)

SELECT @SQL = (
    SELECT CHAR(13) + CHAR(10) +
        IIF(p.[rows] > 0,
            IIF(t2.referenced_object_id IS NULL, N'TRUNCATE TABLE ', N'DELETE FROM ') + obj_name,
            ''
        ) + CHAR(13) + CHAR(10) +
        IIF(IdentityProperty(t.[object_id], 'LastValue') > 0,
            N'DBCC CHECKIDENT('''+ obj_name + N''', RESEED, 0) WITH NO_INFOMSGS',
            ''
        )
    FROM (
        SELECT obj_name = QUOTENAME(s.name) + '.' + QUOTENAME(o.name), o.[object_id]
        FROM sys.objects o
        JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        WHERE o.is_ms_shipped = 0
            AND o.[type] = 'U'
            AND o.name != N'__MigrationHistory'
    ) t
    JOIN sys.partitions p ON t.[object_id] = p.[object_id] AND p.index_id IN (0, 1)
    LEFT JOIN (
        SELECT DISTINCT f.referenced_object_id
        FROM sys.foreign_keys f
    ) t2 ON t2.referenced_object_id = t.[object_id]
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')

SELECT @FK_TurnOff = CAST(x.query('off/text()') AS NVARCHAR(MAX))
     , @FK_TurnOn = CAST(x.query('on/text()') AS NVARCHAR(MAX))
FROM (
    SELECT [off] = CHAR(10) + 'ALTER TABLE ' + obj + ' NOCHECK CONSTRAINT ' + fk
         , [on] = CHAR(10) + 'ALTER TABLE ' + obj + ' CHECK CONSTRAINT ' + fk
    FROM (
        SELECT fk = QUOTENAME(f.name)
             , obj = QUOTENAME(SCHEMA_NAME(f.[schema_id])) + '.' + QUOTENAME(OBJECT_NAME(f.parent_object_id))
        FROM sys.foreign_keys f
        WHERE f.delete_referential_action = 0
            AND EXISTS(
                    SELECT *
                    FROM sys.partitions p
                    WHERE p.[object_id] = f.parent_object_id
                        AND p.[rows] > 0
                        AND p.index_id IN (0, 1)
                )
    ) t
    FOR XML PATH(''), TYPE
) t(x)

IF @SQL LIKE '%[a-z]%' BEGIN

    SET @SQL = ISNULL(@FK_TurnOff, '') + @SQL + ISNULL(@FK_TurnOn, '')

    PRINT @SQL
    --EXEC sys.sp_executesql @SQL

END

Initially, the auto-tests ran for 11 minutes on my machine:

[Image: test run report - 11 minutes]

But after I rewrote the query, all tests began to run 40 seconds faster:

[Image: test run report - about 40 seconds faster]

Of course, I could be happy about it and set resolved status for the task, but the basic problem remained:

[Image: disk heavily loaded while the tests run]

The disk was heavily loaded when tests were executed. I decided to see what waits are on the server. To do this, I first cleared sys.dm_os_wait_stats:

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)

I ran autotests once again and then executed the query:

SELECT TOP(20)
      wait_type
    , wait_time = wait_time_ms / 1000.
    , wait_resource = (wait_time_ms - signal_wait_time_ms) / 1000.
    , wait_signal = signal_wait_time_ms / 1000.
    , waiting_tasks_count
    , percentage = 100.0 * wait_time_ms / SUM(wait_time_ms) OVER ()
    , avg_wait = wait_time_ms / 1000. / waiting_tasks_count
    , avg_wait_resource = (wait_time_ms - signal_wait_time_ms) / 1000. / [waiting_tasks_count]
    , avg_wait_signal = signal_wait_time_ms / 1000.0 / waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE [waiting_tasks_count] > 0
    AND max_wait_time_ms > 0
    AND [wait_type] NOT IN (
        N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR',
        N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH',
        N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE',
        N'CHKPT', N'CLR_AUTO_EVENT',
        N'CLR_MANUAL_EVENT', N'CLR_SEMAPHORE',
        N'DBMIRROR_DBM_EVENT', N'DBMIRROR_EVENTS_QUEUE',
        N'DBMIRROR_WORKER_QUEUE', N'DBMIRRORING_CMD',
        N'DIRTY_PAGE_POLL', N'DISPATCHER_QUEUE_SEMAPHORE',
        N'EXECSYNC', N'FSAGENT',
        N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'FT_IFTSHC_MUTEX',
        N'HADR_CLUSAPI_CALL', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'HADR_LOGCAPTURE_WAIT', N'HADR_NOTIFICATION_DEQUEUE',
        N'HADR_TIMER_TASK', N'HADR_WORK_QUEUE',
        N'KSOURCE_WAKEUP', N'LAZYWRITER_SLEEP',
        N'LOGMGR_QUEUE', N'ONDEMAND_TASK_QUEUE',
        N'PWAIT_ALL_COMPONENTS_INITIALIZED',
        N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
        N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
        N'REQUEST_FOR_DEADLOCK_SEARCH', N'RESOURCE_QUEUE',
        N'SERVER_IDLE_CHECK', N'SLEEP_BPOOL_FLUSH',
        N'SLEEP_DBSTARTUP', N'SLEEP_DCOMSTARTUP',
        N'SLEEP_MASTERDBREADY', N'SLEEP_MASTERMDREADY',
        N'SLEEP_MASTERUPGRADED', N'SLEEP_MSDBSTARTUP',
        N'SLEEP_SYSTEMTASK', N'SLEEP_TASK',
        N'SLEEP_TEMPDBSTARTUP', N'SNI_HTTP_ACCEPT',
        N'SP_SERVER_DIAGNOSTICS_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'SQLTRACE_WAIT_ENTRIES', N'WAIT_FOR_RESULTS',
        N'WAITFOR', N'WAITFOR_TASKSHUTDOWN',
        N'WAIT_XTP_HOST_WAIT', N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',
        N'WAIT_XTP_CKPT_CLOSE', N'XE_DISPATCHER_JOIN',
        N'XE_DISPATCHER_WAIT', N'XE_TIMER_EVENT'
    )
ORDER BY [wait_time_ms] DESC

The biggest delay occurs with WRITELOG.

wait_type              wait_time  waiting_tasks_count  percentage
---------------------  ---------  -------------------  ----------
WRITELOG                 546.798                60261       96.07
PAGEIOLATCH_EX            13.151                   96        2.31
PAGELATCH_EX               5.768                46097        1.01
PAGEIOLATCH_UP             1.243                   86        0.21
IO_COMPLETION              1.158                   89        0.20
MEMORY_ALLOCATION_EXT      0.480               683353        0.08
LCK_M_SCH_S                0.200                   34        0.03
ASYNC_NETWORK_IO           0.115                  688        0.02
LCK_M_S                    0.082                   10        0.01
PAGEIOLATCH_SH             0.052                    1        0.00
PAGELATCH_UP               0.037                    6        0.00
SOS_SCHEDULER_YIELD        0.030                 3598        0.00

“This wait type is usually seen in the heavy transactional database. When data is modified, it is written both on the log cache and buffer cache. This wait type occurs when data in the log cache is flushing to the disk. During this time, the session has to wait due to WRITELOG.” (Reference: SQLAuthority – WRITELOG)

And what did I need to look at now? Each running auto-test writes something to the database. One solution to the WRITELOG waits could be inserting data in large chunks rather than row by row. But SQL Server 2014 also has a new database-level option, Delayed Durability – the ability to defer flushing log records to disk when committing transactions.
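The first alternative – inserting in large chunks rather than row by row – can be sketched by wrapping the single-row inserts into one explicit transaction, so the commit (and thus the synchronous log flush) happens once per batch instead of once per row. A sketch, reusing the dbo.tbl demo table defined later in this post:

BEGIN TRANSACTION;

DECLARE @i INT = 1;
WHILE @i < 5000
BEGIN
    INSERT INTO dbo.tbl (b, c) VALUES (@i, 'text');
    SET @i += 1;
END

COMMIT TRANSACTION; -- a single durable log flush covers the whole batch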

How is data modified in SQL Server? Suppose we are inserting a new row. SQL Server calls the Storage Engine component which, in turn, accesses the Buffer Manager (which works with buffers in memory and on disk) and informs it that it wants to change the data.

After that, the Buffer Manager accesses the Buffer Pool (the in-memory cache for all of our data, which stores information by page – 8 KB per page) and modifies the necessary pages in memory. If the required pages are not there, it loads them from disk. At the moment a page is changed in memory, SQL Server cannot yet report that the query was executed. Otherwise one of the ACID principles, namely Durability, would be violated: a confirmed modification must be guaranteed to be written to disk.

After the page is modified in memory, the Storage Engine accesses the Log Manager, which writes the data to the log. But this does not happen directly: it goes through the Log Buffer, which has a size of 60 KB and is used to optimize performance when working with the log. Data is flushed from the buffer to the log file when:

  1. The buffer fills up.
  2. A user executes sys.sp_flush_log.
  3. A transaction commits, and the entire Log Buffer is written to the log.

When the data is stored in the log, the data modification is confirmed, and SQL Server informs the client about it.

Note that with this logic the data has not yet reached the data file. SQL Server uses asynchronous mechanisms for writing data to the data files. There are two such mechanisms:

  1. Lazy Writer, which runs periodically and checks whether there is sufficient memory for SQL Server. If there is not, pages are forced out of memory and written to the data file; modified pages are flushed and evicted from memory.
  2. Checkpoint, which scans dirty pages about once a minute, writes them to disk, and leaves them in memory.

Suppose a lot of small transactions are running in the system, for example, transactions that modify data row by row. After each modification, the data goes from the Log Buffer to the transaction log. Remember that all modifications enter the log synchronously, and other transactions have to wait their turn.

Let me illustrate:

USE [master]
GO
SET NOCOUNT ON

IF DB_ID('TT') IS NOT NULL BEGIN
    ALTER DATABASE TT SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE TT
END
GO

CREATE DATABASE TT
GO
ALTER DATABASE TT
    MODIFY FILE (NAME = N'TT', SIZE = 25MB, FILEGROWTH = 5MB)
GO
ALTER DATABASE TT
    MODIFY FILE (NAME = N'TT_log', SIZE = 25MB, FILEGROWTH = 5MB)
GO

USE TT
GO

CREATE TABLE dbo.tbl (
      a INT IDENTITY PRIMARY KEY
    , b INT
    , c CHAR(2000)
)
GO

IF OBJECT_ID('tempdb.dbo.#temp') IS NOT NULL
    DROP TABLE #temp
GO

SELECT t.[file_id], t.num_of_writes, t.num_of_bytes_written
INTO #temp
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) t

DECLARE @WaitTime BIGINT
      , @WaitTasks BIGINT
      , @StartTime DATETIME = GETDATE()
      , @LogRecord BIGINT = (
              SELECT COUNT_BIG(*)
              FROM sys.fn_dblog(NULL, NULL)
          )

SELECT @WaitTime = wait_time_ms
     , @WaitTasks = waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE [wait_type] = N'WRITELOG'

DECLARE @i INT = 1

WHILE @i < 5000 BEGIN

    INSERT INTO dbo.tbl (b, c)
    VALUES (@i, 'text')

    SELECT @i += 1

END

SELECT elapsed_seconds = DATEDIFF(MILLISECOND, @StartTime, GETDATE()) * 1. / 1000
     , wait_time = (wait_time_ms - @WaitTime) / 1000.
     , waiting_tasks_count = waiting_tasks_count - @WaitTasks
     , log_record = (
          SELECT COUNT_BIG(*) - @LogRecord
          FROM sys.fn_dblog(NULL, NULL)
       )
FROM sys.dm_os_wait_stats
WHERE [wait_type] = N'WRITELOG'

SELECT [file] = FILE_NAME(o.[file_id])
     , num_of_writes = t.num_of_writes - o.num_of_writes
     , num_of_mb_written = (t.num_of_bytes_written - o.num_of_bytes_written) * 1. / 1024 / 1024
FROM #temp o
CROSS APPLY sys.dm_io_virtual_file_stats(DB_ID(), NULL) t
WHERE o.[file_id] = t.[file_id]

Inserting 5 thousand rows took about 42.5 seconds, and the delay upon inserting into the log was 42 seconds.

elapsed_seconds  wait_time  waiting_tasks_count  log_record
---------------  ---------  -------------------  ----------
42.54            42.13      5003                 18748

SQL Server physically accessed the log 5000 times and has recorded a total of 20Mb.

file    num_of_writes  num_of_mb_written
------  -------------  -----------------
TT      79             8.72
TT_log  5008           19.65

Delayed Durability is the right choice for these situations. When activated, an entry is made to the log only when Log Buffer is full. You can enable Delayed Durability for the entire database:

ALTER DATABASE TT SET DELAYED_DURABILITY = FORCED 
GO 

or for individual transactions:

ALTER DATABASE TT SET DELAYED_DURABILITY = ALLOWED 
GO 
BEGIN TRANSACTION t 
... 
COMMIT TRANSACTION t WITH (DELAYED_DURABILITY = ON) 

Let’s enable it for the database and execute the script once again.

The waits disappeared and the query ran for 170ms on my machine:

elapsed_seconds  wait_time  waiting_tasks_count  log_record
---------------  ---------  -------------------  ----------
0.17             0.00       0                    31958

This is because writes to the log were made much less intensively:

file    num_of_writes  num_of_mb_written
------  -------------  -----------------
TT      46             9.15
TT_log  275            12.92

Of course, there is a fly in the ointment. The client is informed that the changes are recorded before the data physically reaches the log file. In case of a failure, we can lose up to a buffer's worth of committed data (the database itself remains consistent after crash recovery).

In my case, the safety of the test data is not required. The DELAYED_DURABILITY was set to FORCED for the test database on which the TMetric autotests run, and the next time all tests ran for 2.5 minutes.

[Image: test run report - 2.5 minutes]

All the delays associated with logging have a minimal impact on performance:

wait_type            wait_time  waiting_tasks_count  percentage
-------------------  ---------  -------------------  ----------
PAGEIOLATCH_EX       16.031     61                   43.27
WRITELOG             15.454     787                  41.72
PAGEIOLATCH_UP       2.210      36                   5.96
PAGEIOLATCH_SH       1.472      2                    3.97
LCK_M_SCH_M          0.756      9                    2.04
ASYNC_NETWORK_IO     0.464      735                  1.25
PAGELATCH_UP         0.314      8                    0.84
SOS_SCHEDULER_YIELD  0.154      2759                 0.41
PAGELATCH_EX         0.154      44785                0.41
LCK_M_SCH_S          0.021      7                    0.05
PAGELATCH_SH         0.011      378                  0.02

Let’s summarize the results on Delayed Durability:

  1. Available in all editions starting from SQL Server 2014.
  2. It can be used if you have a bottleneck when writing to the transaction log (lazy commit in large blocks may be more effective than many small ones).
  3. Concurrent transactions will less likely compete for IO operations upon logging.
  4. When activated, the COMMIT operation does not wait for entries in the transaction log and we can get a significant performance boost in OLTP systems.
  5. You can go ahead and enable Delayed Durability if you are ready to play Russian roulette and, upon an "unfortunate" combination of circumstances, lose approximately 60 KB of data in case of a failure.
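To round this off, you can check which databases currently have the option set, since the setting is exposed in sys.databases:

SELECT name, delayed_durability_desc
FROM sys.databases
ORDER BY name;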

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Delayed Durability, or The Story of How to Speed Up Autotests From 11 to 2.5 Minutes

PowerShell – SQL Server Paging of Memory Identification


In one of my recent consulting visits to a customer, there were deep performance-related problems. They were unclear about what was happening and what the actual problem was. But these are the kinds of challenges that I love to take head-on. In this quest to learn what the problem was, I used a number of tools, and during that time I figured out it was memory pressure that was creating the problem. Let us learn about SQL Server paging of memory identification.

After the engagement was over, the DBA from the organization wrote to me to understand how this can be easily identified when working with the many servers in their infrastructure. He wanted something that could be run to check whether SQL Server's pages were being paged out, which could be a possible cause of memory pressure. He wanted some guidance or a cheat sheet to play with.

This blog and PowerShell script were a fallout of that engagement.

param (
    [string]$SqlServerName = "localhost"
)

# Load the SMO assembly; this path is for SQL Server 2016 (130) - adjust it for your installed SMO version
Add-Type -Path "C:\Program Files\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SqlServer.Smo.dll"

$SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerName)

# Walk every error log archive and return entries that indicate SQL Server memory was paged out
foreach ($LogArchiveNo in ($SqlServer.EnumErrorLogs() | Select-Object -ExpandProperty ArchiveNo)) {
    $SqlServer.ReadErrorLog($LogArchiveNo) |
        Where-Object {$_.Text -like "*process memory has been paged out*"}
}

The output of this script would look like below:

[Image: script output listing "process memory has been paged out" log entries]
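If you prefer to stay in T-SQL, a similar check can be sketched with xp_readerrorlog (undocumented, but widely used); the first parameter is the log archive number, the second selects the SQL Server error log, and the third is a search string:

-- Search the current error log (archive 0) for paged-out messages
EXEC sys.xp_readerrorlog 0, 1, N'process memory has been paged out';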

Why is this important?

If excessive memory pressure causes SQL Server's memory allocations to be paged out to disk, the performance impact can be large, since it adds I/O latency to memory access. It is a best practice to ensure that there is enough physical memory on the machine, and that SQL Server's memory settings are configured so that memory is not overcommitted and paging does not become excessive. It is recommended to re-evaluate memory allocations and/or available physical memory in order to relieve memory pressure on the current SQL Server instance.

This shows how there has been some memory pressure on our SQL Server instance, and this is available from our log records. Have you ever used such simple scripts to figure out memory pressure on your servers? How did you use them? Let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on PowerShell – SQL Server Paging of Memory Identification

SQL SERVER – Understanding Basic Memory Terms of Physical Memory and Virtual Memory

$
0
0

Recently I visited an institute to talk about some database concepts to computer science students. Most of these academic engagements get me closer to young minds, and they are an awesome way for me to bring industry concepts into a digestible, simple format. Trust me, this takes double the preparation I normally do for professional sessions, because now I need to make the concepts so simple that even kids of any age, with no prior knowledge, can still get what is being explained. A lot of times this is an exercise for me to show how self-learning can be much more rewarding and unique when compared to textbook teaching. In this blog, let me take up the concepts of physical memory and virtual memory, which took me considerable time at a whiteboard. I am sure you learnt these concepts a while back, but this is a refresher, in my opinion, for all. Let us learn about the basic memory terms of physical memory and virtual memory.

[Image: physical and virtual memory illustration]

Physical Memory decoded

Physical memory refers to the volatile hardware main memory modules installed in the system (Dynamic Random Access Memory, DRAM). Physical memory is the intermediate storage location used to store, preserve, and recall data. Access to physical memory is much faster than access to non-volatile disk storage, although innovations in hardware are surely making hard disks powerful and super-fast too. We will deal with that in a future post.

Many of the current designs in the memory management module of operating systems and features provided by the processors are influenced by the inherent properties of physical memory. There is always a compromise between the size and speed of memory and its cost.

The higher the physical memory requirements of the system, the costlier it is. In the initial days of computers, memory was quite expensive. This was one of the factors that forced system software programmers and processor manufacturers to come up with different techniques and features in operating systems and processors, which made it possible to load and run more software on a system with a limited amount of memory.

Even today, these designs exist in operating systems and processors in evolved forms and are continuing to evolve, despite modern systems coming with gigabytes of memory at a comparatively lower price.

Physical Address Space on a system includes the RAM and IO space of devices. Every single byte in physical RAM is addressed by a physical address. With these concepts loaded, let us move to the next stage of this evolution – Virtual Memory.

Virtual Memory Decoded

As mentioned earlier, the RAM costs influenced the developers and processor manufacturers to come up with new techniques and designs in the software and hardware. One such innovation was the concept of virtual memory. A system that implements virtual memory provides an illusion to the applications running on the system by having more memory space available than the actual size of physical memory. This makes it possible to run more applications simultaneously irrespective of the amount of physical memory.

The whole process of providing such a mechanism is transparent to the software. It is handled entirely by the Memory Manager component of the OS, with support from the processor. This implies that the combined memory requirement of all running applications can exceed the total amount of physical memory on the system.

Windows provides a paged virtual memory model on top of flat memory addressing model provided by the hardware, and provides a consistent address range for every single process.

The virtual memory makes it possible for application software to be independent of the underlying physical memory. A range of virtual addresses belonging to a process can reside in any part of the RAM at any time without affecting the application itself.

For an application to execute, only the parts of its image that are needed at the time of execution need to be resident in physical memory; other portions of the image need not be.
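For SQL Server itself, you can see both sides of this picture – physical memory in use versus virtual address space reserved and committed – with a simple query:

SELECT physical_memory_in_use_kb,
       virtual_address_space_committed_kb,
       virtual_address_space_reserved_kb,
       page_fault_count
FROM sys.dm_os_process_memory;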

These are some of the basics one needs to know when reading about Physical Memory and Virtual Memory.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Understanding Basic Memory Terms of Physical Memory and Virtual Memory

SQL SERVER – Error – Disallowing page allocations for database ‘DB’ due to insufficient memory in the resource pool


Keeping SQL Server up to date is something I recommend to my customers from time to time. One of the tasks I undertake as soon as I start work is checking the current SQL Server version information. Though this recommendation looks trivial at first glance, it is often something people don't take seriously. In almost every environment where I have done this exercise, I see them behind on service pack updates the majority of the time. Let us learn in this blog post how to fix the error: Disallowing page allocations for database 'DB' due to insufficient memory in the resource pool.

I am currently running the latest version of SQL Server 2016 and I have installed SP1. In the recent past, I have seen the following message in my error logs. I had been ignoring it, but I thought I would investigate. The output is:

Message: Disallowing page allocations for database ‘AdventureWorks2016’ due to insufficient memory in the resource pool ‘default’. See ‘http://go.microsoft.com/fwlink/?LinkId=510837’ for more information.

A sample of how it looks inside SQL Server Error Logs is:

[Image: the insufficient-memory message in the SQL Server Error Log]

At first look this seemed like something was wrong, and I was not aware of what to do. So I did exactly what the error message said: I went to MSDN for more information.

As suggested by the documentation, I went about enabling Resource Governor, and the error message disappeared. You can also enable the Resource Governor capability using the SSMS UI as shown below.

[Image: enabling Resource Governor from the SSMS UI]
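If you prefer T-SQL over the SSMS UI, enabling Resource Governor is a single statement:

-- Enable Resource Governor and apply the stored configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO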

Do let me know if you have ever seen this error message on your servers. I am not quite sure why this is happening, but I am glad the solution is simple and well documented. I thought to share it with you, as I learnt something new recently.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Error – Disallowing page allocations for database ‘DB’ due to insufficient memory in the resource pool


SQL SERVER – How to Find the In-Memory OLTP Tables Memory Usage on the Server


When I presented at SQLPASS this year, there were several learnings that I found interesting. Every year, I take the preparation for this presentation seriously. I know many attendees turn up to learn some new tricks every single year; hence, I invest a considerable amount of time to prepare. This year I showcased several tips and tricks involving the SQL Server In-Memory OLTP capability. I personally feel this feature is lesser known and under-appreciated. As I was doing the session, one of the DBAs asked how to find out the memory utilization of various In-Memory OLTP tables.

During the break, I showed how the DMVs can be used to collect this important information. These have been around for a while, but are not well known. Here is a simple script to show the same:

SELECT object_name(object_id) AS Name, *  
FROM sys.dm_db_xtp_table_memory_stats
GO

As you can see, this is run against a database, and the script will list all the memory-optimized tables in it along with their memory utilization. The output will look like below:

[Image: output of sys.dm_db_xtp_table_memory_stats]

You will need to add the memory used by the table and the memory allocated for indexes to get a clear idea on what the overall memory utilization is for the given table.
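Building on the script above, here is a sketch that adds the table and index figures together to give one number per table:

SELECT OBJECT_NAME(object_id) AS TableName,
       memory_used_by_table_kb + memory_used_by_indexes_kb AS total_used_kb,
       memory_allocated_for_table_kb + memory_allocated_for_indexes_kb AS total_allocated_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY total_allocated_kb DESC;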

As I try to wrap up this simple blog, please let me know how you are using the In-Memory OLTP features. What is the largest table in your production environment using this feature? Do let us know via the comments and share your experience of using this feature with everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – How to Find the In-Memory OLTP Tables Memory Usage on the Server

What is Memory Grants Pending in SQL Server? – Interview Question of the Week #103


Question: What is Memory Grants Pending in SQL Server?

Answer: This is a very interesting question, and the subject is so large that it will be difficult to cover in a single blog post. I will try to answer this question in just 200 words since, as usual in an interview, we have only a few moments to give the correct answer.

[Image: memory grants illustration]

Memory Grants Pending displays the total number of SQL Server processes that are waiting to be granted workspace memory.

In a perfect world the value of Memory Grants Pending will be zero (0). That means that on your server there are no processes waiting for memory to be assigned before they can start. In other words, there is enough memory available in your SQL Server that all the processes run smoothly and memory is not an issue for you.

Here is a quick script which you can run to identify the value of Memory Grants Pending on your server.

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] = 'Memory Grants Pending'

Here is the result of the above script:

[Image: Memory Grants Pending counter value]

If the value of this counter is consistently higher than 0, you may be suffering from memory pressure, and your server could use more memory. Again, this is one of several counters which indicate that your server can use more memory.
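If the counter stays above zero and you want to see which queries are actually waiting, sys.dm_exec_query_memory_grants can help; rows where grant_time is NULL are still waiting for their memory grant:

SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;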

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on What is Memory Grants Pending in SQL Server? – Interview Question of the Week #103

SQL SERVER – Using dm_db_stats_properties With InMemory OLTP Tables


The whole concept of In-Memory OLTP has been around for a while, and still there are areas that I go back to relearn every single time. If you are new to In-Memory OLTP, I would highly recommend searching this blog for more content. For a starter, the blog below is a great start. SQL SERVER – Beginning In-Memory OLTP with Sample Example

In exploring the DMVs that are available for standard tables, I stumbled upon a great addition. I found the sys.dm_db_stats_properties dynamic management function, which was able to give me information about the number of rows that have been modified.

Getting curious, I wanted to know if this worked with In-Memory OLTP tables. The amount of changes and modifications to a memory-optimized table is now reflected in the Dynamic Management Function sys.dm_db_stats_properties, which returns a record per statistics object on the table. The DMF now behaves the same for memory-optimized and disk-based tables, and the column modification_counter reflects the row modification counter that is used for determining whether an auto-update of stats is needed.

Consider the following query to analyze statistics and the modification_counter value:

USE [MyDatabase]
GO
SELECT sp.stats_id, name, filter_definition, last_updated, rows,
       rows_sampled, steps, unfiltered_rows, modification_counter
FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
WHERE stat.object_id = OBJECT_ID('Schemaname.TableName');

One thing to keep in mind is that the data in the DMF resets on a database restart or failover. A typical output would look like:

[Image: output of sys.dm_db_stats_properties showing modification_counter]

It is worth knowing that the modification_counter can be higher than the number of rows. The logic is that if the same rows are modified multiple times, this counter can exceed the actual number of rows.
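As a quick sketch of that behavior (the table and column names here are hypothetical): repeatedly updating the same single row keeps incrementing modification_counter even though the row count stays at 1.

-- Hypothetical single-row table; run in SSMS
UPDATE dbo.TableName SET SomeColumn = SomeColumn WHERE Id = 1;
GO 10 -- SSMS repeats the batch 10 times; modification_counter grows, rows stays 1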

Do let me know if you find this interesting. Where would you use this capability? Let me know via comments below.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Using dm_db_stats_properties With InMemory OLTP Tables

SQL SERVER Management Studio – Exception of type ‘System.OutOfMemoryException’ was thrown. (mscorlib)


I was trying to help my client in generating a report from a large data set. After spending some time understanding the schema, I could provide them the query to get the results. Now, he wanted to save the results in an Excel sheet. So, he ran the query in SQL Server Management Studio (SSMS), got a lot of rows as output, and hit Ctrl+C in the grid. Let us learn about the System.OutOfMemoryException error.

[Screenshot: SSMS error dialog – Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)]

Here is the text of the message (copied using the Copy icon at the bottom left of the message window):

Exception of type ‘System.OutOfMemoryException’ was thrown. (mscorlib)

If we click on the technical details icon, we can see the call stack below.

Program Location:
at System.Number.FormatInt32(Int32 value, String format, NumberFormatInfo info)
at System.Int32.ToString(String format, IFormatProvider provider)
at System.DateTimeFormat.FormatCustomized(DateTime dateTime, String format, DateTimeFormatInfo dtfi, TimeSpan offset)
at System.DateTimeFormat.Format(DateTime dateTime, String format, DateTimeFormatInfo dtfi, TimeSpan offset)
at System.DateTimeFormat.Format(DateTime dateTime, String format, DateTimeFormatInfo dtfi)
at Microsoft.SqlServer.Management.UI.Grid.StorageViewBase.GetCellDataAsString(Int64 iRow, Int32 iCol)
at Microsoft.SqlServer.Management.QueryExecution.QEResultSet.GetCellDataAsString(Int64 iRow, Int32 iCol)
at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.GridResultsGrid.GetTextBasedColumnStringForClipboardText(Int64 rowIndex, Int32 colIndex)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.GetClipboardTextForCells(Int64 nStartRow, Int64 nEndRow, Int32 nStartCol, Int32 nEndCol)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.GetClipboardTextForSelectionBlock(Int32 nBlockNum)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.GetDataObjectInternal(Boolean bOnlyCurrentSelBlock)
at Microsoft.SqlServer.Management.UI.Grid.GridControl.GetDataObject(Boolean bOnlyCurrentSelBlock)
at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.GridResultsTabPageBase.OnCopyWithHeaders(Object sender, EventArgs a)

Based on my understanding, we read a stack from bottom to top. So, if I build the stack while ignoring the parameters, it looks like below.

FormatInt32
ToString
FormatCustomized
Format
Format
GetCellDataAsString
GetCellDataAsString
GetTextBasedColumnStringForClipboardText
GetClipboardTextForCells
GetClipboardTextForSelectionBlock
GetDataObjectInternal
GetDataObject
OnCopyWithHeaders

Since we can see "Clipboard" in the stack, I would assume the out-of-memory error is due to the copy operation, because we are copying many rows from the grid.

WORKAROUND/SOLUTION

As we discovered above, I explained to them that SQL Server Management Studio is not designed to handle this kind of requirement. If we want to save the result set into a file, we should save the query output directly to a file rather than to the grid or text in SSMS (and then doing Ctrl+C and Ctrl+V).

Another option would be to follow the steps given in one of my earlier blogs:

SQL SERVER – Automatically Store Results of Query to File with sqlcmd

SQL SERVER – SSMS Trick – Generating CSV file using Management Studio
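As a quick sketch of the first option, sqlcmd can write the result set straight to a file, bypassing the SSMS grid entirely (the server, database, table, and file names below are placeholders):

```shell
# -S server, -d database, -E trusted connection
# -Q runs the query and exits; -o writes the results to a file
# -s"," makes the output comma-separated; -W trims trailing spaces
sqlcmd -S MyServer -d MyDatabase -E -Q "SET NOCOUNT ON; SELECT * FROM dbo.MyLargeTable" -o C:\Temp\output.csv -s"," -W
```

Because the rows stream to disk instead of being buffered in the SSMS grid, this avoids the clipboard memory pressure entirely.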

I also found that if the result set is very large, even query execution can fill the SSMS buffer and raise the same error.

I hope this blog helps you work around the issue.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER Management Studio – Exception of type ‘System.OutOfMemoryException’ was thrown. (mscorlib)

How to Find SQL Server Memory Use by Database and Objects? – Interview Question of the Week #121


Question: How to Find SQL Server Memory Use by Database and Objects?


Answer: The answer to this question is simple: we can use the sys.dm_os_buffer_descriptors DMV.

Here is the first query, which I use all the time to see which particular database is using the most memory in SQL Server.

SELECT [DatabaseName] = CASE [database_id]
           WHEN 32767 THEN 'Resource DB'
           ELSE DB_NAME([database_id])
       END,
       COUNT_BIG(*) AS [Pages in Buffer],
       COUNT_BIG(*) / 128 AS [Buffer Size in MB]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id]
ORDER BY [Pages in Buffer] DESC;

Here is the result of the script listed above, which lists all the databases cached in memory.

[Screenshot: per-database buffer pool usage, sorted by pages in buffer]
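A small variation of the same query (a sketch, not from the original post) also expresses each database's share as a percentage of the total buffer pool; since each buffer page is 8 KB, dividing the page count by 128 converts it to MB:

```sql
-- Each database's cached pages as a share of all pages in the buffer pool.
SELECT CASE [database_id]
           WHEN 32767 THEN 'Resource DB'
           ELSE DB_NAME([database_id])
       END AS [DatabaseName],
       COUNT_BIG(*) AS [Pages in Buffer],
       COUNT_BIG(*) / 128 AS [Buffer Size in MB],
       CAST(COUNT_BIG(*) * 100.0 / SUM(COUNT_BIG(*)) OVER () AS DECIMAL(5, 2))
           AS [Percent of Buffer Pool]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id]
ORDER BY [Pages in Buffer] DESC;
```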

Now let us see another query, which returns details about how much memory each object uses in a particular database.

SELECT obj.name [Object Name], o.type_desc [Object Type],
i.name [Index Name], i.type_desc [Index Type],
COUNT(*) AS [Cached Pages Count],
COUNT(*)/128 AS [Cached Pages In MB]
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
SELECT object_name(object_id) AS name, object_id
,index_id ,allocation_unit_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND (au.type = 1 OR au.type = 3)
UNION ALL
SELECT object_name(object_id) AS name, object_id
,index_id, allocation_unit_id
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.partition_id
AND au.type = 2
) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
INNER JOIN sys.indexes i ON obj.[object_id] = i.[object_id]
INNER JOIN sys.objects o ON obj.[object_id] = o.[object_id]
WHERE database_id = DB_ID()
GROUP BY obj.name, i.type_desc, o.type_desc,i.name
ORDER BY [Cached Pages In MB] DESC; 

The above query will list all the objects and their types, along with how much space they take in memory.

[Screenshot: per-object cached pages and size in MB for the current database]

If you ever wondered which object is taking the most memory in your database, you can use the above script for additional details.
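If you only care about a single table, a shorter variant (a sketch; the table name dbo.MyTable is a placeholder) joins the same DMV to the allocation units of just that object:

```sql
-- Buffer pages cached per index for one table in the current database.
SELECT i.name AS [Index Name],
       COUNT(*) AS [Cached Pages Count],
       COUNT(*) / 128 AS [Cached Pages In MB]
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
INNER JOIN sys.partitions AS p
    ON (au.container_id = p.hobt_id AND au.type IN (1, 3))     -- in-row / row-overflow data
    OR (au.container_id = p.partition_id AND au.type = 2)      -- LOB data
INNER JOIN sys.indexes AS i
    ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.MyTable')
GROUP BY i.name
ORDER BY [Cached Pages In MB] DESC;
```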

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on How to Find SQL Server Memory Use by Database and Objects? – Interview Question of the Week #121
