Database Administration SQL Server 2014

Published on December 22nd, 2014 | by Alejandro Cordero


SQL Server 2014 New Feature: Buffer Pool Extensions

SQL Server 2014 ships with exciting new features; buffer pool extensions is one of them, and it promises to be a very useful addition to your DBA toolbelt. In an earlier SQL Server 2014 new features article we mentioned this topic, and it is time to describe in detail how to use it.

What is BPE?

The idea behind a buffer pool extension is very simple: it is a way to tell SQL Server where to page memory when needed. When SQL Server runs out of memory it will normally page to disk, which is extremely slow; instead, you can create a buffer pool extension pointing to a high-performance device to reduce the impact when you hit the limits of your memory.

This is a workaround for when SQL Server constantly runs out of memory and starts having performance issues. When this happens you normally try to get some budget approved and add more memory, start doing some performance tuning, or plan for a migration. All of these are the recommended way to go; however, none of them happens easily in an organization, so you need a band-aid in the meantime to keep your application functional and shiny!

If by any chance you have an SSD, a Fusion-io card, or any other fast device available that you can plug into your database server and make available to SQL Server 2014, then you can enable a buffer pool extension. It works just like adding a new data file.

You can enable a new buffer pool extension and place it wherever you want; the only requirements are that the path you are using is available to SQL Server and that the SQL Server service account has full read/write permission on the specified disk path.
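Before enabling anything, you can check whether an extension is already configured. SQL Server 2014 exposes a dedicated DMV for this; a quick sanity check looks like:

```sql
-- Shows the BPE file path, state, and size
-- (returns a disabled state if the feature was never enabled)
SELECT path, state_description, current_size_in_kb / 1024 AS current_size_mb
FROM sys.dm_os_buffer_pool_extension_configuration;
```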


Start by checking your memory configuration and its limits:

USE master;
GO
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)';
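To also see how much physical memory the box actually has versus what is currently available, the OS memory DMV gives a quick view:

```sql
-- Physical memory on the server and current memory state
SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```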

Then enable your buffer pool extension by running the command below:

USE master;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\MY_Cache\SQL2014_newFeature.BPE', SIZE = 15 GB);

The obvious idea here is to point to a device that performs well. SQL Server won’t validate that you are in fact using a proper device, so you can point to any disk; if you do, the magic can disappear, as your performance will still suffer if you start paging onto old spinning disks.

Having this option enabled and pointing to a high-performance device is a lot better than not having it at all; if you ever run out of memory and don’t have anything in place you will suffer from memory paging, memory pressure, and all sorts of nasty things. However, this is NOT better than having enough memory in the first place.

Doing good old capacity planning and estimating system usage and growth in advance is still better. Try to be proactive about this and be sure to allocate proper amounts of memory up front, but having a backup plan like BPE won’t hurt.
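Once BPE is enabled, you can verify it is actually being used: in SQL Server 2014, sys.dm_os_buffer_descriptors carries an is_in_bpool_extension flag, so a simple aggregate shows how many pages have spilled to the extension:

```sql
-- Pages in RAM (is_in_bpool_extension = 0) vs. pages in the BPE (= 1)
SELECT is_in_bpool_extension, COUNT(*) AS page_count
FROM sys.dm_os_buffer_descriptors
GROUP BY is_in_bpool_extension;
```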


How to Modify BPE?

You can turn off BPE just by running the command below:

USE master;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;


If you want to modify the running value of BPE, you need to turn it off, modify the physical file size or location, and turn it back on:

USE master;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\MY_Cache\SQL2014_newFeature.BPE', SIZE = 10 GB);


As a summary, please follow the checklist below when using this feature:

1- Enable the Buffer Pool Extension and point it to a high-performance device: SSD drives, Fusion-io cards, and the like.

2- The Buffer Pool Extension is not a substitute for memory; having enough memory is always better.

3- Do not point your BPE to a drive that contains data files, log files, tempdb, backups, and so on. If possible, reserve this location for this purpose only.

4- Test this feature in pre-prod/QA first. Also, do not just enable/disable it during production hours, as you might get odd behavior depending on your memory usage and system specifics. Check this Microsoft best practice:

Test the buffer pool extension thoroughly before implementing in a production environment. Once in production, avoid making configuration changes to the file or turning the feature off. These activities may have a negative impact on server performance because the buffer pool is significantly reduced in size when the feature is disabled. When disabled, the memory used to support the feature is not reclaimed until the instance of SQL Server is restarted. However, if the feature is re-enabled, the memory will be reused without restarting the instance.

5- Do not just size your BPE to a huge random value; follow the Microsoft recommendation below (this is important, as following this ratio does impact the throughput of your BPE):

The buffer pool extension size can be up to 32 times the value of max_server_memory. We recommend a ratio between the size of the physical memory (max_server_memory) and the size of the buffer pool extension of 1:16 or less. A lower ratio in the range of 1:4 to 1:8 may be optimal.

6- BPE does have an important restriction: the allocated size must be greater than the current memory allocation threshold. Otherwise you will get the error below:

Msg 868, Level 16, State 1, Line 8
Buffer pool extension size must be larger than the current memory allocation threshold 8104 MB. Buffer pool extension is not enabled.

In the example above, max server memory was set to 8 GB and I tried to add only 2 GB as BPE.
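Putting points 5 and 6 together, you can derive a candidate size range straight from the configured max server memory. This is only a sketch of the 1:4 to 1:8 guidance, not a one-size-fits-all answer:

```sql
-- Candidate BPE sizes following the recommended 1:4 to 1:8 ratio,
-- both comfortably above the allocation threshold from error 868
SELECT CAST(value_in_use AS int) AS max_server_memory_mb,
       CAST(value_in_use AS int) * 4 AS bpe_low_mb,
       CAST(value_in_use AS int) * 8 AS bpe_high_mb
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```

With max server memory at 8 GB as in the example, this suggests a BPE between 32 GB and 64 GB, well above the ~8 GB threshold that triggered the error.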

Hope you like this feature; remember to follow best practices and, like with most SQL Server features, test, test, test. Cheers!

About the Author

SQL Server DBA with over 8 years of experience, .NET developer, Certified Scrum Master, and entrepreneur.

8 Responses to SQL Server 2014 New Feature: Buffer Pool Extensions

  1. Matt says:

    Do you know if it’s possible to have the BPE size be dynamic? For instance, I have VMs that I want to scale down over the weekend, but SQL won’t start because the newly allocated amount of RAM throws off the “rules”.

  2. Warner Chaves says:

    Hi Matt, that’s a pretty good question. Natively, BPE doesn’t support any type of auto-scaling; you need to manually make it bigger or smaller, and making it smaller requires a restart.

    So, this is what I would do:

    – Create a powershell script that turns OFF the BPE and then shuts down SQL Server. You can then scale down the VM and bring SQL back up if you want or leave it off for the weekend.

    Once you’re ready to add more RAM after the weekend, you could add a startup trigger on SQL Server so that when it comes back up you can reenable BPE to a proper size. You could even do some scripting where you read the server’s memory and use a percentage of that.

    Some type of auto-scaling would be nice to see on a future version but for now you’ll need to do some customization like the above. Cheers!
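    As a rough T-SQL sketch of that last idea (a hypothetical startup script: the file path is reused from the article, and the 1:4 multiple of physical RAM is just an assumed example, not a recommendation):

```sql
-- Hypothetical: size the BPE from the server's physical RAM at startup
DECLARE @ram_mb bigint, @sql nvarchar(max);

SELECT @ram_mb = total_physical_memory_kb / 1024
FROM sys.dm_os_sys_memory;

-- Build the statement dynamically so the size tracks the current RAM
SET @sql = N'ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION ON
    (FILENAME = ''F:\MY_Cache\SQL2014_newFeature.BPE'', SIZE = '
         + CAST(@ram_mb * 4 AS nvarchar(20)) + N' MB);';

EXEC sp_executesql @sql;
```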

  3. Kurt says:

    Hi Alejandro Cordero,

    Why can only clean pages be stored in BPE? If the BPE is lost, is it possible to recover dirty pages from the transaction log?


    • Alejandro Cordero says:

      Hi Kurt,

      A dirty page by definition has not yet been committed to the transaction log, so it is not possible to recover it from the t-logs. Also, a transaction log holds a list of full transactions, which may eventually represent a group of data pages, so in any event where you would attempt to recover a record from the transaction log, you would recover not just one page but the group of pages that fully represents a transaction. This also leads us to the fact that identifying exactly which data page is missing is a very difficult piece of troubleshooting to attempt. Normally in a case like this, and using “clean” pages, you can try to run DBCC CHECKDB and see if you can fix the missing data page; however, DBCC will most likely choose to just remove that missing page, depending on the option provided. Bottom line: if you are reading dirty pages, it means you have accepted that they can be lost and may not be actual committed data, so if your design is correct you shouldn’t be worrying about this scenario. Cheers!

      • Kurt says:

        Thank you very much for the detailed explanation.

        In MSDN, it is said that:
        ‘Only clean pages are written to the L2 cache, which helps maintain data safety. The buffer manager handles the movement of clean pages between the L1 and L2 caches.’ The ‘L2 cache’ means BPE.

        In other words, what would happen if dirty pages were stored in BPE? And what is the difference between storing dirty pages in memory and in BPE; why is storing a dirty page in BPE not safe, while storing it in memory is safe?


        • Alejandro Cordero says:

          Hi Kurt,

          BPE is used when SQL Server reaches maximum levels of memory usage; SQL Server starts paging (page movement) from L1 to L2 when there is no more space in RAM, which means that by definition, when SQL Server uses BPE your database is under memory stress at some level. As the article says, the buffer manager handles this process “under the hood”, meaning you don’t control what goes to the BPE area and what doesn’t. Dirty pages are sets of information that haven’t been committed to the transaction log and stored on disk; they reside ONLY in memory, which makes the information volatile, so a dirty page may or may not become actual data stored in your data file. Some people call this ghost data, because one minute you see it and the next it can disappear, depending on your application setup. Since dirty pages are volatile, they don’t have a recovery mechanism; in the event of a SQL Server restart or any interruption of the transaction they will be deleted. So only CLEAN pages are stored in BPE, because in the event of a failure they do have a recovery mechanism and are already stored on disk. Also, SQL Server allows dirty page reads, so that access needs to be fast; you can read dirty pages, for example, using a NOLOCK hint and the query should be faster, so by design they should reside in L1, as L1 is faster than L2. There are a number of reasons why dirty pages can’t be in BPE. What would happen if you stored dirty pages in BPE? We don’t know exactly, but basically it doesn’t make sense, as BPE is meant to help with memory stress, and if your server is under stress you need to prioritize which pages to “help”, and dirty pages fall at the bottom of the priority list. Hope this makes it clear. Thanks!

  4. Warner Chaves says:

    Kurt, think of it this way.

    – Dirty pages are in memory. If your server loses access to RAM then by all means it’s going to crash. Nothing Microsoft or anybody can do there. When you recover, SQL will restart and run recovery and use the t-log to make those dirty pages consistent by rolling them back or forward.

    – Let’s assume Microsoft allowed dirty pages in BPE. BPE is permanent storage, susceptible to failure like any type of permanent storage. Dirty pages are in BPE and suddenly your BPE drive crashes.

    If BPE allowed dirty pages then if the drive crashed you would have to run recovery on the database to replay the transaction log and make 100% sure everything is consistent. In other words, your database is DOWN because the BPE became a single point of failure.

    The way it’s configured now, with only clean pages, the SSD drive can crash and SQL will keep chugging along until you recover the drive because it knows that all the data in there was already committed and it doesn’t affect consistency.

    If you are REALLY REALLY interested in this topic, here you can find the actual academic research paper from Microsoft that discusses the different policies for SSD-backed DBMS caches:

    The paper goes to great lengths on clean vs. dirty pages and the implications that allowing dirty pages would have had on the SQL engine (rewriting checkpoint and recovery logic, for example, is very risky).

