Beginner’s Guide to Microsoft Azure SQL Database

UPDATE: I’ve published a new course on Pluralsight “Azure SQL Database for the SQL Server DBA”, please find it HERE. Cheers!

It’s no secret that Microsoft has become a very aggressive company regarding its cloud strategy. Many DBAs ask me what the first steps should be to get comfortable working in a cloud environment, and I usually answer with two things.

  1. Become familiar with SQL Server running on Infrastructure as a Service. This just means the same SQL Server we all know, running on virtual machines in a cloud environment. Most DBAs are already familiar with either VMware or Hyper-V and have no problem understanding this.
  2. Become familiar with Database as a Service. Part of what is known as Platform as a Service, Azure SQL Database provides a fully managed SQL database that lives in the Azure cloud.

The topic for today’s guide is Azure SQL Database: a comprehensive tour of its main components, features and the things you need to know.

What is it?

It’s a fully managed database in the cloud. This means you don’t take care of the OS and you don’t have to deal with any hardware; all of that is managed by Azure. You get a server name and a database name, you connect, and off you go.

It is NOT a full instance of SQL Server, nor is it trying to be one. If you need a full instance of SQL Server, you can go with Infrastructure as a Service (and a guide for that is coming in another blog post!). Because Azure SQL Database is a managed service, you will have to get used to not having access to instance settings, storage configuration, tempdb configuration, etc. Your focus should be on the database only.
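Once you have those two names, connecting looks much like connecting to any SQL Server over ODBC, with a couple of Azure-specific twists. Here is a minimal sketch in Python; the server, database and credential values are hypothetical placeholders, and the driver name assumes the SQL Server Native Client is installed on the client machine:

```python
# A minimal sketch of building an Azure SQL Database connection string.
# The names below are placeholders -- substitute your own. An Azure SQL
# server is always reached as <name>.database.windows.net on TCP port 1433.

def azure_sql_connection_string(server, database, user, password):
    """Build an ODBC-style connection string for Azure SQL Database.

    Note the user@server form of the login and the encryption setting,
    both of which Azure SQL expects."""
    return (
        "Driver={SQL Server Native Client 11.0};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"Uid={user}@{server};"
        f"Pwd={password};"
        "Encrypt=yes;"
    )

conn_str = azure_sql_connection_string("myserver", "mydb", "dbadmin", "S3cret!")
print(conn_str)
# With a driver library such as pyodbc you would then simply call:
#   pyodbc.connect(conn_str)
```

From that point on, queries run exactly as they would against an on-premises database.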

What do you pay for?

When the service first launched, you paid for the size of your database. There were two tiers, Web and Business, and each one had different size limits. That model is now GONE.

The current service is divided into three service tiers: Basic, Standard and Premium. Standard and Premium are further subdivided: S0-S1-S2-S3 for Standard and P1-P2-P3 for Premium.

Each service tier provides different recovery, DR and performance characteristics and this is what you’re paying for.

Let me go into the fundamental characteristics of each service tier:

(Note that performance is measured in a unit called the DTU; I will go into more detail on that later in this post.)


Basic

This tier is targeted at very small OLTP databases with very few users, or very small test/dev databases.

  • Size Limit: 2GB.
  • Point-in-time Restore: 7 days.
  • DR: Geo-Restore to another Azure region.
  • Auditing allowed.
  • Performance: 5 DTU.


Standard

This tier is targeted at small to medium-sized OLTP databases with up to a few hundred users.

  • Size Limit: 250GB.
  • Point-in-time Restore: 14 days.
  • DR: Geo-Restore, Standard Geo-replication.
  • Auditing allowed.
  • Performance: 10-20-50-100 DTU.


Premium

This tier is targeted at large databases and can sustain thousands of transactions per second. With the latest release (V12, in preview at the time of writing), it is also the tier targeted at building data marts in the cloud, since V12 adds support for partitioning, clustered columnstore indexes and execution plans with parallelism.

  • Size Limit: 500GB.
  • Point-in-time Restore: 35 days.
  • DR: Geo-Restore, Standard or Active Geo-replication.
  • Auditing allowed.
  • Performance: 100-200-800 DTU.

For up-to-date pricing information, refer to the official Azure pricing page.

Note that the sizes specified are TOP BOUNDARIES: as long as your data stays below the limit, the amount paid remains the same regardless of how much of that space you actually use. For example, a P1 database that is only 30GB costs the same as another P1 database that is 300GB.

Also note that each tier comes with a different Point-in-time Restore retention period and different DR strategy. Let’s look into the DR strategies now.


Geo-Restore

This is the basic Disaster Recovery strategy and it’s available in all tiers. It means you have the ability to RESTORE your database into any other Azure region. Be aware, however, that a Geo-Restore copy can carry up to 24 hours of data loss, which makes it a very important factor when planning a DR strategy.

Standard Geo-Replication

This is a more robust DR strategy and is available for Standard and Premium only. Standard Geo-Replication keeps an offline warm standby copy of your database in another Azure region. This copy has an RTO (Recovery Time Objective) of 2 hours or less and an RPO (Recovery Point Objective) of 30 minutes or less. Because it is an offline copy, the database is charged at 50% of the normal rate. Also, you can’t fail over and fail back at will; in the event of an incident, Microsoft will enable the failover capability. At any point you can break the replication and use the secondary copy if you want, but you’ll have to re-initialize it afterward.

Active Geo-Replication

This is the most advanced DR strategy for Azure SQL databases and is available only for Premium. Active Geo-Replication keeps an online, read-only copy of your database in another Azure region. This means you can actually use this copy for region-based scale-out. As you can see, Active Geo-Replication is not only a DR tool but also a performance scalability tool! This copy has an RTO (Recovery Time Objective) of 1 hour or less and an RPO (Recovery Point Objective) of 5 minutes or less (a big difference from Standard Geo-Replication).

Because the secondary is also readable, it is charged at the same rate as any other database, and as of this moment you can create up to 4 of these secondaries. Unlike Standard Geo-Replication, Active Geo-Replication allows failover and failback initiated by the client.

OK, now that we’re done with the DR differences, let’s move on to the DTU concept that I mentioned earlier.

What is a DTU?

DTU stands for Database Throughput Unit. It’s an abstraction of the CPU, memory and IO power of the database and is used as a unit of comparison of performance between the different service tiers. Basically, the Microsoft engineers want to abstract the hardware away from the database service and thus came up with this measurement to show one number that represents your database’s performance.

So, let’s do an example.

Let’s assume that you run a specific workload on the Basic tier and you’re able to run 20 transactions per minute of this given workload. We know that Basic corresponds to 5 DTUs, so you know that for your workload you get 20 transactions per minute for 5 DTUs. This means that if you upgraded to S1, you would get 20 DTUs, or 4 TIMES the throughput you’re getting at Basic. We would then expect to be able to process 80 transactions per minute, or roughly 1.3 transactions per second.

The same relationship would hold when moving up to 50-100-200 DTUs.
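To make the arithmetic concrete, here is a tiny Python sketch of the estimate above. The linear-scaling assumption is the simplification this example relies on; real workloads rarely scale perfectly with DTUs:

```python
# Back-of-the-envelope DTU scaling: given a measured throughput at a known
# DTU level, estimate throughput at another level, assuming throughput
# scales linearly with DTUs (a simplification, not a guarantee).

def estimate_throughput(measured_tx_per_min, measured_dtu, target_dtu):
    """Project transactions/minute from one DTU level to another."""
    return measured_tx_per_min * (target_dtu / measured_dtu)

# 20 transactions/minute on Basic (5 DTU), projected onto S1 (20 DTU):
projected = estimate_throughput(20, measured_dtu=5, target_dtu=20)
print(projected)                  # 80.0 transactions per minute
print(round(projected / 60, 1))   # ~1.3 transactions per second
```

The same function gives you a first-order projection for any pair of tiers.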

As a way to illustrate the expected performance, Microsoft created an internal benchmark and published the numbers obtained on each tier. You can also see below the number of worker threads and sessions that each tier allows:

Tier    DTU   Max Threads   Max Sessions   Benchmark Throughput
Basic     5        30            300         16,600 trx/hour
S0       10        60            600            521 trx/min
S1       20        90            900            934 trx/min
S2       50       120          1,200          2,570 trx/min
P1      100       200          2,400            105 trx/sec
P2      200       400          4,800            228 trx/sec
P3      800     1,600         19,200            735 trx/sec

This chart can provide some guidance as to which service tier you should start with when you create or migrate a database to Azure SQL. Remember that the nice thing about the cloud is that at any time you can move between the different tiers and you will be billed on each one BY THE HOUR.
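The chart above can also be turned into a quick sizing aid. The helper below is my own illustration, using the published limits as-is; it simply returns the smallest tier whose DTU and session limits cover a stated requirement:

```python
# A rough sizing helper built from the benchmark table above. The figures
# are copied from the table; treat them as relative guidance for picking a
# starting tier, not a guarantee for your specific workload.

TIERS = [  # (name, dtu, max_threads, max_sessions), smallest first
    ("Basic", 5, 30, 300),
    ("S0", 10, 60, 600),
    ("S1", 20, 90, 900),
    ("S2", 50, 120, 1200),
    ("P1", 100, 200, 2400),
    ("P2", 200, 400, 4800),
    ("P3", 800, 1600, 19200),
]

def smallest_tier(min_dtu=0, min_sessions=0):
    """Return the first (cheapest) tier meeting both requirements, or None."""
    for name, dtu, threads, sessions in TIERS:
        if dtu >= min_dtu and sessions >= min_sessions:
            return name
    return None

print(smallest_tier(min_dtu=50))                     # S2
print(smallest_tier(min_dtu=50, min_sessions=2000))  # P1
```

And because billing is by the hour, starting one tier too low is cheap to correct: measure, then move up.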

In conclusion, I hope this guide quickly gives you an idea of what Azure SQL Database can do and an overview of what to expect from the service.

Once you’re ready to try all this out, go ahead and get your free trial of Azure and start playing with cloud SQL power!

Author: Warner Chaves
SQL Server Certified Master and MVP. DBA for MS technologies for over 10 years.
