© Pranab Mazumdar, Sourabh Agarwal, Amit Banerjee 2016

Pranab Mazumdar, Sourabh Agarwal and Amit Banerjee, Pro SQL Server on Microsoft Azure , 10.1007/978-1-4842-2083-2_6

6. SQL Hybrid Solutions

Pranab Mazumdar, Sourabh Agarwal1 and Amit Banerjee1

(1)Bangalore, Karnataka, India

Business dynamics in today’s IT industry have driven a huge paradigm shift: you need to innovate constantly and grow your business while reducing the Total Cost of Ownership (TCO) and maximizing the return on investment. Microsoft Azure provides a perfect platform to accomplish this by integrating public cloud and private cloud resources. The classic hybrid cloud use case is to keep your sensitive data in your on-premises data center and connect to the public cloud where the rest of your data resides.

The hybrid cloud provides a bridge between your on-premises data center and the public cloud. Microsoft’s approach to the cloud is quite unique in that the servers used in its data centers are consistent across the cloud. Customers can optimize resources to their needs: they can keep critical, secure data in their on-premises data centers and reduce storage cost by offloading work to the cloud, thereby scaling up without a matching increase in cost.

SQL Server integrates quite well with Azure cloud services, providing end-to-end experiences through the interfaces you are already comfortable with, such as T-SQL and PowerShell. Hybrid environments are environments where some resources come from the cloud and some from on-premises sites. These resources include physical machines, VMs, storage (SAN, cloud storage), domain controllers (for Active Directory, login policies, etc.), and databases. There are a couple of ways to run a hybrid environment: you can have a SQL Server that runs on-premises and uses cloud resources, or a SQL Server in the cloud that uses on-premises resources. When SQL Server runs within the on-premises environment and talks to the cloud, two Azure resources come into play: Azure storage and Azure Virtual Machines.

Microsoft Azure enables you to store a huge amount of data with minimal risk due to the multiple replicas that the data gets written to. To add to this, there is automated disk healing that runs a checksum to ensure that the data is logically consistent.

If the checksum doesn't match, the platform automatically takes the disk out of rotation and makes another copy on the fly, so there are always three copies of the data at any point in time. Cost is also low, which is one reason customers are moving to Azure. Another important Azure resource is virtual machines. These VMs are highly available, offer automatic VM healing, come in a plethora of sizes, and are a great cost-saver.

Figure 6-1 shows different scenarios we can leverage in the cloud infrastructure to save cost. For example, we can store backups to Azure storage, store SQL Server files in Azure storage, and use the AlwaysOn technology to extend the on-premises infrastructure and have a secondary site configured in the cloud. We will look at these in detail later in this chapter.

A371128_1_En_6_Fig1_HTML.jpg
Figure 6-1. On-premises integration using Azure Storage

Hybrid Model Snapshot

The hybrid model provides the following advantages:

  • Your business decisions become agile; with this model you can quickly make decisions about making resources available.

  • You don't end up wasting time getting machines ready (hardware/software); instead you can focus on more productive business logic or on serving the customer’s needs.

  • You can now leverage the Microsoft Azure ecosystem of applications and build a highly scalable application.

However, when you go with this hybrid infrastructure, there are a few factors to consider:

  • You have to think about maintaining the hybrid infrastructure, as it has additional components such as a firewall, virtual networks, and routing devices.

  • Managing and accessing the resources involves different sorts of applications, so classic consolidation could be a challenge.

  • Before moving to the cloud, you should evaluate the complexity, perform risk assessment, and do a benchmark study on the feasibility.

The applications, tools, and services running on either side are not always interoperable. There are several applications that cannot quite cross these boundaries and thus it is important for you to evaluate the pros and the cons.

Backups to Azure Storage

With SQL Server 2014, the Backup to URL feature was enhanced so that backups can be written directly to Microsoft Azure Storage. This way, you can back up databases from on-premises servers or from Azure VMs to Azure storage and leverage its features, such as built-in redundancy and massively scalable replication of data, when designing your DR strategy. SQL Server interacts with Azure storage through REST API calls. Basically, all you need are the following:

  • Microsoft Azure account

  • Azure storage account

  • Container inside the storage account

  • Storage account access key (or, with SQL Server 2016, a Shared Access Signature [SAS] key)

You can read more about the SAS keys here: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/ .

Figure 6-2 depicts the backup and restore feature, where the backups are stored within Azure storage . You can easily perform a restore from there.

A371128_1_En_6_Fig2_HTML.jpg
Figure 6-2. Backup in Azure storage

With SQL 2014, we used Backup to URL or Smart Backup to take these backups, and the backups were written to page blobs, which are optimized for random reads and writes. SQL 2016 takes this feature to the next level by letting you back up to block blobs, which are more economical for customers. This architecture has a few other advantages: backup and restore can be done to and from striped sets, which was not possible with page blobs and Backup to URL, and further enhancements to backup on block blobs improve the restore experience.
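As a rough sketch of the SQL Server 2016 striped-backup scenario (the storage account, container, and file names here are placeholders), a backup striped across two block blobs looks like the following. It assumes a SAS-based credential named after the container URL, created as shown later in this chapter; with such a credential, no WITH CREDENTIAL clause is used:

```sql
-- Sketch: striped backup to block blobs (SQL Server 2016).
-- Assumes a credential named after the container URL, created with
-- IDENTITY = 'SHARED ACCESS SIGNATURE'; names are placeholders.
BACKUP DATABASE DB1
TO  URL = 'https://storageaccountname.blob.core.windows.net/testcontainer/DB1_stripe1.bak',
    URL = 'https://storageaccountname.blob.core.windows.net/testcontainer/DB1_stripe2.bak'
WITH COMPRESSION, STATS = 5;
GO
```

A restore simply lists the same URLs in the FROM URL clause.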

Figure 6-3 shows how you can leverage the hybrid environment and store database backups (full and transaction log backups here) in Azure storage, thereby helping the business reduce total cost.

A371128_1_En_6_Fig3_HTML.jpg
Figure 6-3. Database backup files in Microsoft Azure Storage

The following script backs up the databases as shown. There are two main steps: create a credential and then issue the BACKUP DATABASE command against Azure storage.

Step 1: Create credentials for backups to Azure

IF NOT EXISTS
(SELECT * FROM sys.credentials
WHERE name = 'cred1')
CREATE CREDENTIAL cred1 WITH IDENTITY = 'storageaccountname'
,SECRET = 'storage account access key' ;

You can get the storage account access key from the management portal:

Old/Classic Portal:

    a) Click on the storage account.
    b) Click on Manage Access Keys.
    c) You will find the primary and the secondary keys used to access the storage account.

Note: Once you click on the storage account, click the Containers tab; you can add a container and call it testcontainer using the Add option there.


New Portal:

    a) Create the storage account.
    b) Click on the storage account.
    c) Under Settings, click Access Keys.
    d) You will find the primary and secondary keys under the storage account name.

In the new portal, click on the storage account and then on the blob services, as shown in Figure 6-4. You’ll see different storage services.

A371128_1_En_6_Fig4_HTML.jpg
Figure 6-4. Storage services as shown in the new portal

When you click on the Blob service (see Figure 6-4), you will see the options shown in Figure 6-5; you can add a container using the + sign.

A371128_1_En_6_Fig5_HTML.jpg
Figure 6-5. Adding a container to the new portal

Now name the container testcontainer and choose the appropriate access type.

Step 2: Issue a backup database command for backups to Azure

BACKUP DATABASE DB1
TO URL ='https://storageaccountname.blob.core.windows.net/testcontainer/DB1_Full.bak'
WITH CREDENTIAL = 'cred1',
COMPRESSION,
STATS = 5;
GO
BACKUP DATABASE DB2
TO URL ='https://storageaccountname.blob.core.windows.net/testcontainer/DB2_Full.bak'
WITH CREDENTIAL = 'cred1',
COMPRESSION,
STATS = 5;
GO

Similarly, you can also back up the log files to the container in the Azure storage account.
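For example, a transaction log backup to the same container (reusing the cred1 credential and the placeholder storage account from the previous script) looks like this:

```sql
-- Back up the transaction log of DB1 to the same Azure container.
BACKUP LOG DB1
TO URL = 'https://storageaccountname.blob.core.windows.net/testcontainer/DB1_Log.trn'
WITH CREDENTIAL = 'cred1',
COMPRESSION,
STATS = 5;
GO
```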

SQL Server Files in Microsoft Azure Storage

SQL Server 2014 provides native support for storing SQL Server database files in Microsoft Azure Storage. You can now create a database on-premises or in a virtual machine hosted in Microsoft Azure and store its data files in Microsoft Azure blob storage. This makes it easy to move databases between environments by detaching and attaching the files stored in Microsoft Azure Storage.

While creating the data files, you need to do the following:

  1. Create a storage account and a container within it.

  2. Create a SQL Server credential that holds the access policy for the container.

  3. Access the container using Shared Access Signature.

While using the native feature of storing the files in the Microsoft Azure Storage, you need to:

  1. Create a policy on the container and generate the SAS key. To get the SAS key, download Microsoft Storage Explorer from https://azurestorageexplorer.codeplex.com/ .

  2. For each container you need a credential, and its name should match the container's path.

Figure 6-6 shows that you can have your databases residing on-premises or on an Azure Virtual Machine in Microsoft Azure.

A371128_1_En_6_Fig6_HTML.jpg
Figure 6-6. Data files as stored in a Microsoft Azure Storage account

As you can see, we have DB3 created in an Azure Virtual Machine with its files residing on the Azure storage. On the other hand, DB6 is residing on the on-premises environment and has the files stored in the Azure storage. The advantages of using SQL Server data files on Windows Azure blob storage service are as follows:

  • Portability. It’s easy to detach a database from a Windows Azure Virtual Machine (IaaS) and attach the database to a different virtual machine in the cloud; this feature might also be suitable for implementing an efficient disaster recovery mechanism, because everything is directly accessible in Windows Azure blob storage. To migrate or move databases, use a single CREATE DATABASE Transact-SQL statement that refers to the blob locations, with no restrictions on storage account and compute resource location. You can also use rolling upgrade scenarios if extended downtime for operating system and/or SQL Server maintenance is required.

  • Database virtualization. When combined with the contained database feature in SQL Server 2012 and SQL Server 2014, a database can now be a self-contained data repository for each tenant and then dynamically moved to different virtual machines for workload rebalancing. For more information about contained databases, see the Contained Databases topic in SQL Server Books Online.

  • High availability and disaster recovery. Because all database files are now externally hosted, even if a virtual machine crashes, you can attach these files from another hot standby virtual machine, ready to take the processing workload. A practical example of this mechanism is provided in the section called “Implementing a Failover Cluster Mechanism” later in this book.

  • Scalability. Using SQL Server data files in Windows Azure, you can bypass the limitation on the maximum number of Windows Azure disks you can mount on a single virtual machine, and with it the cap on I/O operations per second (IOPS) that applies to each single Windows Azure disk.
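The portability scenario above can be sketched in T-SQL. Assuming the data and log files already exist in the container and a matching SAS credential is in place on the target instance (the URLs are placeholders), attaching the database is a single statement:

```sql
-- Sketch: attaching a database whose files live in Azure blob storage.
-- After the database is detached (or dropped with the files kept) on the
-- source instance, the files remain in the container, so the target
-- instance only needs a matching credential and this statement.
CREATE DATABASE FirstHybridDB
ON ( NAME = FirstHybridDB_dat,
     FILENAME = 'https://storageaccountname.blob.core.windows.net/testcontainer/FirstHybridDB.mdf' )
LOG ON
   ( NAME = FirstHybridDB_log,
     FILENAME = 'https://storageaccountname.blob.core.windows.net/testcontainer/FirstHybridDB_Log.ldf' )
FOR ATTACH;
GO
```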

Let’s look at the architecture and simplify this a bit. The enhancements have been made within the SQL Server engine itself, across three layers; Figure 6-7 illustrates the idea.

A371128_1_En_6_Fig7_HTML.jpg
Figure 6-7. SQL Server Storage Engine file I/O
  • Manager layer. This includes a new component called XFCB Credential Manager, which manages the security credentials necessary to access the Windows Azure blob containers and provides the necessary security interface. Secrets are maintained encrypted and secured in the SQL Server built-in security repository in the master system database.

  • File Control layer. Contains a new object called XFCB, which is the Windows Azure extension to the file control block (FCB) used to manage IO against each single SQL Server data or log file on the NTFS file system; it implements all the APIs that are required for IO against Windows Azure blob storage.

  • Storage layer. At the Storage layer, the SQL Server I/O Manager is now able to natively generate REST API calls to Windows Azure blob storage with minimal overhead and great efficiency; in addition, this component can generate information about performance counters and extended events (xEvents).

Here is an example of creating a sample database using this concept:

-- Creating Databases with files in Windows Azure Storage
-- Step 1: Create a credential
CREATE CREDENTIAL [https://storageaccountName.blob.core.windows.net/testcontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'SECRET KEY';

This secret key is generated using the Azure Storage Explorer.

  1. Choose testcontainer and click on the security button on the toolbar.

  2. Click the Shared Access Signatures tab.

  3. Click the Generate Signature button to generate the SAS signature (see Figure 6-8).

    A371128_1_En_6_Fig8_HTML.jpg
    Figure 6-8. Blob & Container Security Dialog: Storage Explorer

Once you get the string using the data in Figure 6-8, follow these steps.

  1. Use the part of the signature string after the ? as the SECRET.

  2. Create a database with data and log files in a Windows Azure container as follows:

CREATE DATABASE FirstHybridDB
ON
( NAME = FirstHybridDB_dat,
    FILENAME = 'https://storageaccountname.blob.core.windows.net/testcontainer/FirstHybridDB.mdf' )
LOG ON
( NAME = FirstHybridDB_log,
    FILENAME =  'https://storageaccountname.blob.core.windows.net/testcontainer/FirstHybridDB_Log.ldf')
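To confirm that the new database's files really live in blob storage rather than on a local disk, you can query sys.master_files; physical_name shows the blob URLs:

```sql
-- Verify that the database files point at blob URLs, not local paths.
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('FirstHybridDB');
```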

Smart Backup

Smart Backup is one of the new features of SQL Server 2014 that uses the Azure infrastructure for taking intelligent backups. Following are a few of the key differences with traditional backups:

  • Backup is based on intelligence rather than on schedule

  • Completely managed by SQL Server

  • Backup retention is automatically managed

  • Backup retrieval is more reliable

Smart Backup can be configured at the instance level or at the database level. The only required input parameter is the retention period, which ranges from 1 to 30 days. When Smart Backup is enabled at the instance level, existing databases need to be added manually; new databases, however, are added to the schedule automatically. Backups can be stored in encrypted or unencrypted form, and to enhance security, keys can be regenerated at regular intervals. Run the following script to enable this feature:

-- Enable Smart Backups

EXEC msdb.smart_admin.sp_set_db_backup
                @database_name='TestDB',
                @retention_days=30,
                @credential_name='cred1',
                @encryption_algorithm='NO_ENCRYPTION',
                @enable_backup=1
GO

If you want backups created with the Smart Backup feature to be encrypted, you need to create the key, the certificate, and the credentials. You can rotate the key at regular intervals for security; this is done by using the secondary key and regenerating it (see Listing 6-1).

Listing 6-1. Creating the Key for Smart Backup Encryption
USE master;
GO
--Create a master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Password@123';
GO
--Create a certificate
CREATE CERTIFICATE mycert1 WITH SUBJECT = 'MySmartBackup';
GO

IF EXISTS (SELECT * FROM sys.credentials WHERE name = 'cred1')
    DROP CREDENTIAL cred1;
--Create a credential
CREATE CREDENTIAL cred1
WITH IDENTITY = 'StorageAccountName',
SECRET = 'storage account access key';


--Smart backup enabled at Instance level.
Use msdb;
GO
EXEC smart_admin.sp_set_instance_backup
                --@database_name='TestDB'
                 @retention_days=30
                ,@credential_name='cred1'
                ,@encryption_algorithm ='AES_128'
                ,@encryptor_type= 'Certificate'
                ,@encryptor_name='mycert1'
                ,@enable_backup=1;
GO


-- Smart backup enabled at DB level.
Use msdb;
GO
EXEC smart_admin.sp_set_db_backup
                 @database_name='TestDB'
                ,@retention_days=30
                ,@credential_name='cred1'
                ,@encryption_algorithm ='AES_128'
                ,@encryptor_type= 'Certificate'
                ,@encryptor_name='mycert1'
                ,@enable_backup=1;
GO


--View managed backup configuration at the instance level

Select * from msdb.smart_admin.fn_backup_instance_config ()

--View configuration details

Select * from msdb.smart_admin.fn_backup_db_config ('')

For ad hoc backups, use the following, but note that an ad hoc backup could break the log chain, in which case a fresh full backup would need to be triggered:

--On Demand backup to the cloud

exec msdb.smart_admin.sp_backup_on_demand 'TestDB','log'

exec msdb.smart_admin.sp_backup_on_demand 'TestDB','database'
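To monitor Smart Backup, you can query its health-status function. This sketch (using the same smart_admin schema as the scripts above) reports errors and warnings from the last 24 hours; nonzero error counts typically point at credential, connectivity, or retention problems:

```sql
-- Managed backup health for the last 24 hours (SQL Server 2014).
SELECT *
FROM msdb.smart_admin.fn_get_health_status(
        DATEADD(HOUR, -24, GETUTCDATE()), GETUTCDATE());
```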

The following link provides more information on how to create backups for a database with SQL Server Managed Backup to Windows Azure:

https://msdn.microsoft.com/en-IN/library/dn451012.aspx

AlwaysOn Configuration on Azure VMs

AlwaysOn has evolved over the years. Many of our customers can’t afford a DR site: purchasing the hardware is costly, maintenance is an overhead, and an operations team is needed to run the site, all of which adds to the total cost. For such customers, we wanted to provide a disaster recovery solution, so starting with SQL Server 2012 we added the ability to host a replica in Microsoft Azure Virtual Machines. This is an IaaS offering where SQL Server runs on virtual machines. As it stands today, quite a few customers run their reporting workload on Azure, take backups from there, and use it as a DR site. This way, they offload the backups and don’t touch the primary server. There are a few points to check before setting this up:

  • The latency shouldn't be so high that the secondary can't catch up and ends up lagging far behind.

  • Confirm that the data stored in the secondary replica in a different data center complies with data protection and security norms.

You can have your main data center on-premises and add your secondary replica on Azure. The requirements needed for this setup are discussed shortly.

In SQL Server 2014, we introduced a new wizard that adds the replica in the cloud end-to-end. Previously, to extend an AlwaysOn Availability Group to Azure, you had to provision the machines yourself, install Windows Failover Clustering, enable AlwaysOn, take a backup and restore it on the new replica, and then start synchronization. The new wizard automates all of these steps and validates the environment.

This wizard also has built-in logic, whereby if there is a failure it will retry automatically. It has timeouts set as well.

Today, it’s easy to extend the Availability Group to an Azure VM, and it is cheap compared to keeping it on-premises. You pay only for the VM, the storage, and the outbound traffic; ingress traffic is free, and there is no DR-site hardware cost, so it works quite well as a DR site. Because your secondary replica is in a different site, during a disaster you manually fail over to it. You can also offload work from the primary site, such as reporting and BI, to the readable secondary on Azure, and take backups there as well. The only prerequisite is to configure a site-to-site VPN tunnel between on-premises and Azure; after that, you can extend your AlwaysOn configuration.

The main requirement for extending your on-premises data center with an AlwaysOn AG is the site-to-site VPN tunnel described previously. Once this is in place, you can use the Add Azure Replica wizard, which helps you extend your existing AG by adding another replica in Microsoft Azure, as follows:

  1. From SQL Server Management Studio, expand the AlwaysOn High Availability node and then Availability Groups, and select your Availability Group.

  2. Right-click Availability Replicas and then click Add Replica.

  3. The wizard shown in Figure 6-9 appears.

    A371128_1_En_6_Fig9_HTML.jpg
    Figure 6-9. Add replica wizard GUI
  4. Connect to your existing replicas by using the Connect or Connect All button.

  5. On the next screen, you will see the page shown in Figure 6-10.

    A371128_1_En_6_Fig10_HTML.jpg
    Figure 6-10. Add replica wizard: specifying the replicas
  6. From this screen, click on the Replica tab and then click the Add Azure Replica button.

  7. Now download the certificate or use an existing one (see Figure 6-11).

    A371128_1_En_6_Fig11_HTML.jpg
    Figure 6-11. Certificate download screen

    You should populate these fields, as this information is crucial and will be used in creating the new Azure replica.

  8. After this, you will be back to the Add Azure Replica page, where you need to validate/provide the information in the other tabs, such as backup preference and endpoints.

  9. Select the data synchronization type as per your business requirement. You can read more here: https://msdn.microsoft.com/library/hh231021.aspx .

Once you’re done, review the validation page, correct any issues identified, and re-run the wizard if needed. When the wizard completes, you will have a replica in Microsoft Azure. You should then create a listener for clients to connect to the replicas; the listener redirects incoming requests to the primary or a secondary replica. Refer to the Azure documentation or the links provided next to explore this concept and its implementation further.
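After the wizard completes, a quick way to verify the new topology from T-SQL is to query the AlwaysOn DMVs. This sketch, run on the primary replica, returns one row per replica with its role and synchronization health:

```sql
-- Health check of the extended Availability Group: one row per replica.
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_replicas AS ar
JOIN sys.dm_hadr_availability_replica_states AS ars
  ON ar.replica_id = ars.replica_id;
```

The Azure-based secondary should show up here with a SECONDARY role once synchronization has started.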

Summary

In today's world, managing resources effectively by utilizing the power of the Microsoft Azure platform is critical for your business. You can be more agile and manage your resources more efficiently. You can extend your on-premises data center with the Microsoft public cloud and leverage the hybrid model, which gives you the best of both environments. With this model you can easily move your business and workloads across different data centers. Today, your application can take advantage of the public cloud by using Microsoft Azure storage, storing backups, and securely storing your data, all while working seamlessly with your existing private cloud environment.

Additional References

Use the Add Azure Replica wizard

https://msdn.microsoft.com/en-in/library/dn463980.aspx

Use the Add Replica to Availability Group wizard

https://msdn.microsoft.com/en-us/library/hh213239.aspx

Protect SQL Server with SQL Server disaster recovery and Azure site recovery

https://azure.microsoft.com/en-in/documentation/articles/site-recovery-sql/
