Veeam released Backup & Replication 9.5 Update 4 in January. With this release, Veeam adds support for cloud-based object storage, such as Azure Blob Storage. This means we can keep, say, the last 30 days of data on a local repository and move older backups out to Azure Blob. Note that Azure Blob serves as capacity-tier storage only: you cannot use it as a local backup repository or back up data to it directly. Let's walk through the steps to configure it.
Log in to the Azure portal and select Storage accounts.
On the Storage accounts page, click +Add.
On the Create storage account page, select Basics and configure as follows:
Subscription: select your Azure subscription.
Resource group: click Create new, enter a resource group name, and then click OK.
Storage account name: enter a name for your storage account.
Location: select your location.
Performance: select Standard.
Account kind: select StorageV2 (general purpose v2).
Access tier (default): select Cool.
On the Create storage account page, select Advanced and configure as follows:
Secure transfer required: select Disabled.
Virtual networks: select All networks.
Hierarchical namespace: select Disabled.
On the Create storage account page, select Review + create, make sure validation passed, and then click Create.
It may take a few minutes to create the new storage account.
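A common validation failure at this point is the storage account name: Azure requires 3-24 characters, lowercase letters and digits only, globally unique. A minimal stdlib sketch to pre-check a candidate name before submitting the form (the regex reflects Azure's documented rule; the helper name is mine, and uniqueness can only be checked by Azure itself):

```python
import re

# Azure storage account names: 3-24 chars, lowercase letters and digits only.
ACCOUNT_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Check Azure's documented naming rules (global uniqueness is still checked by Azure)."""
    return bool(ACCOUNT_NAME_RE.match(name))

print(is_valid_storage_account_name("veeamarchive01"))  # True
print(is_valid_storage_account_name("Veeam_Archive"))   # False: uppercase and underscore
```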
After the storage account is ready, click it.
Select Blobs and then click +Container.
On the New container page, enter a name for your new container.
Select Private (no anonymous access) as the Public access level and then click OK.
Select Access keys and then copy the Storage account name and the key1 Key value.
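If you want to sanity-check the values you copied, note that Azure account keys are base64-encoded and that the name/key pair composes into the standard Azure storage connection-string format. A small stdlib sketch (the key below is a dummy placeholder, not a real key; the helper names are mine):

```python
import base64

def build_connection_string(account: str, key: str) -> str:
    # Standard Azure storage connection-string format.
    return (
        "DefaultEndpointsProtocol=https;"
        f"AccountName={account};"
        f"AccountKey={key};"
        "EndpointSuffix=core.windows.net"
    )

def looks_like_account_key(key: str) -> bool:
    # Azure account keys are base64; a truncated or mangled paste usually fails to decode.
    try:
        base64.b64decode(key, validate=True)
        return True
    except Exception:
        return False

dummy_key = base64.b64encode(b"0" * 64).decode()  # stand-in for key1
print(looks_like_account_key(dummy_key))          # True
print(build_connection_string("veeamarchive01", dummy_key))
```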
Log in to the Veeam server, open the Backup & Replication console, and then click Connect.
Select BACKUP INFRASTRUCTURE, right-click Backup Repositories and then select Add Backup Repository…
On the Add Backup Repository page, select Object storage.
On the Object Storage page, select Microsoft Azure Blob Storage.
On the Name page, enter a name for the Azure Blob repository and click Next.
On the Account page, click Add next to the Credentials field.
Paste the Azure storage account name as Account and the key1 value as Shared key (you copied both from the Azure storage account's Access keys page; check step 11 if you forgot them).
On the Account page, select Azure Global (Standard) as the Region.
Select Use the following gateway server, choose a gateway server to act as the proxy, and then click Next.
On the Container page, select an Azure Blob container.
Click Browse to pick an existing folder, or click New Folder to create one, and then click OK.
You can also select Limit object storage consumption to help control your cloud storage spend; here I keep the default (unchecked) and then click Next.
On the Summary page, click Finish.
Right-click Scale-out Repositories and select Add Scale-out Backup Repository…
Note: please make sure you are using Veeam Backup & Replication Enterprise or Enterprise Plus edition if you don't see these settings.
On the Name page, enter a name for the Scale-out Backup Repository and click Next.
On the Performance Tier page, click Add….
Select an existing backup repository (or repositories) as extents, click OK, and then click Next.
Note: please make sure they are not the current target backup repository of any backup jobs.
If an extent is used as the target repository of backup copy jobs, a warning pops up saying the extent will be automatically updated to point to the scale-out backup repository; click Yes.
On the Placement Policy page, select Data locality and click Next.
On the Capacity Tier page, click Extend scale-out backup repository capacity with object storage and select Azure Blob repository.
On the Capacity Tier page, click Window….
On the Time Periods window, select your working hours as Denied time periods, because offloading files to Azure Blob will use the full network bandwidth.
If you need to offload files to Azure Blob during working hours, use Global Network Traffic Rules to throttle the network traffic.
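The denied-periods setting boils down to an hour-of-day and day-of-week check. A sketch of the idea, assuming a working-hours window of 08:00-17:59 on weekdays (this schedule is an example of the concept, not Veeam's internal implementation):

```python
from datetime import datetime

# Example denied window: weekdays 08:00-17:59 (offload allowed at all other times).
DENIED_HOURS = range(8, 18)
WEEKDAYS = range(0, 5)  # Monday=0 .. Friday=4

def offload_allowed(now: datetime) -> bool:
    """Return True when the offload job may use the network."""
    return not (now.weekday() in WEEKDAYS and now.hour in DENIED_HOURS)

print(offload_allowed(datetime(2019, 3, 4, 10, 0)))  # Monday 10:00 -> False (denied)
print(offload_allowed(datetime(2019, 3, 4, 22, 0)))  # Monday 22:00 -> True
print(offload_allowed(datetime(2019, 3, 9, 10, 0)))  # Saturday 10:00 -> True
```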
On the Capacity Tier page, select Move backups to object storage… and keep the default of 30 days; you can change it if you need to.
You can also click Override… for more options for moving the oldest files to Azure Blob, then click OK.
Note: Every 4 hours Veeam collects the backup data from the extents and transfers it to object storage according to policies that define how and when such data should be offloaded.
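Conceptually, the 30-day move policy is an age filter over restore points. An illustrative sketch only, not Veeam's actual implementation (Veeam additionally requires a backup chain to be sealed/inactive before it is moved; file names here are made up):

```python
from datetime import datetime, timedelta

def points_to_offload(restore_points, now, operational_days=30):
    """Select restore points older than the operational restore window."""
    cutoff = now - timedelta(days=operational_days)
    return [name for name, created in restore_points if created < cutoff]

now = datetime(2019, 3, 1)
points = [
    ("vm1-2019-01-15.vbk", datetime(2019, 1, 15)),  # older than 30 days -> move to Azure Blob
    ("vm1-2019-02-20.vib", datetime(2019, 2, 20)),  # inside the window -> stays local
]
print(points_to_offload(points, now))  # ['vm1-2019-01-15.vbk']
```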
You can also define a master password that will be used to encrypt data uploaded to object storage; click Apply.
Note: This password will be used for encrypting all data moved to the cloud object storage.
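Veeam handles this encryption internally; the general idea of turning a master password into an encryption key can be sketched with stdlib PBKDF2 (the parameters and helper below are illustrative assumptions, not Veeam's actual scheme):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit key from a master password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)            # stored alongside the ciphertext; random, not secret
key = derive_key("MyM@sterPass", salt)
print(len(key))                  # 32 bytes -> suitable for AES-256
```

The point of the salt and iteration count is to make brute-forcing the master password expensive; losing the password means the offloaded data cannot be decrypted.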
On the Summary page, click Finish.
You don’t need to change anything for your existing backup copy jobs if they were using the extent (Performance tier) repository as target backup repository.
As noted above, every 4 hours Veeam collects the backup data from the extents and transfers it to object storage according to the policies that define how and when such data should be offloaded. To offload the backup data, Veeam uses the SOBR Offload job.
You can also check the archive folder structure in Azure Blob.
Hope you enjoy this post.
which Veeam Build is this ?
We are waiting to be able to do this but so far it is not possible (and we have the latest version of Veeam so far, v220.127.116.114).
Thanks for the post! We just successfully added 2 ExaGrid’s to Azure Blob as our Scale-out repository.
Hi, I tried to do this, but instead of copying data to Azure Blob, it's only copying the data to local storage.
Hi, I tried this, but instead of copying the data to Azure Blob, it is copying into the local storage.
Same here! Do you have any suggestions if you solved it?
Hi, same here. Did you resolve it?
Very useful post. Not sure if you’re answering questions, but I’m missing something: in the third to last step (labeled #2) you say: “You don’t need to change anything for your existing backup copy jobs if they were using the extent (Performance tier) repository as target backup repository.” But in step 28, you said, referring to the extent (Performance tier) repository landing zone, “Note: Please make sure they are not the same as the target backup repository of backup jobs.” Those statements seem to conflict to me. What am I missing? Should I be backing up to the Performance tier extent? Or create a new backup copy job to copy backups to the Performance tier extent from my backup target repository? Thanks!
@Dave Dorsey: First of all, I agree with you that this is a very useful post and Cary did a great job! Thanks Cary.
Now to answer your question: I see where you are coming from. However, at the time of this writing, Veeam doesn’t allow us to write to the “archive tier” directly from either a backup job or a backup copy job. At present, the idea of the archive tier in a SOBR is to “extend” the SOBR to unlimited storage backed by Azure or AWS, etc. So Veeam can only “move” your backup images from the performance tier to the archive tier through 2 conditional policies: (1) by restricting restore points in the performance tier and archiving everything else, OR (2) by reducing the usage of the performance tier by percentage (%).
The problem doesn’t stop there. Veeam can only write to the hot or cool tier in Azure Blob storage, and there’s no automated way of moving between access tiers in Azure yet, even though some guys on the block are saying there’s a public preview for access-tier lifecycle management through policies, but we have yet to see how well it is going to work for us. Hope this helps a little.
I had a similar question as Dave. My scenario is that I have a single server/repository that I’d like to use as the target for my backup jobs. However, I’d also like to follow this article to create a SOBR with Azure Blob for the capacity tier so I can store GFS copies in Azure long term. If I understand correctly, it looks like I can’t use the same repository used for my backup jobs, and I’ll have to create another repository to place in the SOBR performance tier, correct? If so, can I simply create a 2nd repository pointed to a different folder on the same server as my first repository? It seems odd that I would then be using a backup copy job to copy backups to another folder located on the same server just to get them up to Azure. Is there a way to do this without unnecessarily increasing the amount of storage I need on my backup repository server?
Hello, very interesting article. As I understand it, Veeam will move a full backup for image archiving only. Would it be possible to move incremental backups?
After the initial backup is up there it is forever incremental.
Is it possible to backup VMs directly from local VMware to Azure blob storage? Or do I need local backup storage also?
We have an MS Express Route so we have a fast connection to Azure.
You can do this with Veeam. It will work
What format are the files copied inside the Azure Blob?
Q1: Is it possible to create a VM in Azure from these copied files? Or is this just a file copy of the files inside an existing VMDK Veeam backup file?
Q2: Would Veeam on-premise be able to create a new VM just using the files that were stored in the Azure blob?
Hello, it’s only copying the data to local storage even if the backup file is older than xx days.
Great article, and I *believe* I was able to set it up successfully. Anyone have any thoughts on doing a test trigger? I went into Scale-out repositories and set it to 1 day as a test, but if I don’t want to wait, is there a way to push it now?
One thought… the first diagram is misleading… and probably caused some confusion amongst readers. The process of moving data into an Object Storage bucket is not done through a “copy job”, but instead is policy based set at the repository level. Many long time Veeam users are expecting to use Copy Jobs and it’s not an option. I know this has thrown off many of my users trying to set it up for the first time.
Just a friendly suggestion… and maybe explain the difference between a copy job and our move function (leaving behind decompressed metadata files).
Excellent post! Thank you very much for sharing your knowledge! Regards!