Hey Checkyourlogs Fans,
Today I want to discuss three Veeam Backup Targets at a customer site I'm working with. All three backup targets are running Windows Server 2019 with the Data Deduplication feature enabled. We have been having constant issues with one of them running out of space, forcing my team to manually run garbage collection, scrubbing, and optimization jobs to free up space. This backup target, as we found out, is the victim of what I call a non-dedup-optimized workload: the Windows Server 2019 deduplication engine is having a difficult time breaking up the data and placing it into the chunk store.
From Windows, what we see is a low disk space condition, and from within the Veeam management console, we see errors like this: Replica location '<Server> F:\Replicas' is low on free disk space (268 KB left of 7.6 TB).
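As a quick sanity check, the same low-space and dedup state can also be confirmed from PowerShell. Here's a minimal sketch, assuming the replica volume is F: as in this environment:

```powershell
# Check remaining free space on the replica volume
Get-Volume -DriveLetter F | Select-Object DriveLetter, SizeRemaining, Size

# Check how much of the volume deduplication has actually processed
Get-DedupStatus -Volume "F:" |
    Select-Object Volume, FreeSpace, SavedSpace, OptimizedFilesCount, InPolicyFilesCount
```

A large gap between InPolicyFilesCount and OptimizedFilesCount is a hint that the optimization job is falling behind the rate of change on the volume.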
To view the deduplicated volume, you can use WinDirStat.exe. A little trick, though: you must run it under the SYSTEM account in order to see the dedup chunk store. To do this, we grab PsExec.exe from Sysinternals and run it like this: PsExec.exe -i -s "C:\Program Files (x86)\WinDirStat\windirstat.exe"
Once opened, choose the volume that you want to analyze.
In our case, we will look at the VeeamReplicas Volume.
At an initial glance, I can see that over half of the F:\Replicas folder hasn't been optimized.
Here is what Backup Target #2, which isn't having the deduplication issue, looks like. We can see the percentage of .ccc (chunk store) files vs. .VHDX and .AVHDX files is much higher. Note: the extra 600 GB of BIN files is because I also have the Veeam WAN Accelerator on this volume at this location.
Here is what Backup Target #3 looks like; this one is also not having deduplication issues.
All three of these Backup Targets have a very high deduplication ratio:
Backup Target #1:
Backup Target #2:
Backup Target #3:
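If you want to pull these ratios yourself rather than reading them from the GUI, a quick sketch (again assuming the F: volume):

```powershell
# Report the deduplication savings ratio on a backup target volume
Get-DedupVolume -Volume "F:" |
    Select-Object Volume, SavingsRate, SavedSpace, UsedSpace
```

SavingsRate is reported as a percentage; a high rate on all three targets confirms the data itself dedupes well, even on the target that is struggling to keep up.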
Let’s go back to Backup Target #1 and explore Windirstat and check the F:\Replicas Folder.
As you can see, two of the Veeam replica Hyper-V VMs are taking up approximately 2.5 TB of the 4 TB used on this volume. When I checked the other backup targets, we didn't find any VMs that had more than a few hundred GB.
Checking the first VM, which was consuming 34.4% of that folder, I quickly found out that it was a file server.
The second VM was a production SQL Server with an extremely high rate of change.
To resolve this space issue, we decided to delete the Veeam Replica Copy of this SQL Server and move it to a new Target.
The proper way of doing this is to open the Veeam Management Console, find the replica and delete it from disk.
It can take a few minutes to delete the files from the Backup Target, so grab a coffee now.
After the job completed, the files were not deleted.
So, I decided to manually remove them, as it appears Veeam didn't clean up this VM as expected.
There was a file lock preventing Veeam from deleting these files. I ended up taking the volume offline and bringing it back online to clear the locks.
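The offline/online cycle can be done from Disk Management, or from PowerShell with the Storage module. A minimal sketch, assuming the F: drive letter used here (be sure no jobs are writing to the volume first):

```powershell
# Take the replica volume offline to clear stale file locks,
# then bring it straight back online
Set-Partition -DriveLetter F -IsOffline $true
Set-Partition -DriveLetter F -IsOffline $false
```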
Then I was able to delete the folder.
I still have to move the File Server replica, but that will take a few more days to prepare a new backup target. Below is a view of Windirstat after the manual deletion.
The files have been removed, but the free space won't show up until you run the garbage collection and optimization jobs.
So, we run our volume script.
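For reference, here is a minimal sketch of what such a volume script can look like, using the built-in dedup cmdlets (the F: volume and the priority/memory values are illustrative for this environment):

```powershell
# GarbageCollection reclaims chunks no longer referenced by any file,
# Scrubbing validates chunk store integrity, and Optimization dedupes
# new and changed files into the chunk store.
Start-DedupJob -Volume "F:" -Type GarbageCollection -Priority High -Memory 75 -Wait
Start-DedupJob -Volume "F:" -Type Scrubbing -Priority High -Memory 75 -Wait
Start-DedupJob -Volume "F:" -Type Optimization -Priority High -Memory 75 -Wait
```

Running garbage collection first is what actually returns the space from the deleted replica; optimization then works through the remaining unoptimized files.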
After freeing up the disk space from the SQL Server, the file server's .VHDX files were being processed by the deduplication process (fsdmhost.exe).
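While fsdmhost.exe is churning, you can watch the job's progress from PowerShell rather than guessing from Task Manager (a sketch, assuming the F: volume):

```powershell
# Show running dedup jobs on the volume, including percent complete
Get-DedupJob -Volume "F:"
```

The Progress column shows how far through the volume the optimization job has gotten.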
Letting this run for a few hours, we can have another look via Windirstat.exe.
After the deduplication optimization job had run to about 20%, have a look at the stats.
With the SQL Server off the volume, the deduplication engine is running much better, and we can see a considerable improvement already.
In summary, I hope that you have learned a little bit about the deduplication process and engine in Windows Server 2019, primarily when used as a Veeam backup or replica target.
Hi Dave, nicely explained. Would you know how Veeam's deduplication and compression work on a Nutanix cluster? I know about the integration where the Veeam proxy is deployed onto the Nutanix cluster and takes snapshot-based backups. My question is, since Nutanix also provides dedupe and compression natively at the storage level, how does Veeam treat it? Does Veeam rehydrate the data and then apply its own dedupe and compression, or does it just back up the already deduplicated data (Nutanix's own dedupe)? Could you please throw some light on it.
No, sorry, I haven't worked on Nutanix clusters in a while.