Hey Checkyourlogs Fans,

I had a ton of my Veeam replica jobs fail at a customer site overnight. The issue presented itself with the following error message:

Failed to process replication task Error: Job failed ('Domain Controller_replica' failed to apply checkpoint. (Virtual machine ID 6FC739AB-3494-43E1-875F-81442AC2C3CB)'). Error code: '32768'. Failed to revert VM snapshot, snapshot path '\\<SERVER>\root\virtualization\v2:Msvm_VirtualSystemSettingData.InstanceID="Microsoft:E48E5F9D-9DF2-4F7F-9700-224604157E69"'.

At first, it appeared that something might be wrong with the services on the target Hyper-V replica server. When I did some more checking this morning, I found that we had run out of disk space overnight.
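
If you want to confirm that from PowerShell rather than Task Manager or Explorer, a quick check like this does the trick (F: is simply the drive letter of my replica volume, so adjust it for your environment):

# Quick look at total and free space on the replica target volume
Get-Volume -DriveLetter F |
    Select-Object DriveLetter,
        @{Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 1) }},
        @{Name = 'FreeGB'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 1) }}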

This server has been running as a replica target for the past 4 months, and last night I targeted a new SQL Server with 1 TB of data to it, which filled up the volume before the Data Deduplication post-processing jobs could catch up.
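
You can see how far behind the post-processing is by checking how much data on the volume has not been optimized yet, and whether any dedup jobs are queued or running. Something along these lines works (property names can vary slightly between Windows Server versions):

# How much data is still sitting on the volume unoptimized?
Get-DedupVolume | Select-Object Volume, SavingsRate,
    @{Name = 'UnoptimizedGB'; Expression = { [math]::Round($_.UnoptimizedSize / 1GB, 1) }},
    @{Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) }}

# Are any dedup jobs queued or running right now?
Get-DedupJob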

So I pulled out my trusty Swiss Army knife script from fellow MVP Mikael Nystrom.

# Wait-VIADedupJob blocks until no dedup jobs are left queued or running
Function Wait-VIADedupJob
{
    while ((Get-DedupJob).Count -ne 0)
    {
        # Show what is still running, then check again in 30 seconds
        Get-DedupJob
        Start-Sleep -Seconds 30
    }
}

# For each dedup-enabled volume, run the post-processing jobs back to back:
# Optimization (move data into the chunk store), a full Garbage Collection
# (reclaim space from deleted chunks), and a full Scrubbing (integrity check)
foreach ($item in Get-DedupVolume)
{
    Wait-VIADedupJob
    $item | Start-DedupJob -Type Optimization -Priority High -Memory 100
    Wait-VIADedupJob
    $item | Start-DedupJob -Type GarbageCollection -Priority High -Memory 100 -Full
    Wait-VIADedupJob
    $item | Start-DedupJob -Type Scrubbing -Priority High -Memory 100 -Full
    Wait-VIADedupJob
}
Get-DedupStatus

Run it and wait.
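
While it runs, you can keep an eye on things from a second PowerShell window. Get-DedupJob reports a Progress percentage for the job that is currently active, so a simple loop like this (Ctrl+C to stop it) is enough:

# Watch the active dedup job from another window
while ($true)
{
    Get-DedupJob | Select-Object Type, State, Progress, Volume | Format-Table -AutoSize
    Start-Sleep -Seconds 60
}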

We can see the memory utilization for the dedup job going up.
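
The same thing is visible without Task Manager. The dedup jobs do their work inside fsdmhost.exe, so checking that process's working set gives a rough idea of how much memory the job is using (the process only exists while a job is running):

# Rough memory check for the dedup host process
Get-Process -Name fsdmhost -ErrorAction SilentlyContinue |
    Select-Object Name,
        @{Name = 'WorkingSetGB'; Expression = { [math]::Round($_.WorkingSet64 / 1GB, 2) }}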

That is good. Let's check the progress on the Chunk Store.

We can see fsdmhost.exe ripping through the F: drive and moving data into the chunk store as container files with the .ccc extension.
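
If you want to see how much has landed in the chunk store so far, you can total up those .ccc container files directly. The chunk store sits under the hidden System Volume Information folder on the dedup volume, so this needs an elevated prompt (and may still need an ACL tweak on some systems):

# Rough size of the dedup chunk store on F:
Get-ChildItem 'F:\System Volume Information\Dedup\ChunkStore' -Recurse -Force -Filter *.ccc -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum |
    Select-Object @{Name = 'ChunkStoreGB'; Expression = { [math]::Round($_.Sum / 1GB, 1) }}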

This should fix my issue, although at this point we might be reaching the upper capacity of this target Hyper-V host server, with an 8 TB volume and 23 TB saved.
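
If you want to check the same numbers from PowerShell, the dedup cmdlets will show capacity, saved space, and savings rate per volume, so something like this makes it easy to keep an eye on them over time:

# Capacity versus savings on each dedup volume
Get-DedupVolume | Select-Object Volume, SavingsRate,
    @{Name = 'CapacityTB'; Expression = { [math]::Round($_.Capacity / 1TB, 1) }},
    @{Name = 'SavedTB'; Expression = { [math]::Round($_.SavedSpace / 1TB, 1) }}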

Thanks,

Dave