Recently I have been troubleshooting an interesting issue with some of my weekly synthetic full jobs. The incremental backup would run as normal, and then the synthetic full would start and take over two days to complete. The job is quite large and I did expect it to take some time, but not this long. The synthetic operation itself doesn’t impact the production VM while the full is being created, so that part is fine. My main concern was that I didn’t want to miss the incremental run the next night.

Meanwhile I have other jobs configured to back up all of my management VMs in what I thought was the same way, and they would run and complete without issue in a very short time frame.

In this environment I have created a forward incremental job with a synthetic full on the weekend for my management VMs, which live directly in vCenter. The job itself was created in the VBR console like any other job.

The second set of jobs, and the ones I was having issues with, were created via the Self-Service Portal in Enterprise Manager and back up vCloud Director VMs. Once created, these jobs are displayed in the VBR console and can also be managed from there.

So the troubleshooting commenced, with the first port of call being the storage array where the backup files are written. With the help of the storage vendor's support we went through everything and made a few changes to the way the storage was presented to help optimize performance. This helped a little, but didn't resolve the overall time the job took to complete.

At this point I got in touch with Veeam support to try and make more sense of the issue. We checked through the configuration and then dived into the logs to see if anything stood out.

Below is an extract from one of the agent transform log files. The file name has the following format – Agent.BackupJobName.Transform.Target.VMName – and the file is located in the C:\ProgramData\Veeam\Backup\BackupJobName directory.
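As a quick illustration, a small helper can build that path for a given job. The job name below is purely hypothetical, and the VM name portion of the file name is matched with a wildcard:

```python
def transform_log_pattern(job_name):
    """Build a glob pattern matching the agent transform logs for a job,
    following the directory and file-name convention described above."""
    return (rf"C:\ProgramData\Veeam\Backup\{job_name}"
            rf"\Agent.{job_name}.Transform.Target.*")

# Hypothetical job name, for illustration only:
print(transform_log_pattern("CloudJob01"))
# C:\ProgramData\Veeam\Backup\CloudJob01\Agent.CloudJob01.Transform.Target.*
```

On the backup server itself you could feed that pattern into glob.glob() to list the individual agent log files.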

From the same extract we can also determine how the job itself is progressing.

From here we can see it has processed 60% for that particular agent. The next entry, 0;0;0;74;9;35, can be broken down as follows:

0 – Source Read Busy %, AKA “Source”
0 – Source Processing Busy %, AKA “Proxy”
0 – Source Write Busy %, AKA “Network”
74 – Target Read Busy %
9 – Target Processing Busy %
35 – Target Write Busy %, AKA “Target”
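To make those six numbers easier to read at a glance, here is a minimal sketch that parses a stats string like the one above into labelled busy percentages, using the field order just described:

```python
def parse_bottleneck_stats(stats):
    """Split an agent stats string (e.g. '0;0;0;74;9;35') into labelled
    busy percentages, in the field order described above."""
    labels = [
        "Source Read (Source)",
        "Source Processing (Proxy)",
        "Source Write (Network)",
        "Target Read",
        "Target Processing",
        "Target Write (Target)",
    ]
    values = [int(v) for v in stats.split(";")]
    return dict(zip(labels, values))

stats = parse_bottleneck_stats("0;0;0;74;9;35")
busiest = max(stats, key=stats.get)
print(busiest, stats[busiest])  # Target Read 74
```

In this example the busiest stage is the target read at 74%, which lines up with the transform-heavy nature of a synthetic full.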


What we found only applied to jobs created through the Enterprise Manager vCloud self-service portal and the vSphere self-service portal. By default, every 10 seconds the target repository agent is granted a quota of 512 MB. Once this quota is used up, the target agent logs the message “Storage size quota exceeded. Waiting for quota increase.” So if the target can write faster than 51.2 MB/s, the process pauses for the remainder of the 10-second window until the next 512 MB is allocated.

So basically my problem was that I was writing data down too fast and then spending the majority of each interval waiting for the next 512 MB to be allocated so the data could continue to be written. A good problem to have, I guess.
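The arithmetic behind the throttle is easy to sketch. The repository write speed below is an invented figure, used only to show how much of each 10-second window ends up as waiting:

```python
QUOTA_MB = 512      # default quota granted to the target agent
INTERVAL_S = 10     # a fresh quota is issued every 10 seconds

effective_cap = QUOTA_MB / INTERVAL_S  # sustained MB/s the job can achieve
print(effective_cap)  # 51.2

disk_speed_mb_s = 400  # hypothetical repository write speed in MB/s
busy = QUOTA_MB / disk_speed_mb_s  # seconds spent writing per window
idle = INTERVAL_S - busy           # seconds spent waiting per window
print(round(busy, 2), round(idle, 2))  # 1.28 8.72
```

With a repository that fast, the agent would burn through its quota in just over a second and then sit idle for the rest of the window.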

Luckily there is a fix for this issue, and it applies to Veeam Backup & Replication 9.5 Update 4, 4a and 4b. With the following registry key you can increase the default quota from 512 MB to 2047 MB. Do not set this value any higher than 2047, otherwise you will see a drop in backup processing performance.

Open the registry editor on the Veeam Backup & Replication server and browse to:

HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication

Then right-click the key and select New, then DWORD (32-bit) Value.

Value Name – VcdBackupQuantSizeMb

Value Data – 2047 (Decimal)
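With the quota raised to 2047 MB, the same quota-per-interval arithmetic gives a much higher ceiling on sustained write speed:

```python
NEW_QUOTA_MB = 2047  # maximum recommended value for VcdBackupQuantSizeMb
INTERVAL_S = 10      # quota refresh interval in seconds

print(NEW_QUOTA_MB / INTERVAL_S)  # 204.7 MB/s, up from the default 51.2
```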

After adding this registry entry, the next run of the synthetic job completed in just under half the time of the previous run. This allowed the incremental to complete the following night.

This issue should hopefully be resolved in the v10 release with the addition of dynamic storage quota assignment.

I hope this helps some of you out there that may be experiencing a similar issue with jobs created via the Self-Service Portal in Enterprise Manager!

As always, use the subscribe box above for new post notifications and follow me on Twitter @steveonofaro.