Reactivate a broken IMAP service on Microsoft Exchange 2013/2016

One day, out of the blue, the IMAP service stopped working properly on my Exchange 2013 mail server.

Normally, when you telnet to the IMAP port, the Exchange server returns a banner.
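For example (the server name is hypothetical), a healthy IMAP service answers right away with something like:

```
telnet mail.example.com 143
* OK The Microsoft Exchange IMAP4 service is ready.
```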


When the IMAP service is broken, the telnet screen is blank and the connection aborts. The obvious first step, restarting the IMAP services on the Exchange server, does not fix the problem. (In this case all the Exchange roles reside on the same server.)
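For reference, on a multirole Exchange 2013/2016 server there are two IMAP services, frontend and backend; restarting both from PowerShell looks like this:

```powershell
Restart-Service MSExchangeIMAP4     # frontend (Client Access) IMAP service
Restart-Service MSExchangeIMAP4BE   # backend (Mailbox) IMAP service
```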


Run the following command in a PowerShell window:

Get-HealthReport <servername> | where { $_.State -eq "Offline" }

It will return an offline IMAP.Proxy health state.


This is usually a Microsoft bug, since it appears after installing a Cumulative Update on the Exchange server. Setting the IMAPProxy component back to Active brings the IMAP service back to life. Run the following command:

Set-ServerComponentState -Identity <servername> -Component IMAPProxy -State Active -Requester HealthAPI
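Afterwards you can confirm the fix (server name hypothetical): the component should report Active, and the health report should no longer list any offline health set:

```powershell
Get-ServerComponentState -Identity EX01 -Component IMAPProxy   # State should now be Active
Get-HealthReport EX01 | where { $_.State -eq "Offline" }       # should return nothing
```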


more at Microsoft


VDP Error “Operation failed due to existing snapshot”

Sometimes VMware’s Data Protection (VDP) appliance fails. Some VMs may not be backed up properly when VDP thinks there are ghost snapshots attached to these VMs.

In our case, VDP failed with the error “VDP: Operation failed due to existing snapshot”. Checking the related datastore, we noticed an old vmdk snapshot file in the VM’s datastore folder. The question was whether this snapshot file was linked to the base image or not.

First I tried to consolidate the files via the snapshot manager; I even created a snapshot and then used “Delete All”, but nothing fixed the problem.

You can supposedly use the “vmkfstools -e -v10” command from an SSH terminal session to verify the linking between the vmdk files, but I didn’t figure out how to do it properly.
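Another way to check the linkage: each snapshot (delta) vmdk has a small text descriptor that names its parent in a parentFileNameHint line, while a base disk has no such line. A quick sketch, with hypothetical paths and a simulated descriptor instead of a real ESXi datastore:

```shell
# On the ESXi host over SSH you would inspect the real descriptor, e.g.:
#   grep -i parent /vmfs/volumes/datastore1/myvm/myvm-000001.vmdk
# Simulated here with a minimal delta-disk descriptor file:
cat > /tmp/myvm-000001.vmdk <<'EOF'
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=fffffffe
parentFileNameHint="myvm.vmdk"
EOF
grep -i parent /tmp/myvm-000001.vmdk
```

If the grep prints a parentFileNameHint, the file is a snapshot still chained to that base disk.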

An easy way to verify whether the snapshot file is linked or not is to migrate the VM to another datastore. The VM’s files move to the new datastore, while any orphaned files remain in the old one. Browsing the old datastore, you can see the ghost snapshot file that caused the problem. Once the whole troubleshooting session is finished and everything works correctly, you can delete the old datastore folder.


Even though I migrated the VM to another datastore, the problem remained, and the VDP backup failed with the error “VDP: An unexpected error occurred with the following error code: 10058.”.



Finally, I migrated the VM to another host, and the VDP backup finished successfully.

VMware’s Horizon View 7 agent unreachable error

This was a mind-twisting problem! I upgraded our Horizon View infrastructure to version 7 on our ESXi 6 U1 hosts. After that I was able to compose new VDIs, but their DNS names in the admin console always showed the name of the master image! An “agent unreachable” error terminated the procedure.

I tried almost everything. I checked registry keys and reinstalled the Horizon agent and the VMware Tools, but the DNS name in the admin console still showed the name of the master image. On the Active Directory side everything was fine: the DNS names were right and DHCP was working normally. I checked the connectivity between the new VDIs and the connection servers and everything was excellent! The listening ports on the connection servers were wide open and there was no firewall in between. I even reinstalled the composer servers and the connection servers.

I started freaking out, so I went back to basics. The composing phase was working fine; the VDIs just could not pass their DNS information to the connection servers. I began to suspect either the Horizon agent or the VMware Tools. The agent was the latest version, so what about the Tools? Then I came across the release notes of Horizon View 7, stating in red:

“Important note about installing VMware Tools

If you plan to install a version of VMware Tools downloaded from VMware Product Downloads, rather than the default version provided with vSphere, make sure that the VMware Tools version is supported. To determine which VMware Tools versions are supported, go to the VMware Product Interoperability Matrix, select the solution VMware Horizon View and the version, then select VMware Tools (downloadable only).”

So I went to VMware’s PIM page and I found this…

Hell, my poor ESXi 6 U1 installs VMware Tools 9.6 by default on my VMs!

Finally, installing the latest 10.0.8 version solved the problem!

VMware vCenter error “Unable to access file since it is locked” during disk consolidation solved

My setup consists of a vCenter 6 appliance and a VMware Data Protection (VDP) 6 appliance. Every day a scheduled job backs up all the VMs in our organization differentially. Unfortunately, this does not always work as smoothly as I would like. One day I came across the “Unable to access file since it is locked” error. The first sign was a failure log from the VDP appliance in my e-mail inbox, but I am used to logs like this, since, as I said, the VDP appliance does not always run smoothly. So I logged into the vCenter web client, connected to the Data Protection appliance, and re-ran the failed job for the outdated VMs. Unfortunately the job failed again!

Then I noticed an error message in my vCenter client saying “Virtual machine disks consolidation is needed”.


OK, I said: it seemed the backup job had not deleted its backup snapshot, so let’s wipe it out manually. I opened the snapshot manager window and pressed the “Delete All” button; after a few minutes a new error message appeared.


Pressing the “Consolidate” button did not solve the problem either. It seemed that the problem was bigger than I first thought. The events on the problematic VM matched the whole picture of the problem.


My internet searches were not fruitful. I guessed that the VDP appliance was the root of the problem, so I shut it down and tried to delete all the snapshots of the problematic VM. Voila, I was near the solution: the virtual disks consolidated into one disk, and the VM’s disk property pointed to the right vmdk file.


Browsing the VM’s datastore, I noticed that an old differential disk file had not been deleted (it is the vmdk file whose filename ends with a number). This was the virtual disk that caused the trouble.
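When it is not obvious who is holding such a file locked, you can ask the host itself from an SSH session (path hypothetical); in recent ESXi versions the lock dump includes an owner field containing the MAC address of the ESXi host that holds the lock:

```
vmkfstools -D /vmfs/volumes/datastore1/myvm/myvm-000001.vmdk
```

Matching that MAC address against your hosts tells you where to look for the process, in my case the VDP appliance, that still has the disk attached.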


I deleted it by right-clicking the vmdk file in the datastore browser window, then restarted the VM to check that everything was OK.

Then I powered on the Data Protection appliance, but it failed to boot up correctly with the following error:


This was the root of the problem: the VDP appliance had mounted the problematic virtual disk, which is why the file was locked on the VM. Editing the VDP appliance’s properties, I removed the mounted virtual disk that belonged to my problematic VM, leaving only the four default virtual disks of the appliance.


The appliance powered on smoothly, and I verified via the VDPA console that the services were up and running.

Job automation is good when it does not fail! When it fails, it causes nothing but trouble. Good luck!