My master server went down today, and after I brought it back up, MySQL received a couple hundred connections from my slave server. I looked on the slave, and there were a couple hundred server.sh processes running. Apparently they were all in the wait loop, waiting for the master to come back. MySQL on the master ran out of connections, so there was no choice but to kill the processes on the slave.

After I did that everything was fine, but the server.sh processes started to build up again. The kills had left a .ispconfig_lock file behind, and all the new processes were waiting for it to clear, which of course it never will unless it is deleted manually. I killed the server.sh processes again, deleted the lock file, and now it's OK again.

This problem could be solved by putting the PID of the server.php process in the lock file. The next process in line can then read it and check whether that process still exists. If not, it can replace the stale lock with its own PID and go ahead and run. If the other process is still running, the new one should really just die and let the next cron run pick things up rather than wait - the script is going to run from cron in another minute regardless.

Anyway, this is a potential headache every time a lock file is left behind, or the slave cannot contact the master.
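The stale-lock check described above could look something like this - a minimal sketch of a cron wrapper, not ISPConfig's actual code. The lock file path is just an example, and the placement of server.php is assumed:

```shell
#!/bin/sh
# Hypothetical stale-lock handling for a cron wrapper like server.sh.
# Assumes the previous run wrote its PID into the lock file.
LOCKFILE=/tmp/.ispconfig_lock   # example path for illustration

if [ -f "$LOCKFILE" ]; then
    oldpid=$(cat "$LOCKFILE")
    # kill -0 tests for process existence without sending a signal
    if kill -0 "$oldpid" 2>/dev/null; then
        # A run is genuinely still active: exit now, cron retries in a minute
        exit 0
    fi
    # The PID is gone, so the lock is stale: remove it and carry on
    rm -f "$LOCKFILE"
fi

# Take the lock with our own PID, do the work, then release it
echo $$ > "$LOCKFILE"
# ... run server.php here ...
rm -f "$LOCKFILE"
```

With this in place a leftover lock file clears itself on the next cron run instead of piling up waiting processes, and a live lock causes an immediate exit instead of a wait loop.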