Old 13th July 2012, 09:42
till
Super Moderator
Join Date: Apr 2005
Location: Lüneburg, Germany

I should probably add that I (accidentally) answered yes to the question "will this server host the ispconfig interface." I'm not sure if that will cause any problem..?
That's not a problem, as long as you don't log in to the interface on the slave server.

Then I installed apache and ftp to container one, email on container two and DNS on container 3. I had set up container one to be the master and the other containers to use that master. All containers use the controller's mysql server. Everything works so far with no problems.
ISPConfig uses the MySQL instance on each server as a cache, both to speed up operations and to ensure that your services don't go down when the master MySQL server is down. So the configuration you set up is not recommended, for several reasons:

1) It introduces a single point of failure, so your whole cluster will now be down when a single MySQL database fails.
2) It is slower, as you have disabled the caching.
3) It might fail with connection errors under higher load.
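In the recommended setup, each slave keeps its own local MySQL database as the cache and additionally knows the master database. On an ISPConfig 3 slave the relevant settings live in /usr/local/ispconfig/server/lib/config.inc.php; the snippet below is a hedged sketch of that config fragment, and 'master.example.com' is a placeholder, not a value from this thread:

```php
// Local MySQL instance on the slave itself -- acts as the cache,
// so the node keeps working even if the master database is down.
$conf['db_host']     = 'localhost';
$conf['db_database'] = 'dbispconfig';

// Master database that the slave pulls its configuration from.
// 'master.example.com' is a placeholder for your master's hostname.
$conf['dbmaster_host']     = 'master.example.com';
$conf['dbmaster_database'] = 'dbispconfig';
```

If all four containers point db_host at the controller's MySQL server instead of localhost, the cache is gone and every node depends on that single database.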

Anyway, I see the OpenVZ hosts in the control panel now. The only thing that does not work is, well, every OpenVZ setting I change. So starting/stopping containers, creating them, et cetera. And now I also notice that all actions in the sys_remoteaction table of container one are stuck in the pending state, for all server_ids 1-4, i.e. http/ftp, email, DNS and OpenVZ.
Please use the debugging instructions from the FAQ to find out what's wrong.
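The usual debugging procedure for a stuck slave, sketched below, assumes a standard ISPConfig 3 installation with the server script at /usr/local/ispconfig/server/server.sh; these commands need a live ISPConfig server, so treat them as a sketch and follow the FAQ for the authoritative steps:

```shell
# 1) In the interface, set the log level for the affected server
#    to "Debug" (System > Server Config).

# 2) Temporarily disable the server.sh cron job so manual runs don't overlap:
crontab -e   # comment out the line ending in /usr/local/ispconfig/server/server.sh

# 3) Run the server script by hand as root and read the debug output:
/usr/local/ispconfig/server/server.sh

# 4) Optionally inspect the queue of pending actions in the master database
#    (table and column names as used by ISPConfig 3):
mysql dbispconfig -e "SELECT action_id, server_id, action_type, action_state \
  FROM sys_remoteaction WHERE action_state = 'pending';"
```

Remember to re-enable the cron job and set the log level back once you are done.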

I would appreciate help with solving this. The server is not in production so if it's better to reinstall the whole thing I'll go and do that. But maybe it is something that can be easily fixed.
A reinstall should not be necessary.
Till Brehm