HowtoForge Forums > Linux Forums > Installation/Configuration
  #1  
Old 13th November 2008, 15:51
websissy websissy is offline
Junior Member
 
Join Date: Aug 2008
Posts: 12
Thanks: 3
Thanked 0 Times in 0 Posts
Help! Why do I see a message about Apache, cPanel & WHM? I don't run cPanel!

My dedicated web server has been configured and running fine for over 2 months now. I've made no configuration changes to the server in over a month and have had 17 sites running there since the end of September.

I do NOT run cPanel on my server and I'm not YET running ISPConfig either. Each of my sites was set up manually using the standard Debian/Apache virtual host definition files in /etc/apache2/sites-available and /etc/apache2/sites-enabled.

Then suddenly last night the sites started disappearing one by one and were replaced with a screen whose heading says (see attached):

Great Success! Apache is working on your cPanel® and WHM™ Server

As I said, I do NOT run cpanel on my server at all and I'm not YET running ISPConfig either. Each of my sites was set up manually using the standard Apache virtual host definition files and they've all been working fine for weeks.

So, why the hell am I suddenly getting this Apache and cPanel message, and how do I get rid of it? Although the files in /etc/apache2/sites-enabled and sites-available seem to be intact, I can't for the life of me figure out why I'm now getting this message.
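For anyone debugging something similar: a quick, read-only sanity check of which vhosts Apache is actually serving might look like this (paths assume Debian's default Apache 2 layout, as on this server):

```shell
# List the virtual hosts Apache has actually parsed, and from which files
apache2ctl -S

# Confirm the expected symlinks in sites-enabled are still in place
ls -l /etc/apache2/sites-enabled/

# Scan recent errors (log path is the Debian default)
tail -n 50 /var/log/apache2/error.log
```

If `apache2ctl -S` lists all the expected vhosts, the configuration side is probably fine and the problem is upstream of Apache.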

Can anyone offer me a CLUE as to what would cause this to happen?

I have spot-checked the individual domains and their files in the separate web accounts. From what I can tell, everything seems to be intact and present. It was just a cursory spot check, of course, but from what I can see, the web domains and their files all seem to be untouched.

So, what would cause Apache to suddenly NOT be able to find the web domains it has been able to locate since August? Have I been hacked somehow? Or is the cause of this problem likely to be less ominous than that?

If it helps, I'm running Debian Etch 4.0r3 and maradns is my DNS server. Mara is able to dig all of the sites fine and sees no problem. For all intents and purposes, I am my own dedicated server supplier. There is essentially no one upstream from me. The supplier who provides this server offers no support beyond the most basic "Can you log into your server?" and answers no questions beyond that.

As I configured it, my server acts as its own DNS host/server, with a domain on the server dedicated to that purpose. I use maradns rather than Bind9 because it requires fewer system resources, was supposed to be MUCH easier to set up than Bind9, and wasn't subject to the same security exploits Bind9 was. Mara has worked well for 3 months now.

Other relevant details:

"dig domainname" reports no issues with any of the domains on my server at the moment.
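The dig checks above were along these lines (domainname.org stands in for one of the real domains; 127.0.0.1 queries the local maradns instance directly, while 4.2.2.1 is a public resolver used for comparison, to see what the rest of the world gets):

```shell
# Ask the local DNS server directly
dig @127.0.0.1 domainname.org A +short

# Ask an outside resolver to compare answers
dig @4.2.2.1 domainname.org A +short

# Walk the delegation from the root, roughly what intodns.com checks
dig domainname.org NS +trace
```

If the local answer is correct but the outside resolver times out or returns something else, the problem is in the delegation or upstream, not in the zone files.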

I am able to "ping" both IP addresses on the server and ping gets a response. However, I did go to intodns.com and tried their tool to test two of my 17 domains. The only errors intodns reports are:
  1. Nameservers are lame ERROR: looks like you have lame nameservers. The following nameservers are lame: 219.18.180.89
    The other IP address involved shows no errors, and from what I can tell in my research this is not a fatal error on its own.

  2. Different subnets WARNING: Not all of your nameservers are in different subnets
    This is a yellow (cautionary) error

  3. Different autonomous systems WARNING: Single point of failure
    This is a yellow (cautionary) error

  4. SOA MNAME entry WARNING: SOA MNAME (domainname.org) is not listed as a primary nameserver at your parent nameserver!
    This is a yellow (cautionary) error
It reports the same errors for both of the domains I checked.

My research suggests none of these errors should be deal-killers on their own. Am I correct in reaching that conclusion?

Any thoughts or suggestions would be GREATLY appreciated! My server is down at the moment by my own choice, so none of the 17 domains can be accessed until I figure out the cause of this problem.

Help!!! Can anyone advise me? I haven't a CLUE how to troubleshoot this issue. Thanks!
Last edited by websissy; 13th November 2008 at 16:27.
  #2  
Old 14th November 2008, 13:11
falko falko is offline
Super Moderator
 
Join Date: Apr 2005
Location: Lüneburg, Germany
Posts: 41,701
Thanks: 1,900
Thanked 2,721 Times in 2,562 Posts

Any errors in Apache's error log?

Can you tell me one of the affected domains so that I can do some tests?
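For readers following along, checking Apache's error log on a default Debian install would look something like:

```shell
# Watch the error log live while reproducing the problem
tail -f /var/log/apache2/error.log

# Or scan the most recent error entries
grep -i error /var/log/apache2/error.log | tail -n 100
```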
__________________
Falko
  #3  
Old 14th November 2008, 19:08
websissy websissy is offline
Junior Member
 
Join Date: Aug 2008
Posts: 12
Thanks: 3
Thanked 0 Times in 0 Posts
How we analyzed and fought a DDoS attack

Quote:
Originally Posted by falko View Post
Any errors in Apache's error log?

Can you tell me one of the affected domains so that I can do some tests?
Thanks for the reply, falko. What turned out to be a DDOS attack has subsided and things are back to normal. I PMed you some domain names.

We concluded this was caused by a DDOS attack. Here is what we know now.

The problem first began to appear around 8pm server time on Wednesday. By 7:30pm Thursday night the problem had disappeared and things were back to normal.

At first we assumed we were seeing a hacker or spam attack and possibly a password cracking attack on our server. It took hours to realize it was actually a DDOS attack. At first, we did what made sense to combat a hack attack. Later our strategy became focused on stopping a DDOS attack. Here is our diary of events and actions taken over the period of the attack.
  • 10pm-12am Wed/Thurs - 2 hours after the attack started and an hour before it reached its peak, changed all user passwords on the server to 12-character alphanumerics. Average number of attempts required to crack one of these passwords: 1,106,838,475,932,200,000,000 -- just over 1 sextillion. If this was a password-cracking attack on any of the server's domains or its email accounts, our attackers were in for some hard work.
  • 12am-1am Thur - Conducted a site-by-site security and stability check of all sites on server. All sites appeared intact and secure. Server's domains are still inaccessible due to what appear to be DNS timeouts. Server seems to be VERY busy handling traffic. DNS server is holding its own. Experienced 3 or 4 (puTTY/ssh) server connection losses during this period.
  • 1am-1:30am - With passwords secure and server sites stable, we shut down the most vulnerable and attackable apps on server -- including guestbooks, mysql, etc. Database passwords to all mysql apps were also changed. If domains can't be accessed, there's no sense inviting trouble. Still seeing 1 or 2 server connection losses per hour with (puTTY/ssh).
  • 1:30am-2:30am - Posted inquiries and requests to trusted Linux support sites seeking advice on how to analyze, diagnose and solve the problem.
  • 2:30am - Went to bed exhausted. This was going to take a while!
  • 5:30am-7:30am - Rebooted server and conducted a review of overnight traffic and logs. 40,000 errors had occurred overnight in the email system alone due to password failures. The locally-installed mailman listserv app also experienced many errors, but no successful security breaches could be identified. All user domains still seem intact and undamaged. This was when we began to believe a DDOS attack was the cause of the problem. Domains on the server are still inaccessible -- apparently due to heavy net traffic. Despite the server password changes, the attack had not subsided. Other than full log files, damage seemed limited. The security barriers were holding up well. Checked web advice requests. No responses yet.
  • 7:30am-11:30am - Posted a few more web advice requests. Responded to questions, suggestions and advice from hosting clients and web contacts. Conducted web-research on as-yet-unidentified server vulnerabilities, tools and methods to identify, analyze and fight DDOS attacks. Also researched server hardening strategies, techniques, tools and options. The news isn't good. Even the world's top DDOS experts say these attacks are tough to identify, hard to fight and can take many forms. The tools available to fight them are also limited and expensive. Ran a few tests, but made no server changes. There's no sense being caught with our pants down while we're under attack!
  • 11:30am-12:30pm - Domains on server still inaccessible due to what appears to be heavy traffic. A second error review showed thousands of new errors from the email server. These guys weren't giving up easy! Decided to shut down Apache, the email server, the mailman listserv and the DNS server.
  • 1:30pm-3:30pm - Traffic loads continued to fall. Server connection losses are not as frequent. Some local domain home pages do occasionally appear now. However, the attack has not completely subsided. Left key server apps shut down during this period.
  • 3:30pm-6:30pm - Either the attack mitigation strategies were successful or the attack was timed to last 24 hours. Over these hours the intensity of the attack gradually subsided. As it did, we brought more of the server's apps back online. By 5:00pm, 50% of local domain requests were successful in displaying the domain's home page. Those that were MOST successful were the ones routed to the IP address allocated to the second DNS. A server damage assessment conducted after 18 hours showed no visible user domain or server damage or penetration.
  • By 8:30pm, roughly 24 hours after the attack began, all domains were working again and the Apache/cpanel/WHM screen had disappeared.
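As a sanity check on the password arithmetic in the diary above, the keyspace can be computed directly. This sketch assumes a 62-symbol alphanumeric alphabet (a-z, A-Z, 0-9); the figure quoted in the diary evidently assumes a somewhat different symbol set, since the exact number depends entirely on which characters are allowed:

```python
# Brute-force keyspace for a 12-character password drawn from
# a 62-symbol alphanumeric alphabet (a-z, A-Z, 0-9).
ALPHABET = 62
LENGTH = 12

keyspace = ALPHABET ** LENGTH   # total possible passwords
avg_attempts = keyspace // 2    # expected attempts before a brute-force hit

print(f"keyspace:      {keyspace:,}")   # about 3.2 sextillion
print(f"avg. attempts: {avg_attempts:,}")
```

Either way, the order of magnitude is the same: on the order of a sextillion guesses, which puts online brute-forcing out of reach.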
Conclusion: We now believe the Apache/cpanel/WHM screen we were seeing was being displayed by our server supplier's upstream DNS server because our local DNS server was failing to respond fast enough. The fact that we heard nothing from our server supplier during this period suggests ours was not the only server in their datacenter that was under attack...

What say you, falko? Did we get the situational, strategic and tactical analysis and combat techniques right or did we screw up somewhere?

Thanks again for your comments, thoughts, insights and suggestions.

Last edited by websissy; 14th November 2008 at 19:26.
  #4  
Old 18th November 2008, 22:16
marpada marpada is offline
Senior Member
 
Join Date: Sep 2008
Posts: 139
Thanks: 2
Thanked 14 Times in 14 Posts
 

Just a few tips: fail2ban, mod_evasive, and iptables rules for banning attacking IPs. There are many more tools, but I think these are the basic ones!
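For concreteness, a sketch of the iptables side of that advice (203.0.113.45 is a documentation address standing in for a real attacker; the thresholds are illustrative, and applying any of this requires root):

```shell
# Drop all traffic from a known-bad source address
iptables -A INPUT -s 203.0.113.45 -j DROP

# Crude rate limit: track new connections to port 80, and drop any
# source that opens more than 20 of them within 10 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --update --seconds 10 --hitcount 20 -j DROP
```

fail2ban automates the first pattern by watching log files and inserting bans for you; mod_evasive does similar throttling inside Apache itself.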