Switching off mail

Discussion in 'General' started by Toucan, Jan 8, 2013.

  1. Toucan

    Toucan New Member

    Server 1 is the master, running all the usual services on Debian Lenny with ISPConfig 3.0.4.6.

    Server 2 is a slave VDS, currently running all the usual services, but in practice it mostly serves web sites.

    For a couple of days now, at a certain time each day, the load on server 2 dramatically increases and brings it to a halt for 30 minutes or so. The system log around that time indicates it is running out of memory. From looking at the logs, I think this is down to a massive influx of incoming (or possibly outgoing) mail; the deferred queue seems to suddenly grow at this time.
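    For what it's worth, the deferred queue growth can be watched directly; a quick sketch, assuming the default Debian Postfix spool path (adjust if your queue_directory differs):

```shell
#!/bin/sh
# Count messages currently sitting in the Postfix deferred queue.
# /var/spool/postfix is the Debian default queue_directory; this is an
# assumption -- check "postconf queue_directory" on your box.
find /var/spool/postfix/deferred -type f 2>/dev/null | wc -l
```

    Running this from cron and logging the count alongside the Munin graphs makes it easy to see whether the queue spike and the load spike line up.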

    I've moved all mailboxes over to my other physical server, so I don't actually need mail services on VDS server 2.

    My thoughts are to turn off Postfix and associated services on server 2 to free up the memory. Would anyone agree that this is probably the right course of action?

    If so, what's the best way to turn these services off, please? I appreciate I can turn them off in the control panel, but will this actually switch off the services, or just disassociate them from ISPConfig?



    Here is an extract from the syslog about the time issues cropped up:
    Code:
    Jan  8 14:11:46 badbison kernel: [2605308.971139]  [<ffffffff80281ae9>] handle_mm_fault+0x452/0x8de
    Jan  8 14:11:46 badbison kernel: [2605308.971236]  [<ffffffff802507a8>] do_futex+0xa6/0x78a
    Jan  8 14:11:46 badbison kernel: [2605308.971325]  [<ffffffff80221fbc>] do_page_fault+0x5d8/0x9c8
    Jan  8 14:11:46 badbison kernel: [2605308.971415]  [<ffffffff8042aaf9>] error_exit+0x0/0x60
    Jan  8 14:11:46 badbison kernel: [2605308.971502]
    Jan  8 14:11:46 badbison kernel: [2605308.971560] Mem-info:
    Jan  8 14:11:46 badbison kernel: [2605308.971623] Node 0 DMA per-cpu:
    Jan  8 14:11:46 badbison kernel: [2605308.971693] CPU    0: hi:    0, btch:   1 usd:   0
    Jan  8 14:11:46 badbison kernel: [2605308.971773] Node 0 DMA32 per-cpu:
    Jan  8 14:11:46 badbison kernel: [2605308.971845] CPU    0: hi:  186, btch:  31 usd: 158
    Jan  8 14:11:46 badbison kernel: [2605308.971926] Active:110286 inactive:122375 dirty:0 writeback:0 unstable:0
    Jan  8 14:11:46 badbison kernel: [2605308.971927]  free:1991 slab:5625 mapped:1 pagetables:11095 bounce:0
    Jan  8 14:11:46 badbison kernel: [2605308.972157] Node 0 DMA free:4008kB min:40kB low:48kB high:60kB active:4132kB inactive:3472kB present:10792kB pages_scanned:14707 all_unreclaimable? yes
    Jan  8 14:11:46 badbison kernel: [2605308.972346] lowmem_reserve[]: 0 994 994 994
    Jan  8 14:11:46 badbison kernel: [2605308.972427] Node 0 DMA32 free:3956kB min:4012kB low:5012kB high:6016kB active:437012kB inactive:486028kB present:1018016kB pages_scanned:1695293 all_unreclaimable? yes
    Jan  8 14:11:46 badbison kernel: [2605308.972670] lowmem_reserve[]: 0 0 0 0
    Jan  8 14:11:46 badbison kernel: [2605308.972746] Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 0*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 4008kB
    Jan  8 14:11:46 badbison kernel: [2605308.972924] Node 0 DMA32: 133*4kB 14*8kB 1*16kB 1*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3956kB
    Jan  8 14:11:46 badbison kernel: [2605308.973106] 165 total pagecache pages
    Jan  8 14:11:46 badbison kernel: [2605308.973179] Swap cache: add 19562410, delete 19562410, find 26634828/29115419
    Jan  8 14:11:46 badbison kernel: [2605308.973321] Free swap  = 0kB
    Jan  8 14:11:46 badbison kernel: [2605308.973389] Total swap = 498004kB
    Jan  8 14:11:46 badbison kernel: [2605308.976187] 262144 pages of RAM
    Jan  8 14:11:46 badbison kernel: [2605308.976265] 4851 reserved pages
    Jan  8 14:11:46 badbison kernel: [2605308.976335] 77792 pages shared
    Jan  8 14:11:46 badbison kernel: [2605308.976403] 0 pages swap cached
    Jan  8 14:11:46 badbison kernel: [2605308.976474] Out of memory: kill process 21960 (apache2) score 341562 or a child
    Jan  8 14:11:46 badbison kernel: [2605308.976644] Killed process 23876 (php-cgi)
    Jan  8 14:11:59 badbison kernel: [2605323.173651] munin-node invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
    Jan  8 14:11:59 badbison kernel: [2605323.173810] Pid: 4264, comm: munin-node Not tainted 2.6.26-2-amd64 #1
    Jan  8 14:11:59 badbison kernel: [2605323.173903]
    Jan  8 14:11:59 badbison kernel: [2605323.173904] Call Trace:
    Jan  8 14:11:59 badbison kernel: [2605323.174042]  [<ffffffff80273994>] oom_kill_process+0x57/0x1dc
    Jan  8 14:11:59 badbison kernel: [2605323.174133]  [<ffffffff8023b519>] __capable+0x9/0x1c
    Jan  8 14:11:59 badbison kernel: [2605323.174216]  [<ffffffff80273cbf>] badness+0x188/0x1c7
    Jan  8 14:11:59 badbison kernel: [2605323.174301]  [<ffffffff80273ef3>] out_of_memory+0x1f5/0x28e
    Jan  8 14:11:59 badbison kernel: [2605323.174394]  [<ffffffff80276c44>] __alloc_pages_internal+0x31d/0x3bf
    Jan  8 14:11:59 badbison kernel: [2605323.174490]  [<ffffffff802788fa>] __do_page_cache_readahead+0x79/0x183
    Jan  8 14:11:59 badbison kernel: [2605323.174586]  [<ffffffff802731a9>] filemap_fault+0x15d/0x33c
    Jan  8 14:11:59 badbison kernel: [2605323.174675]  [<ffffffff8027e728>] __do_fault+0x50/0x3e6
    
     
  2. till

    till Super Moderator

    A Linux server should always run at least a minimal mail server instance, so I wouldn't disable Postfix. What you can do to free up memory is this:

    Disable mail scanning with amavisd in Postfix; see the article in the ISPConfig FAQ for detailed steps.
    Stop and disable amavisd, clamav, freshclam, and dovecot or courier.

    This should free most of the memory used by the mail system and leave just Postfix running.
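    On a sysvinit system like Lenny, the stop-and-disable step could look like the sketch below. The service script names are assumptions (they vary between Debian releases and between dovecot/courier installs), so check what actually exists in /etc/init.d first:

```shell
#!/bin/sh
# Sketch: stop the mail-scanning daemons and remove them from the boot
# runlevels on a sysvinit system. Service names are assumptions -- check
# the actual script names in /etc/init.d on your server.
disable_mail_scanners() {
    initdir=${1:-/etc/init.d}
    for svc in amavis clamav-daemon clamav-freshclam dovecot; do
        [ -x "$initdir/$svc" ] || continue   # skip anything not installed
        "$initdir/$svc" stop                 # stop the running daemon
        update-rc.d -f "$svc" remove         # keep it from starting at boot
    done
}
# disable_mail_scanners    # uncomment to run against /etc/init.d
```

    Remember to also comment out the amavis content_filter line in /etc/postfix/main.cf and restart Postfix, per the FAQ article, or Postfix will keep trying to hand mail to a scanner that is no longer running.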
     
  3. Toucan

    Toucan New Member

    Thanks, worked a treat. Munin shows a significant reduction in memory usage, and the web server certainly seems a lot snappier.
     
  4. Toucan

    Toucan New Member

    Apps creeping back up

    Ok, so now a week on, I'm looking again at the Munin charts. Deferred mail has dropped dramatically and the load rarely creeps above 1.

    What does puzzle me, though, is how the memory is being managed.
    When amavis was switched off there was a definite dip in the amount of memory being used by apps, but now it's started to creep back up again. I see the cache has also grown into the free memory, but I assumed this was a good thing.

    Can you explain what is happening with the apps?

    Code:
    total       used       free     shared    buffers     cached
    Mem:       1029428     969240      60188          0      13732      79880
    -/+ buffers/cache:     875628     153800
    Swap:       498004     498004          0
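    For anyone reading along: the "-/+ buffers/cache" line is the one to watch, since buffers and cache are reclaimable. On pre-3.14 kernels like Lenny's 2.6.26 there is no MemAvailable field in /proc/meminfo, so the usual approximation is MemFree + Buffers + Cached; a small sketch:

```shell
#!/bin/sh
# Estimate memory actually available to applications, in kB, as
# MemFree + Buffers + Cached. This is the classic approximation for
# kernels without a MemAvailable line in /proc/meminfo.
avail_kb() {
    awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum}' "$1"
}
avail_kb /proc/meminfo
```

    Run against the numbers in the free output above, this gives 60188 + 13732 + 79880 = 153800 kB, matching the "free" column of the -/+ buffers/cache row.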
     
  5. till

    till Super Moderator

  6. Toucan

    Toucan New Member

    Perfect sense. Thank you.
     
  7. Toucan

    Toucan New Member

    Problem not yet solved

    Thanks Till

    I thought the problem was solved by freeing up this memory after switching off amavis, but the server came to a standstill again today.

    Having had a look at the syslog, as below, it keeps reporting memory problems. At the time of these problems, Apache stops responding for a good 20 minutes and I can't even get in through SSH; it just hangs.

    I've looked at the charts from Munin, and although the load does go high at this time, I'm assuming that is caused by the lack of memory. I'll try to attach the graphs below.

    If you wouldn't mind helping me sort this out it would be much appreciated.
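    While waiting for advice: it can help to record which processes hold the most resident memory just before the stall. A sketch that walks /proc directly (so it works even when procps tools are sluggish under memory pressure):

```shell
#!/bin/sh
# Print the five biggest resident-memory consumers, RSS in kB, by
# reading /proc/<pid>/status directly. Kernel threads (no VmRSS line)
# and processes that exit mid-scan are silently skipped.
for d in /proc/[0-9]*; do
    awk '/^Name:/ {name=$2} /^VmRSS:/ {rss=$2}
         END {if (rss) print rss, name}' "$d/status" 2>/dev/null
done | sort -rn | head -5
```

    Running this from cron every few minutes and logging the output would show whether it is apache2/php-cgi growing until the OOM killer fires, as the syslog extract suggests.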
     

    Attached Files:

  8. falko

    falko Super Moderator
