
Issue: Plesk Premium Email high memory usage

mohmede

Basic Pleskian
After installing Plesk Premium Email, I have a problem with memory usage and the server crashes.

The process that consumes the memory is:
Subscription: -
User: root
Full command: /usr/lib64/erlang/erts-7.3.1.2/bin/beam.smp -Bd -- -root /usr/lib64/erlang/lib/kolab_guam-0.9.4 -progname usr/lib64/erlang/lib/kolab_guam-0.9.4/bin/guam -- -home /opt/kolab_guam/ -- -noshell -noshell -noinput -boot /usr/lib64/erlang/lib/kolab_guam-0.9.4/releases/0.9.4/guam -mode embedded -boot_var ERTS_LIB_DIR /usr/lib64/erlang/lib -config /usr/lib64/erlang/lib/kolab_guam-0.9.4/releases/0.9.4/sys.config -name [email protected] -setcookie kolab_guam -pa -- foreground
PID: 29865
Priority: 20
TTY: ?
State: Sleeping
CPU usage: 11.2 %
CPU time: 00:07:50
RAM usage: 26.2 %
RAM virt: 4.83 GiB
RAM res: 4.07 GiB
Disk I/O (read): 0 KiB/s
Disk I/O (write): 0


 
The problem is still present. The process is:
usr/lib64/erlang/erts-7.3.1.2/bin/beam.smp -Bd -- -root /usr/lib64/erlang/lib/kolab_guam-0.9.4 -progname usr/lib64/erlang/lib/kolab_guam-0.9.4/bin/guam -- -home /opt/kolab_guam/ -- -noshell -noshell -noinput -boot /usr/lib64/erlang/lib/kolab_guam-0.9.4/releases/0.9.4/guam -mode embedded -boot_var ERTS_LIB_DIR /usr/lib64/erlang/lib -config /usr/lib64/erlang/lib/kolab_guam-0.9.4/releases/0.9.4/sys.config -name [email protected] -setcookie kolab_guam -pa -- foreground

How can this process use this much RAM?
 
We don't have enough information yet. You might want to research how to troubleshoot high memory usage on Linux, but to begin with:

Please log in to the shell of your server and post the output of this command:
Code:
free

Please also post the output of the following command (it will display the top 30 processes that use the most resident memory, in kB):
Code:
ps axo pid,rss,comm=|sort -n -k 2 |tail -n 30

If your server needs to be restarted because of this issue or for any other reason, it would help to run the above commands right after the restart and again after some time has passed.
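If restarts keep happening, it can also help to log memory usage periodically so you have a history to look at. A minimal sketch, assuming a root entry in /etc/crontab and a log file path of your choosing (interval and path are only examples):
Code:
# snapshot free memory and the top consumers every 5 minutes
*/5 * * * * root { date; free; ps axo pid,rss,comm= | sort -n -k 2 | tail -n 30; } >> /var/log/mem-history.log 2>&1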

Do you have the sysstat utility installed? If so, this command will display the memory usage history for the day:
Code:
sar -r

Is this a VPS or a dedicated server? Could you please tell us your exact OS and Plesk version?
 
OS: CentOS 7 on a VPS with 16 GB RAM, running Plesk Obsidian.

Output of (free)
Code:
[root@server ~]# free

              total        used        free      shared  buff/cache   available
Mem:       16265944    11665308     2789980      952732     1810656     3355724
Swap:             0           0           0

Output of ( ps axo pid,rss,comm=|sort -n -k 2 |tail -n 30 )

Code:
 8923 32676 python2.7

32726 32680 python2.7
 9364 32680 python2.7
 8921 32688 python2.7
32724 32692 python2.7
 8922 32700 python2.7
32727 32712 python2.7
32728 32720 python2.7
32729 32720 python2.7
18191 34464 php-cgi
18336 34472 php-cgi
18204 34496 php-cgi
18332 34508 php-cgi
18203 35916 php-cgi
 2405 36516 grafana-server
18134 37828 php-cgi
18097 38152 php-cgi
18101 38292 php-cgi
18297 38500 php-cgi
18288 41340 php-fpm
14056 60164 sw-engine-fpm
 5761 64536 sw-engine-fpm
25336 65900 /usr/bin/spamd
 3888 67696 sw-engine-fpm
26985 70804 sw-engine-fpm
 3305 73668 spamd child
 2282 74608 spamd child
 1483 248136 named
17740 1243712 mysqld
17276 8923056 beam.smp

Output of ( sar -r )

Code:
-bash: sar: command not found
 
To get the correct output from the "sar -r" command, install the sysstat utility first:
Code:
yum install sysstat
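Note that right after installation "sar -r" may still show no history: data only appears once samples have been collected for a while (on CentOS 7 collection is normally driven by a cron job in /etc/cron.d/sysstat, with the sysstat service writing the boot record). The usual advice is to enable and start the service and check again later:
Code:
systemctl enable sysstat
systemctl start sysstat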

In any case, beam.smp seems to be using almost 9 GB of RAM. Your server has more than 3 GB free.

How many mailboxes are there? How many hosted domains?

Also, these commands should tell you if there were any out of memory occurrences lately:
Code:
grep -i 'out of memory' /var/log/messages
grep -i 'killed process' /var/log/messages
dmesg | grep -E -i 'killed process'
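The dmesg variant prints timestamps as seconds since boot, which is hard to correlate with the syslog entries; if your dmesg supports it (the util-linux version on CentOS 7 does), -T converts them to wall-clock time:
Code:
dmesg -T | grep -E -i 'killed process'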
 
I have 4 domains; only one is using Plesk Premium Email.
Total mailboxes: 140, with a total size of 40 GB.


Code:
[root@server ~]# grep -i 'out of memory' /var/log/messages
Sep 29 11:35:19 server kernel: Out of memory: Kill process 14663 (beam.smp) score 763 or sacrifice child
Sep 29 11:35:19 server kernel: Out of memory: Kill process 14663 (beam.smp) score 763 or sacrifice child
Sep 29 11:35:19 server kernel: Out of memory: Kill process 14663 (beam.smp) score 763 or sacrifice child
Sep 29 12:40:32 server kernel: Out of memory: Kill process 7010 (beam.smp) score 762 or sacrifice child
Sep 29 12:40:32 server kernel: Out of memory: Kill process 7010 (beam.smp) score 762 or sacrifice child
Sep 29 12:40:32 server kernel: Out of memory: Kill process 7112 (child_waiter) score 762 or sacrifice child
Sep 29 12:40:32 server kernel: Out of memory: Kill process 7113 (1_scheduler) score 762 or sacrifice child
Sep 29 12:40:32 server kernel: Out of memory: Kill process 7114 (2_scheduler) score 762 or sacrifice child
Sep 29 13:35:39 server kernel: Out of memory: Kill process 24470 (beam.smp) score 756 or sacrifice child
Sep 29 13:35:39 server kernel: Out of memory: Kill process 24470 (beam.smp) score 756 or sacrifice child
Sep 29 15:02:47 server kernel: Out of memory: Kill process 5844 (beam.smp) score 755 or sacrifice child
Sep 29 15:02:47 server kernel: Out of memory: Kill process 5844 (beam.smp) score 755 or sacrifice child
Sep 29 15:02:47 server kernel: Out of memory: Kill process 5919 (child_waiter) score 755 or sacrifice child
Sep 29 15:02:47 server kernel: Out of memory: Kill process 5924 (5_scheduler) score 755 or sacrifice child
Sep 29 15:32:50 server kernel: Out of memory: Kill process 29348 (beam.smp) score 750 or sacrifice child
Sep 29 15:32:50 server kernel: Out of memory: Kill process 29348 (beam.smp) score 750 or sacrifice child
Sep 29 16:01:56 server kernel: Out of memory: Kill process 4712 (beam.smp) score 749 or sacrifice child
Sep 29 16:01:56 server kernel: Out of memory: Kill process 4712 (beam.smp) score 749 or sacrifice child
Sep 29 16:24:40 server kernel: Out of memory: Kill process 15881 (beam.smp) score 756 or sacrifice child
Sep 29 16:24:40 server kernel: Out of memory: Kill process 15881 (beam.smp) score 757 or sacrifice child
Sep 29 18:37:26 server kernel: Out of memory: Kill process 3437 (beam.smp) score 764 or sacrifice child
Sep 29 18:37:26 server kernel: Out of memory: Kill process 3437 (beam.smp) score 764 or sacrifice child
Sep 29 18:37:26 server kernel: Out of memory: Kill process 3523 (1_scheduler) score 764 or sacrifice child
Sep 29 18:37:26 server kernel: Out of memory: Kill process 3524 (2_scheduler) score 764 or sacrifice child
Sep 29 18:37:26 server kernel: Out of memory: Kill process 3525 (3_scheduler) score 764 or sacrifice child
Sep 29 20:18:40 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 29 20:18:42 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 29 20:18:50 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 29 20:18:50 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 29 20:18:50 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 29 20:18:50 server kernel: Out of memory: Kill process 14891 (beam.smp) score 759 or sacrifice child
Sep 30 12:49:18 server kernel: Out of memory: Kill process 22158 (beam.smp) score 731 or sacrifice child
Sep 30 12:49:18 server kernel: Out of memory: Kill process 22158 (beam.smp) score 731 or sacrifice child
Sep 30 18:43:57 server kernel: Out of memory: Kill process 19378 (beam.smp) score 733 or sacrifice child
Sep 30 18:43:57 server kernel: Out of memory: Kill process 19378 (beam.smp) score 733 or sacrifice child
Sep 30 18:43:57 server kernel: Out of memory: Kill process 19456 (1_scheduler) score 733 or sacrifice child
Sep 30 18:43:57 server kernel: Out of memory: Kill process 19457 (2_scheduler) score 733 or sacrifice child
Oct  1 00:38:10 server kernel: Out of memory: Kill process 13866 (beam.smp) score 736 or sacrifice child
Oct  1 00:38:10 server kernel: Out of memory: Kill process 13866 (beam.smp) score 736 or sacrifice child
Oct  1 00:38:10 server kernel: Out of memory: Kill process 13946 (5_scheduler) score 737 or sacrifice child
Oct  1 11:59:54 server kernel: Out of memory: Kill process 23239 (beam.smp) score 738 or sacrifice child
Oct  1 11:59:54 server kernel: Out of memory: Kill process 23239 (beam.smp) score 738 or sacrifice child
Oct  1 12:25:03 server kernel: Out of memory: Kill process 28020 (beam.smp) score 730 or sacrifice child
Oct  1 12:25:03 server kernel: Out of memory: Kill process 28020 (beam.smp) score 730 or sacrifice child
 
---------------------------------------------------------------
Code:
[root@server ~]# grep -i 'killed process' /var/log/messages
Sep 29 11:35:19 server kernel: Killed process 14797 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 11:35:19 server kernel: Killed process 7001 (child_setup), UID 0, total-vm:4268kB, anon-rss:40kB, file-rss:104kB, shmem-rss:0kB
Sep 29 11:35:19 server kernel: Killed process 14663 (beam.smp), UID 0, total-vm:13653612kB, anon-rss:12785476kB, file-rss:120kB, shmem-rss:0kB
Sep 29 12:40:32 server kernel: Killed process 7120 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 12:40:32 server kernel: Killed process 7010 (beam.smp), UID 0, total-vm:13598916kB, anon-rss:12761436kB, file-rss:140kB, shmem-rss:0kB
Sep 29 12:40:32 server kernel: Killed process 7112 (child_waiter), UID 0, total-vm:13598916kB, anon-rss:12763220kB, file-rss:192kB, shmem-rss:0kB
Sep 29 12:40:32 server kernel: Killed process 7113 (1_scheduler), UID 0, total-vm:13598916kB, anon-rss:12763388kB, file-rss:192kB, shmem-rss:0kB
Sep 29 12:40:32 server kernel: Killed process 7114 (2_scheduler), UID 0, total-vm:13598916kB, anon-rss:12763528kB, file-rss:216kB, shmem-rss:0kB
Sep 29 13:35:39 server kernel: Killed process 24559 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 13:35:39 server kernel: Killed process 24470 (beam.smp), UID 0, total-vm:13492620kB, anon-rss:12660848kB, file-rss:860kB, shmem-rss:0kB
Sep 29 15:02:47 server kernel: Killed process 5928 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:116kB, file-rss:0kB, shmem-rss:0kB
Sep 29 15:02:47 server kernel: Killed process 5844 (beam.smp), UID 0, total-vm:13458812kB, anon-rss:12637800kB, file-rss:0kB, shmem-rss:0kB
Sep 29 15:02:47 server kernel: Killed process 5923 (4_scheduler), UID 0, total-vm:13458812kB, anon-rss:12639332kB, file-rss:24kB, shmem-rss:0kB
Sep 29 15:02:47 server kernel: Killed process 5924 (5_scheduler), UID 0, total-vm:13458812kB, anon-rss:12639344kB, file-rss:188kB, shmem-rss:0kB
Sep 29 15:32:50 server kernel: Killed process 29432 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 15:32:50 server kernel: Killed process 29348 (beam.smp), UID 0, total-vm:13403440kB, anon-rss:12562816kB, file-rss:264kB, shmem-rss:0kB
Sep 29 16:01:56 server kernel: Killed process 4796 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 16:01:56 server kernel: Killed process 4712 (beam.smp), UID 0, total-vm:13368608kB, anon-rss:12546932kB, file-rss:160kB, shmem-rss:0kB
Sep 29 16:24:40 server kernel: Killed process 15965 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 29 16:24:40 server kernel: Killed process 15881 (beam.smp), UID 0, total-vm:13620932kB, anon-rss:12668920kB, file-rss:0kB, shmem-rss:0kB
Sep 29 18:37:26 server kernel: Killed process 3531 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
Sep 29 18:37:26 server kernel: Killed process 3437 (beam.smp), UID 0, total-vm:13768092kB, anon-rss:12791120kB, file-rss:204kB, shmem-rss:0kB
Sep 29 18:37:26 server kernel: Killed process 3523 (1_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792736kB, file-rss:120kB, shmem-rss:0kB
Sep 29 18:37:26 server kernel: Killed process 3524 (2_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792940kB, file-rss:120kB, shmem-rss:0kB
Sep 29 18:37:26 server kernel: Killed process 3525 (3_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792940kB, file-rss:80kB, shmem-rss:0kB
Sep 29 20:18:40 server kernel: Killed process 14976 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:116kB, file-rss:0kB, shmem-rss:0kB
Sep 29 20:18:42 server kernel: Killed process 4093 (sh), UID 0, total-vm:1240kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
Sep 29 20:18:50 server kernel: Killed process 4098 (sh), UID 0, total-vm:113188kB, anon-rss:180kB, file-rss:56kB, shmem-rss:0kB
Sep 29 20:18:50 server kernel: Killed process 4100 (child_setup), UID 0, total-vm:296kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
Sep 29 20:18:50 server kernel: Killed process 4103 (sh), UID 0, total-vm:1240kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
Sep 29 20:18:50 server kernel: Killed process 14891 (beam.smp), UID 0, total-vm:13595464kB, anon-rss:12716016kB, file-rss:0kB, shmem-rss:0kB
Sep 30 12:49:18 server kernel: Killed process 22243 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
Sep 30 12:49:18 server kernel: Killed process 22158 (beam.smp), UID 0, total-vm:13324800kB, anon-rss:12244524kB, file-rss:84kB, shmem-rss:0kB
Sep 30 18:43:57 server kernel: Killed process 19464 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
Sep 30 18:43:57 server kernel: Killed process 19378 (beam.smp), UID 0, total-vm:13249360kB, anon-rss:12279608kB, file-rss:0kB, shmem-rss:0kB
Sep 30 18:43:57 server kernel: Killed process 19456 (1_scheduler), UID 0, total-vm:13249360kB, anon-rss:12281232kB, file-rss:0kB, shmem-rss:0kB
Sep 30 18:43:57 server kernel: Killed process 19457 (2_scheduler), UID 0, total-vm:13249360kB, anon-rss:12281460kB, file-rss:0kB, shmem-rss:0kB
Oct  1 00:38:10 server kernel: Killed process 13950 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
Oct  1 00:38:10 server kernel: Killed process 13866 (beam.smp), UID 0, total-vm:13303452kB, anon-rss:12332656kB, file-rss:0kB, shmem-rss:0kB
Oct  1 00:38:10 server kernel: Killed process 13946 (5_scheduler), UID 0, total-vm:13303452kB, anon-rss:12335008kB, file-rss:256kB, shmem-rss:0kB
Oct  1 11:59:54 server kernel: Killed process 23323 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
Oct  1 11:59:54 server kernel: Killed process 23239 (beam.smp), UID 0, total-vm:13464156kB, anon-rss:12358148kB, file-rss:0kB, shmem-rss:0kB
Oct  1 12:25:03 server kernel: Killed process 28103 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
Oct  1 12:25:03 server kernel: Killed process 28020 (beam.smp), UID 0, total-vm:13386304kB, anon-rss:12221204kB, file-rss:108kB, shmem-rss:0kB


-----------------------------------------------------------
Code:
[173447.709596] Killed process 4796 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
[173447.797391] Killed process 4712 (beam.smp), UID 0, total-vm:13368608kB, anon-rss:12546932kB, file-rss:160kB, shmem-rss:0kB
[174811.870454] Killed process 15965 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
[174811.995015] Killed process 15881 (beam.smp), UID 0, total-vm:13620932kB, anon-rss:12668920kB, file-rss:0kB, shmem-rss:0kB
[182777.259232] Killed process 3531 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
[182777.382674] Killed process 3437 (beam.smp), UID 0, total-vm:13768092kB, anon-rss:12791120kB, file-rss:204kB, shmem-rss:0kB
[182777.429079] Killed process 3523 (1_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792736kB, file-rss:120kB, shmem-rss:0kB
[182777.482225] Killed process 3524 (2_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792940kB, file-rss:120kB, shmem-rss:0kB
[182777.601990] Killed process 3525 (3_scheduler), UID 0, total-vm:13768092kB, anon-rss:12792940kB, file-rss:80kB, shmem-rss:0kB
[188851.567285] Killed process 14976 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:116kB, file-rss:0kB, shmem-rss:0kB
[188852.574426] Killed process 4093 (sh), UID 0, total-vm:1240kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
[188858.586554] Killed process 4098 (sh), UID 0, total-vm:113188kB, anon-rss:180kB, file-rss:56kB, shmem-rss:0kB
[188859.112480] Killed process 4100 (child_setup), UID 0, total-vm:296kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
[188860.755360] Killed process 4103 (sh), UID 0, total-vm:1240kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
[188861.279928] Killed process 14891 (beam.smp), UID 0, total-vm:13595464kB, anon-rss:12716016kB, file-rss:0kB, shmem-rss:0kB
[248288.905181] Killed process 22243 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:104kB, file-rss:0kB, shmem-rss:0kB
[248289.148958] Killed process 22158 (beam.smp), UID 0, total-vm:13324800kB, anon-rss:12244524kB, file-rss:84kB, shmem-rss:0kB
[269567.857678] Killed process 19464 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
[269567.880999] Killed process 19378 (beam.smp), UID 0, total-vm:13249360kB, anon-rss:12279608kB, file-rss:0kB, shmem-rss:0kB
[269567.913674] Killed process 19456 (1_scheduler), UID 0, total-vm:13249360kB, anon-rss:12281232kB, file-rss:0kB, shmem-rss:0kB
[269567.983261] Killed process 19457 (2_scheduler), UID 0, total-vm:13249360kB, anon-rss:12281460kB, file-rss:0kB, shmem-rss:0kB
[290819.687866] Killed process 13950 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
[290820.593740] Killed process 13866 (beam.smp), UID 0, total-vm:13303452kB, anon-rss:12332656kB, file-rss:0kB, shmem-rss:0kB
[290820.694762] Killed process 13946 (5_scheduler), UID 0, total-vm:13303452kB, anon-rss:12335008kB, file-rss:256kB, shmem-rss:0kB
[331725.346334] Killed process 23323 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
[331725.621634] Killed process 23239 (beam.smp), UID 0, total-vm:13464156kB, anon-rss:12358148kB, file-rss:0kB, shmem-rss:0kB
[333234.542454] Killed process 28103 (inet_gethost), UID 0, total-vm:11592kB, anon-rss:100kB, file-rss:0kB, shmem-rss:0kB
[333234.584171] Killed process 28020 (beam.smp), UID 0, total-vm:13386304kB, anon-rss:12221204kB, file-rss:108kB, shmem-rss:0kB
 
Well, one thing is now certain: your server was indeed running out of RAM in the past. The log file information shows that processes were getting killed because of the RAM shortage.

It's hard for me to judge whether Plesk Premium Mail with 140 or so mailboxes could normally consume that much RAM, this should be investigated. Kolab can consume quite a bit of resources, though, it's much more than just mail.

As a temporary first step, I'd add a swap file. Then I'd keep observing the resource consumption and investigate further.
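For reference, a swap file on CentOS 7 can be created roughly like this; the 8 GB size and the /swapfile path are only examples, adjust them to your available disk space:
Code:
dd if=/dev/zero of=/swapfile bs=1M count=8192
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab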
 

I created a swap file with 8 GB. Is that OK?
I will then test for 1 day.
 
I am facing exactly these problems, also on a CentOS 7 VPS. In my case, I have 4 GB RAM and 16 active Premium Email users.
The users also lose their IMAP connection once in a while.
The calendars are shared on a couple of devices through the CalDAV connectivity provided by Kolab.

Any ideas on how to fix this?
 
I had the same problem with Plesk Premium Email killing my server when it was switched on, but I managed to solve it.
There are 2 issues:
1. When you switch on Plesk Premium Email (Kolab), it re-indexes ALL the Dovecot IMAP indexes, forcing a re-download of ALL email for ALL my users. I had a few hundred angry clients for a day while it downloaded about 200 GB of email; unfortunately there is nothing to be done about this.
2. In the mail server settings, you need to lower the number of concurrent connections to Dovecot and the number of concurrent connections per user, to bring MEMORY and CPU consumption down to tolerable levels (see the sketch at the end of this post). Mine is set to 2000 x 2000 now, to maintain a steady 15-20 GB RAM consumption. Prior to Kolab I had 20,000 x 20,000 configured and was spiking to only about 4 GB RAM consumption on Monday mornings...

I will go back to a higher number of concurrent connections once the system is more stable, as 2000 is far too few for 300 mailboxes, especially early in the morning when everyone logs in at the same time...
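For what it's worth, the Plesk panel settings I changed appear to map to standard Dovecot directives roughly like the ones below. The directive names are stock Dovecot, the exact config file layout on a Plesk box may differ, and the values are only examples, so treat this as a sketch rather than the exact Plesk configuration:
Code:
# e.g. in a file under /etc/dovecot/conf.d/ (stock Dovecot layout)
mail_max_userip_connections = 20   # concurrent IMAP connections allowed per user+IP
service imap {
  process_limit = 2048             # upper bound on simultaneous IMAP worker processes
}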
 