
Issue: Out of memory but plenty of memory

Franco
Regular Pleskian

Server operating system version: AlmaLinux 8.1
Plesk version and microupdate number: 18.0.65 #2

Hello,

I had a very stable system on CentOS 8, but since migrating to AlmaLinux 8 I have had several crashes (on average 1-2 per month), with php-fpm (8.3) processes dying with out-of-memory errors. I then followed some advice to set vm.overcommit_memory = 2 and made a couple of other minor changes, such as disabling all website optimizations in the Plesk Performance Booster panel, to be on the safe side.
However, things have gotten worse, with virtually all my websites now throwing errors such as:

FastCGI sent in stderr: "PHP message: PHP Fatal error: Out of memory (allocated 98566144 bytes) (tried to allocate 65536 bytes)

although that particular website's memory limit is 264 MB and there is plenty of memory on the server (8 GB total).

In some cases I see:

PHP message: PHP Warning: preg_match(): Allocation of JIT memory failed, PCRE JIT will be disabled. This is likely caused by security restrictions. Either grant PHP permission to allocate executable memory, or set pcre.jit=0

By the way, how do I grant that permission? I could not find it.

I am getting desperate and need some serious help to get out of this hole, please.
Of course, I have the latest Plesk version and all is up to date.
 
You can either allocate more memory to the PHP process or disable the PCRE JIT entirely by editing php.ini via the Plesk control panel (Tools & Settings > PHP Settings > 8.3 FPM handler > php.ini). Look for the pcre.jit directive, uncomment it if it is commented out (remove the leading semicolon), and set it to:

pcre.jit=0
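
To confirm the change took effect for the 8.3 FPM handler, something like the following should work (the paths assume Plesk's standard per-version layout; adjust if yours differs):

grep -n "pcre.jit" /opt/plesk/php/8.3/etc/php.ini
/opt/plesk/php/8.3/bin/php -i | grep pcre.jit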
 
Hi, thank you for your suggestion, which I have implemented. Unfortunately I still see some OOM messages in the website logs and, more worryingly:

vps mariadbd[3640]: 2025-01-02 17:45:16 36 [Warning] Aborted connection 36 to db: 'unconnected' user: 'unauthenticated' host: 'connecting host' (Out of memory.)
vps mariadbd[3640]: 2025-01-02 17:45:16 30 [ERROR] mariadbd: Out of memory (Needed 91368 bytes)

...

vps kernel: mariadbd[3585]: segfault at 0 ip 0000557dcb316b5c sp 00007fe4d0396d00 error 6 in mariadbd[557dca3b4000+15d1000]
Jan 2 17:44:55 vps kernel: Code: 0f 46 e0 41 83 fc 03 0f 86 f2 00 00 00 8b 01 44 89 e3 a9 80 80 80 80 74 0e eb 4b 0f 1f 00 8b 01 a9 80 80 80 80 75 3f 83 eb 04 <89> 07 48 83 c1 04 48 83 c7 04 83 fb 03 77 e5 85 db 75 29 41 c7 02
 
It sounds like your MariaDB setup is limited by memory -

What are the values of these two set to?

innodb_buffer_pool_size
innodb_log_file_size (generally 25% of your innodb_buffer_pool_size)

With 8 GB of total available memory, you might want to start with:

[mysqld]
innodb_buffer_pool_size=2GB
innodb_log_file_size=512M

This would give MariaDB up to a quarter of system memory. It can be added as a custom.cnf file in your /etc/my.cnf.d directory (be sure to restart the mariadb service once you add that file).
# service mariadb restart
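
A minimal sketch of what that could look like (the file name custom.cnf is just a convention; any *.cnf file in /etc/my.cnf.d/ is read):

cat > /etc/my.cnf.d/custom.cnf <<'EOF'
[mysqld]
# Single-letter size suffixes (K, M, G) are accepted by all MariaDB versions
innodb_buffer_pool_size=2G
innodb_log_file_size=512M
EOF

systemctl restart mariadb

# Confirm the new value actually applied
plesk db "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"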
 
I have created the file according to your instructions and restarted MariaDB; however, when I read the variables back they are unchanged. My attempt via the Plesk db command line does not seem to work either:

MariaDB [psa]> set global innodb_buffer_pool_size = 2147483648;
Query OK, 0 rows affected (0.000 sec)

Is my value wrong? And why does it not work with the custom.cnf file in the first place?

MariaDB [psa]> show variables like "innodb_buff%";
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_chunk_size       | 134217728      |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 25             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 134217728      |
+-------------------------------------+----------------+
9 rows in set (0.001 sec)
 
In my /etc/my.cnf it says it will include all files under the /etc/my.cnf.d directory, but that is followed by an include of /etc/db-performance.cnf. I then edited that file with your suggested values, but the database crashes. I rolled it back and it works again (i.e., badly).

I then looked at the Plesk Performance Booster, and there I can only set innodb_buffer_pool_size to 128 MB or 256 MB; there is no way to select a custom value.
(Attached screenshot: Screenshot 2025-01-02 233319.png)
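
For reference, the effective option order can be double-checked like this (a sketch; both commands only print what the server would read, and the daemon binary path is an assumption):

my_print_defaults mysqld server mariadb

# or, equivalently, ask the daemon itself (path may differ on your system):
/usr/sbin/mariadbd --print-defaults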
 
Those numbers are far too low.

Try clicking the "Revert All" button in your screenshot, then add these changes to the custom file /etc/my.cnf.d/custom.cnf before restarting MariaDB.
 
I tried, but the result is that the database fails to start (cannot allocate memory):

Jan 3 09:37:45 vps systemd[1]: Starting MariaDB 10.6.18 database server...
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] Starting MariaDB 10.6.18-MariaDB source revision 887bb3f73555ff8a50138a580ca8308b9b5c069c as process 3540
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Number of pools: 1
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Using Linux native AIO
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Initializing buffer pool, total size = 2147483648, chunk size = 134217728
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] InnoDB: Starting shutdown...
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [Note] Plugin 'FEEDBACK' is disabled.
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Jan 3 09:37:45 vps mariadbd[3540]: 2025-01-03 9:37:45 0 [ERROR] Aborting
Jan 3 09:37:45 vps systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 3 09:37:45 vps systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 3 09:37:45 vps systemd[1]: Failed to start MariaDB 10.6.18 database server.

Shall I try with lower values? 1GB, perhaps?
 
That "InnoDB: Cannot allocate memory for the buffer pool" error usually indicates that there isn't enough free memory so one option is to restart your server with the allocation below as your cached memory might be taking up almost all of the free memory:

[mysqld]
innodb_buffer_pool_size=2GB
innodb_log_file_size=512M

If that fails, then check to ensure that you have swap configured.
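
A quick way to check memory and swap (and, if needed, add a swap file; the 2G size and the /swapfile path are just examples):

free -h
swapon --show

# If no swap is configured, a swap file can be added like this:
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab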
 
I was able to set the InnoDB parameters only after rolling back vm.overcommit_memory to 0.
Soon after the MariaDB changes, the memory situation is:

MiB Mem : 7665.7 total, 2131.7 free, 1833.2 used, 3700.8 buff/cache
MiB Swap: 2048.0 total, 1937.8 free, 110.2 used. 5033.2 avail Mem
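
For completeness, this is how I checked and rolled the overcommit setting back (the file name under /etc/sysctl.d is my own choice):

sysctl vm.overcommit_memory
sysctl -w vm.overcommit_memory=0

# Make it persistent across reboots
echo 'vm.overcommit_memory = 0' > /etc/sysctl.d/99-overcommit.conf
sysctl --system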

I will keep following the situation for a day or two before closing this ticket.
 
And here we go again: less than two days later, the OOM killer goes after MariaDB:

Jan 6 06:09:10 vps kernel: mariadbd invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-900
Jan 6 06:09:10 vps kernel: CPU: 2 PID: 493772 Comm: mariadbd Kdump: loaded Not tainted 4.18.0-553.33.1.el8_10.x86_64 #1
Jan 6 06:09:10 vps kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
Jan 6 06:09:10 vps kernel: Call Trace:
Jan 6 06:09:10 vps kernel: dump_stack+0x41/0x60
Jan 6 06:09:10 vps kernel: dump_header+0x4a/0x1df
Jan 6 06:09:10 vps kernel: oom_kill_process.cold.33+0xb/0x10
Jan 6 06:09:10 vps kernel: out_of_memory+0x1bd/0x4e0
Jan 6 06:09:10 vps kernel: __alloc_pages_slowpath+0xbf0/0xcd0
Jan 6 06:09:10 vps kernel: __alloc_pages_nodemask+0x2e2/0x330
Jan 6 06:09:10 vps kernel: __alloc_pages_nodemask+0x2e2/0x330
Jan 6 06:09:10 vps kernel: pagecache_get_page+0xce/0x310
Jan 6 06:09:10 vps kernel: filemap_fault+0x6c8/0xa30
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? __mod_memcg_lruvec_state+0x4a/0xd0
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? page_add_file_rmap+0x99/0x150
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? alloc_set_pte+0xb6/0x420
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: ? srso_alias_return_thunk+0x5/0xfcdfd
Jan 6 06:09:10 vps kernel: __xfs_filemap_fault+0x6d/0x200 [xfs]
Jan 6 06:09:10 vps kernel: __do_fault+0x38/0xc0
Jan 6 06:09:10 vps kernel: handle_pte_fault+0x55d/0x880
Jan 6 06:09:10 vps kernel: __handle_mm_fault+0x552/0x6d0
Jan 6 06:09:10 vps kernel: ? apic_timer_interrupt+0xa/0x20
Jan 6 06:09:10 vps kernel: handle_mm_fault+0xca/0x2a0
Jan 6 06:09:10 vps kernel: __do_page_fault+0x1e4/0x440
Jan 6 06:09:10 vps kernel: do_page_fault+0x37/0x12d
Jan 6 06:09:10 vps kernel: ? page_fault+0x8/0x30
Jan 6 06:09:10 vps kernel: page_fault+0x1e/0x30
Jan 6 06:09:10 vps kernel: RIP: 0033:0x556229433aed
Jan 6 06:09:10 vps kernel: Code: Unable to access opcode bytes at RIP 0x556229433ac3.
Jan 6 06:09:10 vps kernel: RSP: 002b:00007f9a5884afb0 EFLAGS: 00010206
Jan 6 06:09:10 vps kernel: RAX: 0000000000000001 RBX: 00007f9a5884b2a0 RCX: 0000000000000053
Jan 6 06:09:10 vps kernel: RDX: 00007f9a002742cf RSI: 00007f9a00274270 RDI: 000055622a647160
Jan 6 06:09:10 vps kernel: RBP: 00007f9a5884b240 R08: 00007f9a5884b2a0 R09: 000055622bf513f8
Jan 6 06:09:10 vps kernel: R10: 00007f9a001fffb8 R11: 00007f9a5884b390 R12: 000055622bf514f8
Jan 6 06:09:10 vps kernel: R13: 000055622bf513f8 R14: 000055622a647160 R15: 00007f9a5884c270
Jan 6 06:09:10 vps kernel: Mem-Info:
Jan 6 06:09:10 vps kernel: active_anon:861084 inactive_anon:949631 isolated_anon:0#012 active_file:0 inactive_file:221 isolated_file:86#012 unevictable:0 dirty:0 writeback:0#012 slab_reclaimable:23364 slab_unreclaimable:32471#012 mapped:55001 shmem:79935 pagetables:39188 bounce:0#012 free:28735 free_pcp:1157 free_cma:0
Jan 6 06:09:10 vps kernel: Node 0 active_anon:3444336kB inactive_anon:3798524kB active_file:0kB inactive_file:884kB unevictable:0kB isolated(anon):0kB isolated(file):344kB mapped:220004kB dirty:0kB writeback:0kB shmem:319740kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 194560kB writeback_tmp:0kB kernel_stack:15040kB pagetables:156752kB all_unreclaimable? no
Jan 6 06:09:10 vps kernel: Node 0 DMA free:14336kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Jan 6 06:09:10 vps kernel: lowmem_reserve[]: 0 2658 7617 7617 7617
Jan 6 06:09:10 vps kernel: Node 0 DMA32 free:43156kB min:23540kB low:29424kB high:35308kB active_anon:1220696kB inactive_anon:1397372kB active_file:0kB inactive_file:1124kB unevictable:0kB writepending:0kB present:3129196kB managed:2756780kB mlocked:0kB bounce:0kB free_pcp:4624kB local_pcp:1184kB free_cma:0kB
Jan 6 06:09:10 vps kernel: lowmem_reserve[]: 0 0 4958 4958 4958
Jan 6 06:09:10 vps kernel: Node 0 Normal free:57448kB min:60292kB low:71268kB high:82244kB active_anon:2223640kB inactive_anon:2401152kB active_file:0kB inactive_file:344kB unevictable:0kB writepending:0kB present:5242880kB managed:5077564kB mlocked:0kB bounce:0kB free_pcp:4kB local_pcp:4kB free_cma:0kB
Jan 6 06:09:10 vps kernel: lowmem_reserve[]: 0 0 0 0 0
Jan 6 06:09:10 vps kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (M) 3*4096kB (M) = 14336kB
Jan 6 06:09:10 vps kernel: Node 0 DMA32: 14*4kB (UME) 1421*8kB (UME) 1210*16kB (UME) 328*32kB (UME) 28*64kB (UM) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 43072kB
Jan 6 06:09:10 vps kernel: Node 0 Normal: 1392*4kB (UME) 1281*8kB (UMEH) 1670*16kB (UMEH) 456*32kB (UME) 3*64kB (ME) 1*128kB (M) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 57448kB
Jan 6 06:09:10 vps kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jan 6 06:09:10 vps kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jan 6 06:09:10 vps kernel: 135214 total pagecache pages
Jan 6 06:09:10 vps kernel: 54989 pages in swap cache
Jan 6 06:09:10 vps kernel: Swap cache stats: add 1023479, delete 968452, find 19470929/19543715
Jan 6 06:09:10 vps kernel: Free swap = 0kB
Jan 6 06:09:10 vps kernel: Total swap = 2097148kB
Jan 6 06:09:10 vps kernel: 2097017 pages RAM
Jan 6 06:09:10 vps kernel: 0 pages HighMem/MovableOnly
Jan 6 06:09:10 vps kernel: 134591 pages reserved
Jan 6 06:09:10 vps kernel: 0 pages hwpoisoned

Again a memory issue, but apparently no segfault messages this time. I noticed that MariaDB's memory usage slowly increases from an initial ~5% up to 25%, but perhaps that is normal? I am not sure whether it keeps growing until disaster strikes. Is there a way to tell MariaDB to flush its memory, or to find out what is causing it to use that much?
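
For reference, I am watching the process size with something like this (just a sketch; I have not yet dug into the InnoDB status output mentioned in the comments):

ps -o pid,rss,vsz,pmem,comm -C mariadbd

# Inside MariaDB, buffer pool usage can be inspected with:
#   SHOW ENGINE INNODB STATUS\G   (see the BUFFER POOL AND MEMORY section)
#   SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data';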
 
@Franco,

It seems to be the case that your host server - upon which your vps resides - is causing issues.

I am not sure whether you installed Plesk on a VPS or just run MariaDB on a virtual server.

In theory and in practice, the solution from @pleskpanel should be sufficient:

[mysqld]
innodb_buffer_pool_size=2GB
innodb_log_file_size=512M

Additional tweaks are required, but this should be sufficient IF AND ONLY IF these settings are possible with your host / vps environment.

If these settings are not possible in the host / VPS environment, then you should simply consider getting a dedicated server - in most cases where a VPS is used, there is not much control over the configuration that determines how the host deals with memory overuse.


As a final note, the current changes in settings will cause havoc, certainly if you work in a host / vps environment.

The SQL instructions are already lagging (read: the host is not dealing properly with MariaDB-related resource overuse), and any change to the MariaDB / MySQL config will make these issues worse - it can even cause the host to stop assigning resources to the VPS completely.

You can reboot the VPS - with the new MariaDB / MySQL settings - in order to check whether the issue goes away.

If the issues do not disappear, then it is highly likely that the host / vps environment cannot deal properly with the resource usage from the vps.

In that case, you are at a loss and you should abandon the vps.


I can only recommend that you consider a dedicated server as an alternative.

It is always better to have full control over the server!


I hope the above helps a bit.....


Kind regards....
 
(Quoting Franco's OOM report and question above.)

Your VPS doesn't have enough memory, and the OOM killer selects the highest memory-consuming process (which on your setup is invariably going to be MariaDB) to avoid a kernel panic. If you can add more memory, that would help, as more than likely you have momentary increases in traffic (legitimate, or good/bad bots) whose PHP memory requirements quickly tax the VPS to the point where it no longer has memory to allocate to any process.

TL;DR: Get more memory, either on the same VPS or on a different system. Ignoring this may cause downtime and/or database/file system corruption.
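
If you want to see which processes the kernel would pick, you can inspect the per-process OOM scores (a quick shell sketch; higher score means more likely to be killed, and /proc entries may vanish mid-loop, hence the 2>/dev/null):

for p in /proc/[0-9]*; do
  printf '%8s %s\n' "$(cat "$p/oom_score" 2>/dev/null)" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head -15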
 
@trialotto
The settings suggested for the DB were implemented a few days ago, but MariaDB crashes when it reaches 25%+ of the 8 GB available, i.e., when it gets to 2 GB or so. I had a very, very stable VPS with CentOS for years, with more websites and traffic than today. No issues whatsoever. Problems started after the migration to AlmaLinux 8: a crash every month or so, which I was never able to pin down to a particular cause. Then, all of a sudden, about two weeks ago, it started throwing OOM errors all the time, first with php-fpm and then with MariaDB. I tried a few things, such as the suggested pcre.jit=0, then vm.overcommit, etc., to no avail.
Before I move to another server I wish I knew what is causing all this nightmare, wouldn't you agree?

@pleskpanel
How come, all of a sudden, 8 GB of RAM is no longer enough for fewer than 20 low-traffic WP websites when I have plenty to spare? I used to run double that without any problem, and the swap file is barely touched most of the time. No, it is a fundamental issue which will keep causing crashes even if I doubled the memory.
 
@Franco,

This statement

Before I move to another server I wish I knew what is causing all this nightmare, wouldn't you agree?

might seem reasonable, but the real truth is that you cannot figure out what the root cause of the problem is when using your VPS.

After all, your host / VPS environment simply "stalls".

In essence, a new dedicated server allows you to investigate the issue .... it will have sufficient resources and is not very likely to stall.

Nevertheless, some basic online tools like the MySQL Memory Calculator can help - it is not really kept up to date, but it can help.

MySQL has a number of config settings that can be deemed logical or appropriate on their own, but the combination of all of them can make the DB server very slow or even problematic.

The challenge here is that you try to find a COMBINATION of config settings that works!

Now, this is rather impossible on a VPS that already stalls at the slightest change to the MySQL config settings.

Hence the recommendation to use a dedicated server.

Kind regards.....

PS: Please note that any misconfiguration of MySQL config settings will cause issues when restarting the MySQL server or even rebooting the machine - that is the very nature of MySQL (read: specific instructions / actions will be retried, hence again causing the MySQL server to stall). One can attempt to mitigate this behavior by simply going back to the default MySQL config, but that is NEVER recommended in a production environment.

This is another reason to simply use another server and test a proper combination of config settings in a development environment, before applying that combination in an actual production environment. Nevertheless, if the development environment is stable on a new server and all hosting data can be migrated fairly easily ....... then why not keep the new server and abandon the troublesome old VPS? It is just something to consider!
 
(Replying to @Franco's post above.)
You seem to have a fundamental misunderstanding of how VPSes work. When memory allocation is dynamic, your provider might have underprovisioned/overbooked the VPS host and may not actually have enough physical memory to guarantee you these 8 GB. Probably another tenant on the same VPS host started using more memory than they did before.
 
@Franco

This statement

The settings suggested for the DB were implemented a few days ago, but MariaDB crashes when it reaches 25%+ of the 8 GB available, i.e., when it gets to 2 GB or so.

is very indicative of the problems that you have.

In essence, when tweaking the MySQL settings, one will easily get to 8GB of memory usage - your VPS is not sufficient.

Please remove the

[mysqld]
innodb_buffer_pool_size=2GB
innodb_log_file_size=512M

immediately, since that will by default cause memory exhaustion if your VPS only has 8 GB.

In essence, try the online MySQL calculator first to determine how your MySQL settings should be combined so that total memory usage stays at 2 to 4 GB max.
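
As a rough sketch of what such a calculator does, the worst-case usage can be approximated directly from the running server (it overestimates, since not every connection allocates every per-thread buffer):

SELECT ( @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@key_buffer_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) )
       / 1024 / 1024 AS worst_case_mb;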

Kind regards....
 

@Franco

The post by @mow should be read with attention.

I did not want to dive into the "VPS pitfalls discussion", since I sincerely hoped that the recommendation to use a dedicated server would be sufficient.

@mow is right though .......... and I can add that

1 - most virtualization software used by hosting providers offering VPSes is not really advanced, or even bug-free: these providers are often not able to fully control the host / VPS environment themselves, with all kinds of odd issues as a result,

2 - some or most virtualization software tends to apply a "performance penalty" to VPSes associated with overuse of any kind: the virtualization software is notoriously difficult to configure optimally and, as a result, the (default) performance penalty is often kept in place as part of the config (even though, in theory, it is not necessary at all).


Stated differently, spare yourself the common pitfalls of VPSes (and their providers!) ..... just make life simple and get a dedicated server.

Kind regards...
 
Thank you all, I'll think about migrating to something else. In any case, a dedicated server is overkill.
The current host has told me they guarantee the 8 GB 100%, no matter what.

Besides, I checked with the online MySQL calculator, and with the current settings (i.e., a 2 GB innodb_buffer_pool_size) I reach just 3.2 GB.

What about upgrading to AlmaLinux 9 as a last resort? How complex is that?
 