
Resolved: Table issues after MariaDB crash

Franco

Regular Pleskian
Hello,

following a MariaDB crash (due to temporarily low memory, though I am not sure why), I had several DB errors in /var/log/messages, of this kind:

- vps mysqld: 140273982536448 [ERROR] mysqld: Table './table-name/4asx4R_posts' is marked as crashed and should be repaired
- vps mysqld: 140273982536448 [Warning] Checking table: './table-name/4asx4R_posts'

Following them, I ran the Plesk database repair on the associated subscription, but nothing was detected; it says everything is fine.
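For reference, the same check can also be run from the shell (a sketch assuming the standard Plesk repair utility syntax; example.com stands for the affected domain):
Code:
# Dry run: report database inconsistencies for one subscription without fixing them
plesk repair db example.com -n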

I am on CentOS 7.6, with Plesk Onyx 17.8.11 #39.

Should I worry? How can I repair those tables?

Any hint appreciated.

Franco

P.S. The crash log (all happened automatically, no intervention from my side):

Feb 12 12:49:15 vps kernel: Out of memory: Kill process 9802 (mysqld) score 63 or sacrifice child
Feb 12 12:49:15 vps kernel: Killed process 9802 (mysqld) total-vm:1031796kB, anon-rss:167644kB, file-rss:0kB, shmem-rss:0kB
Feb 12 12:49:15 vps systemd: mariadb.service: main process exited, code=killed, status=9/KILL
Feb 12 12:49:15 vps systemd: Unit mariadb.service entered failed state.
Feb 12 12:49:15 vps systemd: mariadb.service failed.
Feb 12 12:49:20 vps systemd: mariadb.service holdoff time over, scheduling restart.
Feb 12 12:49:20 vps systemd: Stopped MariaDB 10.1.37 database server.
Feb 12 12:49:20 vps systemd: Starting MariaDB 10.1.37 database server...
Feb 12 12:49:22 vps mysqld: 2019-02-12 12:49:22 139981684721920 [Note] /usr/sbin/mysqld (mysqld 10.1.38-MariaDB) starting as process 22653 ...
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. In order to use backoff, increase buffer pool at least up to 20MB.
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Using mutexes to ref count buffer pool pages
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: The InnoDB memory heap is disabled
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Compressed tables use zlib 1.2.7
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Using Linux native AIO
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Using SSE crc32 instructions
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Initializing buffer pool, size = 128.0M
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Completed initialization of buffer pool
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Highest supported file format is Barracuda.
Feb 12 12:49:23 vps mysqld: 2019-02-12 12:49:23 139981684721920 [Note] InnoDB: Starting crash recovery from checkpoint LSN=287075511767
Feb 12 12:49:32 vps mysqld: 2019-02-12 12:49:32 139981684721920 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] InnoDB: 128 rollback segment(s) are active.
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] InnoDB: Waiting for purge to start
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] InnoDB: Waiting for purge to start
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] InnoDB: Percona XtraDB (Experts in Database Performance Management) 5.6.42-84.2 started; log sequence number 287075511777
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] Plugin 'FEEDBACK' is disabled.
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] Recovering after a crash using tc.log
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139980890437376 [Note] InnoDB: Dumping buffer pool(s) not yet started
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] Starting crash recovery...
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] Crash recovery finished.
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] Server socket created on IP: '127.0.0.1'.
Feb 12 12:49:34 vps mysqld: 2019-02-12 12:49:34 139981684721920 [Note] /usr/sbin/mysqld: ready for connections.
Feb 12 12:49:34 vps mysqld: Version: '10.1.38-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
Feb 12 12:49:34 vps systemd: Started MariaDB 10.1.37 database server.
 
An easy way to repair all tables of all your databases:
Code:
mysqlcheck -Ar -u admin -p`cat /etc/psa/.psa.shadow`
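If you would rather check first and only repair what is actually damaged, mysqlcheck can also be scoped (a sketch; the database name below is a placeholder):
Code:
# Check all tables in all databases without repairing anything
mysqlcheck -Ac -u admin -p`cat /etc/psa/.psa.shadow`
# Repair only one table in one database
mysqlcheck -r -u admin -p`cat /etc/psa/.psa.shadow` dbname 4asx4R_posts
Note that -r (REPAIR TABLE) only applies to storage engines that support it, such as MyISAM and Aria; for InnoDB tables mysqlcheck just prints a note that the engine doesn't support repair.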

Your MariaDB server was killed by the kernel because your system was out of memory.

So, your options:
1) Add more memory to your server
2) Reduce overall memory consumption on your server
3) Make sure MariaDB is properly configured and is not using more RAM than you have. Use mysqltuner.pl to get an overview of your MariaDB configuration (see the sketch at the end of this post)
4) Prevent that the OOM-Killer of the kernel kills MariaDB:
Code:
mkdir /etc/systemd/system/mariadb.service.d
(if it doesn't already exist)

- Create a file oom.conf in that directory with the following content:
Code:
[Service]
OOMScoreAdjust=-900


- reload systemd:
Code:
systemctl daemon-reload
- restart mariadb:
Code:
systemctl restart mariadb

=> Now your mysqld process will have an OOM score adjustment of -900 (default: 0), making it very unlikely that the kernel will kill this process when the system is out of memory.
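As a follow-up to point 3), a minimal sketch for fetching and running mysqltuner (assuming the usual download location of the MySQLTuner-perl project on GitHub):
Code:
# Download MySQLTuner and run it against the local MariaDB server
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl --user admin --pass `cat /etc/psa/.psa.shadow`

And to verify that the OOM score adjustment from point 4) was actually applied:
Code:
# Should print OOMScoreAdjust=-900 after the restart
systemctl show mariadb.service -p OOMScoreAdjust
# The running process should report -900 here as well
cat /proc/`pidof mysqld`/oom_score_adj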
 
Thanks, Monty, I could verify/repair the tables. I also applied your suggestion to prevent MySQL from being killed by the kernel.
As for the crash, it is really unusual, as it never happened in the last 2 years. My physical and virtual memory should be enough for what the server is doing; that's why I suspect a sort of black swan event of unknown nature. Of course, I will monitor the situation and consider your other suggestions if it happens again.

Best,

Franco
 
Please make sure to report here if the issue returns. If possible and your system has not totally crashed in that situation, please run
Code:
ps aux | grep httpd
and report the output here, too. I am aware that this is Apache and not MariaDB, but the httpd service might be responsible for using up all the RAM.
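If the server is still responsive, a quick way to see which processes are consuming the most memory at that moment (standard procps options):
Code:
# Top 15 processes sorted by resident memory usage, largest first
ps aux --sort=-rss | head -n 15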
 