
Resolved: Problems backing up a database with many tables

martin.g

New Pleskian
Server operating system version: CentOS Linux 7.9.2009 (Core)
Plesk version and microupdate number: Plesk Obsidian Version 18.0.49 Update #2
Hi,

a colleague migrated a WordPress multisite with quite a lot of tables (1717 tables, 628 MB) to our central Plesk server. Now the backup is failing with this warning:
Warning: Database "wp_multisite_XYZ"
Unable to make database dump. Error: Failed to exec mysqldump: Exit code: 2: mysqldump: Got error: 1016: "Can't open file: './wp_multisite_XYZ/wp_multi_20_icl_content_status.frm' (errno: 24)" when using LOCK TABLES
As said, the database currently has 1717 tables but does not use much space:
[Screenshot: database size overview]
The Plesk database tool "Check and Repair" does not find any problems.

This article (in German) says mysqldump has a limit and cannot lock more than roughly 330 tables.
Is there any (temporary?) workaround that does not involve reducing the number of tables by ~1400?

Regards
Martin
 
I don't think the number of tables is the issue. The error indicates a problem with locked tables. The best solution I can think of would be to make a new database dump/export without locked tables.
 
You mean manually dumping the problematic database on the command line without locks, truncating the tables in this DB, then manually importing via CLI, and after that trying the backup again?
 
No. I mean the issue you're facing is caused by locked tables. In order to solve your issue you'll have to create a new database dump/export explicitly without the tables locked. You can use any preferred method to create the dump/export, as long as table locking is disabled. For importing you can use any method you prefer. No need to use the CLI, but if that's what you prefer, go for it.
 
Hi,

regarding your previous assumption, I did a lot of research, imports and exports, trial and error, with and without data, and so on. I suspected a particular multisite, but the results were not consistent across different approaches. While stripping the (no-data) dump down to a minimal failing version, I noticed that the critical number of tables lies between 400 and 800 and does not depend on a specific table or multisite. It simply seemed to occur at the Nth table.

So it seems to be as simple as described in the first post: with this simple script (which has nothing to do with WordPress or even Plesk) I found the limit of dumpable tables per database to be around 500.

Bash:
NUM=500
# Create a fresh test database
MYSQL_PWD=$(cat /etc/psa/.psa.shadow) mysql -u admin -e "DROP DATABASE IF EXISTS tmp; CREATE DATABASE tmp;"
# Create NUM minimal MyISAM tables (an AUTO_INCREMENT column must be keyed, hence the PRIMARY KEY)
for ((I=1; I<=NUM; I++)); do
    MYSQL_PWD=$(cat /etc/psa/.psa.shadow) mysql -u admin tmp -e "CREATE TABLE \`tbl_$I\` (\`id\` int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (\`id\`)) ENGINE=MyISAM;"
done
# Dump the whole database; this is where the error shows up
MYSQL_PWD=$(cat /etc/psa/.psa.shadow) mysqldump -u admin tmp > /dev/null

For NUM > 500 it nearly always gives this error: mysqldump: Got error: 1016: "Can't open file: './tmp/tbl_96.frm' (errno: 24)" when using LOCK TABLES (errno 24 is EMFILE, "too many open files").
Below 495 it's always fine.
A little below 500 it usually gives this error: mysqldump: Couldn't execute 'show fields from `tbl_1`': Out of resources when opening file '/var/tmp/#sql_4eb4_0.MAD' (Errcode: 24) (23)
And sometimes this one, which is indeed caused by a MySQL restart: mysqldump: Couldn't execute 'show fields from `tbl_103`': Lost connection to MySQL server during query (2013)

It's the Plesk appliance based on CentOS 7, running on a PVE cluster with NVMe mirrored disks. It has 6 Xeon cores and 64 GB RAM assigned, and no performance problem is recognizable:

Code:
top - 12:02:07 up 14 days, 1 min,  3 users,  load average: 0.81, 1.34, 2.13
Tasks: 170 total,   4 running, 166 sleeping,   0 stopped,   0 zombie
%Cpu0  :  4.7 us,  3.0 sy,  0.0 ni, 92.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  2.0 us,  1.3 sy,  0.0 ni, 96.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  1.7 us,  1.0 sy,  0.0 ni, 97.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  : 20.0 us, 12.3 sy,  0.0 ni, 67.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu4  : 11.5 us, 11.2 sy,  0.0 ni, 77.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu5  :  1.7 us,  1.0 sy,  0.0 ni, 97.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 65807708 total,  5633908 free,  3362704 used, 56811096 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 61241336 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
23353 usr-fac+  20   0  676624 111196  13536 S  31.6  0.2   0:02.87 php-fpm
23482 usr-fac+  20   0  497744  66868  41156 S  14.3  0.1   0:00.43 php-fpm
23490 usr-fac+  20   0  605304  43780  16408 R   5.3  0.1   0:00.16 php-fpm
20148 mysql     20   0 2549848 372736   9820 S   5.0  0.6   4:11.60 mysqld
23492 usr-fac+  20   0  490732  38184  19448 R   4.7  0.1   0:00.14 php-fpm
 3858 usr-inf+  20   0  219740   8992   4808 S   3.0  0.0   0:10.34 in.proftpd

Code:
free -m
              total        used        free      shared  buff/cache   available
Mem:          64265        3550        5233         656       55481       59539
Swap:             0           0           0

Any ideas on how to work around this table limit?
 
It could be the result of a too low "open_files_limit".

Check it:
Code:
plesk db
> SHOW VARIABLES LIKE 'open%';

Set it in /etc/my.cnf:
Code:
[mysqld]
open_files_limit = 5000

[mysqldump]
open_files_limit = 5000
A database restart is needed after the change: as root, run service mariadb restart.
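To verify the new value after the restart, you can reuse the admin-credential pattern shown elsewhere in this thread (a minimal sketch):
Code:
MYSQL_PWD=$(cat /etc/psa/.psa.shadow) mysql -u admin -e "SHOW VARIABLES LIKE 'open_files_limit';"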
 
You are right. However, the easiest solution is probably to simply run mysqldump without tables locked, using the --lock-tables=false option. Like: mysqldump -u username --password=xxxxxx --lock-tables=false my_database_name.

The way mysqldump works (by default) is to lock each table before writing the output to a temporary dump file for each table, to prevent new data being written to the table at the same time. Which is a good thing. But this also means that mysqldump has to work with a lot of files. If you have a database with a lot of tables, then mysqldump runs into the open_files_limit quickly. So you could try to set a higher open_files_limit (as @Peter Debik also suggested). See this link for more info on this topic: #1016 - Can't open file: './database_name/#sql-38f_36aa.frm' (errno: 24)

However, running mysqldump without table locking should, in theory at least, also work.
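For example, reusing the server's admin credentials as in the test script above (a sketch; the output path is just an assumption):
Code:
# Dump without LOCK TABLES; tables are opened one at a time instead of all at once
MYSQL_PWD=$(cat /etc/psa/.psa.shadow) mysqldump -u admin --lock-tables=false wp_multisite_XYZ > /root/wp_multisite_XYZ.sql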
 
You are right. However, the easiest solution is probably to simply run mysqldump without tables locked, using the --lock-tables=false option. Like: mysqldump -u username --password=xxxxxx --lock-tables=false my_database_name.
Of course, I use this on the command line all the time, but I don't think I can even set this as a workaround within the automatic Plesk backup jobs.

The way mysqldump works (by default) is to lock each table before writing the output to a temporary dump file for each table, to prevent new data being written to the table at the same time. Which is a good thing.
Yes, it's a good thing. It prevents inconsistency, so I am looking for a way to regularly back up my large database, as this is a production system.

It could be the result of a too low "open_files_limit".

Check it:
Code:
plesk db
> SHOW VARIABLES LIKE 'open%';

Set it in /etc/my.cnf:
Code:
[mysqld]
open_files_limit = 5000

[mysqldump]
open_files_limit = 5000
A database restart is needed after the change: as root, run service mariadb restart.
Yes, that's the right direction. In the meantime I did some tests on another (non-Plesk) Debian container, where I could dump 10,000 tables without problems, but 15,000 tables gave: mysqldump: Got error: 1030: "Got error 24 "Too many open files" from storage engine MyISAM" when using LOCK TABLES
I did some more tests on the CentOS Plesk server and played with ulimit, which did not seem to do the trick. I will have a look at your MySQL parameters. Thank you very much!
 
If you are looking into very large open-file numbers, there will also be limits in /etc/security/limits.conf and the fs.file-max variable in /etc/sysctl.conf. The default values that operating systems ship with are normally too low for today's requirements.
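A quick way to inspect the current values (standard Linux commands; the output will vary per system):
Code:
# Kernel-wide limit on open file handles
sysctl fs.file-max
# Soft and hard per-process limits of the current shell
ulimit -Sn
ulimit -Hn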
 
It could be the result of a too low "open_files_limit".
By default there was a limit of 962. After setting it to 5000 in my.cnf it rose to 1024, which is my default ulimit. Even if I raise the ulimit (soft/hard) to 8192, the MySQL limit stays at 1024 after a restart. It seems I have to read some more documentation on how to raise the ulimit at boot time, and perhaps check whether I have to allow more within PVE.
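One way to see which limit the running server actually received (a sketch; assumes the MariaDB process is named mysqld):
Code:
# Effective open-files limit of the running mysqld process
grep 'open files' /proc/$(pidof mysqld)/limits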
 
If you are looking into very large open-file numbers, there will also be limits in /etc/security/limits.conf and the fs.file-max variable in /etc/sysctl.conf. The default values that operating systems ship with are normally too low for today's requirements.
I have checked fs.file-max already and it's okay (6484201). The security/limits.conf file will be my next starting point, but there I have to read a little more and understand what is suitable.
 
By default there was a limit of 962. After setting it to 5000 in my.cnf it rose to 1024, which is my default ulimit. Even if I raise the ulimit (soft/hard) to 8192, the MySQL limit stays at 1024 after a restart. It seems I have to read some more documentation on how to raise the ulimit at boot time, and perhaps check whether I have to allow more within PVE.
There could also be a limit imposed by systemd on the unit itself. You may try checking it by running systemctl show mariadb --property=LimitNOFILE.
In case that limit is applied, you can adjust it by editing the unit file as described here.
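A minimal sketch of such an override via a systemd drop-in (the file path and the value 10000 are assumptions; the linked article describes the supported procedure):
Code:
mkdir -p /etc/systemd/system/mariadb.service.d
cat > /etc/systemd/system/mariadb.service.d/limit_nofile.conf <<'EOF'
[Service]
LimitNOFILE=10000
EOF
systemctl daemon-reload
systemctl restart mariadb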
 
Hi,

I added to /etc/security/limits.conf:
Code:
mysql soft nofile 4096
mysql hard nofile 4096
root hard nofile 8192

The value with systemd was OK; I guess it takes the value from ulimit (only tested after changing limits.conf and rebooting):
Code:
systemctl show mariadb --property=LimitNOFILE
LimitNOFILE=4096

I added to /etc/my.cnf (in the [mysqld] section):
Code:
open_files_limit = 4000
mysqldump complained about this being an unknown parameter, so I had to remove it from the [mysqldump] section.

After that variable was okay in mysql:
Code:
MariaDB [psa]> SHOW VARIABLES LIKE 'open%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 4000  |
+------------------+-------+
1 row in set (0.01 sec)

Now the critical point is at dumping ~1985 tables, which would be fine for now, but as more growth is expected I will raise the limit once more.
It seems the limit of possible tables is about half of the allowed open files. I guess mysqldump opens not only the .frm file but also the .MYD or .MYI (MyISAM keeps three files per table, so two handles per table during the dump would match the ratio).
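One way to watch this live (a sketch; assumes the server process is named mysqld):
Code:
# Count the open file descriptors of the MariaDB server while a dump runs
ls /proc/$(pidof mysqld)/fd | wc -l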
 
Limits of 100,000 or more should be okay for many servers; a four-digit limit will be too low in many cases.
 
There could also be a limit imposed by systemd on the unit itself. You may try checking it by running systemctl show mariadb --property=LimitNOFILE.
In case that limit is applied, you can adjust it by editing the unit file as described here.
Well, that's it. I stumbled over the next limit while raising the 4000 limit, and this time it was systemd. I had already thought about it before, but since I had identified ulimit as the main issue, I discarded it quite fast. During the next raise, open_files_limit fell back to 1024, so I checked systemd: the value is not adopted from ulimit but is fixed. After raising it in systemd it was fine. I then tested which parameters are really needed (my.cnf, security/limits.conf, systemd), and it turned out systemd is the only one. Systemd even seems to override the hard limit.
Now I did an export test with 8000 tables and LimitNOFILE=25000, and it worked great.

Long story short, that's the answer, nothing more necessary: https://support.plesk.com/hc/en-us/articles/213938885
I don't know how I could have missed this one.

Thank you to everyone!
 