
Issue 500 Plesk\Exception\Database

pelo

New Pleskian
Hi,

I have a big problem with Plesk. It doesn't start anymore...
This may be a double post, because I already posted my question in a solved thread.

Here is my data - I hope someone can help me...

DB query failed: SQLSTATE[HY000] [2002] No such file or directory
Type: Plesk\Exception\Database
Message: DB query failed: SQLSTATE[HY000] [2002] No such file or directory
File: Mysql.php
Line: 64



plesk db
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")
exit status 1
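The ERROR 2002 itself only means that the client cannot find the server socket, i.e. mysqld is not running at all. A minimal check (socket path taken from the error above, unit name assumed to be mariadb):

# systemctl is-active mariadb
# ls -l /var/run/mysqld/

The socket file only exists while the server is running, so the real question is why MariaDB does not start: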

service mariadb status -l

● mariadb.service - MariaDB 10.1.47 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: signal) since Thu 2021-01-28 12:35:05 CET; 4s ago
Docs: man:mysqld(8)
systemd
Process: 18933 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=killed, signal=SEGV)
Process: 18851 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSI
Process: 18842 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 18831 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Main PID: 18933 (code=killed, signal=SEGV)
Status: "Starting final batch to recover 11 pages from redo log"

df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 11M 3.2G 1% /run
/dev/sda2 28G 3.7G 24G 14% /
/dev/mapper/vg00-usr 20G 2.8G 16G 15% /usr
tmpfs 16G 4.0K 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg00-home 9.8G 37M 9.3G 1% /home
/dev/mapper/vg00-opt 20G 4.0G 15G 22% /opt
/dev/mapper/vg00-var 1.6T 77G 1.4T 6% /var
tmpfs 3.2G 0 3.2G 0% /run/user/0


df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 4087908 471 4087437 1% /dev
tmpfs 4095869 3214 4092655 1% /run
/dev/sda2 1831424 41846 1789578 3% /
/dev/mapper/vg00-usr 1310720 142261 1168459 11% /usr
tmpfs 4095869 2 4095867 1% /dev/shm
tmpfs 4095869 8 4095861 1% /run/lock
tmpfs 4095869 18 4095851 1% /sys/fs/cgroup
/dev/mapper/vg00-home 655360 11 655349 1% /home
/dev/mapper/vg00-opt 1310720 117916 1192804 9% /opt
/dev/mapper/vg00-var 104775680 848054 103927626 1% /var
tmpfs 4095869 11 4095858 1% /run/user/0



What can I do? I have been searching the whole morning... I hope someone can give me a hint...

The solved thread Resolved - 500 Plesk\Exception\Database is not the solution to my problem.
 
Thanks, I have already seen that ticket, but I don't have any of the lines described in that solution:
  1. Comment out the following lines in /etc/my.cnf:
    #log_bin = /var/log/mysql/mariadb-bin
    #log_bin_index = /var/log/mysql/mariadb-bin.inde
    #log_slow_verbosity = query_plan



- here is my my.cnf, stored at /etc/mysql/my.cnf (a grep check over its include directories is sketched below the file):

# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.

#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]

# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
[mysqld]
bind-address = ::ffff:127.0.0.1
local-infile=0
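Because of the !includedir lines, the binlog options from that solution could also sit in one of the included files rather than in my.cnf itself. A quick grep over the include directories (paths taken from the file above) would show them - just a hedged check, the options may simply not be set at all:

# grep -rnE 'log_bin|log_slow_verbosity' /etc/mysql/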
 
During a MySQL or MariaDB start phase, a wealth of data is logged to /var/log/messages. If starting the service fails, you will find the reason for it in /var/log/messages. It is best to take a look at that, identify the root cause and then tackle it.
 
Here are some lines from my /var/log/syslog - /var/log/messages is not there anymore:

mariadb.service: Main process exited, code=killed, status=11/SEGV
Jan 28 14:12:59 mx01 systemd[1]: mariadb.service: Failed with result 'signal'.
Jan 28 14:12:59 mx01 systemd[1]: Failed to start MariaDB 10.1.47 database server.
Jan 28 14:12:59 mx01 systemd[1]: Started Plesk Web Socket Service.
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: [2021-01-28 14:12:59.124] ERR [panel] DB query failed: SQLSTATE[HY000] [2002] No such file or directory:
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 0: /opt/psa/admin/plib/Db/Adapter/Pdo/Mysql.php:64
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011Db_Adapter_Pdo_Mysql->query(string 'SET sql_mode = ''')
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 1: /opt/psa/admin/plib/CommonPanel/Application/Abstract.php:103
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011CommonPanel_Application_Abstract::initDbAdapter()
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 2: /opt/psa/admin/plib/Session/Helper.php:176
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011Plesk\Session\Helper::initStorage()
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 3: /opt/psa/admin/plib/CommonPanel/Application/Abstract.php:52
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011CommonPanel_Application_Abstract->run()
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 4: /opt/psa/admin/plib/CommonPanel/Application/Abstract.php:34
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011CommonPanel_Application_Abstract::init()
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 5: /opt/psa/admin/plib/pm/Bootstrap.php:16
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011pm_Bootstrap::init()
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 6: /opt/psa/admin/plib/sdk.php:11
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: #011require_once(string '/opt/psa/admin/plib/sdk.php')
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: 7: /opt/psa/admin/plib/WebSocket/bin/ws-server.php:3
Jan 28 14:12:59 mx01 sw-engine-pleskrun[1111]: ERROR: Plesk\Exception\Database: DB query failed: SQLSTATE[HY000] [2002] No such file or directory (Mysql.php:64)
Jan 28 14:12:59 mx01 systemd[1]: plesk-web-socket.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 14:12:59 mx01 systemd[1]: plesk-web-socket.service: Failed with result 'exit-code'.

Is it possible to get some professional help for this issue?
 
That's the wrong part of the log. The interesting part happens before the second line of your excerpt. The other lines are follow-up errors, because the database service is not running. If you scroll up or run
# grep mysql /var/log/messages
or sometimes it is
# grep mariadb /var/log/messages
you'll get all the relevant material.
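On Debian/Ubuntu the same messages usually end up in /var/log/syslog or in the journal instead of /var/log/messages, so the equivalent there would be roughly (unit name assumed to be mariadb, as in your status output):

# grep -iE 'mysqld|mariadb' /var/log/syslog
# journalctl -u mariadb --no-pager | tail -n 200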

Sure, you can ask Plesk support to look at it.
 
grep mariadb /var/log/syslog
Jan 28 14:19:22 mx01 systemd[1]: mariadb.service: Service hold-off time over, scheduling restart.
Jan 28 14:19:22 mx01 systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 1001.
Jan 28 14:19:23 mx01 systemd[1]: mariadb.service: Main process exited, code=killed, status=11/SEGV
Jan 28 14:19:23 mx01 systemd[1]: mariadb.service: Failed with result 'signal'.
This error appears every 5 seconds.

A little earlier that day there was this:
Jan 28 10:58:24 mx01 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 10:58:24 mx01 systemd[1]: mariadb.service: Failed with result 'exit-code'.
Jan 28 10:58:30 mx01 mysqld[3797]: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
Jan 28 10:58:30 mx01 mysqld[3797]: The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
Jan 28 10:58:31 mx01 systemd[1]: mariadb.service: Main process exited, code=killed, status=11/SEGV
Jan 28 10:58:31 mx01 systemd[1]: mariadb.service: Failed with result 'signal'.
Jan 28 10:58:36 mx01 systemd[1]: mariadb.service: Service hold-off time over, scheduling restart.
Jan 28 10:58:36 mx01 systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 1.
Maybe I have to look for the problem from yesterday - yesterday evening everything was still working.
 
Once again, this is not the right part of the log. When you see the result of a restart attempt such as
"Jan 28 10:58:24 mx01 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE"
in the log, MariaDB has logged the reason BEFORE that time. MariaDB is very verbose about this; there should be lots of text entries explaining what is going on. Normally you will see specific databases or tables mentioned as corrupted, or other errors, with many more details.
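To stop the 5-second restart loop while you investigate, you can stop the unit and read the MariaDB error log directly. The path below is the Debian/Ubuntu default and therefore an assumption; the [Note] lines just before the first "mysqld got signal 11" entry show what InnoDB was doing when it crashed:

# systemctl stop mariadb
# tail -n 200 /var/log/mysql/error.log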
 
Sure, you can ask Plesk support to look at it.
I have Plesk installed on a server from 1und1.de - Plesk is already included in the server package.

So I think I don't have the possibility to get professional support from Plesk.
 
This is what I found in /var/log/mysql/error.log.

But it doesn't tell me anything...

/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7fc0371a16db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fc03651f71f]
The manual page at How to Produce a Full Stack Trace for mysqld contains
information that should help you find out what is causing the crash.
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. In order to use backoff, increase buffer pool at least up to 20MB.

2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: The InnoDB memory heap is disabled
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Using Linux native AIO
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Using SSE crc32 instructions
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Completed initialization of buffer pool
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Highest supported file format is Barracuda.
2021-01-28 14:40:12 140233190308992 [Note] InnoDB: Starting crash recovery from checkpoint LSN=24103226440
2021-01-28 14:40:13 140233190308992 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
2021-01-28 14:40:13 140233190308992 [Note] InnoDB: Starting final batch to recover 11 pages from redo log
210128 14:40:13 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see MariaDB Community Bug Reporting

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.1.47-MariaDB-0ubuntu0.18.04.1
key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 352468 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x563eed9eb22e]
/usr/sbin/mysqld(handle_fatal_signal+0x53b)[0x563eed5b196b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x12980)[0x7f8a946c3980]
/usr/sbin/mysqld(+0x83e0f0)[0x563eed8580f0]
/usr/sbin/mysqld(+0x83fd15)[0x563eed859d15]
/usr/sbin/mysqld(+0x82431c)[0x563eed83e31c]
/usr/sbin/mysqld(+0x826c37)[0x563eed840c37]
/usr/sbin/mysqld(+0x90aa10)[0x563eed924a10]
/usr/sbin/mysqld(+0x957d6b)[0x563eed971d6b]
/usr/sbin/mysqld(+0x8a6810)[0x563eed8c0810]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7f8a946b86db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f8a93a3671f]
The manual page at How to Produce a Full Stack Trace for mysqld contains
information that should help you find out what is causing the crash.
 
I'd guess that

"This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware."

is pointing to a hard disk issue, because that is the most frequent reason for such a failure. Maybe you should first check the integrity of your disk and that you have enough resources available. Without further data, that is the best next step. If you are aware of changes to MariaDB, e.g. a recent update, maybe an incomplete update, that would also be a place to look further.
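For the disk and resource check, something along these lines should be enough for a first look (the device name sda is an assumption based on your df output; smartctl comes from the smartmontools package):

# dmesg | grep -iE 'error|i/o|ata'
# smartctl -H /dev/sda
# free -m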

You will not receive any Plesk support from your provider, but you can for sure get support directly from Plesk. However, this is not a Plesk issue; it is a typical system administration task, so I am not sure whether Plesk support will want to solve it. But maybe they have someone who knows from experience what the issue is when MariaDB does not mention anything specific. For the link to Plesk support, see my post above.
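If the hardware checks come out clean, a commonly used last resort for a crash during redo-log apply - nobody has suggested it in this thread, so treat it purely as a hedged sketch - is to start MariaDB once with InnoDB forced recovery, dump the databases, and then rebuild the instance:

    # /etc/mysql/mariadb.conf.d/zz-force-recovery.cnf (hypothetical file name)
    [mysqld]
    innodb_force_recovery = 1

# systemctl start mariadb
# plesk db dump psa > /root/psa.sql          (repeat for the other databases)

Start with 1 and only raise the value step by step if the server still crashes; values of 4 and above can permanently change data, and the extra config file must be removed again once the dumps have succeeded.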
 