
Issue Server Error 500 Plesk\Exception\Database. Solved, but why?

CobraArbok

Regular Pleskian
Server operating system version
Ubuntu 22.04.3 LTS
Plesk version and microupdate number
18.0.55 Update #2
This morning I couldn't access the Webmail and then also the web page.
Strange, because at the same time I was connected to mail from Thunderbird and it worked.
So I go to Plesk and I get the error "Server Error 500 Plesk\Exception\Database"
Panic!
Also because I don't know how long it has been in this state. The mail works and I only use that one, I haven't been to the site for months.

The page recommends launching the Repair Kit, but I can't log in as root.

I read some messages in the forum and there is advice to restart the server and so I did.

Now that it's working again, I'd like to understand what happened.
If disk space could have been the problem: the domains were only taking up a little space.

Then I would like to understand how multiple administration users can be created.
 
The Server Error 500 Plesk\Exception\Database error indicates a database issue, but other than that it's a very generic error. You'll have to search through the log files for more details on the issue.

Looking at journalctl as @skibidi suggested is a good start, but more details might also be available in the MySQL/MariaDB log.
 
I generated the 5000 lines, but I can't tell what could have generated the error, since I don't know when the web server crashed while the mail continued to work.
I wouldn't even know what exactly to look for. I see many other unclear things.

I see that there are Acronis license errors (active-protection.sh), but I didn't install the extension.
Instead, I installed my ISP's agent (Ionos) to manage their cloud backup service.
Oct 16 15:50:42 my-srv active-protection.sh[1194750]: error: cannot open Packages database in /nonexistent/.rpmdb
Oct 16 15:50:42 my-srv active-protection.sh[1194758]: error: cannot open Packages database in /var/lib/Acronis/.rpmdb

I see a lot of these attempts:
Oct 16 15:50:44 my-srv plesk_saslauthd[898698]: No such user '[email protected]' in mail authorization database
Oct 16 15:50:44 my-srv plesk_saslauthd[898698]: failed mail authentication attempt for user '[email protected]' (password len=10)
Oct 16 15:50:44 my-srv postfix/smtpd[1179240]: warning: unknown[45.129.14.106]: SASL LOGIN authentication failed: authentication failure
Sooner or later they will manage to get in, but that's another story.

There are hundreds of lines like these
Oct 16 16:05:17 my-srv CRON[142586]: pam_unix(cron:session): session closed for user root

Oct 16 16:05:10 my-srv systemd[1]: cron.service: Unit process 48572 (sh) remains running after unit stopped.
Oct 16 16:05:10 my-srv systemd[1]: cron.service: Unit process 48579 (nc) remains running after unit stopped.
Oct 16 16:05:10 my-srv systemd[1]: cron.service: Unit process 49787 (cron) remains running after unit stopped.


I see several of these:
Oct 16 16:05:16 my-srv ossec-control[1198083]: Killing ossec-analysisd ..
Oct 16 16:05:16 my-srv ossec-control[1198083]: ossec-maild not running ..
Oct 16 16:05:16 my-srv ossec-control[1198083]: Killing ossec-execd ..
Oct 16 16:05:16 my-srv ossec-control[1198083]: OSSEC HIDS 3.1.0 Stopped
Oct 16 16:05:16 my-srv systemd[1]: ossec-hids.service: Deactivated successfully.
Oct 16 16:05:16 my-srv systemd[1]: Stopped OSSEC HIDS.
Oct 16 16:05:16 my-srv systemd[1]: ossec-hids.service: Consumed 57.341s CPU time.

Oct 16 16:05:15 my-srv systemd[1]: finalrd.service: Deactivated successfully.
Oct 16 16:05:15 my-srv systemd[1]: Stopped Create final runtime dir for shutdown pivot root.
 
You'd want to look for anything related to your database server. I assume you have MariaDB, so check the logs with grep "mariadb" or "MariaDB" to find everything the database server logged. MariaDB does a good job of logging issues, so you'll probably find what was going on before it crashed. If you do not see related entries in journalctl or /var/log/messages, the event is too old and might exist in an archived log.
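As a rough sketch of what that search could look like on a systemd-based Ubuntu box (the unit name `mariadb` and the log paths are the stock defaults; they may differ on your install):

```shell
# All journal entries from the MariaDB service unit, last two days
journalctl -u mariadb --since "2 days ago" --no-pager

# Case-insensitive search across the whole journal,
# in case the unit or binary is named differently
journalctl --no-pager | grep -i mariadb

# On hosts that still write classic syslog files
grep -i mariadb /var/log/syslog
```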
 
@Peter Debik

To lighten the load I removed some extensions that I wasn't using anyway and then waited to see if the problem recurred.
Yesterday afternoon I found the website and webmail blocked again.
This morning I ran the grep command again, these are the results:
  • with MARIADB there is nothing
  • with mariadb only entries relating to day 17 appear.
under /var/log/ there is no messages folder, nor a messages file; there is a mysql folder, but it is empty.
I restarted the server, but now with grep there is nothing even with mariadb.

If there is a problem, it doesn't seem to be MariaDB. The messages seem more related to a previous server reboot.

Oct 17 16:35:02 my-srv mariadbd[1217]: 2023-10-17 16:35:02 6550 [Warning] Aborted connection 6550 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 17 16:35:02 my-srv mariadbd[1217]: 2023-10-17 16:35:02 6549 [Warning] Aborted connection 6549 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 17 17:05:44 my-srv mariadbd[1217]: 2023-10-17 17:05:44 6 [Warning] Aborted connection 6 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 17 17:05:44 my-srv mariadbd[1217]: 2023-10-17 17:05:44 7124 [Warning] Aborted connection 7124 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 17 17:05:45 my-srv mariadbd[1217]: 2023-10-17 17:05:45 0 [Note] /usr/sbin/mariadbd (initiated by: unknown): Normal shutdown
Oct 17 17:05:45 my-srv mariadbd[1217]: 2023-10-17 17:05:45 0 [Note] InnoDB: FTS optimize thread exiting.
Oct 17 17:05:54 my-srv mariadbd[1217]: 2023-10-17 17:05:54 0 [Note] InnoDB: Starting shutdown...
Oct 17 17:05:54 my-srv mariadbd[1217]: 2023-10-17 17:05:54 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
Oct 17 17:05:54 my-srv mariadbd[1217]: 2023-10-17 17:05:54 0 [Note] InnoDB: Restricted to 2028 pages due to innodb_buf_pool_dump_pct=25
Oct 17 17:05:54 my-srv mariadbd[1217]: 2023-10-17 17:05:54 0 [Note] InnoDB: Buffer pool(s) dump completed at 231017 17:05:54
Oct 17 17:05:58 my-srv mariadbd[1217]: 2023-10-17 17:05:58 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
Oct 17 17:05:58 my-srv mariadbd[1217]: 2023-10-17 17:05:58 0 [Note] InnoDB: Shutdown completed; log sequence number 5162859566; transaction id 9852567
Oct 17 17:05:59 my-srv mariadbd[1217]: 2023-10-17 17:05:59 0 [Note] /usr/sbin/mariadbd: Shutdown complete
Oct 17 17:05:59 my-srv systemd[1]: mariadb.service: Deactivated successfully.
Oct 17 17:05:59 my-srv systemd[1]: mariadb.service: Consumed 4min 42.116s CPU time.
 
According to this excerpt, MariaDB does not crash. It just ordinarily shuts down. Something is asking it to do that. The log does not show what that is.
 
According to this excerpt, MariaDB does not crash. It just ordinarily shuts down. Something is asking it to do that. The log does not show what that is.
In your opinion, why does it not accept any users when I ask for the Repair Kit and the login popup appears?
 
@Peter Debik
@Maarten.

Still stuck with error 500.
With systemctl status, both Nginx and Apache are running.
MariaDB, on the other hand, is down.

# systemctl status mariadb
× mariadb.service - MariaDB 10.6.12 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: failed (Result: oom-kill) since Mon 2023-10-23 15:18:27 CEST; 1 day 21h ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
   Main PID: 1273 (code=killed, signal=KILL)
     Status: "Taking your SQL requests now..."
        CPU: 8min 5.286s

Oct 21 10:36:58 srv /etc/mysql/debian-start[2589]: Upgrading MySQL tables if necessary.
Oct 21 10:36:59 srv /etc/mysql/debian-start[2715]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
Oct 21 10:45:59 srv mariadbd[1273]: 2023-10-21 10:45:59 1194 [Warning] Aborted connection 1194 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 22 00:00:35 srv mariadbd[1273]: 2023-10-22 0:00:35 2217 [Note] InnoDB: Cannot close file ./cps@002dqs2/shlg5_extensions.ibd because of pending fsync
Oct 22 00:15:33 srv mariadbd[1273]: 2023-10-22 0:15:33 2705 [Warning] Aborted connection 2705 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 23 00:15:35 srv mariadbd[1273]: 2023-10-23 0:15:35 7006 [Warning] Aborted connection 7006 to db: 'psa' user: 'admin' host: 'localhost' (Got an error reading communication packets)
Oct 23 15:18:25 srv systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Oct 23 15:18:27 srv systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
Oct 23 15:18:27 srv systemd[1]: mariadb.service: Failed with result 'oom-kill'.
Oct 23 15:18:27 srv systemd[1]: mariadb.service: Consumed 8min 5.286s CPU time.

With plesk repair all from SSH I get:
DB query failed: SQLSTATE[HY000] [2002] Connection refused exit status 1

Is there a way to test every 30 minutes (by sending an email) to see what happens?
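One hedged way to get such a 30-minute check, using a cron fragment and the common `mail` command (assumes a mail-capable MTA and something like `mailutils` is installed; the recipient address is a placeholder):

```shell
# /etc/cron.d/mariadb-watch -- illustrative sketch, not an official Plesk tool.
# Every 30 minutes: if the mariadb unit is not active, mail its status output.
*/30 * * * * root systemctl is-active --quiet mariadb || systemctl status mariadb --no-pager | mail -s "MariaDB down on my-srv" admin@example.com
```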
 
It seems that your server is Out Of Memory: "A process of this unit has been killed by the OOM killer"

Could it be that an old instance of MySQL/MariaDB is still running?
# ps -ef | grep -E 'mariadb|mysql'
 
It seems that your server is Out Of Memory: "A process of this unit has been killed by the OOM killer"

Could it be that an old instance of MySQL/MariaDB is still running?
# ps -ef | grep -E 'mariadb|mysql'
# ps -ef | grep -E 'mariadb|mysql'
mysql       1178      1  0 12:16 ?        00:00:05 /usr/sbin/mariadbd
root       23313  21034  0 12:33 pts/0    00:00:00 grep --color=auto -E mariadb|mysql
 
Can you try to restart the mariadb process?
# systemctl restart mariadb

Just to be sure, does your server have enough free memory?
# free -h
 
Hi all,
I have the same problem with a server freshly migrated from Ubuntu 20.04 to 24.04.
I used Plesk Migrator, so I can still compare both servers. Both have the same specs: 2 GiB RAM.

Plesk now also has a FAQ entry, which basically says it's my problem and I should upgrade my RAM:

That's funny, because the new server in "idle state" says the following:
# free -h
               total        used        free      shared  buff/cache   available
Mem:           1.8Gi       1.5Gi       194Mi       258Mi       597Mi       369Mi
Swap:             0B          0B          0B
And here's the old server:
# free -h
               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       911Mi       106Mi       124Mi       921Mi       712Mi
Swap:          2.0Gi       476Mi       1.5Gi

So it looks like the old server has less free RAM than the new one?
What about the swap, I guess activating that could be a solution?

I guess there is a plesk related issue here? What do the experts here think? :)
 
What about the swap, I guess activating that could be a solution?
Could very well be. Having ample swap is definitely recommended, particularly with so little memory. Bear in mind that if swap gets used a lot, it means there is not enough memory in the first place, so adding more memory wouldn't be a bad idea either.

I would suggest to start with enabling/adding Swap and go from there.
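A minimal sketch of adding a 2 GiB swap file on Ubuntu (run as root; the size and path are illustrative):

```shell
fallocate -l 2G /swapfile                        # allocate the file (dd works too)
chmod 600 /swapfile                              # swap files must not be world-readable
mkswap /swapfile                                 # format it as swap
swapon /swapfile                                 # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # persist across reboots
free -h                                          # verify the Swap line is now non-zero
```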

I guess there is a plesk related issue here? What do the experts here think? :)
If MariaDB/MySQL fails because it's out of memory, then either something else is consuming most of the memory, leaving too little for MariaDB/MySQL, or MariaDB/MySQL needs more memory than is available on your server.
 