Hi @Sebahat.hadzhi — following up: the current patch releases “Plesk Obsidian 18.0.75 Update 1” and “Plesk Obsidian 18.0.76 Update 2” address a critical security issue.
Could you please share details as soon as possible (e.g., CVE ID, affected versions/components, impact, and recommended...
Maybe a small annotation: if you’re affected by both bugs (duplicated entries and unescaped entries), you need to run the script twice. The first run will generate the required SQL DELETE statements; the second run will generate the UPDATE statements. In short, run it repeatedly and execute the...
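To illustrate the two passes, here is a hedged sketch of the kind of statements such a cleanup produces. The database/user/host names are made up, and the exact statements the script emits will differ; verify against your own mysql.db contents before executing anything.

```sql
-- First run: drop the unescaped duplicate row, keeping the escaped one
-- (hypothetical names; check your own mysql.db data first).
DELETE FROM mysql.db
 WHERE Db   = 'example_db'      -- unescaped duplicate of 'example\_db'
   AND User = 'example_user'
   AND Host = 'localhost';

-- Second run: with the duplicate gone, rewrite remaining unescaped
-- entries so the underscore is escaped as the grant tables expect.
UPDATE mysql.db
   SET Db = 'example\_db'       -- in MySQL string literals, \_ stays \_
 WHERE Db   = 'example_db'
   AND User = 'example_user'
   AND Host = 'localhost';

FLUSH PRIVILEGES;
```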
@Sebahat.hadzhi But this is my question: how do I re-run the upgrade? Please provide the command. The error log says:
Run upgrade with option --repair to rerun failed steps.
I cannot find any command in the Plesk documentation. Please help me sort this out.
@hschramm Thanks for the info. So far we only cleaned up servers with a single database host setup. However, we also have affected servers with a multi-host database setup, where we grant access rights for different database hosts per user.
@Sebahat.hadzhi I’m confused. This database command does not re-run the failed upgrade step (2025-11-05-09-56-08_FixGrantPrivileges.php). How can we trigger that upgrade step again after fixing the duplicated entries?
@Sebahat.hadzhi We’re seeing the same error message as the thread author during the update process, for example:
ERROR: Upgrade step 2025-11-05-09-56-08_FixGrantPrivileges.php failed with code 1 and output:
INFO: Executing upgrade task: 2025-11-05-09-56-08
[2026-02-15 04:18:29.511]...
@Sebahat.hadzhi The script correctly generates the SQL statements needed to delete the duplicated entries without escaping.
What is the fastest way to re-run the missing migration scripts afterwards?
Hi @Sebahat.hadzhi , quick follow-up two months later: is there any update from the security/product team?
Can you share either (a) the advisory details (CVE/ID, affected component, severity) for that “critical security update”, or (b) an ETA and where this information will be published?
Thanks.
@hschramm We are experiencing the same issues on some of our servers. Some database names were not escaped, and other entries exist twice (with and without escaping). So it seems many more users may be affected than it initially appears. In our case, we have three servers we need to clean up...
Some of our Plesk servers also have faulty entries in the mysql.db table.
We are affected by two types of inconsistencies:
a) Duplicate entries where the database name exists both with and without escaping the underscore (e.g. db_name and db\_name), leading to the duplicate entry error as...
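For anyone checking whether their servers are affected, here is a hedged sketch of a query that locates such duplicate pairs. It assumes direct access to the mysql.db grant table; REPLACE() strips the backslash so the escaped and unescaped variants compare equal. Review the output manually before deleting anything.

```sql
-- Find Db values that exist both escaped and unescaped for the same
-- user/host combination (hypothetical query; verify results first).
SELECT Host,
       User,
       REPLACE(Db, '\\_', '_') AS normalized_db,
       COUNT(*)                AS entries
  FROM mysql.db
 GROUP BY Host, User, normalized_db
HAVING COUNT(*) > 1;
```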
At the moment, we want to give NGINX geo-blocking a try. Grouping attacking countries and applying global rate limits per group (rather than per IP) seems promising and works for now, since most attacks are not coming from Europe, where our servers are located. This can also confuse attackers...
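As a minimal sketch of what "rate limits per group rather than per IP" can look like, assuming the ngx_http_geoip2_module is installed and a $geoip2_country_code variable is already defined against a MaxMind database; the group membership and rates below are made-up examples:

```nginx
# Hypothetical sketch: shared rate-limit buckets per country group.
# Requires ngx_http_geoip2_module with $geoip2_country_code defined
# in a geoip2 {} block; adjust groups and rates to your own traffic.
map $geoip2_country_code $limit_group {
    default  "";           # empty key = request is not rate-limited
    CN       group_a;
    RU       group_a;
    BR       group_b;
    AR       group_b;
}

# One bucket per group: the whole group shares 20 r/s,
# instead of each of thousands of IPs getting its own bucket.
limit_req_zone $limit_group zone=geo_groups:10m rate=20r/s;

server {
    location / {
        limit_req zone=geo_groups burst=40 nodelay;
    }
}
```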
Hello @ChristophRo , thanks for your reply.
At the moment we’re following a similar approach. We already block some major cloud hosting providers (e.g., DigitalOcean) by denying their ASNs, since a lot of spam originates from their data centers. This helped, but it was not sufficient. For now...
Hello,
We currently have a client’s online shop under attack by a large bot network using thousands of different IPs.
For example, the last 50,000 requests in the logs came from more than 25,000 IP addresses worldwide. Blocking specific ISPs or countries doesn’t help in this case.
Our Imunify...
@danami
Thank you for your answer. I think there is not much we can optimize as long as we are still querying a full day of the journald log. Searching with --grep is an expensive operation, and on top of that the log files need to be read from disk first.
I did some testing to demonstrate...
@danami
We just tested the new journald optimization. On our servers it is still quite slow, because it still has to scan a full day of logs:
/usr/bin/journalctl --no-pager --quiet --unit='pc-remote' --unit='dovecot_authdb_plesk' --unit='amavisd-milter' --unit='postfix@-' --unit='amavis'...