
Upgrade experience 12.5.3 -> 17.5.3 on CentOS 7.3

Bitpalast

I would like to share a list of issues that we experienced during the latest upgrade, because other users will likely run into some of the same problems.

12.5.3 -> 17.5.3, CentOS 7.3, 64-Bit
The previous installation was a clean one, with no specific add-ons or customizations, on a CentOS 7.1 machine. CentOS was upgraded to 7.3 with the latest patches, the system was rebooted, and then the Plesk upgrade process was started (after the pre-upgrade checker reported that it would be OK).

Results:

- psa-selinux was not upgraded during the yum-based Plesk installation; it failed with the error "TypeError: an integer is required". It is unclear why this happened, what the consequences are, and how to fix it. This is currently still an unresolved issue.
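
To narrow down what went wrong with psa-selinux, the autoinstaller log and the RPM database are the usual places to look. A minimal sketch (the log path follows the standard Plesk layout; the grep pattern is simply the error quoted above):

```shell
# Check which psa-selinux version is actually installed now
rpm -q psa-selinux
# Search the autoinstaller log for the reported error and its context
grep -n -B2 -A5 'TypeError: an integer is required' /var/log/plesk/install/autoinstaller3.log
```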

- The upgrade finished incompletely, possibly due to the psa-selinux issue? Plesk still offered an upgrade to 17.5.3 after logging in. Re-running the upgrade script seems to have completed the upgrade, but not to MU #4. Running
# plesk installer --select-release-current --reinstall-patch --upgrade-installed-components
to get the latest MU failed at first, because leftover autoinstaller processes were still active but hanging. After killing them, running the command appears to have installed MU #4.
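
For reference, the recovery steps for the hanging processes looked roughly like this (a sketch; verify the process list with ps before killing anything, since the exact process names can vary between installations):

```shell
# List processes whose command line mentions the autoinstaller
# (the [a] bracket trick keeps grep from matching itself)
ps aux | grep -i '[a]utoinstaller'
# Terminate the hung processes, then retry the update
pkill -f autoinstaller
plesk installer --select-release-current --reinstall-patch --upgrade-installed-components
```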

- After the upgrade, fail2ban did not start, because the package provided by the OS was incompatible with the package delivered with Plesk. Solution: exclude the fail2ban package from the EPEL repository, remove the existing fail2ban package with yum, then reinstall the component through the Plesk autoinstaller. Here again, a stale autoinstaller process was sitting idle in the process list; it had to be killed before the fail2ban installation could proceed.
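
The fix can be scripted; a sketch, assuming EPEL is configured in /etc/yum.repos.d/epel.repo with an [epel] section header (adjust paths and component names to your setup):

```shell
# Add an exclude line right after the [epel] section header so yum
# never offers the OS fail2ban package again
sed -i '/^\[epel\]/a exclude=fail2ban*' /etc/yum.repos.d/epel.repo
# Remove the conflicting OS package, then reinstall Plesk's component
yum -y remove fail2ban
plesk installer --select-release-current --install-component fail2ban
```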

- httpd went down for no apparent reason; only after
# plesk repair web
(reconfiguration of the web server configuration files) was the web service working again. It remains unclear why the existing configuration files were no longer valid.

- After
# plesk repair web
one seemingly random domain of the approx. 900 was missing its symbolic link entry in /etc/httpd/conf/plesk.conf.d/vhosts, and another one was missing the symbolic link entry for the webmail-subdomain component. Both had to be reconfigured manually. For the webmail-subdomain component, a reconfiguration through the "Troubleshooter" extension did not resolve the issue (the script responded "success" in green and directly underneath still showed the same error in red); only an httpdmng reconfiguration of that single domain from the console resolved it.
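
Missing vhost entries like these can be found without clicking through 900 domains. A sketch, assuming each domain gets a <domain>.conf entry in the vhosts directory (the naming convention may differ on your installation; example.com is a placeholder):

```shell
vhosts=/etc/httpd/conf/plesk.conf.d/vhosts
# Domains Plesk knows about vs. domains that have a vhost config entry
plesk bin domain --list | sort > /tmp/domains.txt
ls "$vhosts" | sed 's/\.conf$//' | sort > /tmp/configured.txt
# Print domains with no vhost entry (present in the first list only)
comm -23 /tmp/domains.txt /tmp/configured.txt
# Regenerate the config for a single affected domain (replace example.com)
/usr/local/psa/admin/sbin/httpdmng --reconfigure-domain example.com
```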

- After completion of the above, the "upgrade" page in the GUI offered an upgrade of Nginx (separately from all the previous upgrades described above). Upgrading Nginx through the Plesk GUI upgrade utility broke the ulimit configuration in /usr/lib/systemd/system/nginx.service and /usr/local/psa/admin/sbin/nginx-config, causing Nginx to fail with "too many open files". The ulimit entries had to be restored manually to bring Nginx back up.
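
Instead of editing /usr/lib/systemd/system/nginx.service directly (package updates overwrite that file, which is probably what happened here), the file-descriptor limit can be restored via a systemd drop-in that survives upgrades. A sketch; the value 65535 is an assumption, use whatever limit your configuration had before:

```shell
# Create a drop-in that overrides only the open-file limit
mkdir -p /etc/systemd/system/nginx.service.d
cat > /etc/systemd/system/nginx.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65535
EOF
systemctl daemon-reload
systemctl restart nginx
```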

- Many packages that belong to Plesk 12.5.30 remained on the system; it is unknown why they were not removed. The system updates tool now complains about inconsistencies and asks for a manual solution: "Warning: Information on some packages might not be actual: inconsistencies were detected in the system's package manager database. Please resolve this issue manually." This is no help, because it is impossible to tell whether a package is "important" and must not be removed, whether it can be removed safely, or whether this is merely an inconsistency in the yum database. Maybe some of the old packages are the ones actually in use? Maybe some of the new ones? It remains unclear whether the 12.5.30 packages can simply be removed manually or whether they still play a role in the current installation.
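
The leftover packages and the database inconsistency can at least be enumerated before deciding what to remove. A sketch using standard yum tooling (package-cleanup comes from the yum-utils package; the grep pattern is a guess at how the 12.5.30 packages are versioned):

```shell
# List installed packages that still carry a 12.5.30 version
rpm -qa | grep '12\.5\.30' | sort
# Report broken dependencies without changing anything
yum check dependencies
# List duplicate package versions (old and new installed side by side)
package-cleanup --dupes
```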

- Health monitor settings were reverted to "default". The customized XML definition had to be uploaded again.

My impression:
Taking into account that this was a perfectly well-organized server with no file system/permission issues and an up-to-date operating system with only original files from the base, updates, and EPEL repositories, I'd rate the overall upgrade experience 1 out of 5 stars. This needs much more attention from the developers: many severe issues, some of them still unresolved, and no way for a user to find out why they occurred or how to solve them without paid support assistance.

My suggestion is to shift development effort from gimmicks like Docker, Git, etc. to a much more robust upgrade process, with more attention paid to operating system behavior and to issues with upgrading OS components.
 