
Urgent - Failed Upgrade to 7.5.3

rspurlock (Guest)
Hi all,

I attempted an upgrade to 7.5.3 using the autoupdater, but it has failed miserably and I have no idea where to go from here. I've searched the forum for help but found nothing that applies to this error.

When I open the Plesk Control Panel I see this:

ERROR

Table 'psa.ControlVisibility' doesn't exist

--------------------------------------------------------------------------------

0: /usr/local/psa/admin/plib/visibility.php:148 psaerror(string "Table 'psa.ControlVisibility' doesn't exist")
1: /usr/local/psa/admin/plib/visibility.php:291 getvisibilitycustomizations(string "/login_up.php3")
2: /usr/local/psa/admin/plib/visibility.php:300 getvisibility(string "login", boolean true, NULL "")
3: /usr/local/psa/admin/plib/elements.php3:180 iscontrolvisible(string "login", boolean true)
4: /usr/local/psa/admin/plib/elements.php3:106 fetch_hideable_button(string "commonButton", string "login", string "", string "", boolean false, string "", string "return login_oC(document.forms[0], document.forms[1])", boolean true, integer "3", boolean false, boolean false)
5: /usr/local/psa/admin/htdocs/login_up.php3:769 comm_button(string "login", string "", string "return login_oC(document.forms[0], document.forms[1])", boolean true, integer "3")

And the upgrade log ends with this:


Trying to install chrooted environment... chrootmng: stat("bin") failed: No such file or directory
chrootmng: stat("lib") failed: No such file or directory
chrootmng: stat("usr") failed: No such file or directory
chrootmng: stat("etc") failed: No such file or directory
chrootmng: stat("tmp") failed: No such file or directory
chrootmng: stat("var") failed: No such file or directory


Trying to create pmadb... done
done
Trying to append 7.5.3 changes to database... ERROR 1025 at line 2: Error on rename of './psa/#sql-5b42_80' to './psa/Cards' (errno: 121)

ERROR while trying to convert all tables to InnoDB

Aborting...
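(A side note for anyone diagnosing a similar failure: MySQL ships a small perror utility that translates these numeric error codes. With an InnoDB rename, errno 121 frequently turns out to mean a duplicate key or constraint name, though your case may differ.)

# Translate the errno reported in the upgrade log above.
perror 121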

I had to restore the httpd.includes file from the backup to get all sites back up, so all customers are online, but they can no longer add e-mail accounts, MySQL databases, etc.

Please provide any assistance you can.

TIA,

Rob
Lock-Net Hosting
 
Resolution for those of you who run into it

Since it appears no one had any suggestions about the problem, I'll post the fix here for those who need it without hunting down support from SW-Soft. (They fixed me up in case you're wondering)


The problem was the updater's failure to convert the psa database tables to InnoDB, which produced this error:

ERROR while trying to convert all tables to InnoDB

So support sent me this little script to convert the tables manually:

#!/bin/sh
# Convert all tables in the psa database to the InnoDB type.
# 'show table status' lists every table; awk picks out the ones whose
# type (column 2) isn't already InnoDB and emits an ALTER TABLE for each,
# which is piped back into mysql.
# (TYPE=InnoDB is the old MySQL 4.x syntax; later versions use ENGINE=InnoDB.)

mysql -Ns -uadmin -p`cat /etc/psa/.psa.shadow` psa -e 'show table status' \
| awk -F '\t' -- '$2 != "InnoDB" { print "alter table "$1" type=InnoDB;" }' |
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa

Not sure if the script will display correctly here, but you can contact me if you need the file. I just put it in a new file and ran it with sh. FYI, I had to run it once for each table in the psa database, not just a single time.
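To see whether anything is still left unconverted before moving on, a variation of the same one-liner can just report the offenders (an untested sketch, using the same credentials trick as above):

#!/bin/sh
# Report any psa tables whose type (column 2 of 'show table status')
# is still not InnoDB; empty output means the conversion is complete.
mysql -Ns -uadmin -p`cat /etc/psa/.psa.shadow` psa -e 'show table status' \
| awk -F '\t' -- '$2 != "InnoDB" { print $1" is still "$2 }'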

After that, I had to rerun the upgrade manually. Thanks to a couple of other forums, I ran these commands:

rpm -Uvh --justdb psa-7.5.3-rh9.build75050506.13.i586.rpm (run first; it should report that the package is already installed)
rpm -Uvh --force --nodeps psa-7.5.3-rh9.build75050506.13.i586.rpm (run second; it starts the whole upgrade process again)

(Don't forget to change the package name to suit your OS.)
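A quick sanity check before and after forcing the package in doesn't hurt; these are standard rpm queries, just adjust the filename to your build:

# What the RPM database currently thinks is installed.
rpm -q psa
# What version the downloaded package file actually contains.
rpm -qp psa-7.5.3-rh9.build75050506.13.i586.rpm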

This time it went smoothly except that httpd wouldn't start. I got this error:

(98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
no listening sockets available, shutting down

The error wouldn't go away until I rebooted the server. Now all is well. In case you didn't notice, this was all on a RH9 server. Hope it helps someone.
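If rebooting isn't convenient, you can usually find what is still holding the port first; a rough diagnostic sketch (assuming netstat from net-tools, as on RH9):

# See which process is still bound to port 443 (often a half-stopped httpd
# left over from the upgrade); stop or kill it, then start httpd again.
netstat -tlnp | grep ':443'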

Rob
 
I had the same problem, but I know why my upgrade failed. For some reason the module_cs_gs_configs, module_cs_gs_parameters, and module_cs_gs_servers tables in my psa database were listed but their .MYI files were missing, so the MySQL structure upgrade failed and the whole upgrade aborted. I dropped those three tables and used your rpm --justdb and rpm --force commands (I didn't bother with --nodeps), and the upgrade went flawlessly.
I was hoping it would recreate those tables with the correct structures, but it didn't, so if anyone could post the structures for those three tables in psa 7.5.3 I would appreciate it.
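If it helps, one way to recover just those definitions is a schema-only dump from a box that is already on 7.5.3 (or from a good nightly dump). This is only a sketch, with the table names taken from the post above:

# Export only the CREATE TABLE statements (no data) for the three missing
# tables from a working 7.5.3 psa database...
mysqldump --no-data -uadmin -p`cat /etc/psa/.psa.shadow` psa \
  module_cs_gs_configs module_cs_gs_parameters module_cs_gs_servers > cs_gs_schema.sql
# ...then replay them on the broken server.
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa < cs_gs_schema.sql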

Thank you so much for coming back and posting the solution to your problem after you found it. It saved my butt.

Stucco
 
By the way, to the Plesk programmers: I'm sorely disappointed that a failure of the MySQL structure upgrade does not back out the entire upgrade. It leaves people with serious problems, when you could simply have automated a backup before the process started.
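Until the updater does that itself, a manual snapshot right before starting the upgrade is cheap insurance; a minimal sketch using the same admin credentials as elsewhere in this thread:

# Dump every MySQL database (psa, horde, customer DBs) to one file so a
# failed structure upgrade can be rolled back by hand.
mysqldump --all-databases -uadmin -p`cat /etc/psa/.psa.shadow` > /root/pre-upgrade-mysql.sql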
 
Don't know if this will help, Stucco, but I found this in another post. PSA keeps a dump of the database in /var/lib/psa/dumps:

--------------
Now comes the fun part: you need to restore the psa database (the MySQL database where Plesk stores hosting data) to the last good dump (Plesk makes a dump of all databases every night). The files are called mysql.daily.dump.X.gz, where X is the age (in days) of the dump. Get the last known good dump (i.e. if the problems started 3 days ago, after that dump file was created, get mysql.daily.dump.3.gz); that's the one we'll use in the following example.

cd /var/lib/psa/dumps

cp mysql.daily.dump.3.gz /tmp

cd /tmp/

gunzip mysql.daily.dump.3.gz

Now we need to remove the current databases (they may have been left in an inconsistent state by the autoupdater, which for some reason best known to SW-SOFT simply borks at errors instead of backing out and restoring the system to the previous working state, but that's a rant for another day).

So, the long and the short of it is that you need to lose the horde and psa databases. Launch the mysql admin tool:

mysql -u admin -p`cat /etc/psa/.psa.shadow`

DROP DATABASE horde;

DROP DATABASE psa;

quit

(the ; is important as it signals the end of the command.)

Now we need to reload the working dump:

mysql -u admin -p`cat /etc/psa/.psa.shadow` < mysql.daily.dump.3
-----------------
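One extra check worth doing before the DROP DATABASE step: make sure the dump you picked actually contains the psa and horde databases. Assuming the standard mysqldump --all-databases format, something like this will show the database sections:

# Each database section in the nightly dump starts with a "Current Database"
# comment; confirm psa and horde both appear before dropping anything.
grep 'Current Database' mysql.daily.dump.3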

Hope this helps
Mike
 
Re: Resolution for those of you who run into it

Originally posted by rspurlock
If you didn't notice, this was on a RH9 server. Hope it helps someone.
Rob
I just wanted to say my thank you. It worked fine on SuSE 9.1. I just had to install mysql-max from YaST2 and uncomment the InnoDB stuff in /etc/my.cnf - by default there is no InnoDB support on SuSE, and that was the reason my upgrade failed.
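For anyone else hitting this on SuSE, the lines to uncomment are the innodb_* ones in /etc/my.cnf (and any skip-innodb line should stay commented out or be removed). Exact values depend on the server; the block typically looks roughly like this, with placeholder sizes:

# InnoDB settings in /etc/my.cnf (shipped commented out on SuSE 9.1);
# the sizes below are placeholders, tune them for your server.
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_buffer_pool_size = 64M
innodb_log_file_size = 16M
innodb_flush_log_at_trx_commit = 1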
 