Issue [PMT-2963] plesk-migrator fails with "Cause: 'NoneType' object is not iterable"

burnley

Regular Pleskian
Environment:
1. Both src & target were running Plesk 12.5.30 Update #39 with Plesk Migrator 1.12.3 at the time
2. OS:
- src: CentOS release 5.11 (Final) 64bit as Virtuozzo container
- target: CentOS Linux release 7.2.1511 (Core) as Xen domU

While testing a full Plesk->Plesk server migration from the CLI, I'm getting this "critical" error:
[2016-07-08 08:32:53][INFO] ******************** Summary ********************
[2016-07-08 08:32:53][INFO] Operation finished successfully for 6 out of 6 services
[2016-07-08 08:32:53][INFO] Checked objects Total Successful Warnings Failed
[2016-07-08 08:32:53][INFO] Service 6 6 0 0
[2016-07-08 08:32:53][INFO] All services are working correctly.
[2016-07-08 08:32:53][INFO] FINISH: check services on target servers
[2016-07-08 08:32:53][INFO] FINISH: Check connections
[2016-07-08 08:32:53][INFO] Check migration compatibility of source and target Plesk versions
[2016-07-08 08:32:53][INFO] Check that all required components are installed on source Plesk
[2016-07-08 08:32:53][INFO] START: Fetch basic information about resellers, clients and domains data from source servers
[2016-07-08 08:32:53][INFO] Using the existing shallow dump for 'pfu'
[2016-07-08 08:32:53][INFO] FINISH: Fetch basic information about resellers, clients and domains data from source servers
[2016-07-08 08:32:53][INFO] START: Read migration list
[2016-07-08 08:32:54][INFO] FINISH: Read migration list
[2016-07-08 08:32:54][INFO] Read IP mapping file
[2016-07-08 08:32:54][INFO] START: Fetch information from source panel
[2016-07-08 08:32:54][INFO] START: Fetch configuration data from Plesk servers
[2016-07-08 08:32:54][INFO] Using the existing dump '/usr/local/psa/var/modules/panel-migrator/sessions/migration-session/plesk.backup.pfu.raw.tar' for 'pfu'
[2016-07-08 08:32:54][INFO] FINISH: Fetch configuration data from Plesk servers
[2016-07-08 08:32:54][INFO] Fetch information about APS web applications
[2016-07-08 08:32:54][INFO] Merge information about APS web applications into backup
[2016-07-08 08:32:54][INFO] START: Fetch capability info from Plesk servers
[2016-07-08 08:32:54][INFO] Deploy migration agent to 'src.ip.ad.dr'
[2016-07-08 08:32:56][INFO] Create source capability dump.
[2016-07-08 08:33:15][ERROR] Failed to fetch capability info from Plesk servers
Cause: 'NoneType' object is not iterable
That is a critical error, migration was stopped.

In the debug log I'm seeing this:
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu|stderr: [19712]: 2016-07-08 08:33:12.861 DEBUG New connection: mysql DBI connection. db psa, user admin, host localhost
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu|[19712]: 2016-07-08 08:33:12.862 TRACE SQL:
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu| SELECT `id`, `domain_service_id`
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu| FROM `WebApps`
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu| : Params:
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu|
=|2016-07-08_08:33:12,920|D|MT|core.runners.base||pfu|exit code: 0
+|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription|||Exception:
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription|||Traceback (most recent call last):
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 119, in _run_common_action_plain
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| run()
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 110, in run
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| action.run(self._context)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/actions/fetch/fetch_backup.py", line 20, in run
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| self._fetch_dump(global_context, local_runner, source_id)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/actions/fetch/fetch_backup.py", line 50, in _fetch_dump
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| cls._create_dump(global_context.dump_agent, dump_filename, selection)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/actions/fetch/fetch_capability_info.py", line 23, in _create_d
ump
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| agent.create_capability_dump(dump_filename, selection=selection)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/pmm/agent.py", line 165, in create_capability_dump
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| self._run_capability(filename, self.capability_dump_log, selection)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/pmm_agent/unix.py", line 124, in _run_capability
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| CapabilityXMLConverter(capability_model).write_xml(local_data_filename)
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/capability_dump/xml_converter.py", line 18, in write_xml
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| capability_dump_contents = xml_to_string_pretty(self.create_xml())
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/capability_dump/xml_converter.py", line 35, in create_xml
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| ] + [
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/capability_dump/xml_converter.py", line 69, in _create_client_node
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| ] if domains else [])
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/capability_dump/xml_converter.py", line 83, in _create_domain_node
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| ] + [
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/capability_dump/model/plesk.py", line 160, in get_domain_ips
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription||| for plesk_ip_pool_item in plesk_ip_pool:
=|2016-07-08_08:33:15,204|D|MT|core.workflow.runner.by_subscription|||TypeError: 'NoneType' object is not iterable
+|2016-07-08_08:33:15,206|D|MT|core.workflow.runner.by_subscription|||Execute shutdown action 'cleanup'
[...]

It appears to be related to the IP configuration. I'm not using an IP mapping file; I'll try restarting with an IP mapping file and see what happens.
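For reference, here's a minimal reproduction of what I think is going on, judging by the last frames of the traceback (hypothetical names reconstructed from the traceback, not the actual Plesk Migrator code):

# Minimal sketch of the failing pattern at capability_dump/model/plesk.py
# line 160; a hypothetical simplification, not the real Plesk Migrator code.
def get_domain_ips(plesk_ip_pool):
    ips = []
    for plesk_ip_pool_item in plesk_ip_pool:
        ips.append(plesk_ip_pool_item)
    return ips

print(get_domain_ips(['10.0.0.1']))  # fine: ['10.0.0.1']
print(get_domain_ips(None))          # TypeError: 'NoneType' object is not iterable

So it looks like some client's IP pool comes back as None instead of an empty list, and the loop blows up; a 'plesk_ip_pool or []' guard would avoid the crash, assuming an empty pool is a legitimate state.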
 
No luck. I ran:
/usr/local/psa/admin/sbin/modules/panel-migrator/plesk-migrator check --ip-mapping-file /usr/local/psa/var/modules/panel-migrator/conf/ipmapping.conf
only to get the same error:
[2016-07-08 09:00:25][INFO] Using the existing dump '/usr/local/psa/var/modules/panel-migrator/sessions/migration-session/plesk.backup.pfu.raw.tar' for 'pfu'
[2016-07-08 09:00:25][INFO] FINISH: Fetch configuration data from Plesk servers
[2016-07-08 09:00:25][INFO] Fetch information about APS web applications
[2016-07-08 09:00:25][INFO] Merge information about APS web applications into backup
[2016-07-08 09:00:25][INFO] START: Fetch capability info from Plesk servers
[2016-07-08 09:00:25][INFO] Deploy migration agent to 'src.ip.ad.dr'
[2016-07-08 09:00:27][INFO] Create source capability dump.
[2016-07-08 09:00:49][ERROR] Failed to fetch capability info from Plesk servers
Cause: 'NoneType' object is not iterable
That is a critical error, migration was stopped.
The debug log shows the same entries, bar the timestamps.
Any ideas?
 
Update: after deciding to skip 'check', the following command fails in exactly the same way:
/usr/local/psa/admin/sbin/modules/panel-migrator/plesk-migrator import-resellers --ip-mapping-file /usr/local/psa/var/modules/panel-migrator/conf/ipmapping.conf
I believe it's a bug.
 
Hello!

I am glad to inform you that the patch for bug PMT-2963 is ready. You can install it with the following steps:

cd /usr/local/psa/
wget http://autoinstall.plesk.com/panel-migrator/patches/PMT-2963
patch -p0 < PMT-2963

Please note that this patch is applicable only to Plesk Migrator 1.12.3. The fix will also be included in future Plesk Migrator releases.

I hope it helps. I am sorry for any inconvenience.
 
Great! Thanks @Aleksey Filatev for the assistance!
It looks like forum support is sometimes much more effective than official support... :)
 
Thanks guys, always appreciate the fast response times :)
After following the Plesk support engineer's advice I got this:

mysql> select count(*) from clients where pool_id=0;
+----------+
| count(*) |
+----------+
|      160 |
+----------+
1 row in set (0.00 sec)

I then went into the Panel and checked some of the clients returned by your query and, as far as I can tell, none of the clients have active hosting. They're either:
- Clients with subscriptions with hosting type = 'DNS/URL Forwarding', or
- Clients with no subscriptions at all.
Then I applied the patch from Aleksey Filatev on both the source and the destination; it applied cleanly, and now the panel migrator crashes somewhere else. /usr/local/psa/var/modules/panel-migrator/logs/debug.log now says:

[...]
+|2016-07-11_09:35:10,026|D|MT|core.dump.dump|||Parsed info for client 'obscuredclientname'
+|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription|||Exception:
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription|||Traceback (most recent call last):
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 119, in _run_common_action_plain
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| run()
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 110, in run
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| action.run(self._context)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/actions/base/legacy_action.py", line 68, in run
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| self.function(global_context)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/shared_hosting_workflow.py", line 195, in <lambda>
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| function=lambda ctx: ctx.migrator._fetch(ctx.options)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 418, in _fetch
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| raw_dump = self.load_raw_dump(self.global_context.source_servers[server_id])
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 226, in wrapper
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| value = func(*args, **kw)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 2096, in load_raw_dump
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| discard_mailsystem=self._is_mail_centralized(source_config.source_id),
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 60, in load
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| return PleskBackupSource11(container, migration_list_data, is_expand_mode)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 319, in __init__
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| self.resellers_info, self.clients_info, self.domains_info = self._index_entities()
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 1095, in _index_entities
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| clients_info = self._index_client_files()
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 1116, in _index_client_files
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| for path, xml, node, name in self._iter_client_files():
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 359, in _iter_client_files
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| for path, xml in self._iter_xml_files(re.compile(r'^(.*/)?clients/[^/]+/[^/]+\.xml$')):
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 350, in _iter_xml_files
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| yield member_name, self.container._load_xml(member.name)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/dump/dump.py", line 172, in _load_xml
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| tree = self._cleanup_tree(ElementTree.ElementTree(file=source))
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/opt/plesk/python/2.7/lib64/python2.7/xml/etree/ElementTree.py", line 611, in __init__
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| self.parse(file)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/opt/plesk/python/2.7/lib64/python2.7/xml/etree/ElementTree.py", line 656, in parse
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| parser.feed(data)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/opt/plesk/python/2.7/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| self._raiseerror(v)
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| File "/opt/plesk/python/2.7/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription||| raise err
=|2016-07-11_09:35:10,030|D|MT|core.workflow.runner.by_subscription|||ParseError: not well-formed (invalid token): line 111, column 33
+|2016-07-11_09:35:10,062|D|MT|core.workflow.runner.by_subscription|||Execute shutdown action 'cleanup'
+|2016-07-11_09:35:10,063|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2016-07-11_09:35:10,063|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2016-07-11_09:35:10,063|D|MT|core.workflow.runner.by_subscription|||START: Uninstall migration agent files
+|2016-07-11_09:35:10,063|D|MT|core.runners.base|||Execute command on the source server 'pfu' (175.45.125.163): /bin/rm -rf /tmp/panel_migrator/pmm_agent
+|2016-07-11_09:35:10,186|D|MT|core.runners.base|||Command execution results:
=|2016-07-11_09:35:10,186|D|MT|core.runners.base|||stdout:
=|2016-07-11_09:35:10,186|D|MT|core.runners.base|||stderr:
=|2016-07-11_09:35:10,186|D|MT|core.runners.base|||exit code: 0
+|2016-07-11_09:35:10,187|D|MT|core.workflow.runner.by_subscription|||FINISH: Uninstall migration agent files
[...]

Initially I was tempted to blame it on some sort of DB inconsistency, but now I'm not sure anymore.
 
Hello!

It looks like Plesk Migrator is unable to process the dump of your source server. The most common root cause is an issue during decryption of the passwords stored in the dump, which happens when the source Plesk encryption key is broken. Each encrypted password should be replaced with plain text in the XML as a result of decryption, but if a password is decrypted incorrectly, the XML can end up broken (not well-formed).
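To illustrate what that looks like to the parser (my own minimal example, not Plesk code): a password that decrypts into garbage can contain raw control bytes, which are illegal in XML 1.0, and the same ElementTree parser that the migrator uses then fails exactly like in your debug log:

import xml.etree.ElementTree as ElementTree

# A password that decrypted into garbage bytes; \x01 is not allowed in XML 1.0.
broken = '<properties><password type="plain">p\x01ss</password></properties>'
try:
    ElementTree.fromstring(broken)
except ElementTree.ParseError as err:
    print(err)  # e.g. "not well-formed (invalid token): line 1, column 36"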

Could you please send me (privately, because this file will contain sensitive data) the dump file stored in your migration session directory (/usr/local/psa/var/modules/panel-migrator/sessions/<session_name>/plesk.backup.<source_name>.raw.tar)? Alternatively, you can attach it to support request #2058534. We can solve the problem more quickly via the forum. :)
 
Aleksey, unfortunately we can't provide the dump file. But I had a look at the source dump, and here's what I found after unpacking the tar:
# grep '<password type=' backup_info_1607121009.xml | wc -l
366
# grep '<password type=' backup_info_1607121009.xml | grep -v plain | wc -l
0
However, there's *only* one interesting password entry in the XML:
[...]
<user file-sharing-id="ba9d819c5d59fcca647ffb6c94dd257b" is-domain-admin="false" name="admin" is-built-in="true" contact="The Administrator" external-email="true" external-id="" is-legacy-user="false" email="[email protected]" guid="a61c0f3e-255c-c111-c377-4f65e8afab2d" cr-date="2011-09-18T18:59:36+11:00">
<properties>
<password type="plain"/>
<status>
<enabled/>
</status>
</properties>
[...]
This is the actual email address of the Plesk admin user. Looks like no password is set, according to Plesk. Later in the file there's another entry for the same email address:
[...]
<user file-sharing-id="217839124237bf869c4c0cb32f5cf1a2" is-domain-admin="false" name="[email protected]" is-built-in="false" contact="systems" external-email="true" subscription-name="removed" is-legacy-user="false" email="[email protected]" guid="8c213c9c-6c14-4ce1-ad17-7afcacea9071" cr-date="2013-05-09T12:10:43+10:00">
<properties>
<password type="plain">cl34rT3xtPas$w0rd</password>
<status>
<enabled/>
</status>
</properties>
<limits-and-permissions>
<role-ref>Mail User</role-ref>
</limits-and-permissions>
<preferences>
<pinfo name="email">[email protected]</pinfo>
</preferences>
</user>
[...]
Is the '<password type="plain"/>' entry above a possible cause for the issue?
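As a quick sanity check on my side (my own snippet, nothing from the migrator), the self-closing element by itself is well-formed XML, so presumably that alone wouldn't produce the parse error:

import xml.etree.ElementTree as ElementTree

# A self-closing password element parses fine; no text simply means an empty password.
el = ElementTree.fromstring('<password type="plain"/>')
print(el.get('type'), el.text)  # -> plain None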
 
Could you please validate (check the syntax of) all the XML files in the unpacked tar? You can use Notepad++ with the XML Tools plugin, or any tool you prefer. This way we can confirm that the dump contains only valid data.
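If it's easier to do this on the server itself, a small script that uses the same ElementTree parser the migrator relies on should work too (just a sketch; point root_dir at the directory where you unpacked the tar):

import os
import xml.etree.ElementTree as ElementTree

root_dir = '.'  # assumption: the directory where the dump tar was unpacked
for dirpath, _, filenames in os.walk(root_dir):
    for name in filenames:
        if name.endswith('.xml'):
            path = os.path.join(dirpath, name)
            try:
                ElementTree.parse(path)
            except ElementTree.ParseError as err:
                print('%s: %s' % (path, err))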
 
Legend, Aleksey :)
I ran 'xmllint --noout' on all the .xml files in the source archive and ended up with *only* 73 backup_info_*.xml files flagged as not well-formed, what the !!1!1!!11!!
Now I need to understand why this happened. If the Plesk decryption key was broken, why were some passwords decrypted correctly?
 
Your source Plesk was probably upgraded at some point, or something bad happened in the past, so you now have two types of passwords: those encrypted with the current key (which decrypt fine) and those encrypted with a lost key (which cannot be decrypted properly). It is also possible for a password encrypted with one key to decrypt into well-formed text with another key, but the result will not be the same text that was originally encrypted. In that case the migration will go through, but customers will nonetheless be unable to log in to their services on the target using their passwords.

Unfortunately, there is only one possible workaround for this issue right now: change the broken passwords on the source manually, and then perform the migration.
 
At least now we understand what the problem is. No wonder we've seen such a large number of clients with corrupted backup XML files: the source server started at version 9.2 and was periodically upgraded over the years :)
Thanks again.
 