
[SOLVED] Migration Fails to move databases with error

KirkM

Regular Pleskian
I have searched everywhere for an answer to this and finally have to resort to posting a new thread.

Using Migration Manager to move individual sites from source server:

CentOS 6.7 (Final)
Plesk version 12.0.18 Update #70

To a new destination server:

CentOS Linux 7.1.1503 (Core)
Plesk version 12.5.30 Update #9

I fixed all warnings and got the pre-check to come up without any issues.
The server connects without issue and shows my choices of sites to migrate.
I choose one site and check all the information including mail and database info.

No matter what I try, it always moves the site fine but not the databases, and shows a warning at completion.

The info log shows this:
[ERROR] Failed to perform action: Set security policy on Windows nodes
Cause: expected string or buffer
That is a critical error, migration was stopped.

The debug log shows this:
=|2015-11-06_20:54:58,222|D|MT|parallels||| raise MigrationError(full_failure_message)
=|2015-11-06_20:54:58,222|D|MT|parallels|||MigrationError: Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,222|D|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,222|D|MT|parallels|||That is a critical error, migration was stopped.
+|2015-11-06_20:54:58,223|E|MT|parallels|||Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,223|E|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,223|E|MT|parallels|||That is a critical error, migration was stopped.

I have the latest version of Migration Manager installed (released today), so that is not the problem unless the update that came out today is broken.

Sorry if this is a stupid question, but why is it talking about security policy on Windows nodes when both source and destination servers are Linux?

I have checked and opened everything in the firewall and just can't get this to work. Any help pointing me in the right direction would be appreciated.
 
The issue seems to be with the hardware node.
Can you please change the subscriptions to a new service plan with everything unlimited and try the migration again?
 
Thanks for your suggestion.

Both source and destination were completely unlimited to begin with, but I created new unlimited plans on both, and it still failed with exactly the same errors.
 
Just noticed another component update for the Migration Manager. I updated and... it still doesn't work. I don't know what they are fixing with all these daily updates, but it isn't this problem.
 
Another odd and maybe irrelevant item is that I get 2 notification emails for each domain that is migrated (or sort of migrated).
 
Could you please look for Python stack traces around "expected string or buffer" in the debug.log file? There should be some errors a bit before these lines you provided:
=|2015-11-06_20:54:58,222|D|MT|parallels||| raise MigrationError(full_failure_message)
?

Which kind of notification e-mails do you get - what is the subject/body? Do you receive them from the source or the target server?
 
I want to mention that the target is a clean install on a brand new server:
Plesk 12.5.30 on CentOS 7

Here is the entire last section of the debug.log file. I don't see any additional E codes in there:
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||START: Stop remote Windows agents
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||FINISH: Stop remote Windows agents
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:54:58,222|D|MT|parallels|||Context:
=|2015-11-06_20:54:58,222|D|MT|parallels|||Traceback (most recent call last):
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/cli/common_cli.py", line 50, in run
=|2015-11-06_20:54:58,222|D|MT|parallels||| options.method(migrator.action_runner)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/cli/migration_cli.py", line 298, in <lambda>
=|2015-11-06_20:54:58,222|D|MT|parallels||| lambda runner: runner.run_entry_point('transfer-accounts'),
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/base.py", line 35, in run_entry_point
=|2015-11-06_20:54:58,222|D|MT|parallels||| self.run(entry_point)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 61, in run
=|2015-11-06_20:54:58,222|D|MT|parallels||| self._run_common_actions_tree(actions_tree)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 93, in _run_common_actions_tree
=|2015-11-06_20:54:58,222|D|MT|parallels||| self._run_common_action_plain(action)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 128, in _run_common_action_plain
=|2015-11-06_20:54:58,222|D|MT|parallels||| raise MigrationError(full_failure_message)
=|2015-11-06_20:54:58,222|D|MT|parallels|||MigrationError: Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,222|D|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,222|D|MT|parallels|||That is a critical error, migration was stopped.
+|2015-11-06_20:54:58,223|E|MT|parallels|||Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,223|E|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,223|E|MT|parallels|||That is a critical error, migration was stopped.
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:55:22,509|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:55:25,951|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:55:25,951|D|MT|core.workflow.runner.by_subscription|||Exit common action block


Duplicate emails come from the target server, as if everything were OK. The domain is created on the target; it's just that no databases and no aliases come across. I don't think email comes across either:
Subject:
<secure.xxxserver.com> Notification of the site creation.

Body:
A new domain name has been created.
Domain name: xxdomain.com
Domain name owner: John Smith
IP address associated with the domain name: xx.xxx.xxx.xx

No email from source.
 
There should be another stack trace a bit before. Could you please look for all stack traces in debug.log starting with "Traceback" and post them here?
 
Thanks for your help.

Here is everything from the first stack trace I could find (I have to post 2 replies; it won't allow so much in one post):
+|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription|||Exception:
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription|||Traceback (most recent call last):
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 115, in _run_common_action_plain
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| run()
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 108, in run
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| action.run(self._context)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/actions/base/legacy_action.py", line 68, in run
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| self.function(global_context)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/shared_hosting_workflow.py", line 674, in <lambda>
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| function=lambda ctx: ctx.migrator.set_security_policy()
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 906, in set_security_policy
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| for server in self._get_nodes_to_set_security_policy():
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 931, in _get_nodes_to_set_security_policy
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return set(self._get_subscription_windows_web_nodes()) | set(self._get_subscription_windows_mssql_nodes())
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 2119, in _get_subscription_windows_web_nodes
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| subscription_web_node = self._get_subscription_nodes(subscription.name).web
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 179, in wrapper
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| value = func(*args, **kw)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/migrator.py", line 305, in _get_subscription_nodes
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return target_panel.get_subscription_nodes(self.global_context, subscription_name)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 179, in wrapper
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| value = func(*args, **kw)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/panel.py", line 99, in get_subscription_nodes
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| subscription_target_services = self.get_subscription_target_services(global_context, subscription_name)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 179, in wrapper
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| value = func(*args, **kw)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/panel.py", line 143, in get_subscription_target_services
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| subscription_info_response = request_single_optional_item(plesk_api, request)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/plesk_api_utils.py", line 220, in request_single_optional_item
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| response = plesk_api.send(request)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/client.py", line 32, in send
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return self.send_many(operation, **request_settings)[0]
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/client.py", line 99, in send_many
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| operation_responses.append(operation.parse(operation_response))
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/operator/subscription.py", line 161, in parse
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| results = [core.Result.parse(r, cls._parse_data) for r in elem.findall('result')]
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/core.py", line 35, in parse
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return Success(parse_data(elem))
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/operator/subscription.py", line 175, in _parse_data
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| mail = if_not_none(data.find('mail'), cls._parse_mail)
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 20, in if_not_none
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return func(value) if value is not None else None
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/operator/subscription.py", line 221, in _parse_mail
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| ip_addresses = _parse_plesk_ips([ip_node.text for ip_node in elem.findall('ip_address')])
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/operator/subscription.py", line 28, in _parse_plesk_ips
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| if is_ipv4(ip):
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/ip.py", line 57, in is_ipv4
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| return pattern.match(ip_address) is not None
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription|||TypeError: expected string or buffer
+|2015-11-06_20:54:55,499|D|MT|core.workflow.runner.by_subscription|||Execute shutdown action 'restore-apache-restart-interval'
+|2015-11-06_20:54:55,499|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:54:55,499|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:55,499|I|MT|core.workflow.runner.by_subscription|||Restore Apache restart interval
+|2015-11-06_20:54:55,500|D|MT|plesk.actions.apache_restart_interval.restore|||Restore old Apache restart interval value
+|2015-11-06_20:54:55,500|D|MT|core|||Call local command: /bin/sh -c '/usr/local/psa/bin/server_pref -u -restart-apache 60'
+|2015-11-06_20:54:55,843|D|MT|core|||Command stdout: SUCCESS: Server preferences are successfully updated
=|2015-11-06_20:54:55,843|D|MT|core|||
+|2015-11-06_20:54:55,844|D|MT|core|||Command stderr:
+|2015-11-06_20:54:55,844|D|MT|core|||Command exit code: 0
+|2015-11-06_20:54:55,844|I|MT|plesk.actions.apache_restart_interval.restore|||Force Apache restart
+|2015-11-06_20:54:55,844|D|MT|core|||Call local command: /bin/sh -c '/usr/local/psa/admin/bin/websrvmng -r'
+|2015-11-06_20:54:58,204|D|MT|core|||Command stdout:
+|2015-11-06_20:54:58,204|D|MT|core|||Command stderr:
+|2015-11-06_20:54:58,204|D|MT|core|||Command exit code: 0
 
Next half:

+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||Execute shutdown action 'cleanup'
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||START: Uninstall migration agent files
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||FINISH: Uninstall migration agent files
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,205|D|MT|core.workflow.runner.by_subscription|||START: Remove temporary SSH keys
+|2015-11-06_20:54:58,206|D|MT|core.workflow.runner.by_subscription|||FINISH: Remove temporary SSH keys
+|2015-11-06_20:54:58,206|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,206|D|MT|core.workflow.runner.by_subscription|||START: Close SSH connections
+|2015-11-06_20:54:58,206|D|MT|core.connections.ssh.connection_pool|||Closing SSH connection to the source server 'source' (xx.xxx.xxx.xx)
+|2015-11-06_20:54:58,206|D|MT|core.connections.ssh.server_connection.lazy_open|||Close SSH connection to the the source server 'source' (xx.xxx.xxx.xx)
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||FINISH: Close SSH connections
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||START: Shutdown Windows rsync servers
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||FINISH: Shutdown Windows rsync servers
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||START: Stop remote Windows agents
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||FINISH: Stop remote Windows agents
+|2015-11-06_20:54:58,222|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:54:58,222|D|MT|parallels|||Context:
=|2015-11-06_20:54:58,222|D|MT|parallels|||Traceback (most recent call last):
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/cli/common_cli.py", line 50, in run
=|2015-11-06_20:54:58,222|D|MT|parallels||| options.method(migrator.action_runner)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/cli/migration_cli.py", line 298, in <lambda>
=|2015-11-06_20:54:58,222|D|MT|parallels||| lambda runner: runner.run_entry_point('transfer-accounts'),
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/base.py", line 35, in run_entry_point
=|2015-11-06_20:54:58,222|D|MT|parallels||| self.run(entry_point)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 61, in run
=|2015-11-06_20:54:58,222|D|MT|parallels||| self._run_common_actions_tree(actions_tree)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 93, in _run_common_actions_tree
=|2015-11-06_20:54:58,222|D|MT|parallels||| self._run_common_action_plain(action)
=|2015-11-06_20:54:58,222|D|MT|parallels||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 128, in _run_common_action_plain
=|2015-11-06_20:54:58,222|D|MT|parallels||| raise MigrationError(full_failure_message)
=|2015-11-06_20:54:58,222|D|MT|parallels|||MigrationError: Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,222|D|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,222|D|MT|parallels|||That is a critical error, migration was stopped.
+|2015-11-06_20:54:58,223|E|MT|parallels|||Failed to perform action: Set security policy on Windows nodes
=|2015-11-06_20:54:58,223|E|MT|parallels|||Cause: expected string or buffer
=|2015-11-06_20:54:58,223|E|MT|parallels|||That is a critical error, migration was stopped.
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:54:58,428|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:55:22,508|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:55:22,509|D|MT|core.workflow.runner.by_subscription|||Exit common action block
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||Enter common action block
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||Checking whether it is required to execute action
+|2015-11-06_20:55:25,950|D|MT|core.workflow.runner.by_subscription|||START: Get operation progress in current session
+|2015-11-06_20:55:25,951|D|MT|core.workflow.runner.by_subscription|||FINISH: Get operation progress in current session
+|2015-11-06_20:55:25,951|D|MT|core.workflow.runner.by_subscription|||Exit common action block
 
The root cause is somewhere in here:
=|2015-11-06_20:54:55,497|D|MT|core.workflow.runner.by_subscription||| File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/utils/xml_rpc/plesk/operator/subscription.py", line 28, in _parse_plesk_ips

There should be a Plesk API request and a Plesk API response (two XML documents; the request should contain "<webspace><get>" tags) right in the debug log. It seems that the response does not contain some IP address that Plesk Migrator expects. Probably some inconsistency in Plesk. Could you please try to find this request/response in the debug log?
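To illustrate the failure mode (a minimal sketch, not the actual migrator code): the error message "expected string or buffer" is what Python 2, which the migrator backend runs on, raises when re.match() is handed None instead of a string. That is what happens when an <ip_address> element is missing or empty, because ElementTree then yields .text == None for it:
Code:
# Minimal reproduction of the error (Python 2; on Python 3 the message
# reads "expected string or bytes-like object" instead). The regex is a
# simplified stand-in, not the migrator's actual pattern.
import re

_IPV4 = re.compile(r'^\d{1,3}(\.\d{1,3}){3}$')

def is_ipv4(ip_address):
    return _IPV4.match(ip_address) is not None

print(is_ipv4('10.0.0.1'))  # True
print(is_ipv4(None))        # TypeError: expected string or buffer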
 
Anyway, first I'd try to remove that subscription from the target Plesk, start the migration for it once more, and see if the issue persists.
 
I tried that and even re-imaged the entire server, and it still doesn't work. I have tried it with many domains that are on my current production server (source), and it is always the same.

I looked at the XML for the webspace and there are a few entries. Strangely, the ones for the domain itself return error 1013 - domain does not exist (but it does exist, and it gets moved to the target), while the ones for that domain's mail return no error and report that everything is fine.

+|2015-11-06_20:54:42,936|D|MT|core.utils.common.http_xml_client|||API request to https://xx.xxx.xxx.xx:8443/enterprise/control/agent.php:
=|2015-11-06_20:54:42,936|D|MT|core.utils.common.http_xml_client|||<?xml version='1.0' encoding='utf-8'?>
=|2015-11-06_20:54:42,936|D|MT|core.utils.common.http_xml_client|||<packet version="1.6.6.0"><webspace><get><filter><name>xxdomain.com</name></filter><dataset><gen_info /></dataset></get></webspace></packet>
+|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client|||API response from https://xx.xxx.xxx.xx:8443/enterprise/control/agent.php:
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client|||<?xml version='1.0' encoding='utf-8'?>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client|||<packet version="1.6.6.0">
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <webspace>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <get>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <result>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <status>error</status>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <errcode>1013</errcode>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <errtext>domain does not exist</errtext>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| <filter-id>xxdomain.com</filter-id>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| </result>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| </get>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client||| </webspace>
=|2015-11-06_20:54:43,009|D|MT|core.utils.common.http_xml_client|||</packet>
+|2015-11-06_20:54:43,039|I|MT|core.workflow.runner.by_subscription|||FINISH: Fetch information from target servers
+|2015-11-06_20:54:55,339|D|MT|core.utils.common.http_xml_client|||API request to https://xx.xxx.xxx.xx:8443/enterprise/control/agent.php:
=|2015-11-06_20:54:55,339|D|MT|core.utils.common.http_xml_client|||<?xml version='1.0' encoding='utf-8'?>
=|2015-11-06_20:54:55,339|D|MT|core.utils.common.http_xml_client|||<packet version="1.6.6.0"><webspace><get><filter><name>xxdomain.com</name></filter><dataset><gen_info /><hosting /><mail /></dataset></get></webspace></packet>
+|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client|||API response from https://xx.xxx.xxx.xx:8443/enterprise/control/agent.php:
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client|||<?xml version='1.0' encoding='utf-8'?>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client|||<packet version="1.6.6.0">
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <webspace>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <get>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <result>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <status>ok</status>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <filter-id>xxdomain.com</filter-id>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <id>10</id>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <data>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <gen_info>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client|||...
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <ip_address>xx.xxx.xxx.xx</ip_address>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <mail-provider>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| <local />
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </mail-provider>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </mail>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </data>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </result>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </get>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client||| </webspace>
=|2015-11-06_20:54:55,495|D|MT|core.utils.common.http_xml_client|||</packet>
 
The 1013 error is OK - that is how Plesk Migrator checks for existing domains on the target: if 1013 is returned, the domain does not exist yet and should therefore be created.
The 2nd one is what we need. I will check tomorrow what could be wrong with it and let you know.
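For what it's worth, the check behaves roughly like this (a hypothetical sketch of the logic, not the actual Plesk Migrator source):
Code:
# Hypothetical sketch: errcode 1013 from <webspace><get> simply means
# "not found on the target", so the migrator knows it must create the
# domain rather than treat the response as a failure.
import xml.etree.ElementTree as ET

def domain_exists(response_xml):
    result = ET.fromstring(response_xml).find('webspace/get/result')
    if result.findtext('status') == 'ok':
        return True
    if result.findtext('errcode') == '1013':  # "domain does not exist"
        return False
    raise RuntimeError('unexpected error: %s' % result.findtext('errtext'))
Fed the 1013 response quoted earlier in the thread, this returns False, which is why that error is harmless here.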
 
According to the stack trace, there should be the following in the response to some of the <webspace><get> requests:
Code:
<mail>
    ...
    <ip_address/>
    ...
</mail>
or
Code:
<mail>
   ...
   <ip_address></ip_address>
   ...
</mail>
(IP address is absent or empty)

Could you check if that is true, and if so, for which domain?
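For reference, a small illustrative sketch (assuming the standard ElementTree parsing that the stack trace shows the migrator using): both variants above - the self-closing tag and the empty tag - yield .text == None, and that None is exactly what would reach is_ipv4() and trigger the TypeError:
Code:
# Both an absent-value and an empty <ip_address> element give
# .text == None with ElementTree, matching the crash in is_ipv4().
import xml.etree.ElementTree as ET

for snippet in ('<mail><ip_address/></mail>',
                '<mail><ip_address></ip_address></mail>'):
    node = ET.fromstring(snippet).find('ip_address')
    print(repr(node.text))  # prints None in both cases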

Also, it could be helpful to check the results of the following query:
Code:
select
    domains.name,
    IpAddressesCollections.ipCollectionId as collection_id,
    IpAddressesCollections.ipAddressId as ip_address_id,
    IP_Addresses.ip_address,
    IP_Addresses.public_ip_address
from Subscriptions
join domains
    on object_type = 'domain' and Subscriptions.object_id = domains.id
left join DomainServices
    on domains.id = dom_id and type = 'mail'
left join IpAddressesCollections
    on IpAddressesCollections.ipCollectionId = DomainServices.ipCollectionId
left join IP_Addresses
    on IP_Addresses.id = IpAddressesCollections.ipAddressId;
with the plesk db command on the target server.

I would also check if the mail service works correctly for new domains on that server, e.g. that you can create new mailboxes and they are able to send and receive mail.
 
The issue seems to be quite complex; to speed up resolution it makes sense to either:
1) Provide access to your target and source servers to http://talk.plesk.com/members/aleksey-filatev.174043/ from the Odin team via a private conversation (click the link, then the "Information" tab, then "Start a Conversation"). We will then investigate the issue directly on the server.
or
2) Create a support ticket, and let me know the ID.
 
First off, let me summarize what is and isn't happening.

What IS happening:
The domain, the IP assignment (no matter whether shared or dedicated) and the DNS records are all migrated successfully. The DNS is correctly reformatted to the target server's DNS Template; it even revises the SPF record correctly.

What ISN'T happening:
Everything else. No site content, no aliases, no mail accounts and no databases. The whole process pretty much stops after the domain is retrieved and the IP and DNS are assigned. NOTHING happens after that.


According to the stack trace, there should be the following in the response to some of the <webspace><get> requests:
Code:
<mail>
...
<ip_address/>
...
</mail>
or
Code:
<mail>
...
<ip_address></ip_address>
...
</mail>
(IP address is absent or empty)

There are no <ip_address> XML tags at all in the entire debug log.
However, I did notice something I thought was strange. Look at this XML:
Code:
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||<?xml version='1.0' encoding='utf-8'?>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||<packet version="1.6.6.0">
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||  <webspace>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||    <get>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||      <result>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||        <status>ok</status>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||        <filter-id>xxxdomain.com</filter-id>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||        <id>4</id>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||        <data>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||          <gen_info>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||...
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||            <outgoing-messages-mbox-limit>default</outgoing-messages-mbox-limit>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||            <outgoing-messages-domain-limit>default</outgoing-messages-domain-limit>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||            <outgoing-messages-subscription-limit>default</outgoing-messages-subscription-limit>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||            <outgoing-messages-enable-sendmail>default</outgoing-messages-enable-sendmail>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||          </mail>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||        </data>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||      </result>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||    </get>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||  </webspace>
=|2015-11-10_10:08:12,758|D|MT|core.utils.common.http_xml_client|||</packet>
There is a </mail> closing tag but no <mail> opening tag. Shouldn't there be a <mail> opening tag?

Also, it could be helpful to check the results of the following query:
Code:
select
    domains.name,
    IpAddressesCollections.ipCollectionId as collection_id,
    IpAddressesCollections.ipAddressId as ip_address_id,
    IP_Addresses.ip_address,
    IP_Addresses.public_ip_address
from Subscriptions
join domains
    on object_type = 'domain' and Subscriptions.object_id = domains.id
left join DomainServices
    on domains.id = dom_id and type = 'mail'
left join IpAddressesCollections
    on IpAddressesCollections.ipCollectionId = DomainServices.ipCollectionId
left join IP_Addresses
    on IP_Addresses.id = IpAddressesCollections.ipAddressId;
with the plesk db command on the target server.
This only shows the domain I entered manually to set up my own nameservers.
I would also check if the mail service works correctly for new domains on that server, e.g. that you can create new mailboxes and they are able to send and receive mail.
Mail (and everything else) works fine for domains that are set up from scratch. This is an issue with the Migrator. I even tried a simple backup from the source server and restore on the target server to move some domains; although that does move the databases and some mailboxes (it doesn't move all of them, for some reason), it breaks the ability to delete some mailboxes and has a few other problems with aliases.

To be honest, I have had so many issues with a very straightforward clean install of Plesk 12.5 on a brand new server that I am starting to feel that version 12.5 isn't ready for production. I have spent 2 weeks fixing issues that many other people are reporting all over the Odin forums. It has had so many bugs for me to solve that I don't trust it to function correctly when I take it to production. Perhaps it should still be in beta. I may re-image the server and use 12.0.18 until 12.5 is truly finished.
 
@KirkM,

Without commenting on all the detailed posts, let's return to the start, where you stated:

CentOS 6.7 (Final)
Plesk version 12.0.18 Update #70

To a new destination server:

CentOS Linux 7.1.1503 (Core)
Plesk version 12.5.30 Update #9

In fact, the issue with the migration manager arises from migrating "12.0.18 data" to "12.5.30 data".

I agree that this (cross-version) migration process should be possible without any issues, but it simply isn't possible, at least for the time being.

The current impossibility is independent of the version of the migration manager.

In the last response, you stated:

To be honest, I have had so many issues with a very straightforward clean install of Plesk 12.5 on a brand new server that I am starting to feel that version 12.5 isn't ready for production. I have spent 2 weeks fixing issues that many other people are reporting all over the Odin forums. It has had so many bugs for me to solve that I don't trust it to function correctly when I take it to production. Perhaps it should still be in beta. I may re-image the server and use 12.0.18 until 12.5 is truly finished.

These conclusions are not necessary.

True, some bugs were present in the last couple of weeks, but almost all of them were patched properly.

Any current "clean" installation of Plesk 12.5.30 would result in a proper installation, with the exception of some bugs that are inherent to or persistent in the third-party packages that Odin uses or provides with Plesk (for example, a well-known spamassassin bug, that already exists for many years in the vendor packages).

In short, any clean installation of Plesk 12.5.30 will do, and most of the issues are not related to the Plesk packages themselves.

Sure, in my humble opinion the Odin Team should also provide patches for the common and well-known bugs in third-party/vendor packages, but that is only a personal opinion.

Anyway, going back to Plesk 12.0.18 is like "inventing a square wheel", considering the following advantages of Plesk 12.5.30:

a) it is more efficient, more reliable (yes, indeed!) and it certainly has better performance,

b) it is more up-to-date (Plesk 12.0.18 certainly was not),

c) it is more flexible, with respect to the various methods of setting up hosting environments,

d) it is more user-friendly, in many ways,

and the above is certainly not a complete summary.

After all, Plesk 12.0.18 seems to be more stable, but in fact it consists of a core that required 70 (!) micro-updates to become stable to a high degree.

Plesk 12.5.30 is just a new core, one that also provides a "clean sheet" for further development of the reliability and stability of the Plesk Panel and its components.

In conclusion, have some faith and patience.

Regards....
 
I would agree with many of your points about the progress of version 12.5 as far as new functionality and features go; that is why I was trying to move to it. But if you can't migrate your clients from your current production server to the new version, then those new features and improvements don't matter.

And this is migrating from the SAME panel and only ONE version behind.

I can't do an entire-server backup and restore because I have to migrate in small groups: the eCommerce stores need to be closed during the migration, and on the sites where I have active data management systems in place, my licensees have to have their admin access taken offline. This all has to be coordinated in small bits.

Since the restore function on 12.5 also doesn't work entirely correctly coming from 12.0.18, that leaves rebuilding all sites from scratch and then manually FTPing content files, moving huge mailboxes, and doing SQL dumps and reconstruction. That isn't practical. So, in my situation, and I am sure many others', not being able to migrate from 12.0.18 to 12.5 is a deal breaker.
In conclusion, have some faith and patience.
Exactly my previous point. I will have to have patience and go back to 12.0.18 until 12.5 is truly ready for production. I can't stop my business and I (literally) can't move it to 12.5.
 