Issue - Migration problem

La Linea

Basic Pleskian
Server operating system version
CentOS 7.9 and either Alma Linux 8.x/9.x or Rocky Linux 8.x/9.x
Plesk version and microupdate number
18.0.62
Hello @all!

When we try to migrate a domain from a CentOS 7.9 server with Plesk Obsidian 18.0.62 to a freshly installed Alma Linux or Rocky Linux server with the same Plesk version, no matter whether the target runs an 8.x or a 9.x release, we either get a ...

"Failed to select free name for session directory"

... error in the "Preparing migration" status overlay in the GUI (when using SSH keys) or a ...

"Failed to check SSH connection to the source server 'source' (xxx.xxx.xxx.xxx): Unable to connect to 'xxx.xxx.xxx.xxx' by SSH: Authentication failed..
Ensure that the server is up and there are no firewall rules that may block SSH connections to the server,
then restart migration.
"

... (when attempting to use username/password) shortly after clicking "Prepare migration".

The password used is correct, password login is allowed, and port 22 is open between all our servers.

Any ideas what might be going wrong here or how to get the migration to work (preferably by using SSH keys)?
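
For what it's worth, one way to rule out plain SSH problems independently of the migrator is to run the same kind of check from the target server by hand. A minimal sketch, assuming Python 3 and the stock OpenSSH client on the target (the IP and key path are placeholders):

Code:
#!/usr/bin/env python3
"""Quick sanity check: can the target server reach the source via SSH as root?"""
import subprocess

SOURCE = "xxx.xxx.xxx.xxx"   # source server IP (placeholder)
KEY = "/root/.ssh/id_rsa"    # key offered to the source (assumption)

# Run a harmless command on the source. BatchMode makes key problems fail fast
# instead of falling back to an interactive password prompt.
result = subprocess.run(
    ["ssh", "-i", KEY, "-o", "BatchMode=yes", "-o", "ConnectTimeout=10",
     f"root@{SOURCE}", "cat /usr/local/psa/version"],
    capture_output=True, text=True,
)

print("exit code:", result.returncode)
print("stdout:", result.stdout.strip())
print("stderr:", result.stderr.strip())

If this prints the Plesk version with exit code 0, basic connectivity and key authentication are fine, and the problem is more likely somewhere inside the migrator itself.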
 
Yes, PermitRootLogin is set to yes on the source server.

We even switched it to "without-password" temporarily to force a failure, then back to "yes".

Interestingly, we now get the "Failed to select free name for session directory" error in both cases. Hmmm...
 
Hello, please take a look into the logs under /usr/local/psa/var/modules/panel-migrator/sessions/*.
For instance, I once found a similar problem that turned out to be related to "Shell access is not enabled on your account!".
Maybe that is the case for you as well.
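
If you want to verify that quickly, a small check along these lines shows the login shell of the root account on the source server (a rough sketch; the IP is a placeholder and it assumes SSH to the source already works):

Code:
#!/usr/bin/env python3
"""Check the login shell of the root account on the source server."""
import subprocess

SOURCE = "xxx.xxx.xxx.xxx"  # source server IP (placeholder)

# 'getent passwd root' prints the passwd entry; the last field is the shell.
entry = subprocess.run(
    ["ssh", f"root@{SOURCE}", "getent passwd root"],
    capture_output=True, text=True, check=True,
).stdout.strip()

shell = entry.split(":")[-1]
print("root's shell on the source:", shell)

# Anything like /sbin/nologin or /bin/false would match the
# "Shell access is not enabled on your account!" failure mode.
if shell in ("/sbin/nologin", "/usr/sbin/nologin", "/bin/false"):
    print("-> shell access appears to be disabled for this account")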
 
These are fresh logs, since we had already "finished" all the non-working migrations. In the debug.log everything seems to be fine until ...

Code:
core.connections.plesk_server|||Unix product root directory on the source server 'source' (xxx.xxx.xxx.xxx): /usr/local/psa
core.runners.unix.ssh|||Get contents of file '/usr/local/psa/version' on the source server 'source' (xxx.xxx.xxx.xxx) with 'cat' utility
core.workflow.runner.by_subscription|||FINISH: Check connections
core.workflow.runner.by_subscription|||Check migration compatibility of source and target Plesk versions
core.workflow.runner.by_subscription|||Check that all required components are installed on source Plesk
core.workflow.runner.by_subscription|||START: Fetch basic information about resellers, clients and domains data from source servers
plesk.source.legacy.pmm_agent||source|Deploy migration agent to 'xxx.xxx.xxx.xxx'
core.runners.base||source|Execute command on the source server 'source' (xxx.xxx.xxx.xxx): test -e /root/plesk_migrator
core.runners.base||source|Command execution results:
core.runners.base||source|stdout: ESC[0;92m/root/plesk_migrator is NOT in set -e.ESC[0m
core.runners.base||source|
core.runners.base||source|stderr:
core.runners.base||source|exit code: 0
core.runners.base||source|Execute command on the source server 'source' (xxx.xxx.xxx.xxx): test -e /root/plesk_migrator/plesk_migrator-2tx1b82bgmwfk8o5znmzra3yj0p33qvz
core.runners.base||source|Command execution results:
core.runners.base||source|stdout: ESC[0;92m/root/plesk_migrator/plesk_migrator-2tx1b82bgmwfk8o5znmzra3yj0p33qvz is NOT in set -e.ESC[0m
core.runners.base||source|
core.runners.base||source|stderr:
core.runners.base||source|exit code: 0

The last segment with "test -e" repeats 9 (!) more times with different unique names. After that, the log ends as follows:

Code:
core.workflow.runner.by_subscription|||Exception:
core.workflow.runner.by_subscription|||Traceback (most recent call last):
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 167, in run_multi_attempts
core.workflow.runner.by_subscription|||    run()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/workflow/runner/by_subscription.py", line 156, in run
core.workflow.runner.by_subscription|||    action.run(self._context)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/actions/fetch/fetch_shallow_backup.py", line 40, in run
core.workflow.runner.by_subscription|||    dump_agent = create_pmm_agent(global_context, source_server)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/common/__init__.py", line 257, in wrapper
core.workflow.runner.by_subscription|||    value = func(*args, **kw)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/pmm_agent/utils.py", line 32, in create_pmm_agent
core.workflow.runner.by_subscription|||    PleskPMMConfig(source_server.node_settings)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/pmm_agent/unix.py", line 41, in __init__
core.workflow.runner.by_subscription|||    super(PleskXPmmMigrationAgent, self).__init__(global_context, server, migrator_pmm_dir, settings)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/legacy/pmm_agent.py", line 33, in __init__
core.workflow.runner.by_subscription|||    super(UnixPmmMigrationAgent, self).__init__(global_context, server)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/pmm/agent.py", line 148, in __init__
core.workflow.runner.by_subscription|||    self.agent_dir = self._deploy()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/plesk/pmm_agent/unix.py", line 111, in _deploy
core.workflow.runner.by_subscription|||    agent_dir = super(PleskXPmmMigrationAgent, self)._deploy()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/legacy/pmm_agent.py", line 148, in _deploy
core.workflow.runner.by_subscription|||    self._cleanup_dir()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/legacy/pmm_agent.py", line 159, in _cleanup_dir
core.workflow.runner.by_subscription|||    runner.remove_directory(self._agent_dir)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/plesk/source/legacy/pmm_agent.py", line 213, in _agent_dir
core.workflow.runner.by_subscription|||    return self._source_server.get_session_file_path('pmm_agent')
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/connections/server.py", line 67, in get_session_file_path
core.workflow.runner.by_subscription|||    return self._session_dir.get_file_path(filename)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/session_dir.py", line 40, in get_file_path
core.workflow.runner.by_subscription|||    self._lazy_instantiate()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/session_dir.py", line 65, in _lazy_instantiate
core.workflow.runner.by_subscription|||    self._session_dir = self._create()
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/session_dir.py", line 247, in _create
core.workflow.runner.by_subscription|||    return self._create_secure_subdir(self._base_session_dir)
core.workflow.runner.by_subscription|||  File "/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/session_dir.py", line 275, in _create_secure_subdir
core.workflow.runner.by_subscription|||    raise MigrationNoContextError(messages.FAILED_TO_SELECT_SESSION_DIRECTORY_NAME)
core.workflow.runner.by_subscription|||MigrationNoContextError: Failed to select free name for session directory
core.workflow.runner.by_subscription|||START: Uninstall migration agent files
core.workflow.runner.by_subscription|||FINISH: Uninstall migration agent files
core.workflow.runner.by_subscription|||START: Remove temporary users and SSH keys that were required to transfer files
core.workflow.runner.by_subscription|||FINISH: Remove temporary users and SSH keys that were required to transfer files
core.workflow.runner.by_subscription|||START: Remove temporary SSH keys
core.workflow.runner.by_subscription|||FINISH: Remove temporary SSH keys
core.workflow.runner.by_subscription|||START: Close SSH connections
core.connections.ssh.connection_pool|||Closing SSH connection to the source server 'source' (xxx.xxx.xxx.xxx)
core.connections.ssh.lazy_open_ssh_connection|||Close SSH connection to the the source server 'source' (xxx.xxx.xxx.xxx)
core.workflow.runner.by_subscription|||FINISH: Close SSH connections
core.workflow.runner.by_subscription|||START: Shutdown Windows rsync servers
core.workflow.runner.by_subscription|||FINISH: Shutdown Windows rsync servers
core.workflow.runner.by_subscription|||START: Stop remote Windows agents
core.workflow.runner.by_subscription|||FINISH: Stop remote Windows agents
core.workflow.runner.base|||MIGRATOR END: /usr/local/psa/admin/sbin/modules//panel-migrator/plesk-migrator generate-migration-list /usr/local/psa/var/modules/panel-migrator/sessions/20240730093944/config.ini --migration-list-format=json --migration-list-file=/usr/local/psa/var/modules/panel-migrator/sessions/20240730093944/migration-list-raw.json --skip-services-checks --include-existing-subscriptions --overwrite --reload-source-data
core.cli.common_cli|||Failed to select free name for session directory
 
Any ideas why the migrator tool doesn't use any of the successfully tested unique folder names and aborts instead?

Or what we can do to solve this issue?


Would it perhaps be sufficient to change the line

def __init__(self, runner_cm, session_dir, secure_subdir=True):
to
def __init__(self, runner_cm, session_dir, secure_subdir=False):

under the

class UnixSessionDir(SessionDir):

within

/usr/local/psa/admin/plib/modules/panel-migrator/backend/lib/python/parallels/core/utils/session_dir.py

to force the system to use one of the tested folder names and continue?
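
For context, the traceback ends in _create_secure_subdir() in session_dir.py, and the repeated "test -e" probes in the log look like candidate session directory names being checked for existence on the source. A heavily simplified, hypothetical sketch of what such a selection loop typically does (this is not the actual Plesk code; the names and the attempt limit are assumptions):

Code:
import os
import secrets


class MigrationNoContextError(Exception):
    pass


def create_secure_subdir(base_dir, path_exists, attempts=10):
    """Pick a random, not-yet-existing subdirectory name under base_dir.

    path_exists is a callable that checks a path on the source server,
    conceptually what the logged 'test -e ...' commands do: exit code 0
    means the path already exists.
    """
    for _ in range(attempts):
        candidate = os.path.join(base_dir, "plesk_migrator-" + secrets.token_hex(16))
        if not path_exists(candidate):
            return candidate  # a free name was found, use it
    # Every candidate looked "already taken", so the selection gives up.
    raise MigrationNoContextError("Failed to select free name for session directory")


if __name__ == "__main__":
    # Local demo only; the real migrator probes the source server over SSH.
    print(create_secure_subdir("/tmp/plesk_migrator", os.path.exists))

If the real code follows a similar pattern, the error can only fire when every probe reports "exists", i.e. when every "test -e" comes back with exit code 0, and changing secure_subdir to False would at best bypass the uniqueness check rather than explain why all the probes answer "taken".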
 
I searched for similar "Failed to select free name for session directory" issues reported by users, but there are very few. The reports we do have about this issue were caused either by the SSH user on the source server not having shell access (as @Aleksei Fedorov mentioned) or by the SSH user on the source server not being the root user.
 
Well, I'm afraid that doesn't really make sense, IMO. If the SSH user on the source didn't have shell access, the migrator wouldn't be able to connect and execute commands on the source server, would it?

As the migrator logs I have posted clearly show, it DID connect and it DID execute commands in a shell. They also show that the SSH user on the source IS root, otherwise it wouldn't be able to successfully run "test -e /root/plesk_migrator/plesk_migrator-2tx1b82bgmwfk8o5znmzra3yj0p33qvz" and receive the logged response that it is NOT in set -e. Or am I just on the wrong track?
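
One detail that might be worth double-checking here: a plain POSIX "test -e PATH" prints nothing at all and only sets the exit code (0 if the path exists, non-zero if it doesn't), so the coloured "... is NOT in set -e" lines in the log are unusual and could indicate that something on the source wraps or shadows the test command. A quick, hypothetical probe over the same SSH route (placeholder IP):

Code:
#!/usr/bin/env python3
"""See what 'test' actually does on the source server for the root account."""
import subprocess

SOURCE = "xxx.xxx.xxx.xxx"  # source server IP (placeholder)

checks = [
    # Is 'test' the shell builtin, or is it shadowed by an alias or function?
    "type test",
    # A path that should not exist: a clean 'test -e' prints nothing and
    # exits 1 here; exit code 0 would mimic what the migrator log shows.
    "test -e /root/definitely_not_there_12345; echo exit=$?",
]

for cmd in checks:
    out = subprocess.run(["ssh", f"root@{SOURCE}", cmd],
                         capture_output=True, text=True)
    print(f"$ {cmd}")
    print(out.stdout.strip() or "(no output)")
    print()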

However, all servers to which I have tried to migrate from this last remaining CentOS 7.9 source server are either upgrades from CentOS 7.9 or fresh installs of Alma Linux 8, Alma Linux 9, Rocky Linux 8, or Rocky Linux 9. They all run the Web Admin Edition on Plesk Obsidian 18.0.62 Update #2, at least for now, as newer updates are already available. And they have all previously performed several migrations between each other over the years.

What I would like to know is whether it would be sufficient to modify the Python script session_dir.py as mentioned in my post Issue - Migration problem to force the migrator to continue, or what we can do to either skip this "security check" or make the migrator more verbose and find out why it doesn't use any of the tested unique folder names but simply aborts.
 
I am not sure, to be honest. I suppose you could try?

However, I would recommend opening a support ticket to get one of our support engineers involved.
 