Resolved Dist-Upgrade Debian 11 to 12

froggift

New Pleskian
Server operating system version: Debian 11
Plesk version and microupdate number: Newest
Hi,

Since Debian 12 has now been officially released: will there be an official upgrade guide from Debian 11 to 12 as soon as Debian 12 is supported by Plesk?

Thank you!
 
To add to what Peter said, I personally wouldn't recommend upgrading to the next version for at least a year or so after the release. It's a newly released version of Debian, so not all packages will be supported out of the gate, and giving it a year allows the maintainers to catch up where needed.

Hell, if I'm honest, I wouldn't even recommend doing an in-place upgrade; I'd rather deploy a new server and just transfer everything over to it. But that's just my Windows Server mentality, lol.
 
Hello,
Sorry to bump this. Just to add a more Linux-server perspective to this thread, which I keep landing on via Google.

I've upgraded 9 of my Debian 11 servers to Debian 12 flawlessly. No errors, no hassle, no package dependency problems... Only one config file to edit on a few DNS servers (an obsolete option to remove from the Bind9 configuration). The official instructions from Debian worked flawlessly.
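For reference, the official procedure roughly boils down to the following (a sketch only, for a plain Debian box without Plesk; take a backup or snapshot first and read the official release notes):

# Bring Debian 11 fully up to date before switching releases
apt update && apt upgrade && apt full-upgrade
apt --purge autoremove

# Point APT at bookworm instead of bullseye
# (skip the sources.list.d part if you have no extra .list files there;
# bookworm also moved firmware to the new "non-free-firmware" component, add it if needed)
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# Two-stage upgrade, as recommended by the Debian release notes
apt update
apt upgrade --without-new-pkgs
apt full-upgrade

# Clean up and reboot into the new kernel
apt --purge autoremove
reboot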

I have 4 machines remaining on Debian 11: 3 Plesk servers and 1 OpenMediaVault server (OpenMediaVault has a single developer who releases a major version for each Debian release, so it takes time).

It is kind of frustrating that Plesk doesn't give Debian as much care as it does Ubuntu or AlmaLinux. Debian is very popular in the server industry, and it's disappointing that Plesk doesn't support it faster. On that note, I can say for sure that some Plesk providers wanted to go with Debian when CentOS died, but chose AlmaLinux because of Plesk's late support for Debian.
 
Docker Extension - Changelog:
1.8.1 (6 November 2023)
[+] Added support for Debian 12.

Maybe an early Christmas gift is on the way ;).

Good find. 5 months since the Debian 12 release. Plesk usually takes about 6 months to support new Debian releases, if I recall correctly. So... likely one month max to go. Hopefully. I'd already be happy if it went down to 2 or 3 months, especially when the distro changes are as minor as with Debian 11 -> 12.
 
I saw that on release day. Was so excited!
So, is dist-upgrade officially supported now? Or just fresh installs?
 
@Peter Debik Maybe as additional info: we just tested the upgrade on one of our servers that also has the Imunify360 extension. The result is a broken system, because Imunify360 does not support Debian 12 + Plesk yet (even though we did find a Debian 12 repository for it that was fine during the upgrade process: Index of /imunify360/debian/12/). There was no warning or hint until the post-installation script of imunify360-firewall failed. "plesk repair installation" fails afterwards too. We tried to uninstall Imunify360 with their i360deploy.sh script, but Debian 12 is not supported. We will now contact their support.
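For anyone else planning this: a quick pre-flight check of installed extensions before the dist-upgrade might save some pain (just a sketch; verify the exact extension id from the list output, it is assumed here to be "imunify360"):

# List installed Plesk extensions and versions before touching the OS
plesk bin extension --list

# If an extension does not support Debian 12 yet, remove it first and
# reinstall it once the vendor adds support (id assumed, take it from --list)
plesk bin extension --uninstall imunify360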
 
My upgrade went well, except for one thing: agent360 does not work any more.

root@hosting:/usr/local/bin# systemctl status agent360.service
× agent360.service - agent360
Loaded: loaded (/etc/systemd/system/agent360.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Thu 2023-11-23 09:59:30 CET; 4min 6s ago
Duration: 3.033s
Process: 729 ExecStart=/usr/local/bin/agent360 (code=exited, status=1/FAILURE)
Main PID: 729 (code=exited, status=1/FAILURE)
CPU: 109ms

Nov 23 09:59:27 hosting systemd[1]: Started agent360.service - agent360.
Nov 23 09:59:30 hosting agent360[729]: Traceback (most recent call last):
Nov 23 09:59:30 hosting agent360[729]: File "/usr/local/bin/agent360", line 5, in <module>
Nov 23 09:59:30 hosting agent360[729]: from agent360.agent360 import main
Nov 23 09:59:30 hosting agent360[729]: ModuleNotFoundError: No module named 'agent360'
Nov 23 09:59:30 hosting systemd[1]: agent360.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 09:59:30 hosting systemd[1]: agent360.service: Failed with result 'exit-code'.
root@hosting:/usr/local/bin#
 
Had to install manually with
pip3 install agent360 --break-system-packages
due to PEP 668 (externally managed environments).

Now it works again:
root@hosting:~# pip3 install agent360 --break-system-packages
Collecting agent360
Using cached agent360-1.2.47-py3-none-any.whl (67 kB)
Collecting psutil
Downloading psutil-5.9.6-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (283 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 283.6/283.6 kB 2.9 MB/s eta 0:00:00
Collecting netifaces
Downloading netifaces-0.11.0.tar.gz (30 kB)
Preparing metadata (setup.py) ... done
Collecting configparser
Downloading configparser-6.0.0-py3-none-any.whl (19 kB)
Collecting future
Downloading future-0.18.3.tar.gz (840 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 840.9/840.9 kB 8.2 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: distro in /usr/lib/python3/dist-packages (from agent360) (1.8.0)
Requirement already satisfied: certifi in /usr/lib/python3/dist-packages (from agent360) (2022.9.24)
Building wheels for collected packages: future, netifaces
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492025 sha256=b35dd422eb40d9a94d9e1d36c253c17c8b00654bdf17b7536250493b249e16bb
Stored in directory: /root/.cache/pip/wheels/da/19/ca/9d8c44cd311a955509d7e13da3f0bea42400c469ef825b580b
Building wheel for netifaces (setup.py) ... done
Created wheel for netifaces: filename=netifaces-0.11.0-cp311-cp311-linux_x86_64.whl size=34231 sha256=8d2f1b6e1a1dfb8cfbf157edb94af7e10a8fe1c5f3a9409b4189798c28ebfb5b
Stored in directory: /root/.cache/pip/wheels/40/85/29/648c19bbbb5f1d30e33bfb343fd7fb54296b402f7205d8e46f
Successfully built future netifaces
Installing collected packages: netifaces, psutil, future, configparser, agent360
Successfully installed agent360-1.2.47 configparser-6.0.0 future-0.18.3 netifaces-0.11.0 psutil-5.9.6
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@hosting:~# systemctl start agent360
root@hosting:~# systemctl status agent360
● agent360.service - agent360
Loaded: loaded (/etc/systemd/system/agent360.service; enabled; preset: enabled)
Active: active (running) since Thu 2023-11-23 10:39:50 CET; 6s ago
Main PID: 16984 (agent360)
Tasks: 2 (limit: 19014)
Memory: 11.5M
CPU: 149ms
CGroup: /system.slice/agent360.service
└─16984 /usr/bin/python3 /usr/local/bin/agent360

Nov 23 10:39:50 hosting systemd[1]: Started agent360.service - agent360.

but the warnings are a bit scary. It seems that the agent360 package and its installation routine in Plesk should be changed to properly use a Python 3 venv, right?!
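Something like the following is what I mean (just a sketch; the venv path is arbitrary, and ExecStart in the unit file would need to point at it):

# Install agent360 into a dedicated virtualenv instead of the system Python
python3 -m venv /opt/agent360
/opt/agent360/bin/pip install agent360

# Point ExecStart in /etc/systemd/system/agent360.service
# to /opt/agent360/bin/agent360, then reload and restart
systemctl daemon-reload
systemctl restart agent360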
 
Still, this agent360 did not upload to the 360 Monitoring ingestion hosts. I had to "pip3 uninstall agent360" again, switch back to internal monitoring in Plesk, and delete the host association and all monitored websites in 360 Monitoring. ***sigh***
Then I reinstalled as root with "wget -q -N monitoring.platform360.io/agent360.sh && bash agent360.sh [USER_TOKEN]", switched back to 360 Monitoring, and re-added the host and all websites. Now it's uploading and working, and the updated OS is also reflected correctly in 360 Monitoring.

This was a bit cumbersome... But I'm glad it works now.
 