
Issue Plesk Updates | GUI Plesk Panel ✔️ | CLI ✔️ | Cron ❌ - Close But No Cigar

learning_curve

Silver Pleskian
Server operating system version
Ubuntu 22.04.2 LTS
Plesk version and microupdate number
Plesk Obsidian 18.0.54 MU #2
After THIS previous (but different) issue suddenly fixed itself (following some minor and, in theory, unconnected Plesk point updates on the previous main release), despite even the excellent Plesk Support Team being unable to identify its true cause, we now have another mysterious one, which also appears to be related to a cron job provided by Plesk.

The background: it's not mission or operation critical, as updates can still be run by other means (as per the thread title). The daily error warnings are only received by e-mail. Here's yesterday's:
Reason: 2023-08-07 06:30:38 INFO: pum is called with arguments: ['--list', '--repo-info', '--json']
2023-08-07 06:31:35 ERROR: Apt cache fetch failed:
2023-08-07 06:31:35 ERROR:
2023-08-07 06:31:35 ERROR: Exited with returncode 1.
We can run updates any time we like, via either the Plesk Panel GUI or (as we usually do) directly via the CLI. All updates run perfectly, and there are no errors or warnings when using either.
Just in case... ;) we ran # plesk daily -l and then ran the command again with each task specified as an argument but... also had no errors or warnings at all on any of these tasks (Plesk Article)
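The per-task check described above can be sketched as a small loop, assuming (as the post implies) that `plesk daily -l` lists one task name per line and `plesk daily <TaskName>` runs a single task. Guarded so it is a harmless no-op on hosts without the plesk CLI:

```shell
#!/bin/sh
# Sketch: run each Plesk daily task individually to isolate which task
# (if any) raises the apt cache error. The task-listing format is an
# assumption based on the post above.

# Read task names from stdin (one per line) and run each by name.
run_each_daily_task() {
    while read -r task; do
        [ -n "$task" ] || continue
        echo "== $task =="
        plesk daily "$task" || echo "task '$task' exited non-zero"
    done
}

if command -v plesk >/dev/null 2>&1; then
    plesk daily -l | run_each_daily_task
else
    echo "plesk CLI not found; run this on the Plesk server"
fi
```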

Just like last time, there's no trace of any of these warnings / errors in any of the logs (e.g. panel.log, task-manager.log, error.log etc.) located in /var/log/plesk or /var/log/sw-cp-server. And if searched for within the panel at https://***My Domain***:8443/modules/log-browser/ there's nothing within the System, Mail, Plesk or Overview tabs.

However, unlike last time, if we run # plesk sbin pum --check we do get the expected INFO: pum is called with arguments: ['--check'] but we don't get anything else (of any concern).

So, as per last time: has anybody had the same warnings with the exact same criteria / results that we have? Or is there another check method / log location that we've maybe missed @Peter Debik ? Thanks! There's existing online data, but that's specific to apt cache failures relating directly to Ubuntu, not Plesk. (It could be a route to finding the cause but, the fact that upgrades run perfectly with no errors or warnings via GUI / CLI does seem to indicate that there's a glitch somewhere within Plesk 18.0.54 on Ubuntu 22.04 as opposed to the OS.)
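Since the pum mail only reports "Apt cache fetch failed" with an empty detail line, one way to see the underlying error is to re-run the apt cache refresh by hand. This is a hedged sketch: it assumes pum's fetch wraps a standard apt cache update (which the wording suggests, but the source doesn't confirm), and /tmp/apt-fetch.log is an arbitrary path. Guarded so it is a no-op on non-apt systems:

```shell
#!/bin/sh
# Sketch: reproduce the apt cache fetch manually, keeping the full output
# that the pum error mail omits. Run as root on the affected server.

# Print only the fetch-error lines from an "apt-get update" log, if any.
apt_fetch_errors() {
    grep -E '^(Err|E):' "$1" || echo "no fetch errors recorded"
}

if command -v apt-get >/dev/null 2>&1; then
    apt-get update > /tmp/apt-fetch.log 2>&1
    echo "apt-get update returncode: $?"
    apt_fetch_errors /tmp/apt-fetch.log
else
    echo "apt-get not present; nothing to check"
fi
```

If the fetch fails only at certain times of day, the `Err:` lines should name the repository or mirror that is unreachable.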
 
If no trace of the mail can be found on the server, the mail does not come from that server, but from another server. It is the same problem as before, where mails seemed to be sent from one server but did not show up in that server's logs. Have you made sure that the mails are actually coming from the very server you are examining? They must appear at least in the maillog. It only makes sense to look further on that server if they are shown in the maillog.
 
If no trace of the mail can be found on the server, the mail does not come from that server, but from another server. It is the same problem as before, where mails seemed to be sent from one server but did not show up in that server's logs. Have you made sure that the mails are actually coming from the very server you are examining?
Yes, agreed, it should be that simple i.e. different servers / cause etc
However... they DO indeed come from the same server, in all cases.
The "sender" of said e-mail is as expected, i.e. it's the e-mail address provided in the profile of the Plesk admin user on each server (albeit we have redirects set up to that same e-mail address for all of the obvious system response addresses, e.g. root@***** or postmaster@***** etc.). When there are no errors / warnings and the e-mail contains genuine upgrade availability content, it also comes from that same e-mail address, as expected.
They must appear at least in the maillog. It only makes sense to look further on that server if they are shown in the maillog.
The content shown within the Mail tab of the panel at https://***My Domain***:8443/modules/log-browser/ merely shows logins / logouts via Dovecot etc. If we switch to the System tab instead, then yes, we can see (via the date & time filter) all of the system processes used to generate, check (compliance / spam etc.) and then send that same e-mail, but of course it's impossible to see its content in this view.

To summarize: these update / upgrade e-mails are system generated, regardless of whether they contain correct (usual) or incorrect (only recently) data, and we're assuming they originate from a Plesk-provided daily cron task (we'll need to check this bit). In all cases they come from the expected and correct mail account of the correct domain of the Plesk-managed hosting server. So the real question is: what is the source of the apt cache fetch error that Plesk reports via these e-mails, and why is the OS seemingly oblivious to it, showing no related or similar errors during manual updates (Panel GUI or CLI)?
 
Maybe leave this for a day or two yet @Peter Debik, as we've just completed all of the Plesk Obsidian 18.0.54 MU #2 > MU #3 upgrades. If it's like last time, when doing that fortunately applied a self-fix, it might do the same this time too.
 
Just like the other slightly different intermittent message we were both getting, I'm now also getting the apt cache fetch error emails.

Exactly the same set up as you and on Ionos (I think I remember you are too?). I wonder if it's some weird Ionos intricacy specific to how their template Ubuntu image is set up on their VMs?
 
In your "# journalctl", do you see anything interesting that happens immediately before the mail is logged in your /var/log/maillog?
 
Here's my journalctl from immediately before the email gets sent: no failures, all success!

Aug 07 06:22:52 srv01.xxxxxx.com systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 07 06:22:56 srv01.xxxxxx.com systemd[1]: apt-daily-upgrade.service: Deactivated successfully.
Aug 07 06:22:56 srv01.xxxxxx.com systemd[1]: Finished Daily apt upgrade and clean activities.
Aug 07 06:22:56 srv01.xxxxxx.com systemd[1]: apt-daily-upgrade.service: Consumed 3.487s CPU time.
Aug 07 06:25:01 srv01.xxxxxx.com CRON[595135]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 07 06:25:01 srv01.xxxxxx.com CRON[595134]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 07 06:25:01 srv01.xxxxxx.com CRON[595137]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 07 06:25:01 srv01.xxxxxx.com CRON[595136]: (root) CMD ([ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1)
Aug 07 06:25:01 srv01.xxxxxx.com CRON[595135]: pam_unix(cron:session): session closed for user root
Aug 07 06:25:04 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:25:04 srv01.xxxxxx.com systemd[1]: Starting Update the local ESM caches...
Aug 07 06:25:04 srv01.xxxxxx.com systemd[1]: apt-news.service: Deactivated successfully.
Aug 07 06:25:04 srv01.xxxxxx.com systemd[1]: Finished Update APT News.
Aug 07 06:25:07 srv01.xxxxxx.com systemd[1]: esm-cache.service: Deactivated successfully.
Aug 07 06:25:07 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:25:07 srv01.xxxxxx.com systemd[1]: esm-cache.service: Consumed 3.131s CPU time.
Aug 07 06:25:27 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:25:27 srv01.xxxxxx.com systemd[1]: Starting Update the local ESM caches...
Aug 07 06:25:27 srv01.xxxxxx.com systemd[1]: apt-news.service: Deactivated successfully.
Aug 07 06:25:27 srv01.xxxxxx.com systemd[1]: Finished Update APT News.
Aug 07 06:25:30 srv01.xxxxxx.com systemd[1]: esm-cache.service: Deactivated successfully.
Aug 07 06:25:30 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:25:30 srv01.xxxxxx.com systemd[1]: esm-cache.service: Consumed 2.638s CPU time.
Aug 07 06:26:45 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:26:45 srv01.xxxxxx.com systemd[1]: Starting Update the local ESM caches...
Aug 07 06:26:45 srv01.xxxxxx.com systemd[1]: apt-news.service: Deactivated successfully.
Aug 07 06:26:45 srv01.xxxxxx.com systemd[1]: Finished Update APT News.
Aug 07 06:26:46 srv01.xxxxxx.com systemd[1]: esm-cache.service: Deactivated successfully.
Aug 07 06:26:46 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:27:01 srv01.xxxxxx.com CRON[597175]: pam_unix(cron:session): session opened for user psaadm(uid=998) by (uid=0)
Aug 07 06:27:01 srv01.xxxxxx.com CRON[597176]: (psaadm) CMD (/opt/psa/admin/bin/php -dauto_prepend_file=sdk.php '/opt/psa/admin/plib/modules/revisium-antivirus/scripts/ra_executor_run.php')
Aug 07 06:27:01 srv01.xxxxxx.com CRON[597175]: pam_unix(cron:session): session closed for user psaadm
Aug 07 06:27:58 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:27:58 srv01.xxxxxx.com systemd[1]: Starting Update the local ESM caches...
Aug 07 06:27:58 srv01.xxxxxx.com systemd[1]: apt-news.service: Deactivated successfully.
Aug 07 06:27:58 srv01.xxxxxx.com systemd[1]: Finished Update APT News.
Aug 07 06:27:59 srv01.xxxxxx.com systemd[1]: esm-cache.service: Deactivated successfully.
Aug 07 06:27:59 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:29:13 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:29:13 srv01.xxxxxx.com systemd[1]: Starting Update the local ESM caches...
Aug 07 06:29:13 srv01.xxxxxx.com systemd[1]: apt-news.service: Deactivated successfully.
Aug 07 06:29:13 srv01.xxxxxx.com systemd[1]: Finished Update APT News.
Aug 07 06:29:14 srv01.xxxxxx.com systemd[1]: esm-cache.service: Deactivated successfully.
Aug 07 06:29:14 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:30:30 srv01.xxxxxx.com plesk-sendmail[598445]: S598445: from=<[email protected]> to=<=?UTF-8?Q?Rich?= <[email protected]>>
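On a live server, the window just before the plesk-sendmail entry can be pulled directly with `journalctl --since "06:29:13" --until "06:30:30" --no-pager`. The same time-window filter can be applied to a saved text export with awk, since syslog-style lines carry the time in field 3. A small self-contained sketch (the export file name and sample lines are illustrative, taken from the excerpt above):

```shell
#!/bin/sh
# Sketch: filter a journal text export down to a time window.
# On a real server, replace the here-doc with:
#   journalctl --no-pager > journal-export.txt

cat > journal-export.txt <<'EOF'
Aug 07 06:29:13 srv01.xxxxxx.com systemd[1]: Starting Update APT News...
Aug 07 06:29:14 srv01.xxxxxx.com systemd[1]: Finished Update the local ESM caches.
Aug 07 06:30:30 srv01.xxxxxx.com plesk-sendmail[598445]: S598445: from=<...>
EOF

# Field 3 of "Aug 07 06:29:13 ..." is the HH:MM:SS timestamp; fixed-width
# times compare correctly as strings.
awk -v from="06:29:13" -v to="06:30:30" '$3 >= from && $3 <= to' journal-export.txt
```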
 
Yes, the exact same error:

Hello, XXX XXX
Some problems occurred with the System Updates tool on your server srv01.xxxxxx.com. Please resolve them manually.

Reason: 2023-08-07 06:29:13 INFO: pum is called with arguments: ['--list', '--repo-info', '--json']
2023-08-07 06:30:30 ERROR: Apt cache fetch failed:
2023-08-07 06:30:30 ERROR:
2023-08-07 06:30:30 ERROR: Exited with returncode 1.
 
Just like the other slightly different intermittent message we were both getting, I'm now also getting the apt cache fetch error emails.
Exactly the same set up as you and on Ionos (I think I remember you are too?).
Good to see that we're not just pedantic, unique, one-offs :) Yes IONOS Cloud Servers, that's right.
I wonder if it's some weird Ionos intricacy specific to how their template Ubuntu image is set up on their VMs?
Might be...

They did / still do make mods to the Ubuntu OS image on first install, yes. E.g. our IONOS setup is still using the ifupdown system rather than Netplan, which they said was / is a result of compromises & issues that started with this (now quite old) bug: Ip address deleted when dhcp renegotiation if gateway is not in the same network · Issue #8888 · systemd/systemd

However, we've successfully used the Plesk dist-upgrade process on all of our IONOS Ubuntu Cloud Servers (having checked in advance with IONOS that this would be okay!!). Plus, we've never had any Ubuntu issues at all (i.e. because of the IONOS mods), either pre- or post- all of those dist-upgrades; only a couple of very small Plesk-reported 'glitches', like these last two update e-mail items. Maybe IONOS and Plesk are 100% operationally okay in isolation but, as you've suggested, when operating together there's a 0.001% oddity that gives some false-positive outputs like these?

Update @Peter Debik: Ticket ID sent in a private message and, FWIW, the Plesk Obsidian 18.0.54 MU #2 > MU #3 upgrade didn't fix this issue (for us).
 
I think that the issue is caused by a repository source that is not available during the daily maintenance window. Moving daily maintenance to a different time might help but, although it sounds like an easy task, it may not be that easy. That is because this is started through the file 50plesk-daily in /etc/cron.daily. And the daily execution time of daily crons is not determined statically through anacron, but dynamically, depending on when the last job ran plus an offset. So if you have the time and motivation, you could try to temporarily remove 50plesk-daily from /etc/cron.daily and instead run the same script in a "controlled" way at a different time, e.g. two hours earlier or later. It is a shot in the dark, but all sources I found point to a missing, unreachable or inaccessible repository causing these mails.
 
I think that the issue is caused by a repository source that is not available during the daily maintenance window. Moving daily maintenance to a different time might help but, although it sounds like an easy task, it may not be that easy. That is because this is started through the file 50plesk-daily in /etc/cron.daily. And the daily execution time of daily crons is not determined statically through anacron, but dynamically, depending on when the last job ran plus an offset.
Understood
So if you have the time and motivation, you could try to temporarily remove 50plesk-daily from /etc/cron.daily and instead run the same script in a "controlled" way at a different time, e.g. two hours earlier or later. It is a shot in the dark, but all sources I found point to a missing, unreachable or inaccessible repository causing these mails.
Just to confirm, did you mean:

Remove /etc/cron.daily/50plesk-daily (content shown below) from /etc/cron.daily and then run it via the CLI at a different time of day, ensuring that at least 2 hours have elapsed from its usual run time

Or

Remove /etc/cron.daily/50plesk-daily from /etc/cron.daily and then put it back into /etc/cron.daily, ensuring that at least 2 hours have elapsed from its usual run time (currently it does appear to run at exactly the same time of day, each day)

We can do either and see what happens
#!/bin/sh
### Copyright 1999-2023. Plesk International GmbH. All rights reserved.

# Run the Plesk daily maintenance tasks (this is where the pum update check lives)
/opt/psa/bin/sw-engine-pleskrun /opt/psa/admin/plib/DailyMaintainance/script.php >/dev/null 2>&1

# Dump the Plesk databases
/opt/psa/bin/mysqldump.sh >/dev/null 2>&1

# Clean out pre-upgrade backups older than 365 days
/usr/lib/plesk-9.0/preupgrade_backup_cleaner 365 days >/dev/null 2>&1
 
I meant removing it from the daily entry and running it separately at a different time, at least 2 hours off the daily automatic time (whenever that is). I am proposing this because I have frequently observed that some third-party servers or mirrors are either overloaded or under maintenance during that time frame, so I suspect this could be the case here, too. For example, if your server is using a mirror that your provider operates, and all of the provider's other servers try to access that mirror at the same time, it can easily lead to a situation where some requests are not served, which again could lead to the symptom.
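The park-and-restore step of this "controlled" test can be sketched as below. The live path is from the thread; the parking location is an arbitrary choice, and the demo call at the bottom is left commented out since it must be run as root on the real server:

```shell
#!/bin/sh
# Sketch: toggle a cron.daily script in and out of the scheduled directory.
# First call parks it (the automatic daily run then skips it); the next
# call restores it when testing is done.
park_or_restore() { # park_or_restore LIVE_PATH PARKED_PATH
    if [ -f "$1" ]; then
        mv "$1" "$2" && echo "parked"
    elif [ -f "$2" ]; then
        mv "$2" "$1" && echo "restored"
    else
        echo "not found"
    fi
}

# On the real server (as root), then run the parked copy by hand
# ~2 hours off the usual time:
#   park_or_restore /etc/cron.daily/50plesk-daily /root/parked-50plesk-daily
#   sh /root/parked-50plesk-daily
```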
 
Update @Peter Debik ;)

Took on board what you posted and temporarily removed /etc/cron.daily/50plesk-daily from /etc/cron.daily.
Sure enough, no errors / warnings e-mail was generated (nor received by us), albeit simply because said file was absent when required.
Having dug a bit deeper, it appears that the daily run time is indeed specified. Details in this Plesk Support Doc:
So, in our case:
Aug 11 06:25:01 *our-server* CRON[2530863]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 11 06:25:01 *our-server* CRON[2530864]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
^^ followed by none of the usual entries (due to the missing /etc/cron.daily/50plesk-daily file)

Therefore, to emulate your suggested "controlled" invocation of /etc/cron.daily/50plesk-daily, in this case 3 hours later than the default time of 06:25 GMT (UTC), we took the easy option and edited the /etc/crontab file as follows:
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Default line ^^
25 9 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Edited line ^^

NB: As you've already identified, anacron still controls the overall process (dynamically), so we were still watching with fingers crossed, but... the result was fine!
Aug 11 09:25:01 *our-server* CRON[2577140]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 11 09:25:01 *our-server* CRON[2577141]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
^^ followed by lots of applicable entries (due to the /etc/cron.daily/50plesk-daily file being back in place)
Plus... no errors / warnings e-mail was generated (nor received by us)
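A quick sanity check of the edited crontab line, relying only on the standard crontab field order (minute, hour, day-of-month, month, day-of-week), can be sketched in pure shell. Note the `set -f` guard: without it, the literal `*` fields would glob-expand:

```shell
#!/bin/sh
# Sketch: parse the edited /etc/crontab line and report its firing time.
line='25 9 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )'

set -f           # disable globbing so the "*" fields stay literal
set -- $line     # split into fields: $1=minute $2=hour ...
set +f

printf 'cron.daily now fires daily at %02d:%02d\n' "$2" "$1"
```

With the edited line above this prints `cron.daily now fires daily at 09:25`.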

Conclusion: @Peter Debik changed his name today, by deed poll, to Sherlock Holmes :p or it's a pure fluke. We're going for the 1st option...

Footnote FWIW: Two of the Plesk update IP addresses (Resolved - Plesk - The Mysterious Location Of Plesk's Official IP Addresses / CIDR Addresses), which are intentionally not blocked in our Danami Juggernaut firewalls, are unfortunately within the same CIDR (89.187.160.0/19) as many infamous DDoS attack IP addresses, even though, ironically, their respective geo-locations are totally different. If they weren't, we could of course simply block the whole CIDR, as we have done in other instances, but we can't here, as that would also block those Plesk IP addresses, so we must block them all individually.

What's clear from our examination of all of the journalctl log data is that many of these DDoS attack IP addresses often make new (but blocked and repelled) attacks at times very similar to the default timing of /etc/cron.daily/50plesk-daily, i.e. 06:25 GMT (UTC). This might also be a contributory factor in the true source(s) of the original problem described in the opening post.

Meantime, we're also going to further edit /etc/crontab and change the hour of the weekly and monthly crons too. AFAIK this file isn't changed as part of any normal Plesk point or release updates (please confirm!), so this should be fine. The only mystery is why this issue only started on the Plesk Obsidian 18.0.54 release, when running on the latest update of Ubuntu 22.04.2 LTS on our IONOS setup, but seeing as we now appear to have solved this for our own setups (thanks again @Peter Debik) we'll let somebody else work that one out!
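The "is this IP inside 89.187.160.0/19?" test that forces the individual-block approach can be sketched in POSIX shell arithmetic. The sample IPs below are made up for illustration:

```shell
#!/bin/sh
# Sketch: check whether an IPv4 address falls inside a CIDR range before
# deciding whether a blanket block would also catch the Plesk update IPs.

# Convert dotted-quad IPv4 to a 32-bit integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# in_cidr IP NETWORK PREFIXLEN -> exit 0 if IP is inside NETWORK/PREFIXLEN
in_cidr() {
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

for ip in 89.187.175.3 89.188.1.1; do   # illustrative addresses only
    if in_cidr "$ip" 89.187.160.0 19; then
        echo "$ip is inside 89.187.160.0/19 (a blanket block would hit it)"
    else
        echo "$ip is outside 89.187.160.0/19"
    fi
done
```

A /19 covers 89.187.160.0 through 89.187.191.255, which is why the attack IPs and the Plesk IPs can share the range despite completely different geo-locations.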
 
Edits (as your previous forum posts can't be edited after a certain time has elapsed):
a) Server OS is now Ubuntu 22.04.3 LTS - upgraded since that previous post which mentions Ubuntu 22.04.2 LTS
b) In an earlier post in this thread we transposed the geo-locations. The Plesk servers are located in Dallas, Texas (but have CDN). It's all of the DDoS attack IP addresses' servers that are located in London, UK (again with CDN). The locations are mentioned ^ above.
 