
Resolved: My server's websites are going down daily

Émerson Felinto

Regular Pleskian
For about a month now, something very strange has been happening.
Every day, at some point, the sites on my Plesk server simply go down for about 1 to 3 minutes and then come back. This is starting to irritate my clients, which in turn irritates me.

Could you recommend some tests to find out where the problem is?
 
Did you or someone else create a backup job that has the option "Suspend domains until the backup task is completed" enabled? (the "Scheduled Backups List" extension can help you identify such jobs)

This can lead to service downtime (besides the backed-up sites, which are obviously suspended during that time), because the Apache2 web service is restarted twice for every backed-up site.
If a backup job covers multiple or many sites, the web service can spend a substantial amount of time restarting, even more so if the backups complete quickly (incremental) and the Apache2 restart interval is not set (the default).
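If you want to check whether such Apache restarts line up with the outages, one way (just a sketch; the log path and exact messages can differ per distribution and Apache version) is to grep the error log for restart markers:
Code:
# Look for Apache restart/shutdown markers around the time of the outages.
# Adjust the log path if your system logs elsewhere.
grep -E "caught SIGTERM|Graceful restart|resuming normal operations" /var/log/apache2/error.log | tail -n 20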
 
Did you or someone else create a backup job that has the option "Suspend domains until the backup task is completed" enabled? (the "Scheduled Backups List" extension can help you identify such jobs)
Ironically, I have been without backups in recent weeks because I had problems with my FTP server, so backups should not have influenced anything at all.
 
Sorry, Igor, that did not help much.
Is there a way to filter the logs to see only the logs that were generated in a given time frame?
 
Hey Emerson, how do you recognize that the sites drop so frequently? Are all sites impacted, or just some of them?
Do you have something like the Watchdog or Uptime Robot extension in place to narrow down the issue?
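If you don't have an extension in place yet, a minimal availability probe could look like the sketch below (https://example.com and the log path are placeholders you would adjust):
Code:
# Minimal availability probe: log a timestamp and the HTTP status code every 60 seconds.
# Replace https://example.com with one of your sites; the log path is just an example.
while true; do
    status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 https://example.com)
    echo "$(date '+%Y-%m-%d %H:%M:%S') HTTP $status" >> /var/log/uptime-check.log
    sleep 60
done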
 
The symptom points to an Apache "restart" event. Such a restart (as opposed to a reload) can take anywhere between a few seconds and a few minutes. Running processes are killed and afterwards Apache is started again. For that time period, the sites whose processes have already been killed become inaccessible.

Make sure that you are not using the "restart" syntax but "reload" for Apache restart events, for example as described in How to enable/disable graceful restart for Apache?
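For illustration, on a Debian/Ubuntu system this is the difference between the following two commands (the service name may differ on other distributions):
Code:
# Hard restart: worker processes are killed, sites are briefly unavailable.
systemctl restart apache2

# Graceful reload: workers finish their current requests before being replaced.
systemctl reload apache2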
 
I also thought the problem was the web server. I installed LiteSpeed this morning and still received another site-down notification. I'm waiting a little longer to confirm that everything is okay.
 
Is there a way to filter the logs to see only the logs that were generated in a given time frame?

Well, if I understand you right, you would like to get the filtered log entries from a log file within a given time frame. There are several different ways you can achieve this, for example:
Code:
sed -n '/Mar 20 16:16:22/,/Mar 20 16:17:14/p' /var/log/apache2/error.log
Of course it differs based on the time format of the log file, and there are many more examples with egrep, awk and so on; just google for it.
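Two more variants, as a sketch: an awk equivalent of the sed range, and a grep pattern that matches a coarser time window (useful when the exact start or end second does not appear in the log, in which case the range prints nothing):
Code:
# awk equivalent of the sed range: print from the first matching timestamp to the second, inclusive.
awk '/Mar 20 16:16:22/,/Mar 20 16:17:14/' /var/log/apache2/error.log

# grep with a coarser pattern: everything logged between 16:16 and 16:17 on Mar 20.
grep -E "Mar 20 16:1[67]:" /var/log/apache2/error.log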
 
Unfortunately this command did not return any results :(

It's not Apache, given that I'm using LiteSpeed now for testing purposes.
The problem does not seem to be the network link either, because I am monitoring the machine's IP with the PING command and it is online most of the time.
 
After a closer look at the server logs I noticed that there is a load-average peak, and that is when the system goes down. Can Plesk help me figure out the cause of this?
(Attachment: Screenshot 2018-03-21 at 14.19.43.png)
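To catch what is running when the load spikes, one option (a sketch, assuming a Linux server with cron; the script and log paths are just examples) is to record the load averages and the top CPU consumers every minute:
Code:
#!/bin/sh
# Run from cron every minute, e.g.: * * * * * /root/load-snapshot.sh
# Appends a timestamp, the load averages, and the top 5 CPU consumers to an example log path.
{
    date '+%Y-%m-%d %H:%M:%S'
    uptime
    ps aux --sort=-%cpu | head -n 6
    echo "----"
} >> /var/log/load-snapshot.log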
 
Unfortunately this command did not return any results :(

Copy and paste might not work, because we may be using different systems. Anyway, I gave you a working example and you only have to adjust two things:
a) the path to your log file
b) the timestamps in the sed command. You have to take the timestamp format from the log file you choose, because it can differ for many reasons.
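For example, if your Apache error log uses the bracketed Apache 2.4 timestamp format (e.g. [Wed Mar 21 14:19:40.123456 2018]), the range patterns need to be rewritten accordingly; the timestamps below are just placeholders:
Code:
# First check which timestamp format the log file actually uses.
head -n 3 /var/log/apache2/error.log

# Example range for Apache 2.4-style bracketed timestamps.
# The pattern only matches down to the second, so the microseconds in the log don't matter.
sed -n '/Wed Mar 21 14:19:40/,/Wed Mar 21 14:20:10/p' /var/log/apache2/error.log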
 
After following up for a few days, I consider the problem solved.
The problem really was Apache. Now that I'm using LiteSpeed I have not had any downtime so far. Thank you! :)
 