
Input plesk fail2ban missing some important filter rules

PeterKi

Regular Pleskian
Server operating system version
Ubuntu 22.04.3 on strato vServer
Plesk version and microupdate number
Plesk Obsidian Web Admin Edition Version 18.0.56
Looking at the server logs of my Ubuntu 22.04 system with Plesk Obsidian 18.0.56, I found many entries that are not covered by the Plesk jails.
The most frequent of these are Apache 404 and Postfix errors.
The Plesk developers should eventually take a deeper look at server logs and add filters for the most frequent failing access attempts.
I have added 3 filters for the entries mentioned above, but I would expect the Plesk fail2ban rules to cover these frequent access attempts by default.
The filters I suggest are:

apache404
[Definition]
failregex = <HOST> - .* "(GET|POST|HEAD).*HTTP.*"\s404\s
ignoreregex = .*(robots\.txt|favicon\.ico|jpg|png)
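
For anyone who wants to sanity-check the apache404 expressions before deploying them, here is a small stand-alone Python sketch. fail2ban normally expands <HOST> itself; here it is substituted by a simplified host-capturing group, and the sample IPs and log lines are made up:

```python
import re

# <HOST> replaced with a plain host-capturing group for this check
failregex = r'(?P<host>\S+) - .* "(GET|POST|HEAD).*HTTP.*"\s404\s'
ignoreregex = r'.*(robots\.txt|favicon\.ico|jpg|png)'

# hypothetical access-log lines in Apache combined format
hit  = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /wp-login.php HTTP/1.1" 404 196 "-" "-"'
skip = '198.51.100.9 - - [10/Oct/2023:13:55:40 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "-" "-"'

m = re.search(failregex, hit)
print(m.group('host'))                      # prints 203.0.113.7 (ban candidate)
print(bool(re.search(ignoreregex, skip)))   # prints True -> line would be ignored
```

In production, `fail2ban-regex /path/to/access_log /etc/fail2ban/filter.d/apache404.conf` performs the same kind of check against the real log format.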

postfix-sasl
[Definition]
failregex = ^%(__prefix_line)swarning: [-._\w]+\[<HOST>\]: SASL ((?i)LOGIN|PLAIN|(?:CRAM|DIGEST)-MD5) authentication failed([\s\w+\/:]*={0,4})?\s*$
ignoreregex = authentication failed: Connection lost to authentication server$

postfix-ssl
[Definition]
failregex = ^%(__prefix_line)sSSL_accept error from \S+\s*\[<HOST>\]
ignoreregex =
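
The filters above only take effect once jails reference them. A minimal /etc/fail2ban/jail.local sketch (jail names, log paths, and thresholds here are illustrative and depend on the actual Plesk log locations on your server):

```ini
[apache404]
enabled  = false   ; deliberately opt-in, not on by default
filter   = apache404
logpath  = /var/www/vhosts/system/*/logs/*access*log
maxretry = 20
findtime = 600
bantime  = 3600

[postfix-sasl]
enabled  = true
filter   = postfix-sasl
logpath  = /var/log/mail.log
maxretry = 5

[postfix-ssl]
enabled  = true
filter   = postfix-ssl
logpath  = /var/log/mail.log
maxretry = 5
```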
 
Hi, in my opinion blocking Apache's 404 errors with fail2ban could lead to unintended consequences if it is enabled by default.

Many websites produce numerous 404s that are not indicative of an attack. It's crucial to handle this delicately: blocking crawlers such as Google and Bing can cause real damage.

Consider this scenario: if I link to several non-existent URLs on your website with follow links, crawlers will detect those links and attempt to access them. You could then end up banning Googlebot IPs, causing your website to vanish from Google's index or triggering other SEO complications.
 
Sorry for not being precise enough.
I did not mean that a filter for 404 errors should be enabled by default, but should be supplied by Plesk.
It is then up to the user to enable it or not.
Also, I think crawlers that do not obey the exclusions in robots.txt should be banned.
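
A practical way to catch such crawlers is a honeypot path: disallow a directory in robots.txt that no legitimate page links to, then ban any host that requests it anyway. A sketch (the path and filter name are invented, not a Plesk-supplied rule):

```ini
# robots.txt:
#   User-agent: *
#   Disallow: /bot-trap/

# /etc/fail2ban/filter.d/robots-trap.conf
[Definition]
failregex = <HOST> - .* "(GET|POST|HEAD) /bot-trap/.*HTTP
ignoreregex =
```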
 