michaeljoseph01
New Pleskian
- Server operating system version
- Ubuntu 20.04.6
- Plesk version and microupdate number
- 18.0.51
I have a new site up, a work in progress, and I'm already seeing tons of malicious traffic. I went from relying on mod_security and fail2ban to installing Imunify360 because of how much hype I saw online. Now I'm seeing how differently Imunify360 works compared to fail2ban, and I'm not convinced it's better, at least for my setup. It doesn't use "jails", so no matter how many times a malicious client tries to brute-force SSH or wp-login, probe for xmlrpc vulnerabilities, or show any other clearly malicious behavior, they can come back again and again, and I see all these requests in the logs drowning out legitimate traffic.
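For comparison, this is the kind of per-jail threshold control I mean in fail2ban: you decide how many offenses trigger a ban and how long the ban lasts. A sketch of a jail.local (values are just examples; the [wordpress-login] jail assumes a matching filter file, which fail2ban does not ship by default):

```ini
# /etc/fail2ban/jail.local -- illustrative thresholds, tune to taste

[sshd]
enabled  = true
maxretry = 3        ; ban after 3 failed attempts...
findtime = 10m      ; ...within a 10-minute window
bantime  = 1w       ; banned for a week, my choice

[wordpress-login]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/*access.log
maxretry = 5
bantime  = -1       ; -1 = permanent ban
```

That `bantime = -1` permanent drop is exactly what I can't find an equivalent for in Imunify360.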
I emailed Imunify support about this:
----
I just installed Imunify360 and am trying to understand these filtering rules. I've attached a screenshot showing dozens of malicious events within a span of minutes from a small number of IPs, yet not one IP is on the blacklist. With fail2ban, I could set how many times an IP could offend before it was banned outright, for whatever length of time I chose. Where is the equivalent configuration here? Handling these requests one by one is still sapping server resources. Why are none of these bad actors ending up on a permanent drop list instead of being able to come back again and again to probe different parts of the attack surface?
----
The response I got:
----
Hi Michael,
All of the IP addresses in your screenshot were blocked: the ones in a blue bubble were blocked on the fly by the active response feature, without being added to any list, due to the way the feature works, and the ones in the gray bubble were graylisted, i.e. served a captcha before being allowed access to the actual sites.
Permanent blocking brings a high risk of false positives and we never do it automatically - we limit access in smart and sophisticated ways with the help of the gray list, heuristics on the central server, the WAF, and the on-the-fly blocking features - active response and PAM.
The permanent list is available only for manual blocking, and automatic blocking is implemented via the gray list to avoid false positives, as there has to be a balance between security and usability.
----
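For anyone following along: as far as I can tell, the only way to get a fail2ban-style permanent block in Imunify360 is to feed the manual blacklist yourself from the CLI. This is my reading of the imunify360-agent command set, so verify against your version's docs (the IP here is just a placeholder from the documentation range):

```shell
# Add an IP (or CIDR range) to the manual blacklist -
# permanent unless an expiration is specified
imunify360-agent blacklist ip add 203.0.113.45 --comment "wp-login brute force"

# Review the current manual blacklist entries
imunify360-agent blacklist ip list

# Remove an entry later if it turns out to be a false positive
imunify360-agent blacklist ip delete 203.0.113.45
```

Manually copying IPs out of the dashboard into that command is exactly the busywork fail2ban's jails did for me automatically.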
From my view, if I see someone with clearly malicious intent, I'm not going to keep letting them come back to probe other areas, or even hit the same area over and over again. I can see how this tradeoff would be necessary if you're running absolutely critical services, but for a website with no users yet it seems ludicrous to let this resource-intensive firewall keep burning memory, CPU cycles, and log entries on the kind of traffic that, in my eyes, should be stopped at the front gate.
What do other people think, or use? I can't be the only one fretting about malicious traffic: my site doesn't even have any backlinks yet, and I'm already seeing the logs filling up with probing from bad actors.