@H.W.B
Those statistics do not tell you much: if anyone actually "tries" to access the system, you will barely notice it in terms of CPU or memory usage.
However, each login attempt still takes time to process, which can result in an endless "queue" in which the genuine login attempts are crowded out by all the other attempts.
More or less the same applies to Fail2Ban: it uses quite some resources to scan the logs over and over again, with only the occasional detection and ban action.
In most cases, Fail2Ban miserably fails to recognize hack attempts against the Plesk Panel, unless you have fine-tuned the default plesk-panel jail.
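If you want to verify what that jail is actually catching, a quick check along the following lines should do; this is just a sketch, assuming Fail2Ban is installed and the jail is named plesk-panel (as in the default configuration).

# list all active jails, in case the jail name differs on your installation
fail2ban-client status
# show whether the plesk-panel jail is active and how many IPs it has banned so far
fail2ban-client status plesk-panel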
As a final (and important) remark: the statistics in question do show Nginx-related figures, but as far as I know, those do not include the separate, custom Nginx instance that serves the Plesk Panel itself. I am pretty sure that you will find a lot of activity in /var/log/plesk/httpsd_access_log.
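For instance, have a quick look at the most recent requests to the panel with something like the commands below (the log location is the one mentioned above; the grep pattern is only a rough guess, since the exact login URL differs per Plesk version).

# last 50 requests that hit the Plesk Panel web server
tail -n 50 /var/log/plesk/httpsd_access_log
# rough count of login-related requests
grep -ci "login" /var/log/plesk/httpsd_access_log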
Nevertheless, there is no reason (so far) to rule out the potential causes I mentioned in points a and b.
The simplest way to analyse the problem is to restart the psa service, so that you start with a clean slate and earlier noise does not bias the analysis.
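On most installations that boils down to something like the commands below; treat this as a sketch, since the exact service names and init system can differ per OS and Plesk version.

# restart the Plesk panel via the psa wrapper (present on most Plesk installations)
service psa restart
# or, on systemd-based systems, restart the panel web server and PHP engine directly
systemctl restart sw-cp-server sw-engine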
Then just provide some feedback, in the form of the output (a couple of minutes after the restart AND after trying to log in as admin and as a client) from the following logs; see the example commands after the list:
- /var/log/plesk/sw-cp-server/sw-engine.log
- /var/log/plesk/sw-cp-server/error_log (note: if everything is ok, no new entries should be found here)
- /var/log/plesk/httpsd_access_log (note: only provide output if you see something peculiar over there)
- /var/log/plesk/psa_service.log (note: if everything is ok, you will only see that the psa service has been started. If that is the case, there is no need to provide output)
- /var/log/plesk/panel.log (this will be a huge file, so only provide the relevant output)
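To collect that output, something along these lines should be enough (default log locations assumed; adjust the number of lines as needed).

# grab the tail of each relevant log a couple of minutes after the restart
tail -n 50 /var/log/plesk/sw-cp-server/sw-engine.log
tail -n 50 /var/log/plesk/sw-cp-server/error_log
tail -n 20 /var/log/plesk/psa_service.log
# panel.log can be huge, so only keep recent errors and warnings
grep -iE "err|warn" /var/log/plesk/panel.log | tail -n 50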
Regards