Question network.audit in journal log

carini

Basic Pleskian
Server operating system version
Ubuntu 22.04.5 LTS
Plesk version and microupdate number
Plesk Obsidian v18.0.67_build1800250217.08
While checking the system logs to investigate a slowness issue, I came across this journalctl output. It appears that the systemd journal is being flooded with non-critical errors, and I wonder whether this could ultimately contribute to system performance degradation.

Here’s a snippet of the log. Anyone with the same issue?

Feb 24 14:29:58 myplesk.mynet.com audit[86033]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=5ccabbf2f8a8 a1=5ccabbf2f900 a2=5ccabbf2f920 a3=2 items=2 ppid=86030 pid=86033 auid=4294967295 uid=0 gid=986 euid=0 suid=0 fsuid=0 egid=986 sgid=986 fsgid=986 tty=(none) ses=4294967295 comm="grep" exe="/usr/bin/grep" subj=unconfined key="auoms"
Feb 24 14:29:58 myplesk.mynet.com audit[86034]: SYSCALL arch=c000003e syscall=59 success=yes exit=0 a0=5ccabbf2f8a8 a1=5ccabbf2f8f0 a2=5ccabbf2f908 a3=2 items=2 ppid=86030 pid=86034 auid=4294967295 uid=0 gid=986 euid=0 suid=0 fsuid=0 egid=986 sgid=986 fsgid=986 tty=(none) ses=4294967295 comm="awk" exe="/usr/bin/gawk" subj=unconfined key="auoms"
Feb 24 14:29:58 myplesk.mynet.com audit: EXECVE argc=2 a0="/usr/bin/awk" a1=7B7072696E742024327D
Feb 24 14:29:58 myplesk.mynet.com audit: CWD cwd="/opt/bitninja"
Feb 24 14:29:58 myplesk.mynet.com audit: PATH item=0 name="/usr/bin/awk" inode=1990 dev=08:11 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 24 14:29:58 myplesk.mynet.com audit: PATH item=0 name="/usr/bin/grep" inode=1629 dev=08:11 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 24 14:29:58 myplesk.mynet.com audit: PATH item=1 name="/lib64/ld-linux-x86-64.so.2" inode=15003 dev=08:11 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 24 14:29:58 myplesk.mynet.com audit: PROCTITLE proctitle=2F7573722F62696E2F67726570005C732F6F70742F6269746E696E6A612D70726F636573732D616E616C797369732F6269746E696E6A612D70726F636573732D616E616C79736973
Feb 24 14:29:58 myplesk.mynet.com audit: EXECVE argc=3 a0="/usr/bin/grep" a1="-v" a2="grep"
Feb 24 14:29:58 myplesk.mynet.com audit: CWD cwd="/opt/bitninja"
Feb 24 14:29:58 myplesk.mynet.com audit: PATH item=0 name="/usr/bin/grep" inode=1629 dev=08:11 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 24 14:29:58 myplesk.mynet.com audit: PATH item=1 name="/lib64/ld-linux-x86-64.so.2" inode=15003 dev=08:11 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
 
Hi. Thanks for sharing the details! From your logs, it looks like the journal is being flooded with kernel audit records, which could well be contributing to the slowness you're seeing.

Key Observations:
  • Frequent `grep` and `awk` executions:
    - Your logs show repeated `grep` and `awk` runs; decoding the hex-encoded arguments (see the example after this list) shows they belong to a process check launched from BitNinja's directory.
  • Excessive audit logging:
    - The audit subsystem is emitting a full set of records (`SYSCALL`, `EXECVE`, `CWD`, `PATH`, `PROCTITLE`) for every executed command, and the `key="auoms"` value on the SYSCALL records shows they all match the same audit rule.
    - Logging this verbose adds disk I/O and can degrade overall performance.
  • BitNinja's role:
    - The working directory in the `CWD` records is `/opt/bitninja`, so BitNinja is almost certainly the process spawning these commands.
    - Some security tools log process activity in great detail, which can translate into high resource usage.
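The hex-encoded values in the EXECVE and PROCTITLE records can be decoded to confirm what is actually being executed. Here is a rough sketch using xxd; if the auditd userspace tools are installed, ausearch can interpret the records for you instead:
Code:
# Decode the awk argument from the EXECVE record
echo 7B7072696E742024327D | xxd -r -p; echo
# -> {print $2}

# Decode the PROCTITLE value (arguments are NUL-separated)
echo 2F7573722F62696E2F67726570005C732F6F70742F6269746E696E6A612D70726F636573732D616E616C797369732F6269746E696E6A612D70726F636573732D616E616C79736973 | xxd -r -p | tr '\0' ' '; echo
# -> /usr/bin/grep \s/opt/bitninja-process-analysis/bitninja-process-analysis

# If auditd is installed, ausearch can interpret the records keyed "auoms" directly
sudo ausearch -i -k auoms | tail -n 40
Together with the "-v grep" and "{print $2}" arguments, this looks like the classic `ps | grep ... | grep -v grep | awk '{print $2}'` pattern, presumably checking whether the bitninja-process-analysis binary is running.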

Recommendations to Reduce Logging Impact:

1. Adjust BitNinja Logging Settings

Try listing the active BitNinja modules (the exact CLI syntax can differ between BitNinja versions, so check BitNinja's documentation if this command isn't recognized):
Code:
sudo bitninja -m list
If one module, such as the process-analysis check visible in your logs, turns out to generate most of the activity, consider tuning or disabling it.
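Before disabling anything, it may be worth confirming how much of the audit traffic actually traces back to BitNinja. A rough sketch, assuming the kernel audit records are reaching the journal as in your snippet (the `_TRANSPORT=audit` match selects exactly those entries):
Code:
# Audit records in the last hour that mention bitninja
journalctl _TRANSPORT=audit --since "1 hour ago" --no-pager | grep -ci bitninja

# Total audit records in the same window, for comparison
journalctl _TRANSPORT=audit --since "1 hour ago" --no-pager | wc -l

# BitNinja processes currently running (the [b] trick excludes grep itself)
ps -eo pid,ppid,etime,cmd | grep -i '[b]itninja'
If the bitninja-related count is close to the total, tuning that single module should remove most of the noise.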

2. Reduce Audit Log Verbosity
You can add exclude rules to `/etc/audit/rules.d/audit.rules` to filter out certain record types (keep in mind this also removes detail that can be useful for security forensics):
Code:
-a always,exclude -F msgtype=PATH
-a always,exclude -F msgtype=EXECVE
Then restart `auditd`:
Code:
sudo systemctl restart auditd
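On some distributions the auditd unit refuses manual stops/restarts via systemctl. If the restart fails, the rules in /etc/audit/rules.d/ can be compiled and loaded with augenrules instead (this assumes the standard audit userspace tools are installed):
Code:
# Rebuild /etc/audit/audit.rules from /etc/audit/rules.d/ and load it into the kernel
sudo augenrules --load

# Verify that the exclude rules are now active
sudo auditctl -l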

3. Tune Systemd Journal Logging
To prevent the journal from being overwhelmed, adjust the rate limits and size cap under the `[Journal]` section of `/etc/systemd/journald.conf`:
Code:
RateLimitIntervalSec=30s
RateLimitBurst=500
SystemMaxUse=500M
Then restart journald:
Code:
sudo systemctl restart systemd-journald
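To see how large the journal has already grown, and to trim it immediately instead of waiting for the SystemMaxUse= cap to apply on rotation, something like this should work:
Code:
# Disk space currently used by the journal
journalctl --disk-usage

# Remove archived journal files until the total drops below 500M
sudo journalctl --vacuum-size=500M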

4. Monitor Resource Usage
If slowness persists, check CPU and disk activity (iotop needs root privileges):
Code:
top -o %CPU
sudo iotop -o
journalctl -f
If BitNinja or `auditd` is consuming high resources, further tuning might be needed.
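To gauge whether the tuning actually reduced the flood, you can also watch only the audit stream rather than the whole journal; a small sketch:
Code:
# Audit records that reached the journal in the last five minutes
journalctl _TRANSPORT=audit --since "5 minutes ago" --no-pager | wc -l

# Follow only the kernel audit stream
journalctl _TRANSPORT=audit -f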

I believe reducing log verbosity and monitoring BitNinja’s behavior should help. Let me know if any of these suggestions work for you!
 