
Issue - Problem with IPv4 and Plesk

sebgonzes

Silver Pleskian
Hello

On our server (CentOS 7 + Plesk 17.5, up to date) we have a very strange issue.
The server has 6 public IPs on em1:
- 3 IPs in a /29 (the YYY IPs)
- 3 IPs in a /28 (the XXX IPs)

We configured them directly through Plesk; only the first one has a gateway in the physical config file. All the IPs work fine, but sometimes one of them stops working... and sometimes it starts working again by itself after a few hours! In other cases an ifdown/ifup solves it, but while it is down, all the domains associated with that IP (one of the /29) are of course unreachable. We have tried changing the IP and moving all the domains from the old IP to a new one, and the same problem appeared after a few hours.
The datacenter tells us there is no problem with their network or switches...

We are a bit confused. I have seen a similar thread:
Issue - Problem adding second IPv6 address

but it has not led to a real solution so far.
The very strange thing is that the other 5 IPs (and the domains associated with them) keep working without problems, including IPs in the same /29!
No firewall is active, no iptables problem, no Fail2Ban, no SELinux... We don't understand what is blocking/disabling this IP :-(

em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXX.XXX.XXX.105 netmask 255.255.255.240 broadcast XXX.XXX.XXX.111
inet6 fe80::862b:2bff:fe51:a749 prefixlen 64 scopeid 0x20<link>
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)
RX packets 234609714 bytes 131387799417 (122.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 271765587 bytes 263121693820 (245.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em1:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet YYY.YYY.YYY.132 netmask 255.255.255.248 broadcast YYY.YYY.YYY.135
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)

em1:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXX.XXX.XXX.103 netmask 255.255.255.240 broadcast 0.0.0.0
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)

em1:3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet YYY.YYY.YYY.134 netmask 255.255.255.248 broadcast 0.0.0.0
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)

em1:4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXX.XXX.XXX.110 netmask 255.255.255.240 broadcast 0.0.0.0
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)

em1:5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet YYY.YYY.YYY.133 netmask 255.255.255.248 broadcast YYY.YYY.YYY.135
ether 84:2b:2b:51:a7:49 txqueuelen 1000 (Ethernet)

em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.154 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::862b:2bff:fe51:a74a prefixlen 64 scopeid 0x20<link>
ether 84:2b:2b:51:a7:4a txqueuelen 1000 (Ethernet)
RX packets 23762725 bytes 2779300854 (2.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 113263517 bytes 171328023927 (159.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 194427436 bytes 176561336736 (164.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 194427436 bytes 176561336736 (164.4 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


I don't know if it could be a sysctl problem...


I hope someone can suggest some tests to run to find the problem.

Thanks a lot.
 
Hi sebgonzes,

I'm not sure, but why do some interfaces have the broadcast address configured as 0.0.0.0? Also, if you configure aliases, why do you use a /29 or /28 netmask on the alias interfaces instead of /32?
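For reference, a CentOS 7 alias file using a /32 host mask could look roughly like this; the file name and address below are placeholders, not the server's real values:

```shell
# /etc/sysconfig/network-scripts/ifcfg-em1:1 -- hypothetical example
DEVICE=em1:1
IPADDR=203.0.113.132
NETMASK=255.255.255.255   # /32: the alias is a single host address
ONBOOT=yes
```

With a /32 the alias carries no network or broadcast range of its own; routing for the subnet stays with the primary address on em1.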
 
Hello

I have changed the netmask, but it has not solved the issue. After 3 days of uptime, one IP alias (one of the 6 configured) went down, and after 4 hours it came back up again without any manual action...
Has anyone had a similar issue? Could it be related to some keepalive parameter? :

-rw-r--r-- 1 root root 0 ene 21 19:20 /proc/sys/net/ipv4/tcp_keepalive_intvl
-rw-r--r-- 1 root root 0 ene 21 19:20 /proc/sys/net/ipv4/tcp_keepalive_probes
-rw-r--r-- 1 root root 0 ene 21 19:18 /proc/sys/net/ipv4/tcp_keepalive_time
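(As a side note, the zero sizes above are normal: files under /proc/sys are generated on read, so ls always reports size 0. The values can be read directly; note also that TCP keepalive only affects established TCP connections, so it should not make an IP address stop answering entirely.)

```shell
# Files under /proc/sys list as size 0; read them to see the actual values.
t=$(cat /proc/sys/net/ipv4/tcp_keepalive_time)
i=$(cat /proc/sys/net/ipv4/tcp_keepalive_intvl)
p=$(cat /proc/sys/net/ipv4/tcp_keepalive_probes)
echo "keepalive: time=${t}s intvl=${i}s probes=${p}"
```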

I'm really not sure this is a server issue; it's just that the datacenter keeps saying there is no problem with their network/firewall.

The only real workaround for now is a crontab on the server that checks the IP against an external page (IsUp.me) to see if it is UP or DOWN, and if it is DOWN, runs an ifdown and ifup on em1:5 (in our case it is alias 5 that fails). A very ugly solution, but we don't know how to do better with this issue for now...
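For what it's worth, such a watchdog could be sketched roughly like this. All addresses and names below are placeholders; note that it pings the gateway *from* the alias address, because pinging a locally configured address from the same box always succeeds via loopback and proves nothing:

```shell
#!/bin/bash
# Hypothetical watchdog for a failing alias IP (all values are placeholders).
IP="203.0.113.133"      # the alias address that keeps failing
GW="203.0.113.129"      # the gateway for that subnet
IFACE="em1:5"           # the alias interface carrying $IP

path_ok() {
    # Ping the gateway using the alias as the source address:
    # this exercises the actual path that is breaking.
    ping -I "$IP" -c 3 -W 2 "$GW" > /dev/null 2>&1
}

if ! path_ok; then
    logger -t ip-watchdog "no path from $IP to $GW, bouncing $IFACE"
    # a failed bounce should not abort the cron run
    ifdown "$IFACE" && ifup "$IFACE" || true
fi
```

Run it from root's crontab every few minutes. A check from an external host (as with IsUp.me) is still the stronger test, since it exercises the full return path as well.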
 
Have you added all IPv4 addresses to Fail2Ban? Otherwise it will react to frequent identical traffic between Nginx and Apache and may block traffic for a certain time. That would explain why your IPs are blocked for a few hours each time, then unblocked.
 
We have disabled or uninstalled every firewall we could:

[root@server ~]# selinuxenabled; echo $?
1
[root@server ~]#
-------------------------------------------

[root@server ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@server ~]#

-------------------------------------------

[root@server ~]# rpm -qa | grep fail2ban
[root@server ~]#
-------------------------------------------

[root@server ~]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@server ~]#

That was the first thing I checked when I first detected the problem, but nothing on the server seems to be actively blocking the IP... I can bring the IP up again with an ifdown and ifup when we detect the problem, and in other cases the IP starts working again after about 4 hours without any manual action over SSH...
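These symptoms (one alias goes dead, recovers by itself hours later, and bouncing the interface fixes it immediately) would also fit a stale ARP entry for that IP on the upstream router, since an ifdown/ifup forces fresh ARP traffic for the address. The next time it happens, spot checks along these lines might narrow it down (interface name taken from this thread; run as root):

```shell
# Run these while the alias is unreachable:
ip addr show                # is the address still assigned to em1?
ip -s link show             # any growing error/drop counters on em1?
ip neigh show               # FAILED/STALE entries for the gateway are suspect

# In a second terminal, watch whether ARP requests for the dead IP arrive
# on the wire and whether this host answers them (Ctrl-C to stop):
#   tcpdump -ni em1 arp
```

If the host answers ARP but traffic still dies, the problem is more likely on the router side, which would support pushing back on the datacenter.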
 