What happened?
When running kube-proxy in nftables mode and creating a LoadBalancer Service with externalTrafficPolicy: Local that has an ExternalIP assigned, kube-proxy creates an entry in its kube-proxy ip nftables table that drops traffic to that external IP, for example:
map no-endpoint-services {
    type ipv4_addr . inet_proto . inet_service : verdict
    comment "vmap to drop or reject packets to services with no endpoints"
    elements = {
        10.88.1.2 . tcp . 80 comment "sys-ingress-priv/internal-ingress-controller-v2:web" : drop,
        ...
    }
}
As a result, no pod on that node can send traffic to the external IP of the LoadBalancer.
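For reference, the offending entry can be inspected directly on an affected node. A minimal sketch, assuming kube-proxy's nftables table is named kube-proxy in the ip family and the map is called no-endpoint-services as in the dump above (names may differ across versions):

nft list map ip kube-proxy no-endpoint-services                  # dump the drop/reject vmap
nft list table ip kube-proxy | grep -n 'no-endpoint-services'    # see where the vmap is referenced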
What did you expect to happen?
On nodes with no local endpoints for the Service, traffic should not be dropped but load-balanced by kube-proxy to the Service's endpoints. This would also bring consistency with the other modes, as the same issue was addressed for both ipvs (#93456) and iptables (#77523).
How can we reproduce it (as minimally and precisely as possible)?
Run kube-proxy in nftables mode and try hitting a LoadBalancer Service's ExternalIP from within a pod running on a node that has no ready endpoints for the target Service.
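A minimal sketch of a reproducer (the Service name, selector, and pod name are hypothetical; the assigned external IP will differ per environment):

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080

kubectl apply -f demo-lb.yaml
# from a pod on a node with no ready "app: demo" endpoints:
kubectl exec -it <pod-on-that-node> -- curl -sv http://<external-ip>:80
# observed: traffic is dropped by the no-endpoint-services vmap
# expected: traffic is load-balanced to the Service endpoints on other nodes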
Anything else we need to know?
@kubernetes/sig-network-bugs
Similar to #75262, but for the nftables mode.
Kubernetes version
Server Version: v1.33.0
kube-proxy: v1.33.0
Cloud provider
AWS, GCP, and bare metal
OS version
No response
Install tools
No response
Container runtime (CRI) and version (if applicable)
No response
Related plugins (CNI, CSI, ...) and versions (if applicable)
No response