Interfaces with IPAM IPv6 addresses also pick up SLAAC addresses #160
Comments
I agree, we should probably disable accept_ra.
I wouldn't want to disable accept_ra unless a route was specified. A lot depends on whether there is any plan to support SLAAC in this set of plugins, and on how you are going to support the GET proposal with IPv6: does it include local and temporary addressing, for example, and discovered routes?
I hit a similar issue: the second network interface got an extra IPv6 address.
Here are more details.
The address 2001::f:10/116 is the expected one.
Even after I disabled autoconf and accept_ra in the pod config, it still got an extra IP address. @squeed @NeilW any thoughts? Thanks!
This is still an issue for us. I think it is a race condition between the RA packets arriving and the tuning plugin applying the sysctls. This is my theory (I am not very familiar with CNI or Go, so take this with a grain of salt): if an RA packet arrives after the container interface is brought up but before accept_ra is disabled, the container will have an extra IPv6 address and default gateway. We use this configuration (the variables are filled in properly by another mechanism):
{
"cniVersion": "1.0.0",
"name": "test",
"plugins": [
{
"type": "bridge",
"bridge": "pod",
"ipam": {
"type": "host-local",
"ranges": [
[
{
"subnet": "$IPV4_RANGE",
"rangeStart": "$IPV4_RANGE_START",
"rangeEnd": "$IPV4_RANGE_END",
"gateway": "$IPV4_GATEWAY"
}
],
[
{
"subnet": "$IPV6_RANGE",
"rangeStart": "$IPV6_RANGE_START",
"rangeEnd": "$IPV6_RANGE_END",
"gateway": "$IPV6_GATEWAY"
}
]
],
"routes": [
{ "dst": "0.0.0.0/0" },
{ "dst": "::/0" }
]
}
},
{
"type": "tuning",
"sysctl": {
"net.ipv6.conf.all.accept_ra": "0",
"net.ipv6.conf.all.autoconf": "0",
"net.ipv6.conf.default.accept_ra": "0",
"net.ipv6.conf.default.autoconf": "0",
"net.ipv6.conf.eth0.accept_ra": "0",
"net.ipv6.conf.eth0.autoconf": "0"
}
}
]
}
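If that theory is right, one way to avoid the race is to write the per-interface sysctls while the link is still down, so an RA arriving around link-up cannot be accepted. The following is a minimal illustrative Go sketch, not the actual bridge or tuning plugin code; it assumes the caller is already inside the container's network namespace and knows the interface name:

// Sketch only: disable RA processing and SLAAC for an interface before it
// comes up. Assumes we already run inside the container's network namespace.
package main

import (
	"fmt"
	"os"

	"github.com/vishvananda/netlink"
)

// writeSysctl writes a value under /proc/sys; key uses '/' separators,
// e.g. "net/ipv6/conf/eth0/accept_ra".
func writeSysctl(key, value string) error {
	return os.WriteFile("/proc/sys/"+key, []byte(value), 0o644)
}

// bringUpWithoutSlaac disables RA acceptance and address autoconfiguration
// for ifName before bringing the link up, so there is no window in which a
// router advertisement can add an address or default route.
func bringUpWithoutSlaac(ifName string) error {
	link, err := netlink.LinkByName(ifName)
	if err != nil {
		return err
	}
	for _, key := range []string{"accept_ra", "autoconf"} {
		if err := writeSysctl(fmt.Sprintf("net/ipv6/conf/%s/%s", ifName, key), "0"); err != nil {
			return err
		}
	}
	return netlink.LinkSetUp(link)
}

func main() {
	// "eth0" is a hypothetical interface name; a real plugin would take it
	// from the CNI arguments.
	if err := bringUpWithoutSlaac("eth0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}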
Is the bridge in layer 2 mode by any chance? In layer 3 mode (the default) the bridge is its own subnet ("broadcast domain"), so an RA sent on the outside network shouldn't propagate to the containers on the bridge. Is something on your node sending the RA? It has to come from somewhere. If nothing sends RAs, there shouldn't be a need for tuning inside the containers.
It is a layer 2 bridge. I have a separate physical network for pods and I assign IP addresses from that network. I don't need or want to do any layer 3 processing on the host node. I have radvd servers on that network advertising prefixes and default routes for other reasons. I created a small PR (#910) and it is working for me. It adds an enableSlaac parameter and turns off accept_ra on the container side based on its value. It is just the code for now; if you think it is a good approach, I'll extend the PR with test cases, documentation changes, etc.
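As a rough illustration of the idea only (the NetConf field name, function, and wiring below are assumptions, not the actual code from the PR), such a switch could parse a boolean from the plugin config and, when SLAAC is not wanted, disable accept_ra inside the container's network namespace:

// Sketch of an "enableSlaac"-style switch; hypothetical names, not the PR's
// implementation.
package ratoggle

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/containernetworking/plugins/pkg/ns"
)

type NetConf struct {
	// Hypothetical field: when false, suppress SLAAC on the container
	// interface by turning off accept_ra.
	EnableSlaac bool `json:"enableSlaac"`
}

// maybeDisableRA parses the plugin config from stdinData and, if SLAAC is
// disabled, enters the container's network namespace and writes the sysctl.
func maybeDisableRA(stdinData []byte, netnsPath, ifName string) error {
	conf := NetConf{EnableSlaac: true} // default: current behaviour, SLAAC allowed
	if err := json.Unmarshal(stdinData, &conf); err != nil {
		return err
	}
	if conf.EnableSlaac {
		return nil
	}
	return ns.WithNetNSPath(netnsPath, func(_ ns.NetNS) error {
		path := fmt.Sprintf("/proc/sys/net/ipv6/conf/%s/accept_ra", ifName)
		return os.WriteFile(path, []byte("0"), 0o644)
	})
}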
The IPAM system appears to be top-down in nature, in that the interface is assigned the addresses returned by the IPAM plugin. However, the interfaces created don't have IPv6 autoconfiguration switched off, which can result in the interface picking up a bottom-up SLAAC address in addition to the IPAM-allocated one if it is on a network where other devices and interfaces are using SLAAC.
With a CNI config of:
on Kubernetes I get:
Interfaces with IPAM IPv6 addressing should probably set /proc/sys/net/ipv6/conf/<int>/autoconf and /proc/sys/net/ipv6/conf/<int>/accept_ra appropriately. (Perhaps accept_ra is switched off if routes are specified and switched on if not; similarly, autoconf is switched off if ranges are specified and switched on if not.)
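A minimal sketch of that heuristic (the input type below is hypothetical; the real plugins would derive these flags from the parsed IPAM configuration) could be:

// Sketch of the proposed heuristic: turn accept_ra off when IPv6 routes are
// specified, and autoconf off when IPv6 ranges are specified; leave both on
// otherwise. The ipamSummary type is hypothetical.
package ratoggle

type ipamSummary struct {
	HasV6Routes bool // e.g. a "::/0" route was requested in the config
	HasV6Ranges bool // e.g. an IPv6 subnet or range was configured
}

// slaacSysctls returns the values to write to
// /proc/sys/net/ipv6/conf/<ifname>/accept_ra and .../autoconf.
func slaacSysctls(c ipamSummary) (acceptRA, autoconf string) {
	acceptRA, autoconf = "1", "1"
	if c.HasV6Routes {
		acceptRA = "0" // routes come top-down from IPAM, ignore advertised ones
	}
	if c.HasV6Ranges {
		autoconf = "0" // addresses come top-down from IPAM, no SLAAC addresses
	}
	return acceptRA, autoconf
}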