
metallb-controller reassigns the IP if the IPAddressPool CIDR is updated such that the existing allocated IP is not part of the new CIDR block #2449

Open
abhishekjain1982 opened this issue Jul 10, 2024 · 1 comment · May be fixed by #2594

Comments

@abhishekjain1982

MetalLB Version

0.14.5

Deployment method

Charts

Main CNI

calico

Kubernetes Version

No response

Cluster Distribution

No response

Describe the bug

Steps:

  1. Initially, an IPAddressPool is created with subnet 172.168.2.0/24.
  2. metallb-controller assigns service IPs from that subnet.
  3. The IPAddressPool is then updated to subnet 172.168.3.0/24 (sketched below).
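
A minimal reproduction sketch, assuming a standard MetalLB install in the metallb-system namespace; the pool, deployment, and service names are illustrative:

```sh
# 1. Create the initial pool (172.168.2.0/24).
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.168.2.0/24
EOF

# 2. Expose a workload; MetalLB assigns an EXTERNAL-IP from 172.168.2.0/24.
kubectl expose deployment demo --type=LoadBalancer --port=80
kubectl get svc demo

# 3. Update the pool so the assigned IP is no longer covered.
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.168.3.0/24
EOF

# On 0.14.5 the service is silently re-assigned an IP from 172.168.3.0/24.
kubectl get svc demo
```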

Observation:
As the existing assigned service IP is not part of the new subnet 172.168.3.0/24, the service is reassigned a new IP from the updated subnet.

This is a change in behavior: in earlier versions (v0.13.12) the same steps resulted in a config-stale alarm, and the current documentation still describes the old behavior:

Changing the IP of a service

The current behaviour of MetalLB is to try to preserve the connectivity when a configuration change that might disrupt a service happens. For example, removing an IPAddressPool that contains IPs currently assigned to services.

If that happens, instead of reallocating (if possible) a new IP to the service, the configuration change is marked as stale and MetalLB keeps running with the last valid configuration.

In order to re-assign a new IP to the services, there are two options:

  • restarting the MetalLB's controller pod
  • deleting and re-creating the service
To Reproduce

Same steps as in the bug description above; see the reproduction sketch there.

Expected Behavior

As in v0.13.12: the configuration change should be marked as stale (raising a config-stale alarm) and MetalLB should keep running with the last valid configuration, as described in the documentation quoted above.

Additional Context

NA

I've read and agree with the following

  • I've checked all open and closed issues and my request is not there.
  • I've checked all open and closed pull requests and my request is not there.

I've read and agree with the following

  • I've checked all open and closed issues and my issue is not there.
  • This bug is reproducible when deploying MetalLB from the main branch
  • I have read the troubleshooting guide and I am still not able to make it work
  • I checked the logs and MetalLB is not discarding the configuration as not valid
  • I enabled the debug logs, collected the information required from the cluster using the collect script and will attach them to the issue
  • I will provide the definition of my service and the related endpoint slices and attach them to this issue
@fedepaol (Member)

Thanks @abhishekjain1982 for raising this issue. In 0.14.2 we chose to make MetalLB more Kubernetes-compliant and honor the configuration. The change was implemented deliberately in #2097 and mentioned in the 0.14.2 release notes: https://metallb.io/release-notes/#version-0-14-2

I agree, though, that the documentation is no longer aligned. I will label this as documentation so we can align it with the current behaviour.
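
For reference, under the current behaviour a service can be pinned to a specific address, so a pool change should surface as an allocation failure rather than a silent re-assignment. A sketch assuming the metallb.universe.tf/loadBalancerIPs annotation (service name and IP are illustrative):

```sh
# Pin the service to its current IP; if no pool covers this address,
# allocation fails instead of the service silently moving to a new IP.
kubectl annotate svc demo metallb.universe.tf/loadBalancerIPs=172.168.2.10
```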
