Race condition causes operator to implicitly select the same available exit node #143

@venkatamutyala

Description

kubectl apply -k https://github.com/FyraLabs/chisel-operator?ref=v0.4.1

kubectl apply -f - <<YAML
apiVersion: v1
kind: Secret
metadata:
  name: selfhosted
  namespace: chisel-operator-system
type: Opaque
stringData:
  auth: "XXXXXX:XXXXXXX"
---
apiVersion: chisel-operator.io/v1
kind: ExitNode
metadata:
  name: exit1
  namespace: chisel-operator-system
spec:
  host: "51.11.21.111"
  port: 9090
  auth: selfhosted
---
apiVersion: chisel-operator.io/v1
kind: ExitNode
metadata:
  name: exit2
  namespace: chisel-operator-system
spec:
  host: "31.162.29.111"
  port: 9090
  auth: selfhosted
YAML

The IPs above are invalid, but 51.11.21.111 got used twice in my environment for two different Services of type LoadBalancer. I would have expected both 51.11.21.111 and 31.162.29.111 to have been used.

Normally I provision the chisel operator first, but this time I forgot, and by the time I did, both of my Services were already waiting for a load balancer. It looks like the operator assigned the same exit node to each request. I suspect a race condition in the operator caused this, but I have no logs to share since I deleted and recreated the environment.
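For illustration only: the class and method names below are hypothetical, and this is a minimal Python sketch of the kind of check-then-act race suspected here, not the operator's actual code. If the reconcile loop reads the list of free exit nodes and records an assignment as two separate steps, two concurrent reconciles can both observe the same node as free and both claim it; doing the read and write atomically under one lock avoids that.

```python
import threading

class ExitNodePool:
    """Hypothetical model of exit-node assignment in a reconcile loop."""

    def __init__(self, nodes):
        self.nodes = nodes          # e.g. ["exit1", "exit2"]
        self.assigned = {}          # service name -> exit node
        self.lock = threading.Lock()

    def claim_racy(self, service):
        # BUG pattern: the read (compute the free list) and the write
        # (record the assignment) are separate steps, so two concurrent
        # reconciles can both see the same node as free and both pick it.
        free = [n for n in self.nodes if n not in self.assigned.values()]
        node = free[0]
        self.assigned[service] = node
        return node

    def claim_atomic(self, service):
        # FIX pattern: select and record the assignment under one lock,
        # so no other reconcile can observe the node as free in between.
        with self.lock:
            free = [n for n in self.nodes if n not in self.assigned.values()]
            node = free[0]
            self.assigned[service] = node
            return node

pool = ExitNodePool(["exit1", "exit2"])
a = pool.claim_atomic("svc-a")
b = pool.claim_atomic("svc-b")
assert a != b  # each service gets its own exit node
```

With `claim_atomic`, two Services reconciled concurrently end up on distinct exit nodes; with `claim_racy`, interleaved reconciles can both return `exit1`, matching the behavior reported above.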

I'm using Kubernetes 1.29.x.

Metadata

Labels

bug (Something isn't working), good first issue (Good for newcomers)
