kubelet: set terminationMessagePath perms to 0660 #108076
Conversation
Hi @skrobul. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/triage accepted
/priority important-longterm
/retest
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
} else if pod.Spec.SecurityContext != nil && pod.Spec.SecurityContext.RunAsUser != nil {
	containerUid = int(*pod.Spec.SecurityContext.RunAsUser)
} else {
	containerUid = 0
}
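For context, this hunk is the tail of a fallback chain; the first branch, which presumably reads the container-level RunAsUser, falls outside the quoted context. A hedged sketch of the full selection under that assumption (not the PR's verbatim code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// resolveRunAsUID sketches the fallback shown above: prefer the
// container-level RunAsUser, then the pod-level one, then default to
// root (0). The function name is hypothetical.
func resolveRunAsUID(pod *v1.Pod, container *v1.Container) int {
	if container.SecurityContext != nil && container.SecurityContext.RunAsUser != nil {
		return int(*container.SecurityContext.RunAsUser)
	}
	if pod.Spec.SecurityContext != nil && pod.Spec.SecurityContext.RunAsUser != nil {
		return int(*pod.Spec.SecurityContext.RunAsUser)
	}
	return 0
}

func main() {
	uid := int64(1000)
	pod := &v1.Pod{Spec: v1.PodSpec{
		SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
		Containers:      []v1.Container{{Name: "app"}},
	}}
	// The container has no SecurityContext, so the pod-level value wins.
	fmt.Println(resolveRunAsUID(pod, &pod.Spec.Containers[0])) // 1000
}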
Should we also consider SecurityContext.RunAsNonRoot here and below?
As far as I understand, RunAsNonRoot does not influence the UID selection; it merely enables a validation that the user is non-root. Is that not the case?
That's true, but this change would set the UID to 0 (root) even if RunAsNonRoot is true. It may be OK, but it looked suspicious to me. What would happen if the container runs as non-root, but the containerLogPath owner is root?
The container would not run at all if RunAsNonRoot was set to true but pod.Spec.SecurityContext.RunAsUser was left unset. In other words, as far as I understand, there is no way for the container to start as non-root without setting RunAsUser at either the container or the pod level. What am I missing?
I'm not sure about that. I thought RunAsNonRoot and RunAsUser were independent options. By default, the user UID and/or username is taken from the image.
Here is the code I'm talking about: https://github.com/kubernetes/kubernetes/blob/release-1.29/pkg/kubelet/kuberuntime/kuberuntime_container.go#L326
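For readers following the thread, here is a simplified, hedged paraphrase of the check behind that link (the real function resolves the effective security context from the pod and container first, which this sketch skips):

package main

import "fmt"

// verifyNonRoot paraphrases the kubelet's runAsNonRoot validation; it
// is not a verbatim copy. uid is the UID resolved from RunAsUser or
// from a numeric image USER; it is nil when only a non-numeric
// username is known.
func verifyNonRoot(runAsNonRoot *bool, uid *int64, imageUsername string) error {
	// When RunAsNonRoot is unset or false, no validation happens.
	if runAsNonRoot == nil || !*runAsNonRoot {
		return nil
	}
	switch {
	case uid != nil && *uid == 0:
		return fmt.Errorf("container's runAsUser breaks non-root policy")
	case uid == nil && len(imageUsername) > 0:
		// A non-numeric image USER cannot be proven to be non-root.
		return fmt.Errorf("image has non-numeric user (%s), cannot verify user is non-root", imageUsername)
	default:
		return nil
	}
}

func main() {
	nonRoot := true
	root := int64(0)
	// The case discussed above: RunAsNonRoot plus an effective UID of 0
	// fails validation, so the container never starts.
	fmt.Println(verifyNonRoot(&nonRoot, &root, ""))
	// With no UID and no username known, the sketch allows the start;
	// the real code resolves these from the image beforehand.
	fmt.Println(verifyNonRoot(&nonRoot, nil, ""))
}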
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/test all
/easycla
/assign @SergeyKanzhelev
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: skrobul
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
World-writable files caused by the issue this PR fixes were flagged by security tools in a question that came my way. Is this PR still being prioritised?
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What type of PR is this?
/kind bug
What this PR does / why we need it:
Currently, kubelet creates world-readable and world-writable empty files in /var/lib/kubelet/pods/{podUID}/containers/{containerName}/{containerId}. These are meant to be written to by the process in the container when the container is terminated. Originally this file was created with mode 0644; then, despite security concerns, it was changed to 0666 in #31839 to allow containers running as non-root to write termination messages. Later, in 2019, this was highlighted as a security vulnerability in the Kubernetes Security Audit Report, in #81116. This PR changes the termination log file mode to 0660, which is the best of both worlds: it removes the world-writable file while still allowing the container user and its group to write the termination message.
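A minimal sketch of the change described above, assuming the container's UID/GID have already been resolved; the helper name is hypothetical and this is an illustration, not the PR's exact code:

package main

import (
	"fmt"
	"os"
)

// ensureTerminationLog creates the termination message file with mode
// 0660 instead of 0666 and hands ownership to the container's user, so
// a non-root container (and its group) can still write to it without
// leaving the file world-writable.
func ensureTerminationLog(path string, containerUID, containerGID int) error {
	// 0660: read/write for owner and group, no access for others. A real
	// implementation would also Chmod explicitly, since the mode passed
	// to OpenFile is filtered by the process umask.
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0o660)
	if err != nil {
		return fmt.Errorf("create termination log %q: %w", path, err)
	}
	if err := f.Close(); err != nil {
		return err
	}
	// Chown to the resolved container UID/GID; the kubelet runs with
	// enough privilege to do this.
	return os.Chown(path, containerUID, containerGID)
}

func main() {
	if err := ensureTerminationLog("/tmp/termination-log", 1000, 1000); err != nil {
		fmt.Println(err)
	}
}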
Which issue(s) this PR fixes:
Related (fixes only part of) #81116
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: