Patch verb for persistentvolumes resources in the external-provisioner-runner clusterrole is not effective #1255
Comments
@xing-yang @carlory In the code at https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L984 we are still doing an update operation, but the RBAC does not grant update access; #1155 addressed the same issue by switching to patch. Can you please check this one and let me know whether we need to add RBAC for update, or change the provisioner code to use patch as well? We still need update for adding the annotation; for updating finalizers we can use patch. A sketch of what the patch variant could look like follows below.
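For context, here is a minimal sketch of removing a PV finalizer with a JSON merge patch via client-go instead of an update. This is illustrative only, not the provisioner's actual code: the function name and the example finalizer value in `main` are assumptions, and the clientset wiring is omitted.

```go
// Sketch: drop a finalizer from a PersistentVolume using a JSON merge patch,
// which only requires the "patch" verb in RBAC (no "update" needed).
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func removePVFinalizer(ctx context.Context, cs kubernetes.Interface, pvName, finalizer string) error {
	pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Build the desired finalizer list without the target entry.
	finalizers := make([]string, 0, len(pv.Finalizers))
	for _, f := range pv.Finalizers {
		if f != finalizer {
			finalizers = append(finalizers, f)
		}
	}

	// A JSON merge patch replaces the finalizers list wholesale, so we send
	// the complete desired list rather than a delta.
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{
			"finalizers": finalizers,
		},
	})
	if err != nil {
		return err
	}

	if _, err := cs.CoreV1().PersistentVolumes().Patch(ctx, pvName, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return fmt.Errorf("patching PV %s: %w", pvName, err)
	}
	return nil
}

func main() {
	// Clientset wiring (kubeconfig or in-cluster config) omitted. Usage, with
	// a hypothetical PV name and finalizer:
	// _ = removePVFinalizer(context.Background(), cs, "pvc-1234", "example.io/some-finalizer")
}
```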
/assign
I will try to reproduce it with the hostpath driver.
Agreed. According to xing-yang's comment, to add a new RBAC rule, we need to bump the major version.
We need to add some e2e tests to detect similar issues. Should the new tests be added to the external-provisioner repo? cc @xing-yang @jsafrane
/unassign Sorry, I was wrong: I linked to the PVC update code, not the PV update code. I don't see code in main that requires PV update access.
@sameshai Can you execute the following command with your service account name? A service account may be bound to more than one cluster role.

```
kubectl auth can-i patch pv --as=system:serviceaccount:default:csi-hostpathplugin-sa
```
If the answer is "yes", it means the external provisioner can patch the persistent volume resource even though the external-provisioner cluster role does not grant that permission: the access is allowed via another cluster role.
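The same check can be done programmatically with the authorization API. A minimal sketch using client-go's SubjectAccessReview, assuming a configured clientset; the function name is illustrative, and the service account is the one from the command above:

```go
// Sketch: programmatic equivalent of `kubectl auth can-i patch pv --as=...`,
// asking the API server whether a given user can patch persistentvolumes.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func canPatchPV(ctx context.Context, cs kubernetes.Interface, user string) (bool, error) {
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: user, // e.g. "system:serviceaccount:default:csi-hostpathplugin-sa"
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "patch",
				Resource: "persistentvolumes",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	// Allowed is true if *any* role bound to the user grants the verb, which
	// is why the external-provisioner-runner rule alone may not tell the
	// whole story.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	return resp.Status.Allowed, nil
}

func main() {
	// Clientset wiring (kubeconfig or in-cluster config) omitted for brevity.
}
```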
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
What happened:
As per the PR below (#1155): if the feature gate HonorPVReclaimPolicy is enabled, create a PVC with a Delete reclaim policy, then delete the PVC; the PV gets stuck in deleting status with the error message described in that PR.
I tried this with the IBM VPC Block CSI driver and the 5.0.2 provisioner, but it seems that even without adding the patch permission, I am still able to delete the PVC/PV with no error.
What you expected to happen:
I was expecting an RBAC error.
How to reproduce it:
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):