
500 error on VolumeCreate | snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = server is a required parameter) #795

Open
ebenoist opened this issue Nov 22, 2024 · 2 comments


@ebenoist

What happened:

❥ nomad volume create ./volumes/uptime-volume.hcl
Error creating volume: Unexpected response code: 500 (rpc error: 1 error occurred:
        * controller create volume: CSI.ControllerCreateVolume: volume "uptime" snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = server is a required parameter)
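
For what it's worth, the InvalidArgument appears to originate in the driver's CreateVolume handler, which reads server and share out of the CSI CreateVolumeRequest parameters map. A minimal sketch of that validation shape (my illustration of the pattern, not the driver's actual code):

// Illustrative sketch of the kind of check that produces
// "server is a required parameter"; not csi-driver-nfs source.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// createVolume mimics a CSI controller plugin validating the
// Parameters map it receives in a CreateVolumeRequest.
func createVolume(params map[string]string) error {
	server := params["server"] // the key nfs.csi.k8s.io documents
	if server == "" {
		return status.Error(codes.InvalidArgument, "server is a required parameter")
	}
	fmt.Println("creating volume on", server)
	return nil
}

func main() {
	// An empty Parameters map reproduces the reported error.
	if err := createVolume(map[string]string{}); err != nil {
		fmt.Println(err)
	}
}

If Nomad never populates that map for this request, the controller would reject it before looking at anything else.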

What you expected to happen:

A volume to be created!

How to reproduce it:

Using the nfs CSI plugin with a volume defined like this:

type = "csi"
id = "uptime"
name = "uptime"
plugin_id = "nfs"

capability {
  access_mode = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode = "single-node-writer"
  attachment_mode = "file-system"
}

context {
  server = "192.168.1.135"
  share = "/volume1/nomad/uptime"
}

mount_options {
  fs_type = "nfs"
  mount_flags = [ "timeo=30", "intr", "vers=4", "_netdev", "nolock" ]
}
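
A guess on my part (not confirmed in this thread): the context block is volume context for pre-provisioned volumes registered via nomad volume register, while nomad volume create forwards the parameters block to the controller's CreateVolume call. If that's right, the dynamic-creation variant of the spec would look something like this sketch:

type = "csi"
id = "uptime"
name = "uptime"
plugin_id = "nfs"

capability {
  access_mode = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

# "parameters" (not "context") is what CreateVolume receives;
# the server/share keys follow the nfs.csi.k8s.io docs.
parameters {
  server = "192.168.1.135"
  share  = "/volume1/nomad/uptime"
}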

Anything else we need to know?:

I do have one other NFS volume that is working, but I cannot create a subsequent one:

type = "csi"
id = "nfs"
name = "nfs"
plugin_id = "nfs"

capability {
  access_mode = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode = "single-node-writer"
  attachment_mode = "file-system"
}

context {
  server = "192.168.1.135"
  share = "/volume1/nomad"
}

mount_options {
  fs_type = "nfs"
  mount_flags = [ "timeo=30", "intr", "vers=4", "_netdev", "nolock" ]
}

Environment:

[main?+][~/dev/home]❥ nomad plugin status -verbose
Container Storage Interface
ID   Provider        Controllers Healthy/Expected  Nodes Healthy/Expected
nfs  nfs.csi.k8s.io  1/1                           3/3
@andyzhangx
Member

@ebenoist You could check the CSI driver controller logs for a more detailed error message: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case1-volume-createdelete-failed
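
That doc is written for Kubernetes; a rough Nomad equivalent (assuming the controller plugin runs as an ordinary Nomad job, and nfs-controller is a placeholder for whatever the job is actually named) would be:

# Locate the controller allocation, then tail its logs.
nomad job status nfs-controller      # "nfs-controller" is a placeholder job name
nomad alloc logs -stderr <alloc-id>  # controller errors typically land on stderr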

@ebenoist
Author

@andyzhangx Thanks. I'll try to reproduce that via Nomad. I'm not sure yet whether this CSI plugin is the issue or whether it's something unique to Nomad's use of the interface.
