How to properly specify s3 credentials for mainApplicationFile in Spark operator? #328
-
I have this mainApplicationFile config:
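Roughly along these lines (a sketch only; the names, bucket, and path here are placeholders, not the real ones):

```yaml
# Illustrative SparkApplication fragment; bucket, path, and names are placeholders.
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: my-spark-app
spec:
  mode: cluster
  mainApplicationFile: s3a://my-app-bucket/jobs/my_job.py
```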
I specified an s3connection for it:
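Something like the following S3Connection (host, port, and names are placeholders):

```yaml
# Illustrative S3Connection; endpoint details and names are placeholders.
apiVersion: s3.stackable.tech/v1alpha1
kind: S3Connection
metadata:
  name: my-s3-connection
spec:
  host: minio.example.svc.cluster.local
  port: 9000
  accessStyle: Path
  credentials:
    secretClass: s3-credentials-class
```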
I also created 2 files:
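The two files would be along these lines, a SecretClass plus the Secret holding the keys (all names and values are placeholders):

```yaml
# Illustrative SecretClass and Secret; names and key values are placeholders.
apiVersion: secrets.stackable.tech/v1alpha1
kind: SecretClass
metadata:
  name: s3-credentials-class
spec:
  backend:
    k8sSearch:
      searchNamespace:
        pod: {}
---
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  labels:
    secrets.stackable.tech/class: s3-credentials-class
stringData:
  accessKey: <ACCESS_KEY>
  secretKey: <SECRET_KEY>
```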
But with this config nothing happens: the pod that's supposed to start the Spark application is stuck in the "Pending" state. Optional info: I previously used the following, and it worked:
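That earlier setup was, roughly, credentials injected as environment variables, which S3A's default credentials provider chain picks up (the exact field placement here is a guess and the values are placeholders):

```yaml
# Rough sketch of the old env-based approach; assumed field layout, placeholder values.
spec:
  env:
    - name: AWS_ACCESS_KEY_ID
      value: <ACCESS_KEY>
    - name: AWS_SECRET_ACCESS_KEY
      value: <SECRET_KEY>
```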
But now it doesn't work properly when my Spark application uses multiple buckets: it applies these keys to all the buckets I use, and I get an error saying my access and secret key are wrong. I did specify per-bucket keys in my Spark config, roughly like the sketch below, but it still seems to use the keys from the env for some weird reason.
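The per-bucket keys are of roughly this form (bucket names and values here are placeholders):

```yaml
sparkConf:
  # Hadoop S3A per-bucket overrides: the segment after "bucket." is the bucket name.
  spark.hadoop.fs.s3a.bucket.bucket-a.access.key: <ACCESS_KEY_A>
  spark.hadoop.fs.s3a.bucket.bucket-a.secret.key: <SECRET_KEY_A>
  spark.hadoop.fs.s3a.bucket.bucket-b.access.key: <ACCESS_KEY_B>
  spark.hadoop.fs.s3a.bucket.bucket-b.secret.key: <SECRET_KEY_B>
```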
I tried:
Any ideas on how to resolve this?
-
What does the whole SparkApplication YAML look like (redacted where need be)? In a previous version we set the env variables, but we don't use them in 23.11 (and possibly not in earlier versions either). The credentials provider is set automatically depending on whether an S3 connection is detected: see https://github.com/stackabletech/spark-k8s-operator/blob/release-23.11/rust/crd/src/lib.rs#L443-L457.
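For reference, a minimal sketch of how an S3 connection is typically attached to the SparkApplication so the operator can detect it (this assumes the 23.11 CRD's s3connection reference field; all names are placeholders):

```yaml
# Sketch only: wiring an existing S3Connection into the SparkApplication by reference.
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: my-spark-app
spec:
  mode: cluster
  mainApplicationFile: s3a://my-app-bucket/jobs/my_job.py
  s3connection:
    reference: my-s3-connection
```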
-
Glad it's working!
For each distinct bucket you'll need something like this:
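Something along these lines, with placeholder bucket names, endpoints, and keys (a sketch, not the exact snippet from the thread):

```yaml
sparkConf:
  # Hadoop S3A resolves fs.s3a.bucket.<name>.* settings only for that bucket,
  # so each bucket can carry its own endpoint and credentials.
  spark.hadoop.fs.s3a.bucket.bucket-a.endpoint: http://minio-a.example.svc.cluster.local:9000
  spark.hadoop.fs.s3a.bucket.bucket-a.access.key: <ACCESS_KEY_A>
  spark.hadoop.fs.s3a.bucket.bucket-a.secret.key: <SECRET_KEY_A>
  spark.hadoop.fs.s3a.bucket.bucket-b.endpoint: http://minio-b.example.svc.cluster.local:9000
  spark.hadoop.fs.s3a.bucket.bucket-b.access.key: <ACCESS_KEY_B>
  spark.hadoop.fs.s3a.bucket.bucket-b.secret.key: <SECRET_KEY_B>
```

Per-bucket values override the corresponding global fs.s3a.* settings for that bucket, so globally supplied keys shouldn't win for buckets that carry their own.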