How to properly specify s3 credentials for mainApplicationFile in Spark operator? #328

Answered by adwk67
paulpaul1076 asked this question in Q&A

For each distinct bucket you'll need something like this:

spark.hadoop.fs.s3a.bucket.{bucket_name}.endpoint=...
spark.hadoop.fs.s3a.bucket.{bucket_name}.path.style.access=true
spark.hadoop.fs.s3a.bucket.{bucket_name}.access.key=...
spark.hadoop.fs.s3a.bucket.{bucket_name}.secret.key=...
spark.hadoop.fs.s3a.bucket.{bucket_name}.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
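
The settings above can be passed to the operator through the application's `sparkConf` map, so that the driver can fetch a `mainApplicationFile` stored under `s3a://`. Below is a minimal sketch; the `apiVersion`, bucket name, endpoint, and key values are placeholders and the exact CRD fields may differ between operator versions:

```yaml
# Hypothetical SparkApplication manifest -- apiVersion, names, and
# credential values are illustrative placeholders, not real settings.
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: example-job
spec:
  # Object to run, fetched via the s3a:// filesystem
  mainApplicationFile: s3a://my-bucket/jobs/app.jar
  sparkConf:
    # Per-bucket S3A settings for "my-bucket"
    spark.hadoop.fs.s3a.bucket.my-bucket.endpoint: "https://s3.example.com"
    spark.hadoop.fs.s3a.bucket.my-bucket.path.style.access: "true"
    spark.hadoop.fs.s3a.bucket.my-bucket.access.key: "<ACCESS_KEY>"
    spark.hadoop.fs.s3a.bucket.my-bucket.secret.key: "<SECRET_KEY>"
    spark.hadoop.fs.s3a.bucket.my-bucket.aws.credentials.provider: "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
```

Because the keys are scoped with `fs.s3a.bucket.{bucket_name}.`, each bucket can point at a different endpoint and credential pair within the same application.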

Replies: 2 comments 2 replies

adwk67 Dec 20, 2023
Collaborator

Answer selected by paulpaul1076