dynamically create ingress for running sparkapplication to access spark-ui outside of the k8s cluster #454
Comments
We now have two workarounds that make it possible to see the running Spark applications without the operator provisioning an ingress itself:

1. Deploying a Spark History Server and enabling rolling Spark event logs by setting spark.eventLog.rolling.enabled to true. The clear disadvantage is the delay of the log roll; it is not as live as the real Spark UI would be, but it works pretty well and without the need to deploy an ingress for every Spark application running in the cluster. A further disadvantage is the additional Spark history configuration in the SparkApplication, where the whole S3 connection has to be set instead of just a URL or something similar (#415 tracks that already; could have been me who created it).

I really recommend adding the possibility to deploy an ingress automatically with every Spark application the operator submits, maybe with an ingress template that is defined once for the operator via Helm values or something like that.
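For illustration only (not taken from this issue), here is a minimal PySpark sketch of the event-log settings behind that history-server workaround; the S3 path is a placeholder and, as noted above, a real setup additionally needs the full S3 connection configuration:

```python
# Hypothetical illustration of the rolling event-log settings behind the
# history-server workaround; the bucket/path below is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("event-log-rolling-sketch")
    # Write event logs so a Spark History Server can read them.
    .config("spark.eventLog.enabled", "true")
    .config("spark.eventLog.dir", "s3a://example-bucket/spark-logs")  # placeholder
    # Roll the event log so the history server can show a (delayed) view
    # of an application that is still running.
    .config("spark.eventLog.rolling.enabled", "true")
    .config("spark.eventLog.rolling.maxFileSize", "16m")
    .getOrCreate()
)
```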
Thank you for your report, and sorry for the late response. We discussed this briefly and it will hopefully be prioritized for the next release, but I cannot guarantee it. Until then, a somewhat better workaround is to use the listener-operator; I tested a snippet for this and it worked.
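The exact snippet that was tested is not shown in this thread. Purely as an illustrative sketch of the general direction, the following uses the Kubernetes Python client to create a listener-operator Listener resource for the driver UI port; the API group and version, the className, the extraPodSelectorLabels field, and the pod labels are assumptions based on the listener-operator documentation and are not taken from this issue:

```python
# Hypothetical sketch only: creates a listener-operator Listener for the Spark
# driver UI (port 4040). The CRD group/version, the "listeners" plural, the
# className "external-unstable" and the extraPodSelectorLabels field are
# assumptions, not values confirmed in this issue.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

listener = {
    "apiVersion": "listeners.stackable.tech/v1alpha1",
    "kind": "Listener",
    "metadata": {"name": "spark-pi-driver-ui", "namespace": "default"},
    "spec": {
        "className": "external-unstable",
        # Assumed way to select the driver pod; adjust to the labels the
        # operator actually puts on the driver pod.
        "extraPodSelectorLabels": {"spark-role": "driver"},
        "ports": [{"name": "http", "port": 4040, "protocol": "TCP"}],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="listeners.stackable.tech",
    version="v1alpha1",
    namespace="default",
    plural="listeners",
    body=listener,
)
```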
Hope this helps.
When submitting a SparkApplication resource, the driver will expose port 4040 where we can access the Spark UI. Unfortunately, the UI is available only within the cluster, not from outside via a web browser on a user's PC.
Of course it's possible to create an ingress myself for every Spark application I'm submitting, but since SparkApplications are ephemeral, after a couple of weeks there will be lots of dead ingresses in the cluster whose Spark application has already terminated.
I think the operator should create an ingress/route whenever a SparkApplication is submitted. A possible configuration option would be a value set in the operator itself; I'd prefer that over setting the ingress configuration with every application.
@sbernauer feel free to add
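For completeness, here is a minimal, hypothetical sketch of the manual approach using the Kubernetes Python client. The pod, service, host, and ingress-class names are placeholders, and the ownerReference makes Kubernetes garbage-collect the Ingress together with the driver pod, which avoids the dead-ingress problem described above (at the cost of still having to run this for every application):

```python
# Hypothetical sketch: manually create an Ingress for a running driver's UI and
# make it an "owned" object of the driver pod, so Kubernetes garbage-collects
# the Ingress when the driver pod is deleted. Names, namespace and ingress
# class are placeholders, not values from this issue.
from kubernetes import client, config

config.load_kube_config()
namespace = "default"

driver_pod = client.CoreV1Api().read_namespaced_pod("spark-pi-driver", namespace)

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="spark-pi-driver-ui",
        # Owner reference: the Ingress is cleaned up together with the driver pod.
        owner_references=[
            client.V1OwnerReference(
                api_version="v1",
                kind="Pod",
                name=driver_pod.metadata.name,
                uid=driver_pod.metadata.uid,
            )
        ],
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",  # placeholder
        rules=[
            client.V1IngressRule(
                host="spark-pi.example.com",  # placeholder
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="spark-pi-driver-svc",  # placeholder: the driver UI service
                                    port=client.V1ServiceBackendPort(number=4040),
                                )
                            ),
                        )
                    ]
                ),
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace, ingress)
```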