Idea - kubectl-trace operator #85
I strongly agree with this. I think we eventually want to use the CRD in all cases. In a lightweight scenario we orchestrate it from the local kubectl-trace (the local kubectl-trace effectively becomes the shared informer for that CRD), while in the full scenario we run the whole operator, which in my opinion should be installable and maintainable from kubectl-trace itself.
I really like the idea, especially the interface part: basically splitting the concepts in a cleaner way! 🤓 Let's just reason about the process for letting users choose which mode kubectl-trace should use to run its magic under the hood. Should we think about a command to let users choose?
I suggest `--standalone`. We can default it to true, then once the other approach is stable, default it to false.
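A minimal sketch of how such a flag could be wired up with spf13/cobra, which kubectl-trace already uses for its commands. The command name, output, and branching here are assumptions for illustration, not the actual implementation; only the flag name and defaulting idea come from the suggestion above.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// newRunCommand shows one way the proposed flag could look. The command
// shape is hypothetical; only the flag-defaulting idea is taken from the
// discussion above.
func newRunCommand() *cobra.Command {
	var standalone bool

	cmd := &cobra.Command{
		Use:   "trace run NODE",
		Short: "Run a bpftrace program",
		RunE: func(cmd *cobra.Command, args []string) error {
			if standalone {
				// Today's behavior: orchestrate the trace job directly
				// from the local kubectl-trace process.
				fmt.Println("running in standalone mode")
				return nil
			}
			// Future behavior: create a TraceJob custom resource and
			// let the operator reconcile it.
			fmt.Println("delegating to the trace operator")
			return nil
		},
	}

	// Default to true until the operator-based approach is stable, then
	// flip the default to false, as suggested above.
	cmd.Flags().BoolVar(&standalone, "standalone", true,
		"orchestrate the trace locally instead of via the operator")

	return cmd
}

func main() {
	if err := newRunCommand().Execute(); err != nil {
		os.Exit(1)
	}
}
```

Defaulting the flag to true keeps existing behavior unchanged for current users while the operator path matures.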
I really like the idea of a trace operator. It strikes me as a nice integration point for different kinds of automation: deeper analysis triggered by detection of some external metric, or scheduled runs plus exfiltrating the log data programmatically. I'm not sure I follow the desired CLI <-> operator integration (except maybe bootstrapping?), but I took a very naive stab at wrapping the TraceJob package in a CRD plus controller. Curious if this is what you had in mind or if I'm lost somewhere. The main loop is pretty much converting CRD -> TraceJob -> ConfigMap and Job, and applying both.
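A rough sketch of what that main loop could look like with controller-runtime. This is not the code from the prototype above: the `tracev1alpha1` API package, its `Spec.Program` field, and the tracer image are all assumptions for illustration.

```go
package controllers

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical package holding the TraceJob CRD types (see the type
	// sketch later in this thread).
	tracev1alpha1 "example.com/trace-operator/api/v1alpha1"
)

// TraceJobReconciler converts TraceJob custom resources into the ConfigMap
// and batch Job that actually execute the trace.
type TraceJobReconciler struct {
	client.Client
}

func (r *TraceJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the TraceJob custom resource that triggered this reconcile.
	var trace tracev1alpha1.TraceJob
	if err := r.Get(ctx, req.NamespacedName, &trace); err != nil {
		// The resource may have been deleted; nothing to do in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Render the bpftrace program into a ConfigMap. Spec.Program is an
	// assumed field on the hypothetical CRD type.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: trace.Name, Namespace: trace.Namespace},
		Data:       map[string]string{"program.bt": trace.Spec.Program},
	}

	// Build the Job that runs the program; the image is a placeholder and
	// the volume wiring that mounts the ConfigMap is elided for brevity.
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: trace.Name, Namespace: trace.Namespace},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "trace",
						Image:   "quay.io/iovisor/bpftrace:latest", // assumed image
						Command: []string{"bpftrace", "/programs/program.bt"},
					}},
				},
			},
		},
	}

	// Apply both objects, tolerating "already exists" on re-reconciles.
	for _, obj := range []client.Object{cm, job} {
		if err := r.Create(ctx, obj); err != nil && !apierrors.IsAlreadyExists(err) {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
```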
@alexeldeib really cool! I also have some code from a hackdays project we did a few months ago that works similarly, but I haven't been able to open source it yet. I was hoping to get to this in the next month or so. Perhaps we could work together on this?
I have been heads-down getting a stable release pipeline set up for a bpftrace container for the last little while, but I'll be ready to context switch onto this soon. I spoke with @alexeldeib a couple of weeks ago and discussed a prototype I had built a few months ago during a hackdays event. I want to get a new kubectl-trace release cut using this new image, and then my attention will turn to this, probably for most of March.
I develop an open source automation framework for Kubernetes (https://github.com/robusta-dev/robusta) and I'd like to suggest doing this as a joint project. We have the resources for someone to work on this part time, but we'll need feedback and direction on what features would be most useful.
Would an integration like this make sense?
I really like that kubectl-trace works as a standalone executable with no dependencies, and I think that's a desirable feature to keep.
Given the popularity of operators in the Kubernetes community (and in our own infrastructure), I think there is probably a good use case for a different, custom-resource-based mode of operation for kubectl-trace.
For example, the current TraceJob struct is a nice blueprint for what a CRD for such an operator could look like (sketched below). The operator itself could just vendor in kubectl-trace code and avoid duplication.
For a hackdays project, we built something very similar. One of the big draws of this approach is that kubectl-trace doesn't have to be the only client: using a CR as the interface would allow other tooling to plug into it very easily.
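To make the blueprint idea concrete, here is a hedged sketch of what a kubebuilder-style CRD type modeled on the TraceJob struct might look like. The field names are illustrative guesses at the struct's shape, not its actual definition.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// TraceJobSpec loosely mirrors the options kubectl-trace already collects
// on the command line: where to run the trace and what program to run.
// Field names here are guesses for illustration.
type TraceJobSpec struct {
	// Hostname of the node the trace should be scheduled on.
	Hostname string `json:"hostname"`
	// Program is the bpftrace program to execute.
	Program string `json:"program"`
	// ImageNameTag optionally overrides the tracer image.
	ImageNameTag string `json:"imageNameTag,omitempty"`
}

// TraceJobStatus reports the observed state of the trace.
type TraceJobStatus struct {
	// Phase is a coarse state, e.g. Pending, Running, Completed.
	Phase string `json:"phase,omitempty"`
}

// +kubebuilder:object:root=true

// TraceJob is the custom resource an operator (or a standalone
// kubectl-trace acting as its own informer) would reconcile.
type TraceJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   TraceJobSpec   `json:"spec,omitempty"`
	Status TraceJobStatus `json:"status,omitempty"`
}
```

Because any client can create these objects, other tooling could drive traces through the same interface, which is the draw mentioned above.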