Next Steps for Fully Functioning Composite Actions #646
Comments
Will reopen when the feature launches.
Thanks for the release! Can composite actions be referenced from the local repository (in the way standard actions can: https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#referencing-an-action-in-the-same-repository-where-a-workflow-file-uses-the-action), or can they only be referenced from separate public repositories?
Great question, @latacora-tomekr. Yes, composite actions can be referenced from the local repository as well.
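For reference, a minimal sketch of what local referencing looks like in a workflow; the path `./.github/actions/my-composite` is a hypothetical example layout, not from this thread:

```yaml
# Workflow steps referencing a composite action stored in the same repository.
steps:
  # The repository must be checked out first so the action file exists on disk.
  - uses: actions/checkout@v2
  # Relative path from the repository root; no owner/repo@ref needed.
  - uses: ./.github/actions/my-composite
```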
Thank you @latacora-tomekr for that pointer. I was trying to follow this example: https://docs.github.com/en/actions/creating-actions/creating-a-composite-run-steps-action and it wasn't seeing the action. I tried it with the relative path, and it worked in my dummy repo. I will leave this up, despite my embarrassment, in the hope it will help others: https://github.com/jeffeb3/hello-world-composite-run-steps-action
Thank you for this feature! I was wondering today whether it would be possible to reuse GitHub Actions YAML steps and found out this was just released. Talk about great timing! Aside from what's already in the roadmap (uses, visualization), it would be great to have action-scoped `env` and `shell` keywords, to allow specifying a shell and environment variables once for all steps in an action. Thanks again for working on this!
This is very useful and fits my use case well: I mostly run shell scripts in my workflows, and making them reusable is now a lot simpler. I've noticed that the "Scripts" example uses an environment variable, `$GITHUB_ACTION_PATH`, that I couldn't find documented.
@bewuethr Thanks for the kind words! `github.action_path` is documented here: https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context (see "github.action_path"). We follow the same pattern of also exposing github context attributes as environment variables, e.g. `github.token` is exposed as `GITHUB_TOKEN`.
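A small sketch of how `github.action_path` is typically used inside a composite action to locate files shipped alongside it; the script path here is a hypothetical example:

```yaml
# action.yml for a composite action that runs a script stored next to it.
# ${{ github.action_path }} expands to the directory containing this action.yml.
runs:
  using: "composite"
  steps:
    - run: ${{ github.action_path }}/scripts/build.sh   # hypothetical script
      shell: bash
```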
Hi, this is really great. Maybe I missed it, but am I right in saying the composite action must be in a public repo right now? If so, is there a plan for referencing private actions? We don't really want to make all our CI/CD scripts public just to reuse them, and we have many repos that all need to reference central pipelines / scripts / actions.
@mikeparker good point - I'll add it to our tracking issue so that we can think it through. Thanks for bringing this up! |
@mikeparker @ethanchewy I can see that issue on the roadmap for Q1 2021. |
Steps in a workflow are shown as separate collapsible sections in the UI. Are there any plans to support a similar UI (nested within the workflow's …)?
It seems that we cannot use the …; both of the above code snippets have empty values for the `${{ }}` parameters. The output of …
Also, it seems that composite actions do not support defaults: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrun Is this something planned? I notice it's not in the initial description of this issue.
This reverts commit 780cd79. Refer to actions/runner#646.
Outcomes are not supported in composite steps at the moment. I'll update my description above - we should support them once we add conditionals. |
Defaults are not supported in composite run steps actions. The reason we avoided defaults in the design is that we don't want the workflow author to need to know the internal workings of the composite action's YAML file. So we require the composite action author to always explicitly define `shell` for each run step in a composite action.
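In practice, that means every `run` step in the action's metadata file carries its own `shell`, for example:

```yaml
# Sketch of a composite action where each run step declares its own shell,
# since composite run steps have no `defaults.run` to fall back on.
runs:
  using: "composite"
  steps:
    - run: echo "Hello from bash"
      shell: bash
    - run: Write-Output "Hello from PowerShell"
      shell: pwsh
```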
@thboop Any idea when the …?
@thboop Will it be possible to use secrets directly in the composite action instead of passing them as input parameters? Or is it not planned to be an option? |
- In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we would spawn that manually from inside a script. Now, we start it from inside the action that does the testing. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
- Add necessary parts of the `test` folder to public. This means that we will be able to run against preview branches as well.
- Add a GitHub action for setting up Next.js with a cache, so we only have to copy one line everywhere, not an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
- Remove the `dev-server.js` file. If we're going to use Next.js, then we should try to keep our config as out-of-the-box as possible, in order to stay up-to-date. The fact that the `test` folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
- Go back to node 14 for the lock file. I didn't realize when I changed this that we were stuck on such an outdated node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
- Add a `waitingFor` field to the `waitUntil` method used in tests. It's really frustrating when a test fails and all it says is `timed out`. Timed out while waiting for *what*? Test failures should guide you to the place of failure as quickly as possible.
- By default, wait for 10 seconds. If something doesn't appear in the DOM within 10 seconds, that should be considered a failure.
- Log more from Playwright tests. This could be annoying and maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyway, so it may as well be verbose.
- Log the time that tests take. We started with a lot of timeouts. I fixed them, but sometimes it was hard to see which tests were timing out. Now, when a test completes, we emit some test stats. Eventually, it might be nice to use a real test runner, but this is fine for now.
- Mock test URLs like http://localhost:8080/recording/24c0bc00-fdad-4e5e-b2ac-23193ba3ac6b?mock=1 show a gray screen after a few seconds. If you remove the `mock` query param, the screen shows up normally. I'm not sure of the exact interactions here. Let's figure this out in a follow-on task.
- Add `test/run` and `test/mock/run` scripts for running the test runners locally, taking care of shuffling the files around correctly before and after, making sure the dev server is running, etc.
- Add README instructions about `.env` files and the new test run scripts.

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to *run*; getting them to pass is a future-us problem! I'm going to put these into a Notion doc as well, but documenting here couldn't hurt. Three things that tripped me up on this PR:

- *Any* merge conflicts mean actions *will not run at all*. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
- When using actions of type `javascript`, your `inputs` get automatically turned into environment variables, which is convenient. However, **when using `composite` actions this won't happen**. You have to manually pass the environment variables in with an [expression](https://docs.github.com/en/actions/learn-github-actions/expressions).
- The `checkout` action that we use all the time performs a strict `clean` of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to run `git checkout XXXX` in (with the checkout action), you either need to check out *first*, or you can use [`clean: false`](https://github.com/actions/checkout#usage), I think, though I haven't actually tried this.
- Actions are somewhat limited. More details here: actions/runner#646

Fixes replayio#4313
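The inputs-to-environment point above can be sketched as follows; the input name and script are hypothetical, not from the PR:

```yaml
# In a composite action, inputs are NOT auto-exported as environment variables
# the way they are for javascript actions; pass them explicitly via `env`.
inputs:
  api-url:                  # hypothetical input name
    required: true
runs:
  using: "composite"
  steps:
    - run: ./run-tests.sh   # hypothetical script that reads $API_URL
      shell: bash
      env:
        API_URL: ${{ inputs.api-url }}
```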
I really like the concept of composite actions, but there is a fundamental problem with composite actions and templates that are supposed to be used in the same repository. Let's look at how re-using composite actions that live in the very same repository is currently done.
So far so good, BUT: in our CI/CD workflow, we check out the repository only once, in the very first job. We do that because the checkout takes some time, as our mono repository is quite large. From this initial checkout, we then pass the relevant parts of our repository to the following jobs as artifacts. It works fine and makes our CI/CD architecture much faster. But the issue is that I basically can't use template actions from the repository … An additional option like
would really help to solve this problem. It would basically be a sparse checkout, without all the overhead of definitions needed to make them re-usable in every job. In fact, my expectation was that the … Either this is already possible and I am doing something wrong, or this is really something that needs to be supported as well. I would love to find a better approach for handling templates, as I'm quite annoyed with this right now.
@aramfe, what about "other jobs" consuming the action as if it were external?
That is uncomfortable in PRs where the action itself is modified. However, it should work for any other modification. Overall, I agree with your feature request, as I'm running into a similar use case. I would like to include the …
way nicer this way! see actions/runner#646 (comment)
Is the composite action mentioned in #646 (comment) the same thing as the reusable workflows mentioned in https://docs.github.com/en/actions/learn-github-actions/reusing-workflows ?
@pllim No, those are separate things with different purposes (though with some overlap). The documentation for composite actions is here: https://docs.github.com/en/actions/creating-actions/creating-a-composite-action
…expected value 'continue-on-error'`. Implementation details: actions/runner#646
* Change only loading part of state when fetching workbench
* Pass context data which has setup_complete
* Handle anonymous users
* Add unit tests
* Build/install client during CI to generate templates
* Add build client step for template generation to CI
* Refactor GitHub actions so client install is defined in a standalone action
* Run actions/checkout in local action
* Fix action name/path
* Revert attempt at composite action (not supported yet: actions/runner#646)
* Copy template for unit testing instead of client building it
* Fix copy command
Now that GitHub supports nested composite actions: actions/runner#646 (comment)
@aramfe, you could maintain a …
NOTE:
For those who are not aware, I'm a college student who interned at GitHub in Summer 2020. Thank you all for the thoughtful comments and feedback on this thread, but since I'm no longer working at GitHub, I will not be responding to any new comments in this thread.
================================================
Hi there 👋
We just released a new feature called composite run steps actions (https://docs.github.com/en/actions/creating-actions/about-actions#composite-run-steps-actions).
This issue thread will serve as a point of reference for people who are curious about the next steps after this feature and what we plan to work on next. There are probably quite a few questions about what we are going to support in the future, like "when are you going to support `uses` steps in a composite action?" In this issue thread, I'll talk about the technical problems that we are planning to solve to accomplish this. Throughout the next week, I'll update it with any additional details.
tl;dr: in this issue, I'll go over what we currently support for composite run steps and how we plan to develop fully functioning composite actions.
Composite run steps background
What is a composite run steps action?
Many of you have been asking us to build a feature that lets you nest actions within actions (e.g. #438).
An important first step toward supporting nesting within actions is to support the execution of multiple run steps in an action.
Some developers may want to execute a variety of scripts and commands in different languages and shells in a single action and map their outputs to the workflow. We've also heard from our community that many developers want to reuse parts of their workflows in other workflows. This new action type, composite run steps, helps developers do just that! For example, developers can abstract parts of their workflows into composite run steps actions and reuse them in other workflows.
Thus, we created this feature, composite run steps, which allows users to run multiple run steps in a single action.
What does composite run steps currently support?
For each run step in a composite action, we support:
In addition, we support mapping inputs and outputs throughout the action.
See docs for more info.
What do composite run steps not support?
We don't currently support setting conditionals, `continue-on-error`, `timeout-minutes`, `uses`, or secrets on individual steps within a composite action.
(Note: we do support setting these attributes in workflows on a step that uses a composite run steps action.)
Examples of how you can use composite run steps actions
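As a concrete illustration (file paths, names, and commands here are hypothetical, not from this issue), a minimal composite run steps action with an input and an output might look like:

```yaml
# .github/actions/hello/action.yml: a minimal composite run steps action
name: "Hello composite"
inputs:
  who-to-greet:
    description: "Who to greet"
    default: "World"
outputs:
  random-number:
    description: "A random number produced by one of the steps"
    value: ${{ steps.random.outputs.random-id }}
runs:
  using: "composite"
  steps:
    # Each run step must declare its own shell.
    - run: echo "Hello ${{ inputs.who-to-greet }}"
      shell: bash
    - id: random
      run: echo "::set-output name=random-id::$RANDOM"
      shell: bash
```

A workflow would then check out the repository and reference the action with `uses: ./.github/actions/hello`, reading the result back via `steps.<step-id>.outputs.random-number`.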
Next Steps for Developing Fully Functioning Composite Run Steps
In this section, I'll describe some features that we are currently working on or planning to work on. You can see more details about each of these in https://github.com/actions/runner/blob/users/ethanchewy/compositeADR/docs/adrs/0549-composite-run-steps.md.
If conditionals
see: https://github.com/actions/runner/blob/main/docs/adrs/0549-composite-run-steps.md#if-condition
For if conditionals, we treat the composite action as a completely clean slate. Each step depends on the preceding step's state (failure, cancelled, or success).
To keep track of these states, we can create a new ExecutionContext for composite actions that keeps track of the attributes we need, such as the current state of a composite action step, before evaluating. When we support nesting, we need to follow some sort of inheritance model, maybe something like `state = (parent state && child state)`?

Uses
Related questions that we need to answer before we can support `uses`:

Current experimentation for this: #612
timeout-minutes
It makes a ton of sense to create a separate execution context for composite actions, especially for keeping track of the conditional status of a step. This would make it easier to add some sort of "time element" to get the "correct" condition of the step in log time (something like this algorithm: https://maksimdan.gitbook.io/interview-practice-problems/leetcode_sessions/design/permission-management-for-a-general-tree).
For nested timeout-minutes support, couldn't we just link all child tokens to their parents? The time left would trickle down to the children. For example:

Let's say composite action A's whole step has a timeout-minutes of 30 minutes, and step B (a step inside the composite action) has a timeout-minutes of 15 minutes. If step B exceeds that time, we can easily fail that thread and differentiate it from a cancellation. Then we move on to the next step, C, which has no timeout-minutes set; since ~15 minutes are left, it will still start running. If step C takes more than 15 minutes, we fail the whole action, since the whole action has then taken more than 30 minutes (step C fails automatically, as does the whole composite action step, since they are linked).
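The scenario can be sketched in YAML; note this is entirely hypothetical syntax, since timeout-minutes on steps inside a composite action is not supported yet, and the proposal here only illustrates the intended "budget trickle-down" semantics:

```yaml
# HYPOTHETICAL: sketches the proposed semantics, not supported syntax.
# Workflow side: the whole composite step A gets a 30-minute budget.
steps:
  - uses: ./a                  # hypothetical local composite action
    timeout-minutes: 30        # parent budget shared by all inner steps
# Inside ./a/action.yml, the inner steps would draw on that budget:
#   - run: ./b.sh
#     shell: bash
#     timeout-minutes: 15      # B fails (not cancelled) after 15 minutes
#   - run: ./c.sh              # C has no limit of its own; it inherits the
#     shell: bash              # remaining ~15 minutes of the parent budget,
#                              # and overrunning it fails the whole action
```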
Areas that we are also thinking about
There are many more layers of complexity for fully functioning composite actions.
For example, at a high level:
Runner specific questions:
`{context.ScopeName}.{context.ContextName}`

TODOs
Extra Information
How we built it