Fuzzbucket in CI #211
Conversation
@kgartland-rstudio - If fuzzbucket goes down, will the CI/CD fail? I'm generally wary of depending on external services as part of pull request checks. Should we consider isolating these tests to run via a different mechanism? For instance, only on a schedule?
Understood. We've been running tests in fuzzbucket elsewhere for a few years now. I haven't seen fuzzbucket itself fail anywhere, but I see your concern. I'd be concerned about merging a PR to main. I do like the idea of running a larger set of longer-running tests on a schedule rather than in CI, but I think we need a way to test with Windows regularly on each PR, and I couldn't find a way to do that with Docker in GH Actions.
I also removed …
Reconsidering this...
Could we run the PR branches with Docker, and run the nightly test (end-to-end publishing tests only, on all platforms) with fuzzbucket?
This is my preferred approach. Considering these "acceptance tests" for the sake of conversation, we can make acceptance tests a pre-condition for releases, a typical CI/CD pattern. If running these tests on pull requests makes more sense, I'm open to doing so. I want to ensure that we have a "break the glass" option if fuzzbucket or AWS goes down. This afternoon, I would like to start discussing our release process so that we can begin producing pre-alpha distributions.
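(For context, a scheduled acceptance-test workflow in GitHub Actions could look something like the sketch below; the workflow name, cron time, and step contents are assumptions, not taken from this repo.)

```yaml
name: nightly-acceptance   # hypothetical workflow name
on:
  schedule:
    - cron: "0 6 * * *"    # nightly at 06:00 UTC (assumed time)
  workflow_dispatch:       # manual "break the glass" trigger
jobs:
  acceptance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # assumed target name; the PR description shows the darwin-arm64 variant
      - run: just test/init-connect-and-publish linux-amd64
```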
We could. For this PR, maybe I'll rip out Connect and the publishing tests from …
This PR now...
nightly workflow: https://github.com/rstudio/publishing-client/actions/runs/6266164364
In a follow-up PR, I'll add Connect in Docker for Linux builds in the main.yaml workflow.
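(As a sketch of what that follow-up could look like: a Linux job with Connect as a service container. The image reference and the just recipe below are placeholders, not the project's actual names.)

```yaml
jobs:
  integration-linux:
    runs-on: ubuntu-latest
    services:
      connect:
        image: example.internal/rstudio-connect:latest   # placeholder image reference
        ports:
          - 3939:3939
    steps:
      - uses: actions/checkout@v3
      # point the publishing tests at the service container (recipe name assumed)
      - run: CONNECT_SERVER=http://localhost:3939 just test/publish
```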
If I am tracking this correctly, in our test justfile we do …
I don't think it makes sense to run our UI unit tests on all platforms each time. For unit tests in the frontend, our environment is jsdom, which shouldn't change between OSes. For our e2e tests, browser-based tests will depend on what browser is being used (and really what browser/rendering engine is being used). However, we don't really split up e2e tests between browser-specific features and full features across the UI and CLI. The CLI seems like the dependent-on-the-OS part of this project. Feel free to let me know if I'm off base, though.
This PR doesn't impact the unit tests; those are run once, on Linux, I believe. These are the integration tests, which start the UI and CLI and run Cypress tests and CLI tests respectively on each OS. We've been bitten by Windows bugs in rsconnect-python because we didn't have an automated way to run against a Windows environment.
File paths are treated differently on Windows vs. Unix systems, so I think it's best to cover that in both the CLI and UI, though admittedly both will use the same files API. However, since we have multiple OSes up for the CLI, we may as well ensure the UI works properly with each binary too.
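(For reference, running one integration job per OS is usually expressed with a matrix strategy; a minimal sketch, with the job name and step contents assumed:)

```yaml
jobs:
  integration:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      # start the UI and CLI, then run the Cypress and CLI tests on this OS
      - run: just test   # placeholder recipe name
```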
Could you add a summary or table in the README of what tests are performed on which OSes for each of the GHA workflows? I'm a little unclear on that. We have agent (go) unit tests, UI unit tests, CLI deployment tests, and e2e deployment tests (at least... I feel like I'm missing another kind of UI test?).
@@ -0,0 +1 @@
{"account_name":"","server_type":"","server_url":"","content_id":"","content_name":"","bundle_id":null,"deployed_at":null}
I'm wondering if it's correct to commit this metadata. In real user workflows, it's probably correct to allow for collaborators to redeploy to the same target. But, this target data is blank, and we probably want a new deployment anyway.
That was not intended to be added. I'll remove it.
# - UX e2e tests

jobs:
  build-docker-image:
It might be worth looking at how Taylor was able to reuse GHA parts across workflows. Maybe some of this can be factored out? Does not need to be part of this PR.
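(For reference, GitHub Actions supports factoring shared jobs into reusable workflows via workflow_call; a minimal sketch, with illustrative file and step contents:)

```yaml
# .github/workflows/build.yaml (illustrative file name) -- the shared piece
on:
  workflow_call:
jobs:
  build-docker-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "steps shared by pull-request.yaml and the nightly workflow"

# a caller workflow then reuses it with:
#   jobs:
#     build:
#       uses: ./.github/workflows/build.yaml
```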
It looks like Taylor's release PR is in progress. I'll hold off on this PR and use his approach once that lands. It looks like some additional changes will be required for the tests to work with the refactor anyway.
name: web-ux
path: ./web/dist

validate-codebase:
Some steps might not be needed in the nightly test, since they run on PRs and on main? Like validate-codebase or agent-tests.
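(One way to do that without duplicating the workflow file is to gate those jobs on the triggering event; a sketch, with assumed step contents:)

```yaml
jobs:
  validate-codebase:
    # skipped on scheduled (nightly) runs, still runs for PRs and pushes
    if: github.event_name != 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: just validate   # placeholder for the existing validation steps
```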
Thanks for the explanation @kgartland-rstudio!
Closing in favor of #335
Intent
(Don't worry, most of those 223 files are from adding all our jumpstart examples. Feel free to ignore all files in test/sample-content/.)

This PR introduces Connect in Fuzzbucket so we can run deploy tests against a real installation of Connect.
In the pull-request.yaml workflow, this adds a start-connect job that creates the fuzzbucket instance. It saves a connect_ip artifact to be re-used by the jobs for our other OSes. That will likely be removed if we get a domain for our fuzzbucket in the future.
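A sketch of how the start-connect job and the connect_ip artifact hand-off might look; the step contents and the CONNECT_IP variable are assumptions, not the actual workflow:

```yaml
jobs:
  start-connect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # hypothetical step: create the fuzzbucket box and record its IP
      - run: |
          fuzzbucket-client login
          echo "$CONNECT_IP" > connect_ip   # CONNECT_IP is assumed, not real
      - uses: actions/upload-artifact@v3
        with:
          name: connect_ip
          path: connect_ip
  # downstream per-OS jobs fetch it with:
  #   - uses: actions/download-artifact@v3
  #     with:
  #       name: connect_ip
```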
It adds an environment directory that will contain various environment files where we will store flags and other variables we may want to test against in the future. Currently, we only have a .basic environment, but I imagine we'll have many other flag combinations we'll need to test against.

It also adds deployment_helper, which currently just crawls the sample-content directory so we can iterate over any content type we want. This will only run a single deploy test in CI for now, but once we get logs in the UI, I'll update it to test all Python content.

If you want to run the tests locally, you can run just test/init-connect-and-publish darwin-arm64, which will start Connect in fuzzbucket and run the publishing tests against it (replace darwin-arm64 with whatever architecture you're running; you will need to just build first).

Due to some issues accessing environment variables on Windows, I had to add a second target in web/justfile and web/package.json specifically for Windows.

The Cypress tests added are still under home.cy.ts; they don't do much now because I can't yet access the deployment URL to actually test the deployment on Connect. Right now we just make sure we can reach Connect and see that something has been deployed. I'll likely be adding a different file/folder structure once we're ready to add more tests.

Type of Change
Approach
Automated Tests
These are them.
Directions for Reviewers
fuzzbucket-client login
FUZZBUCKET_URL=https://g3z4r6ej53.execute-api.us-east-1.amazonaws.com/prod
FUZZBUCKET_CREDENTIALS=[github_username]:[password]
(The password is retrieved during the login step.)

Run the tests locally:
just test/init-connect-and-publish darwin-arm64

To run against all the deployments:
TEST_SCENARIO=all_content just test/init-connect-and-publish darwin-arm64
Checklist