
Fuzzbucket in CI #211

Closed · wants to merge 59 commits

Conversation

@kgartland-rstudio (Collaborator) commented Sep 12, 2023

Intent

(Don't worry: most of those 223 files come from adding all our jumpstart examples. Feel free to ignore everything in test/sample-content/.)

This PR introduces Connect in fuzzbucket so we can run deploy tests against a real installation of Connect.

  • In the pull_request.yaml workflow, this adds a start-connect job that creates the fuzzbucket instance. It saves a connect_ip artifact to be reused by the jobs for our other OSes (a rough sketch is included after this list). That artifact will likely go away if we get a domain for our fuzzbucket in the future.

  • It adds an environment directory that will contain various environment files where we will store flags and other variables we may want to test against in the future. Currently, we only have a .basic environment but I imagine we'll have many other flag combinations we'll need to test against.

  • A deployment_helper that currently just crawls the sample-content directory so we can iterate over any content type we want. This will only run a single deploy test in CI for now, but once we get logs in the UI, I'll update it to test all Python content.

  • If you want to run the tests locally, you can run just test/init-connect-and-publish darwin-arm64, which will start Connect in fuzzbucket and run the publishing tests against it (replace darwin-arm64 with whatever architecture you're running; you will need to run just build first).

  • Due to some issues accessing environment variables on Windows, I had to add a second target in web/justfile and web/package.json specifically for Windows.

  • The Cypress tests added are still under home.cy.ts. They don't do much yet because I can't access the deployment URL to actually test the deployment on Connect; right now we just make sure we can reach Connect and see that something has been deployed. I'll likely add a different file/folder structure once we're ready to add more tests.
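As a rough illustration of the shape of that start-connect job and how the per-OS jobs could pick up the connect_ip artifact, here is a hedged sketch. The recipe name, secret names, and runner matrix are assumptions for illustration only, not the actual contents of pull_request.yaml:

```yaml
# Hypothetical sketch only; step commands, secret names, and the
# start-connect-box recipe are assumptions, not the real workflow.
jobs:
  start-connect:
    runs-on: ubuntu-latest
    env:
      FUZZBUCKET_URL: ${{ secrets.FUZZBUCKET_URL }}
      FUZZBUCKET_CREDENTIALS: ${{ secrets.FUZZBUCKET_CREDENTIALS }}
    steps:
      - uses: actions/checkout@v3
      - name: Start Connect in fuzzbucket
        # Assumed just recipe that boots the box and prints its address.
        run: just test/start-connect-box > connect_ip.txt
      - name: Save the Connect address for the per-OS jobs
        uses: actions/upload-artifact@v3
        with:
          name: connect_ip
          path: connect_ip.txt

  publish-tests:
    needs: start-connect
    strategy:
      matrix:
        runner: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v3
      - name: Fetch the Connect address saved above
        uses: actions/download-artifact@v3
        with:
          name: connect_ip
```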

Type of Change

  • Bug Fix
  • New Feature
  • Breaking Change
  • CI

Approach

Automated Tests

The tests added in this PR serve as the automated tests.

Directions for Reviewers

  • Make sure you have fuzzbucket-client installed (pip install fuzzbucket-client)
  • and are logged in: fuzzbucket-client login
  • Have these variables set:
    FUZZBUCKET_URL=https://g3z4r6ej53.execute-api.us-east-1.amazonaws.com/prod
    FUZZBUCKET_CREDENTIALS=[github_username]:[password] (the password is retrieved during the login step)

Run the tests locally:
just test/init-connect-and-publish darwin-arm64

To run against all the deployments:
TEST_SCENARIO=all_content just test/init-connect-and-publish darwin-arm64

Checklist

@tdstein (Collaborator) commented Sep 21, 2023

@kgartland-rstudio - If fuzzbucket goes down, will the CI/CD fail? I'm generally wary of depending on external services as part of pull request checks.

Should we consider isolating these tests to run via a different mechanism? For instance, only on main or on a schedule?

@kgartland-rstudio (Collaborator Author):

> @kgartland-rstudio - If fuzzbucket goes down, will the CI/CD fail? I'm generally wary of depending on external services as part of pull request checks.
>
> Should we consider isolating these tests to run via a different mechanism? For instance, only on main or on a schedule?

Understood. We've been running tests in fuzzbucket elsewhere for a few years now; I haven't seen fuzzbucket itself fail anywhere, but I see your concern. I'd be concerned about merging a PR to main that wasn't tested on Windows specifically, since a large percentage of our users will likely be running Windows, and I'd rather not have to test every PR against Windows manually.

I do like the idea of running a larger set of longer-running tests on a schedule rather than in CI, but I think we need a way to test with Windows regularly on each PR, and I couldn't find a way to do that with Docker in GH Actions.

@kgartland-rstudio (Collaborator Author):

I also removed ux-e2e-tests since those were redundant with the linux testing we do later on. Does anyone have any objections to that? If not, I'll remove ux-e2e-tests from the pipeline requirements.

@kgartland-rstudio (Collaborator Author) commented Sep 21, 2023

> Should we consider isolating these tests to run via a different mechanism? For instance, only on main or on a schedule?

Reconsidering this...
If we only ran the CLI/UI tests (without actually publishing) in pull_request.yaml and then ran the full deploy tests in main.yaml, would it be a big issue if they failed on the main run? Would a new PR have to be opened? Would that get messy?

@mmarchetti (Contributor) commented Sep 21, 2023

Could we run the PR branches with

  • CLI/UI tests for all platforms
  • End-to-end publishing tests only on Linux

and run the nightly test with end-to-end publishing tests on all platforms with fuzzbucket?

@tdstein (Collaborator) commented Sep 21, 2023

> Should we consider isolating these tests to run via a different mechanism? For instance, only on main or on a schedule?

> Reconsidering this... if we only ran the CLI/UI tests (without actually publishing) in pull_request.yaml and then ran the full deploy tests in main.yaml, would it be a big issue if they failed on the main run? Would a new PR have to be opened? Would that get messy?

This is my preferred approach. Considering these "acceptance tests" for the sake of conversation, we can make acceptance tests a pre-condition for releases, a typical CI/CD pattern.

If running these tests on pull requests makes more sense, I'm open to doing so. I want to ensure that we have a "break the glass" option if fuzzbucket or AWS goes down.

This afternoon, I would like to start discussing our release process so that we can begin producing pre-alpha distributions.

@kgartland-rstudio (Collaborator Author):

> Could we run the PR branches with
>
>   • CLI/UI tests for all platforms
>   • End-to-end publishing tests only on Linux
>
> and run the nightly test with end-to-end publishing tests on all platforms with fuzzbucket?

We could. For this PR, maybe I'll rip out Connect and the publishing tests from pull_request.yaml and add them to a nightly run. In a follow-up PR, I'll bring Docker back in and run Connect in that for the Linux tests.

@kgartland-rstudio (Collaborator Author):

This PR now...

  • runs the UI-only tests in pull_request.yaml
  • runs the full deploy tests, with Connect in fuzzbucket, in the new nightly.yaml

nightly workflow: https://github.com/rstudio/publishing-client/actions/runs/6266164364
pull_request workflow: https://github.com/rstudio/publishing-client/actions/runs/6266186318

In a follow-up PR, I'll add Connect in Docker for Linux builds in the main.yaml workflow.
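For readers unfamiliar with the mechanism, a nightly workflow like this is normally driven by a cron trigger. A minimal sketch, assuming a daily schedule and an illustrative job (this is not the actual nightly.yaml):

```yaml
# Minimal sketch of a scheduled workflow; the cron expression, job name,
# and steps are assumptions, not the contents of the real nightly.yaml.
name: nightly
on:
  schedule:
    - cron: "0 6 * * *"   # once per day
  workflow_dispatch: {}    # allows manual runs if fuzzbucket or AWS misbehaves
jobs:
  deploy-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run the full deploy tests against Connect in fuzzbucket
        # Architecture argument is illustrative; use the one matching the runner.
        run: just test/init-connect-and-publish linux-amd64
```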

@dotNomad (Collaborator):

> I also removed ux-e2e-tests since those were redundant with the linux testing we do later on. Does anyone have any objections to that? If not, I'll remove ux-e2e-tests from the pipeline requirements.

If I'm tracking this correctly, in our test justfile we do just test/ui-client ${os}, which runs the e2e tests on each OS. It makes sense to remove the extra ux-e2e-tests, then, so we don't run them twice.

@dotNomad (Collaborator):

>   • CLI/UI tests for all platforms

I don't think it makes sense to run our UI unit tests on all platforms each time?

For unit tests in the frontend, our environment is jsdom, which shouldn't change between OSes.

For our e2e tests, browser-based tests will depend on which browser (and really which browser / rendering engine) is being used. However, we don't really split up e2e tests into browser-specific features versus full features across the UI and CLI. The CLI seems like the OS-dependent part of this project.

Feel free to let me know if I'm off base though.

@kgartland-rstudio (Collaborator Author) commented Sep 28, 2023

> I don't think it makes sense to run our UI unit tests on all platforms each time?

This PR doesn't impact the unit tests; those are run once, on Linux, I believe. These are the integration tests, which start the UI / CLI and run Cypress tests and CLI tests respectively on each OS. We've been bitten by Windows bugs in rsconnect-python because we didn't have an automated way to run against a Windows environment.

> For our e2e tests, browser-based tests will depend on which browser (and really which browser / rendering engine) is being used. However, we don't really split up e2e tests into browser-specific features versus full features across the UI and CLI. The CLI seems like the OS-dependent part of this project.

File paths are treated differently on Windows vs. Unix systems, so I think it's best to cover that in both the CLI and UI, though admittedly both will use the same files API. However, since we build the CLI for multiple OSes, we may as well ensure the UI works properly in each binary too.

@mmarchetti (Contributor) left a comment

Could you add a summary or table in the README of what tests are performed on which OSes for each of the GHA workflows? I'm a little unclear on that. We have agent (Go) unit tests, UI unit tests, CLI deployment tests, and e2e deployment tests (at least... I feel like I'm missing another kind of UI test?).

@@ -0,0 +1 @@
{"account_name":"","server_type":"","server_url":"","content_id":"","content_name":"","bundle_id":null,"deployed_at":null}
Contributor:

I'm wondering if it's correct to commit this metadata. In real user workflows, it's probably correct to allow collaborators to redeploy to the same target. But this target data is blank, and we probably want a new deployment anyway.

Collaborator Author:

That was not intended to be added. I'll remove it.

# - UX e2e tests

jobs:
build-docker-image:
Contributor:

It might be worth looking at how Taylor was able to reuse GHA parts across workflows. Maybe some of this can be factored out? Does not need to be part of this PR.

Collaborator Author:

It looks like Taylor's release PR is in progress. I'll hold off on this PR and use his approach once that lands. It looks like some additional changes will be required for the tests to work with the refactor anyway.
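Reuse across GHA workflows is commonly done with composite actions or with reusable workflows invoked via workflow_call. A minimal sketch of the latter, with illustrative file, input, and job names that may not match the approach referenced above:

```yaml
# Hypothetical reusable workflow, e.g. .github/workflows/build-image.yaml;
# names and steps are illustrative only.
on:
  workflow_call:
    inputs:
      tag:
        type: string
        required: true

jobs:
  build-docker-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the image
        run: docker build -t publishing-client:${{ inputs.tag }} .
```

A caller workflow (pull_request.yaml or nightly.yaml) would then reference it with uses: ./.github/workflows/build-image.yaml instead of repeating the job definition.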

name: web-ux
path: ./web/dist

validate-codebase:
Contributor:

Some steps might not be needed in the nightly test since they already run on PRs and on main, like validate-codebase or agent-tests.

@dotNomad (Collaborator):

Thanks for the explanation @kgartland-rstudio!

@kgartland-rstudio (Collaborator Author):

Closing in favor of #335
