Merge pull request #3003 from uswds/al-html-proofer-dec
USWDS-Site: HTML proofer and snyk fixes
annepetersen authored Dec 10, 2024
2 parents 10387b5 + 71245c8 commit 478a71a
Showing 5 changed files with 21 additions and 16 deletions.
11 changes: 8 additions & 3 deletions .snyk
@@ -1,5 +1,5 @@
# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities.
-version: v1.25.0
+version: v1.25.1
# ignores vulnerabilities until expiry date; change duration by modifying expiry date
ignore:
'npm:chownr:20180731':
@@ -3523,8 +3523,8 @@ ignore:
SNYK-JS-INFLIGHT-6095116:
- '*':
reason: No available upgrade or patch
-expires: 2025-01-01T21:08:46.836Z
-created: 2024-12-02T21:08:46.867Z
+expires: 2025-01-08T18:16:33.491Z
+created: 2024-12-09T18:16:33.532Z
SNYK-JS-BRACES-6838727:
- '*':
reason: No available upgrade or patch
@@ -3535,6 +3535,11 @@ ignore:
reason: No available upgrade or patch
expires: 2024-09-21T17:53:15.173Z
created: 2024-08-22T17:53:15.213Z
+SNYK-JS-NANOID-8492085:
+- '*':
+reason: No available upgrade or patch
+expires: 2025-01-08T18:15:22.221Z
+created: 2024-12-09T18:15:22.260Z
# patches apply the minimum changes required to fix a vulnerability
patch:
'npm:minimatch:20160620':
4 changes: 2 additions & 2 deletions pages/documentation/guidance/performance/glossary.md
@@ -82,7 +82,7 @@ The following gives a brief overview of a handful of metrics. If you're looking

### Onload

-[The `load` event](https://developer.mozilla.org/en-US/docs/Web/API/GlobalEventHandlers/onload) (as shown in MDN Web Docs) fires at the end of the document loading process. At this point, all of the objects in the document are in the DOM, and all the images, scripts, links, and sub-frames have finished loading.
+[The `load` event](https://developer.mozilla.org/en-US/docs/Web/API/Window/load_event) (as shown in MDN Web Docs) fires at the end of the document loading process. At this point, all of the objects in the document are in the DOM, and all the images, scripts, links, and sub-frames have finished loading.
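As an illustrative aside (not part of this change), the onload timing described above can be read with the performance timing API referenced later on this page; a minimal sketch:

```js
// Illustrative sketch: report how long the page took to reach the load event,
// using the performance timing API.
window.addEventListener('load', function () {
  var timing = window.performance.timing;
  var onloadTime = timing.loadEventStart - timing.navigationStart;
  console.log('Onload fired after ' + onloadTime + ' ms');
});
```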

#### Pros
- Easy to calculate for most pages
@@ -389,6 +389,6 @@ var numberOfElements = document.getElementsByTagName('*').length;
[critical CSS]: https://www.smashingmagazine.com/2015/08/understanding-critical-css/
[WebPagetest]: https://www.webpagetest.org/
[SpeedCurve]: https://speedcurve.com/
-[Lighthouse]: https://developers.google.com/web/tools/lighthouse/
+[Lighthouse]: https://developer.chrome.com/docs/lighthouse/overview/
[performance timing API]: https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming
[HTTP/2]: https://en.wikipedia.org/wiki/HTTP/2
16 changes: 8 additions & 8 deletions pages/documentation/guidance/performance/how.md
@@ -111,12 +111,12 @@ It is possible to use two different tools to track all your metrics, but not rec

Besides checking that the tool tracks most of the main metrics your team is interested in, it’s also important to consider how the tool will be run. The following scenarios show why certain tools are better based on the conditions of the site and the team:

-- If the site is not public and you have to login to use it, performance tools that can be run in the browser, such as [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/), make the process of testing much easier. Otherwise tools have to be configured to login to the site in an automated fashion, or have login cookies set up so the tool can access the site.
+- If the site is not public and you have to login to use it, performance tools that can be run in the browser, such as [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/), make the process of testing much easier. Otherwise tools have to be configured to login to the site in an automated fashion, or have login cookies set up so the tool can access the site.
- If your team doesn’t use the Chrome web browser, then using [Sitespeed.io](https://www.sitespeed.io/) as a tool is a great choice. It reports on most of the same metrics as Lighthouse but doesn’t require a particular browser.
- If your team has limited ability to install the Chrome web browser, Chrome extensions, or CLI applications, then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option.

{% capture example_tool %}
-In the cloud.gov dashboard, we decided to use [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/). Google Chrome Lighthouse reports on most of the metrics we’re interested in and is able to run in both CLI and as a browser extension. Since the dashboard isn’t public, and you need to be logged in to use it, Lighthouse’s ability to be run in the browser makes testing the site much easier.
+In the cloud.gov dashboard, we decided to use [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/). Google Chrome Lighthouse reports on most of the metrics we’re interested in and is able to run in both CLI and as a browser extension. Since the dashboard isn’t public, and you need to be logged in to use it, Lighthouse’s ability to be run in the browser makes testing the site much easier.
{% endcapture %}
{% include perf_example.html
text=example_tool
@@ -137,7 +137,7 @@ Focusing on these three metrics provides a good overview of how well your site i
Based on these metrics, we’ve also recommended tools that are able to track these three metrics.
#### Default tool recommendation:

-[Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to setup, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option.
+[Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to set up, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option.
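As an illustrative aside (not part of this change), Lighthouse can also be driven programmatically from Node. This sketch assumes the `lighthouse` and `chrome-launcher` npm packages, and the exact API surface varies between Lighthouse versions:

```js
// Illustrative sketch: run a Lighthouse performance audit from Node.
// Assumes the lighthouse and chrome-launcher packages are installed;
// option names and module format may differ between Lighthouse versions.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { output: 'html', onlyCategories: ['performance'], port: chrome.port };
  const result = await lighthouse('https://example.gov/', options);

  fs.writeFileSync('report.html', result.report); // save the rendered report
  console.log('Performance score:', result.lhr.categories.performance.score * 100);

  await chrome.kill();
})();
```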

Once you have chosen metrics and a tool to track the metrics, the next step is to define your performance goals and potential budgets, or limits, for the project.

@@ -151,13 +151,13 @@ The first step in setting budgets and potentially, goals, is to have an idea of

The best way to do comparisons is to run your preferred tool(s) against each chosen comparison site. To do the comparison, your team should choose three to six comparison sites to run testing against. The tool should be run against one to three pages from each comparison site, based on how different each page is from one another.

-- To do this in [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/), visit one to three pages for each comparison site, run the Lighthouse extension, and export the results. Lighthouse also has a secondary tool, [lighthouse-batch](https://www.npmjs.com/package/lighthouse-batch), to run against multiple URLs to make this process faster.
+- To do this in [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/), visit one to three pages for each comparison site, run the Lighthouse extension, and export the results. Lighthouse also has a secondary tool, [lighthouse-batch](https://www.npmjs.com/package/lighthouse-batch), to run against multiple URLs to make this process faster.
- A similar process can be used for [Sitespeed.io](https://www.sitespeed.io/) by running the CLI command for each page of each comparison site. Sitespeed.io also has an option to run against multiple URLs, similar to lighthouse-batch, built into the actual tool.

Once all the data is collected for each page, of each site, you’ll want to compare the data and get an idea of the max and min values for each of your chosen metrics for each site. It’s up to you how you want to do this. One way is to make a Google spreadsheet comparing each metric to each site page.
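As an illustrative aside (not part of this change), the min/max comparison can also be scripted. This sketch assumes each run was exported as a Lighthouse JSON report; the file names and the `audits['speed-index'].numericValue` field are placeholders that vary by Lighthouse version:

```js
// Illustrative sketch: summarize Speed Index across exported Lighthouse JSON reports.
// File names are placeholders; the report field shown is an assumption and may
// differ between Lighthouse versions.
const fs = require('fs');

const reports = ['aws-landing.json', 'azure-landing.json', 'pivotal-landing.json'];
const speedIndexes = reports.map((file) => {
  const report = JSON.parse(fs.readFileSync(file, 'utf8'));
  return report.audits['speed-index'].numericValue;
});

console.log('Speed Index min:', Math.min(...speedIndexes));
console.log('Speed Index max:', Math.max(...speedIndexes));
```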

{% capture example_comparison %}
-In the cloud.gov dashboard, we used [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) to test against four comparison sites: AWS, IBM Bluemix, Pivotal, and Azure. For each, we ran a test on the landing page and a specific app page. We saved all the results and created a spreadsheet in Google documents alongside our own Lighthouse test on cloud.gov. We also included results from our own cloud.gov dashboard.
+In the cloud.gov dashboard, we used [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) to test against four comparison sites: AWS, IBM Bluemix, Pivotal, and Azure. For each, we ran a test on the landing page and a specific app page. We saved all the results and created a spreadsheet in Google documents alongside our own Lighthouse test on cloud.gov. We also included results from our own cloud.gov dashboard.
<img src="{{ site.baseurl }}/img/performance/example-comparisons.png" alt="cloud.gov example comparison document">
{% endcapture %}
{% include perf_example.html
Expand All @@ -176,7 +176,7 @@ A good idea for selecting budgets is selecting the fastest performing site for e
- The lowest value, or minimum, is **4085**.
- We’ll take 20% off that value for a [Speed index](../glossary/#speed-index) budget of **3268**.
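Worked through as a quick illustrative sketch (not part of this change):

```js
// Illustrative: set the budget 20% below the fastest comparison value.
var fastestSpeedIndex = 4085;
var speedIndexBudget = Math.round(fastestSpeedIndex * 0.8); // 3268
```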

-This value can also be compared to the recommended values from your tool. [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) provides a target value for each metric. If the budget is far off the target, it might be a good idea to bring it closer to that target. You can also round the numbers up or down to make them easier for the team to remember.
+This value can also be compared to the recommended values from your tool. [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) provides a target value for each metric. If the budget is far off the target, it might be a good idea to bring it closer to that target. You can also round the numbers up or down to make them easier for the team to remember.

Not all metrics require setting a budget. Any metrics that can’t be compared, such as [Custom timing events](../glossary/#custom-timing-events), should not have a budget. Other budgets are simply a yes or no value, such as whether the site is mobile friendly, which doesn’t require a budget. Additionally, it’s up to your team to analyze the data and set budgets for what makes sense.

@@ -221,7 +221,7 @@ If your metrics have a large gap between the budget and goal, it might be wise t

## Adding site tracking

-Once you’ve determined the metrics to track, the tools to use, and what your performance budgets and goals are, it’s time to setup the systems to do the actual tracking of your metrics. How and when to track is up to your team and workflow. It’s a good idea to have a discussion with the team about the how, when, and what of performance.
+Once you’ve determined the metrics to track, the tools to use, and what your performance budgets and goals are, it’s time to set up the systems to do the actual tracking of your metrics. How and when to track is up to your team and workflow. It’s a good idea to have a discussion with the team about the how, when, and what of performance.

- How do people want to hear about performance?
- Through what channels do they want to be notified?
@@ -289,7 +289,7 @@ A Content Management Solution (CMS) tracker should only be used if your site has
Depending on how often the code of the site gets updated, the team might need two tracking solutions: one for the CMS and one for the code updates, leading to a more complicated system.

{% capture example_tracking %}
-In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. We setup [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time.
+In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. We set up [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time.
{% endcapture %}
{% include perf_example.html
text=example_tracking
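As an illustrative aside (not part of this change), a build-time budget gate like the one described in the example above can be as simple as parsing the Lighthouse JSON report and exiting non-zero when a metric is over budget. The report path, audit ids, and budget values below are placeholders:

```js
// Illustrative sketch of a CI budget gate: fail the build when any tracked
// metric in a Lighthouse JSON report exceeds its budget.
// The report path, audit ids, and budget values are placeholders.
const fs = require('fs');

const budgets = {
  'speed-index': 3268, // milliseconds
  'interactive': 5000, // milliseconds
};

const report = JSON.parse(fs.readFileSync('./lighthouse-report.json', 'utf8'));

let overBudget = false;
Object.keys(budgets).forEach((auditId) => {
  const value = report.audits[auditId].numericValue;
  if (value > budgets[auditId]) {
    console.error(auditId + ' is over budget: ' + Math.round(value) + 'ms > ' + budgets[auditId] + 'ms');
    overBudget = true;
  }
});

process.exit(overBudget ? 1 : 0);
```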
4 changes: 2 additions & 2 deletions pages/documentation/website-standards.md
@@ -4,7 +4,7 @@ layout: styleguide
title: Website standards are now at standards.digital.gov
category: How to use USWDS
lead: |
-Federal website standards are now at <a href="http://standards.digital.gov">standards.digital.gov</a>. Federal website standards will help agencies provide high-quality, consistent digital experiences for everyone. The standards cover common visual and technical elements and reflect user experience best practices. The new site launched September 26, 2024.
+Federal website standards are now at <a href="https://standards.digital.gov/">standards.digital.gov</a>. Federal website standards will help agencies provide high-quality, consistent digital experiences for everyone. The standards cover common visual and technical elements and reflect user experience best practices. The new site launched September 26, 2024.
changelog:
key: 'docs-web-standards'
---
@@ -15,7 +15,7 @@ Agencies can more easily build accessible, mobile-friendly websites, and comply

Federal agencies are required to comply with website standards per the 21st Century Integrated Digital Experience Act (IDEA). Standards will align with the 21st Century IDEA, OMB’s memo on Delivering a Digital-First Public Experience (M-23-22), and other relevant policy requirements and best practices.

-[Understand the policy framework and requirements in the 21st Century IDEA and M-23-22](https://digital.gov/resources/delivering-digital-first-public-experience/).
+[Understand the policy framework and requirements in the 21st Century IDEA and M-23-22](https://digital.gov/resources/delivering-digital-first-public-experience/).

## Get started with USWDS

2 changes: 1 addition & 1 deletion pages/patterns/create-a-profile/pronouns.md
@@ -127,7 +127,7 @@ Provide a text entry field that supports a rich array of special characters and
- Asking about gender in online forms. (September 18, 2015) Retrieved on July 19, 2022, from `http://www.practicemakesprogress.org/blog/2015/9/18/asking-about-gender-on-online-forms` [This link is no longer active. [Archived copy of practicemakesprogress.org](https://web.archive.org/web/20220120033201/http://www.practicemakesprogress.org/blog/2015/9/18/asking-about-gender-on-online-forms)]
- DOL policies on gender identity: rights and responsibi (n.d.) Retrieved on November 1, 2022, from <https://www.dol.gov/agencies/oasam/centers-offices/civil-rights-center/internal/policies/gender-identity>
- Gender pronouns. (October 23, 2017) Retrieved on November 1, 2022, from
-[https://www1.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf](https://www1.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf)
+[https://www.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf](https://www.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf)
- Gender pronouns & their use in workplace communications. (n.d.) Retrieved on November 1, 2022 from, [https://dpcpsi.nih.gov/sgmro/gender-pronouns-resource](https://dpcpsi.nih.gov/sgmro/gender-pronouns-resource)
- Gender-inclusive language. (n.d.) Retrieved on July 19, 2022, from <https://www.digital.govt.nz/standards-and-guidance/design-and-ux/content-design-guidance/inclusive-language/gender-inclusive-language>
- The importance of personal pronouns. (September 16, 2022) Retrieved on November 1, 2022 from, [https://digital.va.gov/people-excellence/the-importance-of-personal-pronouns/](https://digital.va.gov/people-excellence/the-importance-of-personal-pronouns/)