From f25e734b15e4e574994810b6b21aa55a06f4eb80 Mon Sep 17 00:00:00 2001 From: amyleadem Date: Mon, 9 Dec 2024 09:31:45 -0700 Subject: [PATCH 1/6] Fix google lighthouse 301 links --- .../documentation/guidance/performance/glossary.md | 2 +- pages/documentation/guidance/performance/how.md | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/pages/documentation/guidance/performance/glossary.md b/pages/documentation/guidance/performance/glossary.md index 64bec2ba2..92cde9b59 100644 --- a/pages/documentation/guidance/performance/glossary.md +++ b/pages/documentation/guidance/performance/glossary.md @@ -389,6 +389,6 @@ var numberOfElements = document.getElementsByTagName('*').length; [critical CSS]: https://www.smashingmagazine.com/2015/08/understanding-critical-css/ [WebPagetest]: https://www.webpagetest.org/ [SpeedCurve]: https://speedcurve.com/ -[Lighthouse]: https://developers.google.com/web/tools/lighthouse/ +[Lighthouse]: https://developer.chrome.com/docs/lighthouse/overview/ [performance timing API]: https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming [HTTP/2]: https://en.wikipedia.org/wiki/HTTP/2 diff --git a/pages/documentation/guidance/performance/how.md b/pages/documentation/guidance/performance/how.md index 04ae28382..9c4cff7a1 100644 --- a/pages/documentation/guidance/performance/how.md +++ b/pages/documentation/guidance/performance/how.md @@ -111,12 +111,12 @@ It is possible to use two different tools to track all your metrics, but not rec Besides checking that the tool tracks most of the main metrics your team is interested in, it’s also important to consider how the tool will be run. The following scenarios show why certain tools are better based on the conditions of the site and the team: -- If the site is not public and you have to login to use it, performance tools that can be run in the browser, such as [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/), make the process of testing much easier. Otherwise tools have to be configured to login to the site in an automated fashion, or have login cookies set up so the tool can access the site. +- If the site is not public and you have to login to use it, performance tools that can be run in the browser, such as [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/), make the process of testing much easier. Otherwise tools have to be configured to login to the site in an automated fashion, or have login cookies set up so the tool can access the site. - If your team doesn’t use the Chrome web browser, then using [Sitespeed.io](https://www.sitespeed.io/) as a tool is a great choice. It reports on most of the same metrics as Lighthouse but doesn’t require a particular browser. - If your team has limited ability to install the Chrome web browser, Chrome extensions, or CLI applications, then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option. {% capture example_tool %} - In the cloud.gov dashboard, we decided to use [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/). Google Chrome Lighthouse reports on most of the metrics we’re interested in and is able to run in both CLI and as a browser extension. Since the dashboard isn’t public, and you need to be logged in to use it, Lighthouse’s ability to be run in the browser makes testing the site much easier. 
+ In the cloud.gov dashboard, we decided to use [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/). Google Chrome Lighthouse reports on most of the metrics we’re interested in and is able to run in both CLI and as a browser extension. Since the dashboard isn’t public, and you need to be logged in to use it, Lighthouse’s ability to be run in the browser makes testing the site much easier. {% endcapture %} {% include perf_example.html text=example_tool @@ -137,7 +137,7 @@ Focusing on these three metrics provides a good overview of how well your site i Based on these metrics, we’ve also recommended tools that are able to track these three metrics. #### Default tool recommendation: -[Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to setup, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option. +[Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to setup, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option. Once you have chosen metrics and a tool to track the metrics, the next step is to define your performance goals and potential budgets, or limits, for the project. @@ -151,13 +151,13 @@ The first step in setting budgets and potentially, goals, is to have an idea of The best way to do comparisons is to run your preferred tool(s) against each chosen comparison site. To do the comparison, your team should choose three to six comparison sites to run testing against. The tool should be run against one to three pages from each comparison site, based on how different each page is from one another. -- To do this in [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/), visit one to three pages for each comparison site, run the Lighthouse extension, and export the results. Lighthouse also has a secondary tool, [lighthouse-batch](https://www.npmjs.com/package/lighthouse-batch), to run against multiple URLs to make this process faster. +- To do this in [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/), visit one to three pages for each comparison site, run the Lighthouse extension, and export the results. Lighthouse also has a secondary tool, [lighthouse-batch](https://www.npmjs.com/package/lighthouse-batch), to run against multiple URLs to make this process faster. - A similar process can be used for [Sitespeed.io](https://www.sitespeed.io/) by running the CLI command for each page of each comparison site. 
Sitespeed.io also has an option to run against multiple URLs, similar to lighthouse-batch, built into the actual tool. Once all the data is collected for each page, of each site, you’ll want to compare the data and get an idea of the max and min values for each of your chosen metrics for each site. It’s up to you how you want to do this. One way is to make a Google spreadsheet comparing each metric to each site page. {% capture example_comparison %} - In the cloud.gov dashboard, we used [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) to test against four comparison sites: AWS, IBM Bluemix, Pivotal, and Azure. For each, we ran a test on the landing page and a specific app page. We saved all the results and created a spreadsheet in Google documents alongside our own Lighthouse test on cloud.gov. We also included results from our own cloud.gov dashboard. + In the cloud.gov dashboard, we used [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) to test against four comparison sites: AWS, IBM Bluemix, Pivotal, and Azure. For each, we ran a test on the landing page and a specific app page. We saved all the results and created a spreadsheet in Google documents alongside our own Lighthouse test on cloud.gov. We also included results from our own cloud.gov dashboard. cloud.gov example comparison document {% endcapture %} {% include perf_example.html @@ -176,7 +176,7 @@ A good idea for selecting budgets is selecting the fastest performing site for e - The lowest value, or minimum, is **4085**. - We’ll take 20% off that value for a [Speed index](../glossary/#speed-index) budget of **3268**. -This value can also be compared to the recommended values from your tool. [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) provides a target value for each metric. If the budget is far off the target, it might be a good idea to bring it closer to that target. You can also round the numbers up or down to make them easier for the team to remember. +This value can also be compared to the recommended values from your tool. [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) provides a target value for each metric. If the budget is far off the target, it might be a good idea to bring it closer to that target. You can also round the numbers up or down to make them easier for the team to remember. Not all metrics require setting a budget. Any metrics that can’t be compared, such as [Custom timing events](../glossary/#custom-timing-events), should not have a budget. Other budgets are simply a yes or no value, such as whether the site is mobile friendly, which doesn’t require a budget. Additionally, it’s up to your team to analyze the data and set budgets for what makes sense. @@ -289,7 +289,7 @@ A Content Management Solution (CMS) tracker should only be used if your site has Depending on how often the code of the site gets updated, the team might need two tracking solutions: one for the CMS and one for the code updates, leading to a more complicated system. {% capture example_tracking %} - In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. 
We setup [Google Chrome Lighthouse](https://developers.google.com/web/tools/lighthouse/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time. + In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. We setup [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time. {% endcapture %} {% include perf_example.html text=example_tracking From 275ad4be6791f997b6f197c6965be08059aaa467 Mon Sep 17 00:00:00 2001 From: amyleadem Date: Mon, 9 Dec 2024 09:33:22 -0700 Subject: [PATCH 2/6] Update 301 MDN load link --- pages/documentation/guidance/performance/glossary.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/documentation/guidance/performance/glossary.md b/pages/documentation/guidance/performance/glossary.md index 92cde9b59..12c88c76c 100644 --- a/pages/documentation/guidance/performance/glossary.md +++ b/pages/documentation/guidance/performance/glossary.md @@ -82,7 +82,7 @@ The following gives a brief overview of a handful of metrics. If you're looking ### Onload -[The `load` event](https://developer.mozilla.org/en-US/docs/Web/API/GlobalEventHandlers/onload) (as shown in MDN Web Docs) fires at the end of the document loading process. At this point, all of the objects in the document are in the DOM, and all the images, scripts, links, and sub-frames have finished loading. +[The `load` event](https://developer.mozilla.org/en-US/docs/Web/API/Window/load_event) (as shown in MDN Web Docs) fires at the end of the document loading process. At this point, all of the objects in the document are in the DOM, and all the images, scripts, links, and sub-frames have finished loading. #### Pros - Easy to calculate for most pages From d01f83a956180644733cba7edd691194cba5cfef Mon Sep 17 00:00:00 2001 From: amyleadem Date: Mon, 9 Dec 2024 09:34:27 -0700 Subject: [PATCH 3/6] Fix 301 nyc.gov link --- pages/patterns/create-a-profile/pronouns.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/patterns/create-a-profile/pronouns.md b/pages/patterns/create-a-profile/pronouns.md index a6cb4ae97..13767eb4d 100644 --- a/pages/patterns/create-a-profile/pronouns.md +++ b/pages/patterns/create-a-profile/pronouns.md @@ -127,7 +127,7 @@ Provide a text entry field that supports a rich array of special characters and - Asking about gender in online forms. (September 18, 2015) Retrieved on July 19, 2022, from `http://www.practicemakesprogress.org/blog/2015/9/18/asking-about-gender-on-online-forms` [This link is no longer active. [Archived copy of practicemakesprogress.org](https://web.archive.org/web/20220120033201/http://www.practicemakesprogress.org/blog/2015/9/18/asking-about-gender-on-online-forms)] - DOL policies on gender identity: rights and responsibi (n.d.) Retrieved on November 1, 2022, from - Gender pronouns. 
(October 23, 2017) Retrieved on November 1, 2022, from -[https://www1.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf](https://www1.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf) +[https://www.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf](https://www.nyc.gov/assets/hra/downloads/pdf/services/lgbtqi/Gender%20Pronouns%20final%20draft%2010.23.17.pdf) - Gender pronouns & their use in workplace communications. (n.d.) Retrieved on November 1, 2022 from, [https://dpcpsi.nih.gov/sgmro/gender-pronouns-resource](https://dpcpsi.nih.gov/sgmro/gender-pronouns-resource) - Gender-inclusive language. (n.d.) Retrieved on July 19, 2022, from - The importance of personal pronouns. (September 16, 2022) Retrieved on November 1, 2022 from, [https://digital.va.gov/people-excellence/the-importance-of-personal-pronouns/](https://digital.va.gov/people-excellence/the-importance-of-personal-pronouns/) From 14340c77cd4c85308dbb29ccdf4152997d9abc41 Mon Sep 17 00:00:00 2001 From: amyleadem Date: Mon, 9 Dec 2024 11:17:14 -0700 Subject: [PATCH 4/6] Add snyk ignore for nanoid and inflight --- .snyk | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/.snyk b/.snyk index e843e8d4a..4275b6d56 100644 --- a/.snyk +++ b/.snyk @@ -1,5 +1,5 @@ # Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities. -version: v1.25.0 +version: v1.25.1 # ignores vulnerabilities until expiry date; change duration by modifying expiry date ignore: 'npm:chownr:20180731': @@ -3523,8 +3523,8 @@ ignore: SNYK-JS-INFLIGHT-6095116: - '*': reason: No available upgrade or patch - expires: 2025-01-01T21:08:46.836Z - created: 2024-12-02T21:08:46.867Z + expires: 2025-01-08T18:16:33.491Z + created: 2024-12-09T18:16:33.532Z SNYK-JS-BRACES-6838727: - '*': reason: No available upgrade or patch @@ -3535,6 +3535,11 @@ ignore: reason: No available upgrade or patch expires: 2024-09-21T17:53:15.173Z created: 2024-08-22T17:53:15.213Z + SNYK-JS-NANOID-8492085: + - '*': + reason: No available upgrade or patch + expires: 2025-01-08T18:15:22.221Z + created: 2024-12-09T18:15:22.260Z # patches apply the minimum changes required to fix a vulnerability patch: 'npm:minimatch:20160620': From 8a76e82aea57563f9b356fc700b6350aa8cfdc4b Mon Sep 17 00:00:00 2001 From: "Anne Petersen (they/them)" Date: Mon, 9 Dec 2024 16:30:51 -0600 Subject: [PATCH 5/6] Update how.md correcting setup vs set up instances --- pages/documentation/guidance/performance/how.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pages/documentation/guidance/performance/how.md b/pages/documentation/guidance/performance/how.md index 9c4cff7a1..1fef7f177 100644 --- a/pages/documentation/guidance/performance/how.md +++ b/pages/documentation/guidance/performance/how.md @@ -137,7 +137,7 @@ Focusing on these three metrics provides a good overview of how well your site i Based on these metrics, we’ve also recommended tools that are able to track these three metrics. #### Default tool recommendation: -[Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to setup, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). 
Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option. +[Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/): a Google Chrome library that can be run in the CLI or as a browser extension. Google Chrome Lighthouse can be run in a CLI, or developer environment. It’s relatively easy to set up, and use, and includes harder to track metrics, like [Speed index](../glossary/#speed-index). Its only downside is that it requires both the Chrome browser and the ability to install Chrome extensions. If your team is unable to install Chrome or Chrome extensions then [webpagetest](https://www.webpagetest.org/) can be run in any browser and is a good option. Once you have chosen metrics and a tool to track the metrics, the next step is to define your performance goals and potential budgets, or limits, for the project. @@ -221,7 +221,7 @@ If your metrics have a large gap between the budget and goal, it might be wise t ## Adding site tracking -Once you’ve determined the metrics to track, the tools to use, and what your performance budgets and goals are, it’s time to setup the systems to do the actual tracking of your metrics. How and when to track is up to your team and workflow. It’s a good idea to have a discussion with the team about the how, when, and what of performance. +Once you’ve determined the metrics to track, the tools to use, and what your performance budgets and goals are, it’s time to set up the systems to do the actual tracking of your metrics. How and when to track is up to your team and workflow. It’s a good idea to have a discussion with the team about the how, when, and what of performance. - How do people want to hear about performance? - Through what channels do they want to be notified? @@ -289,7 +289,7 @@ A Content Management Solution (CMS) tracker should only be used if your site has Depending on how often the code of the site gets updated, the team might need two tracking solutions: one for the CMS and one for the code updates, leading to a more complicated system. {% capture example_tracking %} - In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. We setup [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time. + In the cloud.gov dashboard, we decided to do CI testing because we already had a reliable CI setup and the team is primarily made up of developers, meaning CI would be where performance gets the most attention. We set up [Google Chrome Lighthouse](https://developer.chrome.com/docs/lighthouse/overview/) in our build process, ensuring that if a recent code change goes over budget, it would stop the build and report the problems to the developers. Additionally, we used a Github service to receive performance reports over time. 
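To make that budget gate concrete, here is a minimal sketch of the kind of check described above, written in TypeScript against the Node `lighthouse` and `chrome-launcher` packages. The URL, the 3268 ms Speed index budget (the value derived earlier in this guide), and the constant names are illustrative assumptions, not the cloud.gov team's actual configuration:

```ts
// check-performance-budget.ts — a sketch of a CI budget gate, assuming the
// `lighthouse` and `chrome-launcher` npm packages are installed. The URL and
// budget below are placeholders, not the cloud.gov project's real values.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const URL_UNDER_TEST = 'https://dashboard.example.gov/'; // illustrative URL
const SPEED_INDEX_BUDGET_MS = 3268; // Speed index budget derived above

async function main(): Promise<void> {
  // Launch headless Chrome and point Lighthouse at its debugging port.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(URL_UNDER_TEST, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance'],
    });
    const speedIndex = result?.lhr.audits['speed-index']?.numericValue;
    if (speedIndex === undefined) {
      throw new Error('Lighthouse did not report a speed-index audit');
    }
    console.log(
      `Speed index: ${Math.round(speedIndex)} ms (budget: ${SPEED_INDEX_BUDGET_MS} ms)`
    );
    if (speedIndex > SPEED_INDEX_BUDGET_MS) {
      // A non-zero exit code is what actually stops the CI build.
      process.exitCode = 1;
    }
  } finally {
    await chrome.kill();
  }
}

main();
```

Run as part of the build (for example, with `npx ts-node check-performance-budget.ts`), the non-zero exit code is what causes CI to stop the build and surface the over-budget metric to developers. Teams setting this up today may also want to look at Lighthouse CI, which packages this assert-against-budget workflow.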
{% endcapture %} {% include perf_example.html text=example_tracking From 71245c8a5e6def10c2ed0a7dfdeb783e0a90eb9e Mon Sep 17 00:00:00 2001 From: amyleadem Date: Mon, 9 Dec 2024 15:46:00 -0700 Subject: [PATCH 6/6] Fix 301 link for https://standards.digital.gov/ --- pages/documentation/website-standards.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/documentation/website-standards.md b/pages/documentation/website-standards.md index 6a3fab518..8c0bf1027 100644 --- a/pages/documentation/website-standards.md +++ b/pages/documentation/website-standards.md @@ -4,7 +4,7 @@ layout: styleguide title: Website standards are now at standards.digital.gov category: How to use USWDS lead: | - Federal website standards are now at standards.digital.gov. Federal website standards will help agencies provide high-quality, consistent digital experiences for everyone. The standards cover common visual and technical elements and reflect user experience best practices. The new site launched September 26, 2024. + Federal website standards are now at standards.digital.gov. Federal website standards will help agencies provide high-quality, consistent digital experiences for everyone. The standards cover common visual and technical elements and reflect user experience best practices. The new site launched September 26, 2024. changelog: key: 'docs-web-standards' --- @@ -15,7 +15,7 @@ Agencies can more easily build accessible, mobile-friendly websites, and comply Federal agencies are required to comply with website standards per the 21st Century Integrated Digital Experience Act (IDEA). Standards will align with the 21st Century IDEA, OMB’s memo on Delivering a Digital-First Public Experience (M-23-22), and other relevant policy requirements and best practices. -[Understand the policy framework and requirements in the 21st Century IDEA and M-23-22](https://digital.gov/resources/delivering-digital-first-public-experience/). +[Understand the policy framework and requirements in the 21st Century IDEA and M-23-22](https://digital.gov/resources/delivering-digital-first-public-experience/). ## Get started with USWDS