Commit
Merge pull request #119 from TheGiraffe3/typo-fixes
Fix many typos
seancolsen authored Dec 6, 2024
2 parents d7c14da + 630cc19 commit ba68970
Showing 39 changed files with 75 additions and 76 deletions.
Original file line number Diff line number Diff line change
@@ -22,7 +22,7 @@ My project during the Google Summer of Code was to enhance the capability of Mat

- Implementation and tests for column mapper: https://github.com/mathesar-foundation/mathesar/pull/1506
- Constraint violation handling during import: https://github.com/mathesar-foundation/mathesar/pull/1548
-- Implemetation and tests for suggesting column mappings: https://github.com/mathesar-foundation/mathesar/pull/1698
+- Implementation and tests for suggesting column mappings: https://github.com/mathesar-foundation/mathesar/pull/1698

## Additional context

6 changes: 3 additions & 3 deletions docs/community/gsoc/past/2023/list-datatype.md
@@ -64,15 +64,15 @@ Also while trying to integrate this class to the project, I faced difficulties s
The difficulty of introducing this decorator in the codebase and the type of changes required are indicative of the type of problems that could be found porting other pseudo data types.

#### Custom adapter
-It would give us more control if we develop a module that works directly with psycopg2, where we could fully handle the postgres-python (and viceversa) mapping of arrays. This module will also (probably) help us fix format issues when aggregating records of date like data types. See issues [#2962](https://github.com/mathesar-foundation/mathesar/issues/2962), [#2966](https://github.com/mathesar-foundation/mathesar/issues/2966). Custom adapters for date-related data types are discussed in the psycopg2 documentation, as some exact mappings are not possible [3].
+It would give us more control if we develop a module that works directly with psycopg2, where we could fully handle the postgres-python (and vice-versa) mapping of arrays. This module will also (probably) help us fix format issues when aggregating records of date like data types. See issues [#2962](https://github.com/mathesar-foundation/mathesar/issues/2962), [#2966](https://github.com/mathesar-foundation/mathesar/issues/2966). Custom adapters for date-related data types are discussed in the psycopg2 documentation, as some exact mappings are not possible [3].

This option will, however, require more time both for planning and implementation, as this would be a new way of implementing a data type in Mathesar, possibly requiring modifications in several parts of the backend code; e.g. integration in the codebase will be more complex. Moreover, it works mostly on Python’s side, meaning we are not enforcing anything on the DB side.
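To make the custom-adapter idea concrete, here is a minimal sketch of a psycopg2 typecaster for `date[]` columns. This is illustrative only — not Mathesar's implementation; the `cast_date_array` name, the OID, and the naive parsing are all assumptions:

```python
import datetime


def cast_date_array(value, cur):
    """Parse a Postgres date[] literal like '{2024-01-01,2024-06-15}'.

    Naive sketch: real array literals can also contain quoting, NULLs,
    and nested braces for multiple dimensions, none of which is handled here.
    """
    if value is None:
        return None
    inner = value.strip('{}')
    if not inner:
        return []
    return [datetime.date.fromisoformat(item) for item in inner.split(',')]


# With psycopg2 installed, the caster would be registered roughly like this.
# (1182 is the date[] OID in a stock PostgreSQL install -- verify against
# pg_type before relying on it.)
#
#   import psycopg2.extensions
#   DATE_ARRAY = psycopg2.extensions.new_type((1182,), 'DATE_ARRAY', cast_date_array)
#   psycopg2.extensions.register_type(DATE_ARRAY)
```

The registration half is what gives full control over the postgres-python mapping; handling the quoting and multidimensional cases correctly is part of why this route costs more planning and implementation time.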

### Supporting n-dimensional arrays
-Given that none of the ideas we had to attempt restricting arrays to 1 dimension were sucessful, we now move to consider supporting multidimensional ones.
+Given that none of the ideas we had to attempt restricting arrays to 1 dimension were successful, we now move to consider supporting multidimensional ones.

**Filters**
-As reviewed earlier, opearations over n-dimensional arrays become confusing.
+As reviewed earlier, operations over n-dimensional arrays become confusing.

- Length: it needs to know over what dimension to count.
@@ -62,7 +62,7 @@ The automatic reflection is not essential, but it could be a significant quality
- Figure out when to reflect and how to cache the reflections so as to minimally burden the wider system with more state;
- Do the implementation.

-I would expect the above tasks to be performed (at least somewhat) asynchroniously.
+I would expect the above tasks to be performed (at least somewhat) asynchronously.

## Expected Outcome
An automatic PostgreSQL function (and possibly type) property reflection mechanism tailored to automatically finding useful hints for the hint system.
@@ -82,4 +82,4 @@ I'd say a good candidate would be one that is comfortable taking the time to exp
- **Primary Mentor**: Dominykas Mostauskis
- **Backup Mentor**: Brent Moran

-See our [Team Members](/team/members) page for Matrix and GitHub handles of mentors.
\ No newline at end of file
+See our [Team Members](/team/members) page for Matrix and GitHub handles of mentors.
@@ -18,7 +18,7 @@ An additional option to visualize the grouped results in the form of graphs or a
- Research and come up with UX design specs and wireframes for grouped data visualization.
- Create necessary issues based on the finalized specs after review.
- Research graphing libraries and identify the one most suitable with Mathesar's architecture and goals.
-- Identify missing APIs or changes required in exising APIs and implement the necessary changes on the backend.
+- Identify missing APIs or changes required in existing APIs and implement the necessary changes on the backend.
- Implement the frontend data visualization interface.

## Expected Outcome
@@ -31,4 +31,4 @@ A good candidate would be someone who is able to empathize and think from the pe
- **Primary Mentor**: Pavish Kumar Ramani Gopal
- **Backup Mentor**: Sean Colsen

-See our [Team Members](/team/members) page for Matrix and GitHub handles of mentors.
\ No newline at end of file
+See our [Team Members](/team/members) page for Matrix and GitHub handles of mentors.
5 changes: 2 additions & 3 deletions docs/engineering/markdown.md
@@ -7,7 +7,7 @@ This page recommends guidelines to follow when writing Markdown in order to keep
- Use **four spaces** for all indentation.

!!! question "Rationale: 💼 Portability"
-Most Markdown rendering platforms handle other indentation styles with some degree of consistency for _simple_ content, so at first this guideline may appear to be unecessary. But as content gets more complex, various edge cases tend to crop up which lead to inconsistencies. Maintaining four-space indentation across the board is the best way to ensure your indentation is **always** consistent.
+Most Markdown rendering platforms handle other indentation styles with some degree of consistency for _simple_ content, so at first this guideline may appear to be unnecessary. But as content gets more complex, various edge cases tend to crop up which lead to inconsistencies. Maintaining four-space indentation across the board is the best way to ensure your indentation is **always** consistent.

For example, when some list items are indented by only _two_ spaces:
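The example that follows in the original page is elided here; it presumably resembles something like this illustrative snippet, where the renderer disagreement only shows up with two-space indentation:

```markdown
- Outer item
  - Nested with two spaces: CommonMark renderers treat this as a nested
    item, but Python-Markdown (which powers MkDocs) does not.

- Outer item
    - Nested with four spaces: treated as a nested item everywhere.
```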

@@ -151,5 +151,4 @@ When a Markdown page links to another Markdown page, follow these patterns:
Lorem ipsum dolor sit amet...
```

-Giving custom names to your heading anchors is nice because it allows us to change the heading text without breaking the crossreference. Plus it allows for shorter URLs.
-
+Giving custom names to your heading anchors is nice because it allows us to change the heading text without breaking the cross-reference. In addition, it allows for shorter URLs.
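As a concrete illustration of the custom-anchor pattern (assuming the Python-Markdown `attr_list` syntax commonly enabled on MkDocs sites; the heading text and anchor name here are made up):

```markdown
## Linking Between Markdown Pages {: #cross-page-links }

<!-- Elsewhere, the heading text can change freely without breaking this: -->
See the [cross-page linking guidelines](markdown.md#cross-page-links).
```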
4 changes: 2 additions & 2 deletions docs/engineering/research/formulas.md
@@ -144,7 +144,7 @@ This is a report which details some research that Sean did in 2023-06 to vet the

- ⚖️ **Deleting a referenced column**

-- It's _possible_ to delete a referenced colum, but the resulting behavior may catch users off guard
+- It's _possible_ to delete a referenced column, but the resulting behavior may catch users off guard
```sql
alter table formulas drop column a;
```
@@ -204,7 +204,7 @@ This is a report which details some research that Sean did in 2023-06 to vet the

- Formula columns are _virtual_, not _stored_. That is, they are computed on the fly when the table results are displayed.

-- Formulas are implemented at the _application layer_, not the _database layer_. This means the formula definition is stored in application-specific metatdata, and the formula column is not visible within the underlying database. If I update a referenced value outside NocoDB, then the result of the formula that NocoDB displays _will_ update, but only due to the virtual nature of the formula. The source data is read/write accessible outside NocoDB, but not the computed data.
+- Formulas are implemented at the _application layer_, not the _database layer_. This means the formula definition is stored in application-specific metadata, and the formula column is not visible within the underlying database. If I update a referenced value outside NocoDB, then the result of the formula that NocoDB displays _will_ update, but only due to the virtual nature of the formula. The source data is read/write accessible outside NocoDB, but not the computed data.

- NocoDB has its own special formula syntax and functions.

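For contrast with NocoDB's application-layer formulas, the database-layer equivalent in PostgreSQL 12+ is a generated column. An illustrative sketch (the table mirrors the `formulas` example used earlier in the report):

```sql
-- A stored generated column: the database itself computes and persists the
-- value, so it stays correct no matter which client writes the source data.
CREATE TABLE formulas (
    a     integer,
    b     integer,
    total integer GENERATED ALWAYS AS (a + b) STORED
);

-- The dependency is enforced at the database layer: dropping a referenced
-- column fails unless the generated column is dropped along with it.
-- ALTER TABLE formulas DROP COLUMN a;          -- error: "total" depends on "a"
-- ALTER TABLE formulas DROP COLUMN a CASCADE;  -- also drops "total"
```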
2 changes: 1 addition & 1 deletion docs/engineering/specs/internationalization.md
@@ -38,7 +38,7 @@ The bundle footprint of the suggested library is [low](https://github.com/ivanho
Translations files will be loaded in parallel with the FE code. There are two approaches to it:

1. Detecting the language and adding the translations for that language in the common_data. But this will lead to an increase in the size of the common_data since the translations will grow with time.
-2. Loading the translations via a script tag. This will require having a global loader in the index.html which get's hidden once the translations are loaded.
+2. Loading the translations via a script tag. This will require having a global loader in the index.html which gets hidden once the translations are loaded.

The final approach will be decided during the implementation and the tech spec will be updated accordingly.

2 changes: 1 addition & 1 deletion docs/engineering/specs/worksheets-technical-specs.md
@@ -132,7 +132,7 @@ TODO
>
> Before worksheets, Mathesar associated record summaries with each _table_ so that all FKs which referenced the same table would automatically use the same record summary. With worksheets, it is not (yet?) possible to configure a default per-table record summary template to always be used for references to that table. This behavior simplifies some things, but also has the following consequences:
>
-> - In some cases, there might be some more tedium associated with creating new worksheets because you can't easily re-use a record summary template that you created elsewhere.
+> - In some cases, there might be some more tedium associated with creating new worksheets because you can't easily reuse a record summary template that you created elsewhere.
> - The Record Page can't show a record summary for the record.
>
> I think these are acceptable tradeoffs though. The benefit of the worksheet approach is that different worksheets can have different record summary templates to refer to the same table.
2 changes: 1 addition & 1 deletion docs/jobs/past/2021-04-designer.md
@@ -17,7 +17,7 @@ We are a fully distributed team, and you can be located anywhere in the world, a
## Qualifications
We're looking for a designer that has extensive experience with translating complex concepts into intuitive web interfaces for non-technical users and working with engineering teams to ship them. You should be an expert in design best practices, especially in accessibility and mobile-friendliness. You should also have experience conducting effective user interviews and usability testing, and generally be an advocate for creating an exceptional user experience.

-Excellent communication skills in English (both written and verbal) are essential, since this position is fully remote. You should be able to work indpendently, build good working relationships remotely, and be a proactive communicator. You should also enjoy writing documentation, helping others, and building a community. You're probably also curious and enjoy learning new things.
+Excellent communication skills in English (both written and verbal) are essential, since this position is fully remote. You should be able to work independently, build good working relationships remotely, and be a proactive communicator. You should also enjoy writing documentation, helping others, and building a community. You're probably also curious and enjoy learning new things.

Some nice-to-haves:

2 changes: 1 addition & 1 deletion docs/jobs/past/2021-04-frontend.md
@@ -17,7 +17,7 @@ We are a fully distributed team, and you can be located anywhere in the world, a
## Qualifications
We're looking for an experienced engineer that has architected, built, and deployed complex frontend applications that deal with large amounts of data. Ensuring scalability, accessibility, and performance are critical, and ideally, you have strong opinions on how to make that happen. We also expect you to be an advocate for creating an exceptional user experience.

-Excellent communication skills in English (both written and verbal) are essential, since this position is fully remote. You should be able to work indpendently, build good working relationships remotely, and be a proactive communicator. You should also enjoy writing documentation, helping others, and building a community. You're probably also curious and enjoy learning new things.
+Excellent communication skills in English (both written and verbal) are essential, since this position is fully remote. You should be able to work independently, build good working relationships remotely, and be a proactive communicator. You should also enjoy writing documentation, helping others, and building a community. You're probably also curious and enjoy learning new things.

Some nice-to-haves:

2 changes: 1 addition & 1 deletion docs/jobs/past/2021-09-frontend.md
@@ -21,7 +21,7 @@ We expect you to:
- have experience working professionally within an engineering team.
- have excellent verbal communication skills in English.
- enjoy explaining your ideas quickly, clearly, and comprehensively in writing.
-- work indpendently, build good working relationships remotely, and communicate proactively.
+- work independently, build good working relationships remotely, and communicate proactively.
- be an advocate for an exceptional user experience.
- be interested in building an open source community and helping others contribute to the project.
- be curious and enjoy learning new things.
4 changes: 2 additions & 2 deletions docs/product/specs/2022-01-views/03-the-query-builder.md
@@ -53,10 +53,10 @@ Here's a flowchart of decisions that need to be made when adding columns. This i
In addition to selecting output columns, the user should be able to add to the query in the following ways.

### Filtering
-The user can add filters to filter down the results of the query to a subset of rows. They can use any of the query's ouput columns in filters. The filters available for the column will depend its data type and will offer a similar experience to table or view filters.
+The user can add filters to filter down the results of the query to a subset of rows. They can use any of the query's output columns in filters. The filters available for the column will depend on its data type and will offer a similar experience to table or view filters.

### Sorting
-The user can sort the query results by one or more of the query's ouput columns. Query sorting should provide a similar experience to table or view sorting.
+The user can sort the query results by one or more of the query's output columns. Query sorting should provide a similar experience to table or view sorting.

### Summarization
The user should be able to summarize the query by one of the query's output columns. This involves the following steps:
4 changes: 2 additions & 2 deletions docs/team/meeting-notes/2021/05/2021-05.md
@@ -127,7 +127,7 @@ Next week - on call, may have to drop out during meeting.
- Meetings so far are frequent, we are relying too much on synchronous discussion
- Async discussions are better, we can think about things more, and it's documented
- Design reviews - it's still useful to have those synchronously
-- We can schedule additional synchonous meetings as needed ad-hoc
+- We can schedule additional synchronous meetings as needed ad-hoc
- Only Tuesdays starting next week
- Will also give us more uninterrupted time
- Team events after Pavish starts full time for team building
@@ -852,7 +852,7 @@ Ghislaine will match CSS class names to Figma components.
### This week's plan

#### Pavish
-- Implement routing on client (take over from server after intial page load)
+- Implement routing on client (take over from server after initial page load)
- Table view using Svelte

#### Ghislaine
4 changes: 2 additions & 2 deletions docs/team/meeting-notes/2021/09/2021-09.md
@@ -32,7 +32,7 @@ Current process: Kriti making tickets

- Lots of tickets, huge volume
- Can't hash out the details ahead of time
-- e.g. money type ticket had potentially forseeable issues, but only with serious time investment
+- e.g. money type ticket had potentially foreseeable issues, but only with serious time investment
- Missing details
- Interdependencies
- Not enough frontend tickets
@@ -57,7 +57,7 @@ Current process: Kriti making tickets
- Some decisions need to be made with that in mind.
- Two stage process:
- Test with users and make improvements before release
-- Continue to make imporvements after release.
+- Continue to make improvements after release.
- Once we have all the pieces implemented, we can think through user scenarios and we'll be thinking about things as a "user flow" rather than a feature.
- e.g. user may not "create a view", they'll end up with a view
- The good thing is that we're not doing anything too abstract right now, we do need to consider user flow when creating abstract things.
4 changes: 2 additions & 2 deletions docs/team/meeting-notes/2022/02/2022-02.md
@@ -262,7 +262,7 @@ Sean will implement front end changes to match the following specs:
- According to Pavish `E2E` and `integration` test would be written in the same fashion on the frontend with the only difference being integration test would mock the backend API, so with regards to the terminology, we should be calling our current testing strategy with playwright as End-to-End testing. And Sean seems to be on the same page
- Brent did not want to have a dogmatic approach to the naming convention, but he wanted to have a distinction.
- Mukesh wanted to have distinction between integration test and E2E, where playwright should be used for writing E2E test and integration test should be based on jsdom and api mocks
-- Mukesh expressed concerns that E2E test are flaky as it has to deal with unpredictable things like http calls, cache, async queues. Moreover as the app adds in additional layer like `async queue` or a `caching layer`, the set-up and teardown would become complex and increases both the development/maintanence time as well as the time to run the tests(which won't be much of a concern, as we run only specific test related to the feature we are working on during development). So we should be writing E2E tests that should test high level features like deleting a row and more specific tests like deleting multiple rows should be done with the integration layer.
+- Mukesh expressed concerns that E2E tests are flaky, as they have to deal with unpredictable things like http calls, cache, and async queues. Moreover, as the app adds an additional layer like an `async queue` or a `caching layer`, the set-up and teardown would become complex and increase both the development/maintenance time as well as the time to run the tests (which won't be much of a concern, as we run only the specific tests related to the feature we are working on during development). So we should write E2E tests for high-level features like deleting a row, while more specific tests like deleting multiple rows should be done at the integration layer.
- For time being, Mukesh agreed to have integration test written with UI automation using `playwright`.


@@ -346,4 +346,4 @@ Notes:
- Idea: Exporting data from Mathesar
- Idea: Async infrastructure for Mathesar
- Better done by core team, will not add
-- We also need caching infrastructure, separately
\ No newline at end of file
+- We also need caching infrastructure, separately