Implementation of the first performance test scenario (#785)

* feat: add basic project for performance pool generation, along with config in json file and some models for output

* feat: creation of identities and devices

* feat: add writing results to console

* feat: split messages declaration and add calculation of proportions to generator

* feat: work on relationship stable marriages

* chore: add some notes on working solution for light pool

* feat: add DistributeRelationshipsV2
still not working for the actual pool config, but runs and assigns nearly all relationships

* chore: minor fixes

* refactor: extract Distributors and add offsets for sent vs received messages

* chore: further development on relationships and messages distributors

* chore: update light pool

* chore: improve error message

* chore: further filtering
this seems to have introduced a bug

* chore: further fine-tune algorithms. Add 2nd light pool

* feat: try a scenario where all possible relationships are established

* feat: bootstrap graph Pools generator

* feat: add seemingly working solution

* chore: make way for copilot solution

* chore: include github copilot's solution

* refactor: organise files

* fix: bad refactor

* chore: add prompt for copilot

* feat: work on SimulatedAnnealing

* feat: use lighter data structure. Start working on message and relationship count & heuristics

* feat: improvements to relationship and messages distributors. Hook that solution on to Simulated Annealing

* feat: parallelize. Fix offset pools not used

* chore: tweaks to parallel

* chore: add writer for SolutionRepresentation → CSV

* refactor: rename parameter

* chore: add selected solution

* chore: add light solution

* feat: prepare the creation of identities from a csv config

* fix: missing messages in RAM load

* feat: creation of relationships and messages working

* feat: add creation of challenges using parallelism

* chore: comment docs

* feat: csv outputters

* refactor: manually map to ChallengeDTO

* feat: datawallet modifications based on 1 3 6 rule

* fix: wrong number of DatawalletModifications for appHeavy pool

* feat: add gitignore for generated solutions

* chore: remove outliers from selected solution

* feat: start work on creation of rt before rs

* fix: uons not matching correct identity

* fix: lacking compensation pool

* feat: make outputter more flexible

* feat: change Creator to use pool of HttpClients

* chore: make progress bar larger for increased resolution

* chore: refactors and new configs

* refactor: remove unused classes. Rename variables

* chore: further removal of unused code

* chore: further small fixes

* chore: fix formatting

* feat: add snapshot 2

* chore: update snapshot folder structure

* chore: update snapshot

* chore: several code review fixes

* chore: fix sln

* chore: fix ReSharper warnings on SA

* chore: several code review fixes

* chore: several code review fixes

* fix: Identity.Pool.Creator.csproj

* refactor: change folder and project name

* refactor: change folder and project name

* chore: rename pool config file

* chore: remove bad logging filter

* chore: remove solutions

* chore: further review fixes

* chore: delete unused files

* chore: remove unused lib

* chore: update DREPT

* feat: work on DREPT loader

* feat: finish setup for first test scenario

* chore: trim down needless setup steps

* chore: add readme with scenarios

* chore: rename scenario

* chore: move out-of-place files

* feat: finish work on scenario 01

* feat: add possibility to pass snapshot name. Add readme.

* chore: fix caution block

* chore: fix previous commit

* feat: better management of threads/vus. Add loading of datawallet modifications

* fix: loadDREPT not working

* feat: add remaining loaders

* feat: finish loading all entities

* chore: tweak preallocatedVUs and DREPTLoads

* chore: fix merge

* chore: clear ts error

* feat: add light snapshot

* chore: add context on snapshots' location

* feat: add basic script to summarize k6 json results

* chore: add k6 out param to readme

* feat: change analyzer to load csv files instead

* feat: reorganize output

* feat: allow defining input file

* feat: add note regarding the analyzer

* feat: add scripts for running the test, including the results analyzer

* chore: include method

* chore: Update README.md

* fix: linux script checking wrong exit status

* chore: replace acronyms for phrases

* Revert "chore: replace acronyms for phrases"

This reverts commit 2b6b653.

* chore: replace acronyms for phrases

* chore: remove unwanted gitignore files

* feat: k6-outputs folder created automatically

* chore: rename PerformanceSnapshotCreator folder and project

* chore: review fixes

* fix: bad paths in scripts

* feat: add client with token management

* fix: same identity being loaded once for each device

* refactor: remove index.ts

* chore: further review fixes

* chore: add docs

* chore: update load test packages

* chore: several review fixes

* feat: add fluent request for stateless client

* chore: fix names and add times for loading

* chore: revert to old relationship loading model

* chore: review fixes

* refactor: remove IDataRepresentation

* chore: add enum for loading pools and rename data structures

* feat: add a Configuration class which centralises Client Configs

* chore: make run-test.sh script executable

* chore: improve user prompt in run-test.sh

* refactor: configuration

* chore: update package-lock.json

* chore: minor README.md changes

* chore: formatting of script files

* feat: add new dump script for postgres

* feat: add further scripts for dump management

* chore: move dump scripts to scripts folder

* fix: bad project path and namespaces

* chore: fix formatting

* fix: readme file path

* chore: minor changes to README

* chore: add initial version of load_postgres.sh

* chore: change zip file folder structure

* feat: add load snapshot ps1 script

* chore: update snapshot with missing tiers in quotas schema

* chore: update readme

* chore: further fixes

* feat: add/refactor linux scripts

* chore: further fixes

* fix: volume on wrong command

* chore: add webpack requirement to global npm install

* chore: minor fixes

* chore: use env var in script like everywhere else

* chore: remove unnecessary folder from snapshot zip

* refactor: simplify load-snapshot and load-postgres scripts

* chore: move stuff to global gitignore

* refactor: simplify load scripts

* refactor: simplify scripts even further

* chore: fix launchSettings of snapshot-creator

* chore: update README

* fix: configure one relationship template for light connectors in each pool config so that entity generation doesn't throw

* fix: fix parameter name in "Create Entities Light" launch config

* fix: remove double backslash in Printer

* chore: fix/improve README

---------

Co-authored-by: Timo Notheisen <[email protected]>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Timo Notheisen <[email protected]>
4 people authored Oct 2, 2024
1 parent 35c8a8f commit 4b71eb4
Showing 86 changed files with 2,871 additions and 927 deletions.
7 changes: 5 additions & 2 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -359,9 +359,12 @@ out.yaml
# helm
**/Chart.lock

#Performance Tests
# npm
dist

# Performance Tests
scripts/windows/dumps/dump-files

# ci
.ci/package-lock.json

@@ -374,4 +377,4 @@
!Applications/AdminUi/packages/*

# Hardlinks for appsettings.override.json
Applications/**/appsettings.override.json
Applications/**/appsettings.override.json
@@ -0,0 +1 @@
k6-outputs/

This file was deleted.

This file was deleted.

This file was deleted.

Original file line number Diff line number Diff line change
@@ -6,31 +6,31 @@

The generation of a snapshot is a convoluted process which can be described in the following steps:

#### 1. Creation of a RaM distribution
#### 1. Creation of a _Relationships & Messages_ distribution

Using the `Identity.Pool.Creator` project, run the command with the arguments:
Run the `ConsumerApi.Tests.Performance.SnapshotCreator` project with the following arguments:

`GeneratePools --poolsFile <pools.json>`
`GeneratePools --poolsFile <path-to-pools-config-json>`

Where `<pools.json>` is a pools configuration file. You can find examples of this file in this repository.

This will use a mix of a manual algorithm and simulated annealing to create a solution. The time it takes to run the algorithm depends mostly on the number of SA iterations. That number can be changed in code.
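For illustration, the simulated-annealing part of such a solver follows the generic skeleton below. This is a standalone TypeScript sketch, not the SnapshotCreator's actual implementation: the real state, energy, and neighbor functions live in that project, and the toy usage at the bottom is made up.

```typescript
// Generic simulated-annealing skeleton (illustrative only; the SnapshotCreator's
// actual state/energy/neighbor definitions are not reproduced here).
function simulatedAnneal(
  initial: number[],
  energy: (s: number[]) => number,
  neighbor: (s: number[]) => number[],
  iterations = 10_000,
  startTemp = 1.0,
): number[] {
  let current = initial;
  let currentE = energy(current);
  let best = current;
  let bestE = currentE;

  for (let i = 0; i < iterations; i++) {
    const temp = startTemp * (1 - i / iterations); // linear cooling schedule
    const candidate = neighbor(current);
    const candidateE = energy(candidate);
    // Always accept improvements; accept regressions with Boltzmann probability,
    // which shrinks as the temperature drops.
    if (candidateE < currentE || Math.random() < Math.exp((currentE - candidateE) / Math.max(temp, 1e-9))) {
      current = candidate;
      currentE = candidateE;
    }
    if (currentE < bestE) {
      best = current;
      bestE = currentE;
    }
  }
  return best;
}

// Toy usage: minimize the squared norm of a vector.
const result = simulatedAnneal(
  [5, -3, 8],
  (s) => s.reduce((acc, x) => acc + x * x, 0),
  (s) => s.map((x) => x + (Math.random() - 0.5)),
);
console.log(result.length); // → 3
```

The number of iterations is the main knob trading solution quality against runtime, which matches the note above about changing the SA iteration count in code.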

The execution creates a `ram.csv` file which must be passed to the next step.
The execution creates a `relationshipsAndMessages.csv` file which must be passed to the next step.

#### 2. Creation of entities

Using the `Identity.Pool.Creator` project, run the command with the arguments:
Run the `ConsumerApi.Tests.Performance.SnapshotCreator` project with the following arguments:

`CreateEntities --baseAddress http://localhost:8081 --clientId test --clientSecret test --poolsFile pools.json --ram selected-solution\\ram.csv`
`CreateEntities --baseAddress <base-address> --clientId <client-id> --clientSecret <client-secret> --poolsFile <path-to-pools-config-json> --relationshipsAndMessages <path-to-relationshipsAndMessages.csv>`

where:

- `--baseAddress` is the address of the ConsumerSdk.
- `--clientId` is the client Id to use when addressing the API.
- `--clientSecret` is the client Secret to use when addressing the API.
- `--poolsFile` is the same file used in the step above.
- `--ram` the Relationships & Messages configuration created in the step above (csv file).
- `--relationshipsAndMessages` the Relationships & Messages configuration created in the step above (csv file).

This command will create the following entities:

@@ -54,41 +54,29 @@ The time it takes to run this command depends mostly on the number of entities t

#### 3. Dumping the database

The specific way of doing this depends greatly on the way the database management system is running. This guide assumes docker is used.
Depending on which database you are using, you must run the appropriate script to dump the database to a file.

##### When using SQL Server:
When using SQL Server:

```sh
cd docker-compose
.\dump_sqlserver_bak.bat
scripts/windows/dumps/dump-sqlserver.ps1
```

##### When using PostgreSQL:
When using PostgreSQL:

```sh
cd docker-compose
.\dump_postgres.bat
scripts/windows/dumps/dump-postgres.ps1
```

#### 4. Persisting the results

Now that the database has been exported to a file, you can zip it and move it to a newly created folder for the snapshot you just created. You should also put the step 2. csv files there as well (entity csv files). Don't forget to update the **list** below.
Now that the database has been exported to a file, you can zip it and move it to a newly created folder for the snapshot you just created. You should add the csv files generated in step 2 to the zip as well. Don't forget to update the **list** below.

### Usage

In order to use a snapshot, you must:

1. from the snapshot:
1. extract the entity CSV files to the place where the performance tests expect to find them.
2. recreate the database. You may need to merge split zip files. This can be done on the Powershell by running `cmd.exe /c copy /b postgres-enmeshed.zip.* postgres-enmeshed.zip`
1. start the consumer API
1. run the performance tests

### List
### List of snapshots

#### Snapshot Heavy (snapshot.heavy.zip, downloadable from file hosting)

RaM Generation: ~8h
_Relationships & Messages_ Generation: ~8h

Entity Creation: ~10h

@@ -101,7 +89,7 @@ Entity Creation: ~10h

#### Snapshot Light (snapshot.light.zip)

RaM Generation: ~20min
_Relationships & Messages_ Generation: ~20min

Entity Creation: ~1h

@@ -111,3 +99,53 @@ Entity Creation: ~1h
| RelationshipTemplates | 750 |
| Relationships | 1747 |
| Messages | 16113 |

# How to run k6 performance tests

In order to run the performance tests, you must load an appropriate snapshot of the database. These snapshots are bundled with the usernames and passwords of the created identities/devices, meaning you can authenticate as these users and make API calls on their behalf.

Test snapshots are currently only available for **Postgres**. For Windows, make sure that you're using PowerShell 7. It can be installed by running `winget install --id Microsoft.PowerShell --source winget`.

1. **Install k6**

1. You must install k6 if you haven't already. Please download it from the [official website](https://k6.io/open-source/).

1. **Select a snapshot:**

1. Select one of the available snapshots. You can find more information on the available snapshots in the [scenarios README](src/scenarios/README.md) file.

1. **Load the snapshot and the CSVs:**

1. `cd Applications/ConsumerApi/test/ConsumerApi.Tests.Performance`
1. Ensure that the Postgres server where the snapshot should be loaded is running.
1. Run the following command (the snapshot name **must not** contain the extension)
```sh
# scripts/windows/load-snapshot.ps1 -SnapshotName "snapshot" [-Hostname "custom.hostname"] [-Username "dbuser"] [-Password "dbpass"] [-DbName "dbname"]
```

1. **Run the test(s)**

```sh
# scripts/windows/run-test.ps1 <scenario-name>
```

where \<scenario-name> is for example `s01`.

> [!NOTE]
> The command can be appended with `-- <extra> <parameters>`. Extra parameters are k6 parameters, some of which are explained below.

1. You can tweak the way the test is run to match your preferences. The following CLI parameters are available:

| Key | Default | Possible Values |
| --------------------- | ------------------- | ------------------------------------------- |
| `--duration` | depends on the test | `60m`, `4h`, etc. |
| `--address` | `localhost:8081` | any valid URL, e.g. `load-test.enmeshed.eu` |
| `--env snapshot=` | `light` | `heavy` |
| `--env clientId=` | `test` | any string |
| `--env clientSecret=` | `test` | any string |

Example:

```sh
$ scripts/windows/run-test.ps1 s01 -- --address test.enmeshed.eu:443 --duration 4h
```
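For illustration, a scenario script might resolve the `--env` parameters from the table above along the following lines. This is a standalone sketch using `process.env` in place of k6's `__ENV`; the config shape and function name are assumptions, only the parameter names and defaults come from the table.

```typescript
// Standalone sketch: resolving test configuration from environment variables,
// mirroring the defaults in the parameter table above.
interface TestConfig {
  snapshot: "light" | "heavy";
  clientId: string;
  clientSecret: string;
}

function resolveConfig(env: Record<string, string | undefined>): TestConfig {
  return {
    snapshot: env.snapshot === "heavy" ? "heavy" : "light", // default: light
    clientId: env.clientId ?? "test",
    clientSecret: env.clientSecret ?? "test",
  };
}

const config = resolveConfig(process.env);
console.log(config.clientId);
```

In an actual k6 script the same lookups would read from the global `__ENV` object, which k6 populates from `--env key=value` flags.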
@@ -2,26 +2,19 @@ declare module "https://jslib.k6.io/httpx/0.1.0/index.js" {
export class Httpx {
constructor(options: Options);

get<RT extends ResponseType | undefined>(
url: string | HttpURL,
params?: RefinedParams<RT> | null,
): RefinedResponse<RT>;
get<RT>(url: string | HttpURL, params?: RefinedParams<RT> | null): TypedResponse<RT>;

post<RT extends ResponseType | undefined>(
url: string | HttpURL,
body?: RequestBody | null,
params?: RefinedParams<RT> | null,
): RefinedResponse<RT>;
delete<RT>(url: string | HttpURL, params?: RefinedParams<RT> | null): TypedResponse<RT>;

put<RT extends ResponseType | undefined>(
url: string | HttpURL,
body?: RequestBody | null,
params?: RefinedParams<RT> | null,
): RefinedResponse<RT>;
post<RT>(url: string | HttpURL, body?: RequestBody | null, params?: RefinedParams<RT> | null): TypedResponse<RT>;

put<RT>(url: string | HttpURL, body?: RequestBody | null, params?: RefinedParams<RT> | null): TypedResponse<RT>;

patch<RT>(url: string | HttpURL, body?: RequestBody | null, params?: RefinedParams<RT> | null): TypedResponse<RT>;
}

export class RefinedResponse<RT> {
json(path?: string): any;
export class TypedResponse<RT> extends Response {
json(path?: string): RT;
}
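The point of the `TypedResponse<RT>` change above is that `json()` now returns the declared type instead of `any`, so callers get autocompletion and compile-time checks. A minimal standalone sketch (the response interface and payload are invented for illustration; this is not the k6 class itself):

```typescript
// Standalone illustration of the typed json() pattern from the diff above.
interface SyncRunDTO {
  result: { id: string; status: string };
}

class TypedResponse<RT> {
  constructor(private readonly body: string) {}

  // json() yields RT, so misspelled fields fail at compile time instead of at runtime.
  json(): RT {
    return JSON.parse(this.body) as RT;
  }
}

const response = new TypedResponse<SyncRunDTO>('{"result":{"id":"SYR1","status":"Created"}}');
const syncRun = response.json();
console.log(syncRun.result.status); // → Created
```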

export interface Options {
