DF-838:ETNA-ROSETTA: DO NOT MERGE UNTIL #1483

Closed. Wants to merge 17 commits.
2 changes: 2 additions & 0 deletions .env.example
@@ -10,3 +10,5 @@ KONG_CLIENT_VERIFY_CERTIFICATES=True
 KONG_CLIENT_TEST_MODE=False
 KONG_CLIENT_TEST_FILENAME=records.json
 PLATFORMSH_CLI_TOKEN=your-api-token-here
+API_CLIENT_NAME_PREFIX=
+ROSETTA_CLIENT_BASE_URL=
59 changes: 59 additions & 0 deletions .github/workflows/platformsh-cd-etna-rosetta.yml
@@ -0,0 +1,59 @@
name: CD - etna-rosetta (platform.sh)

on:
  workflow_dispatch:
  push:
    branches:
      - etna-rosetta
    paths:
      # Host config
      - '.platform/**'
      - '.platform.app.yaml'
      - 'gunicorn.conf.py'
      # Python config
      - 'poetry.lock'
      - 'pyproject.toml'
      # NPM config
      - 'package.json'
      - 'package-lock.json'
      - 'webpack.config.js'
      # App changes
      - 'config/**'
      - 'sass/**'
      - 'scripts/**'
      - 'templates/**'
      - 'etna/**'

jobs:
  ci:
    name: CI
    uses: ./.github/workflows/_tests.yml
    with:
      python-version: ${{ vars.CI_PYTHON_VERSION }}
      poetry-version: ${{ vars.CI_POETRY_VERSION }}

  deploy:
    runs-on: ubuntu-latest
    needs: ci
    steps:
      - uses: actions/checkout@v3
      - name: Extract branch name
        run: echo "BRANCH=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}" >> $GITHUB_OUTPUT
        id: extract_branch
      - uses: axelerant/platformsh-deploy-action@v1
        with:
          project-id: ${{ secrets.PLATFORM_PROJECT_ID }}
          cli-token: ${{ secrets.PLATFORM_CLI_TOKEN }}
          ssh-private-key: ${{ secrets.PLATFORM_SSH_KEY }}
          force-push: true
          environment-name: etna-rosetta

  notify-slack:
    runs-on: ubuntu-latest
    needs: deploy
    steps:
      - uses: actions/checkout@v3
      - uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
          SLACK_TITLE: "A deployment to etna-rosetta is complete"
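The "Extract branch name" step in this workflow relies on shell parameter expansion. A local sketch with sample values (in CI these variables are provided by GitHub Actions):

```shell
# GITHUB_HEAD_REF is only set for pull_request events; for pushes we strip
# the refs/heads/ prefix from GITHUB_REF to recover the branch name.
GITHUB_REF="refs/heads/etna-rosetta"   # sample value for illustration
GITHUB_HEAD_REF=""
BRANCH="${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}"
echo "BRANCH=$BRANCH"   # prints BRANCH=etna-rosetta
```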
33 changes: 33 additions & 0 deletions README.md
@@ -94,6 +94,39 @@ Create a super user:
dj createsuperuser
```

## ETNA-ROSETTA WORKFLOW

### etna-rosetta branch and environment:
- `etna-rosetta` is branched off `ds-wagtail:develop`
- it is a long-running branch containing the changes made for the ROSETTA API (develop contains the changes for the KONG API)
- it is also an environment which is branched off develop
- it will be merged into develop when the KONG API is decommissioned, or if otherwise required
- it brings in new changes from develop
- it adds any changes specific to Rosetta
- any merge into `etna-rosetta` triggers a CD run that deploys to the `etna-rosetta` environment


### Syncing etna-rosetta with changes from develop

Option 1
- create a branch off `etna-rosetta`, e.g. chore/sync-ddmmyyyy
- merge the develop branch into chore/sync-ddmmyyyy
- fix conflicts
- create a PR into `etna-rosetta`

Option 2
- merge develop into `etna-rosetta`
- fix conflicts
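Option 1 can be sketched with git as follows; the date suffix is a placeholder, and `origin` is assumed to be the `ds-wagtail` remote:

```shell
git fetch origin
git checkout -b chore/sync-01012024 origin/etna-rosetta
git merge origin/develop               # resolve any conflicts, then commit
git push -u origin chore/sync-01012024
# finally, open a PR from chore/sync-01012024 into etna-rosetta
```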


### Feature changes/fixes for etna-rosetta

- create feature/fix ticket branches off `etna-rosetta`
- merge the latest changes from `etna-rosetta` into the feature branch
- create a PR into `etna-rosetta`
- test the feature in a spare environment if available
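In git terms (the ticket branch name is a placeholder; `origin` is assumed to be the `ds-wagtail` remote):

```shell
git fetch origin
git checkout -b fix/DF-000-my-change origin/etna-rosetta
# ...commit work; before opening the PR, bring in the latest etna-rosetta:
git merge origin/etna-rosetta
git push -u origin fix/DF-000-my-change
```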


## Issues with your local environment?

Check out the [Local development gotchas](https://nationalarchives.github.io/ds-wagtail/developer-guide/local-development-gotchas/) page for solutions to common issues.
12 changes: 8 additions & 4 deletions config/settings/base.py
@@ -315,13 +315,17 @@
 EVENTBRITE_PUBLIC_TOKEN = os.getenv("EVENTBRITE_PUBLIC_TOKEN")

 # API Client
+API_CLIENT_NAME_PREFIX = os.getenv(
+    "API_CLIENT_NAME_PREFIX"
+)  # mandatory name to identify the client URL
+
-CLIENT_BASE_URL = os.getenv("KONG_CLIENT_BASE_URL")
-CLIENT_KEY = os.getenv("KONG_CLIENT_KEY")
+CLIENT_BASE_URL = os.getenv(f"{API_CLIENT_NAME_PREFIX}_CLIENT_BASE_URL")
+CLIENT_KEY = os.getenv(f"{API_CLIENT_NAME_PREFIX}_CLIENT_KEY")
 CLIENT_VERIFY_CERTIFICATES = strtobool(
-    os.getenv("KONG_CLIENT_VERIFY_CERTIFICATES", "True")
+    os.getenv(f"{API_CLIENT_NAME_PREFIX}_CLIENT_VERIFY_CERTIFICATES", "True")
 )
-IMAGE_PREVIEW_BASE_URL = os.getenv("KONG_IMAGE_PREVIEW_BASE_URL")
+IMAGE_PREVIEW_BASE_URL = os.getenv(f"{API_CLIENT_NAME_PREFIX}_IMAGE_PREVIEW_BASE_URL")


# Rich Text Features
# https://docs.wagtail.io/en/stable/advanced_topics/customisation/page_editing_interface.html#limiting-features-in-a-rich-text-field
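The settings change above swaps hard-coded `KONG_*` lookups for names built from `API_CLIENT_NAME_PREFIX`, so one settings module can point at either API. A minimal sketch of the lookup, with invented example values:

```python
import os

# Invented example values; in the project these come from .env
os.environ["API_CLIENT_NAME_PREFIX"] = "ROSETTA"
os.environ["ROSETTA_CLIENT_BASE_URL"] = "https://rosetta.example/api"

# Same pattern as config/settings/base.py: the prefix selects which
# family of environment variables (KONG_* or ROSETTA_*) is read.
prefix = os.getenv("API_CLIENT_NAME_PREFIX")
client_base_url = os.getenv(f"{prefix}_CLIENT_BASE_URL")
print(client_base_url)  # https://rosetta.example/api
```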
8 changes: 4 additions & 4 deletions config/urls.py
@@ -22,7 +22,7 @@
 from etna.whatson import views as whatson_views

 register_converter(converters.ReferenceNumberConverter, "reference_number")
-register_converter(converters.IAIDConverter, "iaid")
+register_converter(converters.IDConverter, "id")


 # Used by /sentry-debug/
@@ -58,7 +58,7 @@ def trigger_error(request):
 # Public URLs that are meant to be cached.
 public_urls = [
     path(
-        r"catalogue/id/<iaid:iaid>/",
+        r"catalogue/id/<id:id>/",
         setting_controlled_login_required(
             records_views.record_detail_view, "RECORD_DETAIL_REQUIRE_LOGIN"
         ),
@@ -77,14 +77,14 @@ def trigger_error(request):
         name="image-serve",
     ),
     path(
-        r"records/images/<iaid:iaid>/<str:sort>/",
+        r"records/images/<id:id>/<str:sort>/",
         setting_controlled_login_required(
             records_views.image_viewer, "IMAGE_VIEWER_REQUIRE_LOGIN"
         ),
         name="image-viewer",
     ),
     path(
-        r"records/images/<iaid:iaid>/",
+        r"records/images/<id:id>/",
         setting_controlled_login_required(
             records_views.image_browse, "IMAGE_VIEWER_REQUIRE_LOGIN"
         ),
107 changes: 44 additions & 63 deletions etna/ciim/client.py
@@ -239,17 +239,20 @@ def resultlist_from_response(
         item_type: Type = Record,
     ) -> ResultList:
         try:
-            hits = response_data["hits"]["hits"]
+            hits = response_data["metadata"]
         except KeyError:
             hits = []
         try:
-            total_count = response_data["hits"]["total"]["value"]
+            total_count = response_data["stats"]["total"]
         except KeyError:
             total_count = len(hits)

-        aggregations_data = response_data.get("aggregations", {})
+        aggregations_data = response_data.get("aggregations", [])
         if bucket_counts is None:
-            bucket_counts = aggregations_data.get("group", {}).get("buckets", [])
+            if not aggregations_data:
+                bucket_counts = []
+            else:
+                pass  # TODO:Rosetta

         return ResultList(
             hits=hits,
@@ -262,34 +265,20 @@ def resultlist_from_response(
     def fetch(
         self,
         *,
-        iaid: Optional[str] = None,
         id: Optional[str] = None,
         template: Optional[Template] = None,
-        expand: Optional[bool] = None,
     ) -> Record:
         """Make request and return response for Client API's /fetch endpoint.

         Used to fetch a single item by its identifier.

         Keyword arguments:

-        iaid:
-            Return match on Information Asset Identifier - iaid (or similar primary identifier)
         id:
-            Generic identifier. Matches on references_number or iaid
-        template:
-            @template data to include with response
-        expand:
-            include @next and @previous record with response. Client API defaults to false
+            Generic identifier. Matches various IDs,
+            e.g. an Information Asset Identifier (iaid) or a creator record identifier (faid)
+        template:
+            @template data to include with response
         """
         params = {
-            # Yes 'metadata_id' is inconsistent with the 'iaid' argument name, but this
-            # API argument name is temporary, and 'iaid' will be replaced more broadly with
-            # something more generic soon
-            "metadataId": iaid,
+            "id": id,
             "template": template,
-            "expand": expand,
         }

         # Get HTTP response from the API
Expand All @@ -311,18 +300,16 @@ def search(
         self,
         *,
         q: Optional[str] = None,
-        web_reference: Optional[str] = None,
-        opening_start_date: Optional[Union[date, datetime]] = None,
-        opening_end_date: Optional[Union[date, datetime]] = None,
-        created_start_date: Optional[Union[date, datetime]] = None,
-        created_end_date: Optional[Union[date, datetime]] = None,
+        opening_start_date: Optional[Union[date, datetime]] = None,  # TODO:Rosetta
+        opening_end_date: Optional[Union[date, datetime]] = None,  # TODO:Rosetta
+        created_start_date: Optional[Union[date, datetime]] = None,  # TODO:Rosetta
+        created_end_date: Optional[Union[date, datetime]] = None,  # TODO:Rosetta
         stream: Optional[Stream] = None,
-        sort_by: Optional[SortBy] = None,
-        sort_order: Optional[SortOrder] = None,
-        template: Optional[Template] = None,
-        aggregations: Optional[list[Aggregation]] = None,
-        filter_aggregations: Optional[list[str]] = None,
-        filter_keyword: Optional[str] = None,
+        sort_by: Optional[SortBy] = None,  # TODO:Rosetta
+        sort_order: Optional[SortOrder] = None,  # TODO:Rosetta
+        aggregations: Optional[list[Aggregation]] = None,  # TODO:Rosetta
+        filter_aggregations: Optional[list[str]] = None,  # TODO:Rosetta
+        filter_keyword: Optional[str] = None,  # TODO:Rosetta
         offset: Optional[int] = None,
         size: Optional[int] = None,
     ) -> ResultList:
Expand All @@ -337,16 +324,12 @@ def search(

         q:
             String to query all indexed fields
-        web_reference:
-            Return matches on references_number
         stream:
             Restrict results to given stream
         sort_by:
             Field to sort results.
         sortOrder:
             Order of sorted results
-        template:
-            @template data to include with response
         aggregations:
             aggregations to include with response. Number returned can be set
             by optional count suffix: <aggregation>:<number-to-return>
@@ -361,37 +344,32 @@
         """
         params = {
             "q": q,
-            "webReference": web_reference,
-            "stream": stream,
-            "sort": sort_by,
-            "sortOrder": sort_order,
-            "template": template,
-            "aggregations": aggregations,
-            "filterAggregations": prepare_filter_aggregations(filter_aggregations),
-            "filter": filter_keyword,
+            "fields": f"stream:{stream}",
+            "aggs": aggregations,
+            "filter": prepare_filter_aggregations(filter_aggregations),
             "from": offset,
             "size": size,
         }

-        if opening_start_date:
-            params["openingStartDate"] = self.format_datetime(
-                opening_start_date, supplementary_time=time.min
-            )
+        # if opening_start_date:
+        #     params["openingStartDate"] = self.format_datetime(
+        #         opening_start_date, supplementary_time=time.min
+        #     )

-        if opening_end_date:
-            params["openingEndDate"] = self.format_datetime(
-                opening_end_date, supplementary_time=time.max
-            )
+        # if opening_end_date:
+        #     params["openingEndDate"] = self.format_datetime(
+        #         opening_end_date, supplementary_time=time.max
+        #     )

-        if created_start_date:
-            params["createdStartDate"] = self.format_datetime(
-                created_start_date, supplementary_time=time.min
-            )
+        # if created_start_date:
+        #     params["createdStartDate"] = self.format_datetime(
+        #         created_start_date, supplementary_time=time.min
+        #     )

-        if created_end_date:
-            params["createdEndDate"] = self.format_datetime(
-                created_end_date, supplementary_time=time.max
-            )
+        # if created_end_date:
+        #     params["createdEndDate"] = self.format_datetime(
+        #         created_end_date, supplementary_time=time.max
+        #     )

         # Get HTTP response from the API
         response = self.make_request(f"{self.base_url}/search", params=params)
@@ -400,15 +378,18 @@ def search(
         response_data = response.json()

         # Pull out the separate ES responses
-        bucket_counts_data, results_data = response_data["responses"]
+        bucket_counts_data = []
+        aggregations = response_data["aggregations"]
+        for aggregation in aggregations:
+            if aggregation.get("name", "") == "group":
+                bucket_counts_data = aggregation.get("entries", [])
+        results_data = response_data

         # Return a single ResultList, using bucket counts from the first ES response,
         # and full hit/aggregation data from the second.
         return self.resultlist_from_response(
             results_data,
-            bucket_counts=bucket_counts_data["aggregations"]
-            .get("group", {})
-            .get("buckets", ()),
+            bucket_counts=bucket_counts_data,
         )

def search_all(
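The reworked search response handling in this file expects `aggregations` to be a list of named objects rather than the previous Elasticsearch-style mapping. A stand-in payload exercises the bucket extraction (field names follow the code; the values are invented):

```python
# Minimal stand-in for a Rosetta-style search response
response_data = {
    "metadata": [{"id": "C123456"}],  # hits
    "stats": {"total": 1},            # total count
    "aggregations": [
        {"name": "group", "entries": [{"value": "tna", "doc_count": 1}]},
    ],
}

# Same loop as in client.py: pick out the "group" aggregation's entries
bucket_counts_data = []
for aggregation in response_data["aggregations"]:
    if aggregation.get("name", "") == "group":
        bucket_counts_data = aggregation.get("entries", [])

print(bucket_counts_data)  # [{'value': 'tna', 'doc_count': 1}]
```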
18 changes: 10 additions & 8 deletions etna/ciim/constants.py
@@ -49,7 +49,7 @@ class Aggregation(StrEnum):

 DEFAULT_AGGREGATIONS = [
     Aggregation.GROUP
-    + ":30",  # Fetch more 'groups' so that we receive counts for any bucket/tab options we might be showing.
+    # TODO:Rosetta + ":30", # Fetch more 'groups' so that we receive counts for any bucket/tab options we might be showing.
 ]


Expand All @@ -74,10 +74,12 @@ def aggregations_normalised(self) -> List[str]:
         values = []
         for aggregation in self.aggregations:
             bits = aggregation.split(":")
-            if len(bits) == 2:
-                values.append(bits[0] + ":" + bits[1])
-            else:
-                values.append(bits[0] + ":10")
+            # TODO:Rosetta
+            values.append(bits[0])
+            # if len(bits) == 2:
+            #     values.append(bits[0] + ":" + bits[1])
+            # else:
+            #     values.append(bits[0] + ":10")
         return values

def __post_init__(self):
@@ -720,9 +722,9 @@ class Display(StrEnum):
 TNA_URLS = {
     "discovery_browse": "https://discovery.nationalarchives.gov.uk/browse/r/h",
     "tna_accessions": "https://www.nationalarchives.gov.uk/accessions",
-    "discovery_rec_default_fmt": "https://discovery.nationalarchives.gov.uk/details/r/{iaid}",
-    "discovery_rec_archon_fmt": "https://discovery.nationalarchives.gov.uk/details/a/{iaid}",
-    "discovery_rec_creators_fmt": "https://discovery.nationalarchives.gov.uk/details/c/{iaid}",
+    "discovery_rec_default_fmt": "https://discovery.nationalarchives.gov.uk/details/r/{id}",
+    "discovery_rec_archon_fmt": "https://discovery.nationalarchives.gov.uk/details/a/{id}",
+    "discovery_rec_creators_fmt": "https://discovery.nationalarchives.gov.uk/details/c/{id}",
 }

# associate readable names with api identifiers