Place messages from misconfigured IDs in the replay queue #711
Conversation
Signed-off-by: constanca <[email protected]>
@@ -462,20 +464,6 @@ def test_lambda_handler_noop(self) -> None:
            del lambda_event["Records"][0]["messageAttributes"]["originalEventSourceARN"]
            assert handler(lambda_event, ctx) == "completed"  # type:ignore

        with self.subTest("no input defined for kinesis-data-stream"):
Removing this test because it no longer makes sense (see reference). We now have tests in test_integrations that check whether inputs are defined.
Signed-off-by: constanca <[email protected]>
@@ -399,6 +399,8 @@ def test_lambda_handler_noop(self) -> None:
        with self.subTest("no originalEventSourceARN in messageAttributes"):
            ctx = ContextMock()
            os.environ["S3_CONFIG_FILE"] = "s3://s3_config_file_bucket/s3_config_file_object_key"
            os.environ["SQS_REPLAY_URL"] = "https://sqs.us-east-2.amazonaws.com/123456789012/replay_queue"
            os.environ["SQS_CONTINUE_URL"] = "https://sqs.us-east-2.amazonaws.com/123456789012/continue_queue"
            lambda_event = deepcopy(_dummy_lambda_event)
            del lambda_event["Records"][0]["messageAttributes"]["originalEventSourceARN"]
            assert handler(lambda_event, ctx) == "completed"  # type:ignore
Struggling to understand why this assertion results in "completed". The originalEventSourceARN is arn:aws:sqs:eu-central-1:123456789:sqs-queue, but here we have a different region. Should we raise another status?
We are not using the originalEventSourceARN attribute here, but arn:aws:sqs:eu-central-1:123456789:s3-sqs-queue instead (so the eventSourceARN attribute). And the queues are not being used... But that is a great point. I suspect it would fail with different regions, but that is never tested in this file (there are no timeout tests or misconfigured IDs here, so there is no reason to use those queues). I had to add the lines because of this part:

elastic-serverless-forwarder/handlers/aws/handler.py, lines 135 to 136 in 9c6d224:

sqs_replaying_queue = os.environ["SQS_REPLAY_URL"]
sqs_continuing_queue = os.environ["SQS_CONTINUE_URL"]

I will fix this, as it indeed does not make sense.
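For context, here is a minimal sketch of why the tests need both variables set up front: the handler reads the queue URLs straight from the environment, so a missing variable raises at import/setup time. The resolve_queue_urls helper below is hypothetical (not the actual handler.py code), shown only to illustrate the dependency:

```python
import os

def resolve_queue_urls() -> tuple[str, str]:
    """Read the replay/continue queue URLs the handler depends on.

    Hypothetical helper: raises a descriptive error instead of a bare
    KeyError when a required environment variable is missing.
    """
    try:
        replay_url = os.environ["SQS_REPLAY_URL"]
        continue_url = os.environ["SQS_CONTINUE_URL"]
    except KeyError as missing:
        raise RuntimeError(f"required environment variable {missing} is not set") from missing
    return replay_url, continue_url

# Tests must therefore export both variables before invoking the handler:
os.environ["SQS_REPLAY_URL"] = "https://sqs.us-east-2.amazonaws.com/123456789012/replay_queue"
os.environ["SQS_CONTINUE_URL"] = "https://sqs.us-east-2.amazonaws.com/123456789012/continue_queue"
```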
- for:
    var: REQUIREMENTS
    split: ','
  cmd: pip3.9 install -r ../{{ .ITEM }} -t $DIR
Consider taking this from mypy.ini and its python_version = 3.9 variable. In my case this was failing in my local tests (because I did not have version 3.9).
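The suggestion above could be sketched like this: read python_version from mypy.ini with configparser, so the build command and the type checker stay pinned to the same interpreter (this assumes a standard [mypy] section; it is an illustration, not the project's actual tooling):

```python
import configparser

def python_version_from_mypy_ini(path: str = "mypy.ini") -> str:
    """Return the python_version pinned in mypy.ini (e.g. '3.9')."""
    config = configparser.ConfigParser()
    config.read(path)
    return config["mypy"]["python_version"]
```

The Taskfile could then interpolate the result into the pip command instead of hard-coding pip3.9.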
While building it locally, I see this in Lambda:

[ERROR] Runtime.ImportModuleError: Unable to import module 'main_aws': No module named 'orjson.orjson'
Traceback (most recent call last):
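Errors like this only surface once the bundle runs inside the Lambda runtime. A generic smoke check (hypothetical helper, not part of this repo) can be run in the target environment to catch missing or wrongly-packaged modules before deploying:

```python
import importlib

def check_bundle_imports(modules: list[str]) -> list[str]:
    """Try importing each module name; return the ones that fail.

    An empty return list means every dependency resolved. Run this
    inside the same Python/platform the bundle targets, since compiled
    wheels (like orjson) are platform-specific.
    """
    failures = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures
```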
I used Python 3.9 here because we use it for the tests and for our ESF Terraform files:

python-version: '3.9' # As defined in tests/scripts/docker/run_tests.sh

There is an open issue to offer support for version 3.12 as well: #656.
Signed-off-by: constanca <[email protected]>
…)" This reverts commit 45e6c35.
What does this PR do?
Prints an error message if the input ID from the trigger is not present in config.yaml, and places the message in the replay queue.
Why is this PR important?
The linked issues explain it well, but to sum it up: if config.yaml is set incorrectly but ESF is associated with the right trigger input IDs (read the next section for a better understanding of how this can happen), we should print an error that the ID is not present in config.yaml, so there is no output available for it. This way, all these messages will go to the replay queue, and we will count these events as error_events.
Checklist
- CHANGELOG.md
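The behavior described above can be sketched roughly as follows. The names (route_event, send_to_replay_queue) and the config shape are hypothetical, used only to illustrate the flow; the real logic lives in handlers/aws/handler.py:

```python
import logging

logger = logging.getLogger("esf")

def route_event(input_id: str, config: dict, event: dict, send_to_replay_queue) -> str:
    """Hypothetical sketch: if the trigger's input id has no entry in
    config.yaml, log an error and hand the message to the replay queue
    so it gets counted as an error event."""
    if input_id not in config.get("inputs", {}):
        logger.error("input id %s not present in config.yaml, no output available", input_id)
        send_to_replay_queue(event)
        return "replayed"
    return "processed"
```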
How to test this PR locally
I added a how-to-test directory in the commits. Please follow the README.md file from that directory. Important: you need to use it with the code from this PR, which is currently not merged; otherwise it will fail.
I also added some tests to check the undefined input; they will run in one of the GitHub workflows.
Related issues
Tests and Results
SQS trigger
config.yaml looks like this:
And ESF has aws_lambda_event_source_mapping configured correctly with the SQS ARN.
ESF logs. To make it easier, I annotated the relevant lines with <-.
The replaying queue, constanca-test-esf-replay-queue, has Messages available: 1.
.Cloudwatch logs trigger
config.yaml
looks like this:And ESF has
aws_cloudwatch_log_subscription_filter
configured correctly with the Cloudwatch Logs ARN.ESF logs. To make it easier, I annotated the logs with
<-
in the relevant lines.Kinesis data stream trigger
config.yaml
:And configured
aws_lambda_event_source_mapping
with the correct Kinesis data stream ARN.ESF logs. To make it easier, I annotated the logs with
<-
in the relevant lines.Replay Queue
After all three failed triggers, the replay queue has Messages available: 3.
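To verify the "Messages available" count from a script rather than the AWS console, the queue's attributes can be queried; the helper below takes an SQS client (e.g. a boto3 client) so it can be tested without credentials. The queue URL is an assumption for illustration:

```python
def replay_queue_depth(sqs_client, queue_url: str) -> int:
    """Return the approximate number of visible messages on an SQS queue.

    sqs_client is expected to expose the SQS get_queue_attributes API,
    e.g. boto3.client("sqs").
    """
    attrs = sqs_client.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Usage (requires AWS credentials and boto3):
# import boto3
# replay_queue_depth(boto3.client("sqs"),
#                    "https://sqs.us-east-2.amazonaws.com/123456789012/replay_queue")
```

Note that ApproximateNumberOfMessages is, as the name says, approximate; SQS does not guarantee an exact instantaneous count.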