feature(kafka-localstack): introducing docker-compose base kafka setup #6946
Conversation
force-pushed from 5180b59 to bb0382b
force-pushed from c19d5f9 to 7b51aeb
force-pushed from 7b51aeb to 9ef7278
Are there Kafka metrics worth adding to monitoring? If yes, it can be done in a follow-up task.
This one is a local setup of Kafka; I don't think monitoring is needed, at least not yet (we have monitoring data from the sct-runner).
They look nice: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-kafka/
force-pushed from 9ef7278 to 7d8333d
JMX never looks nice... it's too early for this. Once we have VMs and a full cluster, we might consider installing those. For now I care more about the functional side of things, and how this setup integrates with a longevity test.
force-pushed from 362b307 to e3cbcac
force-pushed from b5c99c5 to 5030786
Review comments on jenkins-pipelines/oss/kafka_connectors/longevity-kafka-cdc-aws.jenkinsfile (outdated, resolved)
So the longevity code we have basically works, but it hangs because we don't have code to stop the Kafka reading thread. We might use the idea of a teardown validator to validate and stop the reading thread.
force-pushed from 59d17ac to 0dae67c
I don't understand why we cannot add this verification to teardown itself. Why is a teardown validator required?
It was an idea; validators seemed like a natural place for it. I'm now trying a different approach of adding this logic to the reader thread itself.
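For illustration, a stoppable reader thread along those lines could look like the sketch below. This is not the SCT code; it assumes the kafka-python client, and the class name, topic, and expected-row-count parameter are placeholders.

```python
# Illustrative sketch only (not the SCT implementation), assuming the kafka-python client.
import threading

from kafka import KafkaConsumer


class KafkaReaderThread(threading.Thread):
    """Reads messages from a topic until stopped, counting rows for validation."""

    def __init__(self, bootstrap_servers, topic, expected_rows):
        super().__init__(daemon=True)
        self._stop_event = threading.Event()
        self.expected_rows = expected_rows
        self.rows_read = 0
        self._consumer = KafkaConsumer(
            topic,
            bootstrap_servers=bootstrap_servers,
            auto_offset_reset="earliest",
        )

    def run(self):
        # Poll in short intervals so the stop flag is checked regularly,
        # instead of blocking forever inside the consumer.
        while not self._stop_event.is_set():
            records = self._consumer.poll(timeout_ms=1000)
            self.rows_read += sum(len(msgs) for msgs in records.values())
        self._consumer.close()

    def stop(self):
        self._stop_event.set()

    def validate(self):
        assert self.rows_read >= self.expected_rows, (
            f"expected at least {self.expected_rows} rows, got {self.rows_read}")
```

With something like this, teardown (or a validator) would only need to call `stop()`, `join()` and `validate()` on the thread.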
class LocalKafkaCluster(cluster.BaseCluster):
    def __init__(self, remoter=LOCALRUNNER):
will sct_runner survive high load on kafka?
It might, we can scale the runner as needed.
The idea is to have a setup that can work completely locally with the docker backend for development.
The next stage is building a Kafka cluster instead of the docker-compose setup, and then some Kafka SaaS.
So for this initial step we care about building functionality, not yet about scale; scaling would be tested on VMs or on SaaS.
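As a rough illustration, the local docker-backed setup boils down to bringing a compose stack up and down on the sct-runner; the compose file path below is an assumption, not the actual file added in this PR.

```python
# Rough sketch only: starting/stopping a local Kafka stack via docker compose.
import subprocess

COMPOSE_FILE = "kafka-stack/docker-compose.yml"  # assumed path, not the one in this PR


def start_local_kafka():
    # Bring up Kafka (and its dependencies) in the background on the sct-runner.
    subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"], check=True)


def stop_local_kafka():
    subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "down"], check=True)
```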
Worth adding a note that it runs on the sct-runner and that its size should be increased.
Generally, a small docstring would be great.
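For example, something along these lines could serve as the docstring (illustrative wording only; `cluster.BaseCluster` and `LOCALRUNNER` come from SCT, as in the snippet above):

```python
# Illustrative docstring sketch for the class shown above; not the actual SCT docstring.
class LocalKafkaCluster(cluster.BaseCluster):
    """Kafka cluster running locally on the sct-runner via docker-compose.

    Intended for development with the SCT docker backend only, not for scale
    testing. Note that it runs on the sct-runner itself, so the runner's
    instance size may need to be increased.
    """

    def __init__(self, remoter=LOCALRUNNER):
        ...
```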
force-pushed from 8388aa2 to 15dc9ec
The two jobs introduced are passing now. One small pre-commit issue, and it's good to go.
force-pushed from 6892f33 to 8c64e87
Since we want to be able to run scylla kafka connectors with scylla clusters created by SCT, we are introducing here the first kafka backend, to be used for local development (with the SCT docker backend):
* include a way to configure the connector as needed (also multiple ones)
* get it installed from the hub or by URL

**Note**: this doesn't yet include any code that can read out of kafka
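For illustration, configuring a connector against a locally running Kafka Connect worker typically goes through its REST API; the connector name, class, and config keys below are placeholders, not the exact configuration introduced in this PR.

```python
# Illustrative only: registering a connector through the Kafka Connect REST API
# (default port 8083). Connector name, class, and settings are placeholders.
import requests

connector = {
    "name": "scylla-cdc-source",  # hypothetical connector name
    "config": {
        "connector.class": "com.example.ScyllaSourceConnector",  # placeholder class
        "tasks.max": "1",
        # connector-specific settings (cluster address, tables, etc.) go here
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
```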
With this thread we'll be able to read the data written by the connector, and validate that we are getting the information we expect (number of rows as the first validation).
first pipelines, based on docker and aws backends
I would recommend you try it again, to get familiar with it.
The name of a property was changed from `version` to `source` and was missed in one of the configuration files introduced in #6946, and it started failing test case linting right after the merge.
Since we want to be able to run scylla kafka connectors with scylla clusters created by SCT, we are introducing here the first kafka backend to be used for local development (with the SCT docker backend).
Testing