# telemetry-streaming

Spark Streaming ETL jobs for Mozilla Telemetry

This service currently contains jobs that aggregate error data over 5-minute intervals. It is responsible for generating the (internal-only) `error_aggregates` and `experiment_error_aggregates` parquet tables at Mozilla.
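For orientation, here is a minimal sketch of what a 5-minute windowed aggregation looks like in Spark Structured Streaming. This is not the actual job code; the Kafka broker address, topic name, output paths, and grouping columns are all placeholders:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ErrorAggregatesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("error-aggregates-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical input: a stream of pings from Kafka, using the source's
    // built-in `timestamp` column attached to each record.
    val pings = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "telemetry")                    // placeholder topic
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Count records per 5-minute window; the real jobs parse the payload and
    // group by many more dimensions (channel, version, experiment, ...).
    val aggregates = pings
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "5 minutes"))
      .count()

    // Emit each completed window as Parquet (paths are placeholders).
    aggregates.writeStream
      .format("parquet")
      .option("path", "/tmp/error_aggregates")
      .option("checkpointLocation", "/tmp/checkpoints/error_aggregates")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```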

## Issue Tracking

Please file bugs related to the error aggregates streaming job in the Datasets: Error Aggregates component.

## Deployment

The jobs defined in this repository are generally deployed as streaming jobs within our hosted Databricks account. Some, however, run as periodic batch jobs via Airflow, using wrappers codified in telemetry-airflow that spin up EMR clusters whose configuration is governed by emr-bootstrap-spark. Changes in production behavior that don't seem to correspond to changes in this repository's code may therefore stem from changes in those other projects.

## Amplitude Event Configuration

Some of the jobs defined in telemetry-streaming exist to transform telemetry events and republish them to Amplitude for further analysis. Filtering and transforming events is accomplished via JSON configuration files. If you're creating or updating such a configuration, see:
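As a loose illustration only (the actual configuration format is defined by this repository's schemas and is not reproduced here), such a configuration can be pictured as a list of rules naming which events to keep and what to call them in Amplitude. The sketch below parses a made-up example with json4s, which ships with Spark; all field names are invented:

```scala
import org.json4s._
import org.json4s.jackson.JsonMethods._

object AmplitudeConfigSketch {
  implicit val formats: Formats = DefaultFormats

  def main(args: Array[String]): Unit = {
    // A made-up configuration: keep only "click" events in the "ui" category
    // and republish them to Amplitude as "ui_click".
    val config: JValue = parse("""
      {
        "source": "telemetry",
        "events": [
          { "category": "ui", "method": "click", "amplitudeEvent": "ui_click" }
        ]
      }
    """)

    // Extract the hypothetical filter rules from the parsed JSON.
    val rules = (config \ "events").extract[List[Map[String, String]]]
    rules.foreach(r => println(s"${r("category")}/${r("method")} -> ${r("amplitudeEvent")}"))
  }
}
```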

## Development

The recommended workflow for running tests is to edit the source code in your favorite editor and run the tests via sbt. Some common sbt invocations:

- `sbt test` # run the basic set of tests (good enough for most purposes)
- `sbt "testOnly *ErrorAgg*"` # run only the tests for packages matching `ErrorAgg`
- `sbt "testOnly *ErrorAgg* -- -z version"` # run only the tests for packages matching `ErrorAgg`, limited to test cases with "version" in their names
- `sbt dockerComposeTest` # run the docker-compose tests (slow)
- `sbt "dockerComposeTest -tags:DockerComposeTag"` # run only tests tagged with `DockerComposeTag` (while using docker; see the sketch after this list)
- `sbt scalastyle test:scalastyle` # run the linter
- `sbt ci` # run the full set of continuous integration tests
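If you are writing a test that needs the docker-compose cluster, the usual ScalaTest approach is to tag it so that `sbt "dockerComposeTest -tags:DockerComposeTag"` can select it. A minimal sketch, assuming a current ScalaTest version; the suite and its assertion are hypothetical:

```scala
import org.scalatest.Tag
import org.scalatest.flatspec.AnyFlatSpec

// Tag marking tests that require the dockerized Kafka cluster.
object DockerComposeTag extends Tag("DockerComposeTag")

class KafkaRoundTripSpec extends AnyFlatSpec {
  "the Kafka cluster" should "accept and return a message" taggedAs DockerComposeTag in {
    // A real test would produce to and consume from the broker started by
    // `sbt dockerComposeUp` here.
    assert(true)
  }
}
```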

Some tests need Kafka to run. If you prefer to run them from an IDE, you first need to start the test cluster:

```
sbt dockerComposeUp
```

or via plain `docker-compose`:

```
export DOCKER_KAFKA_HOST=$(./docker_setup.sh)
docker-compose -f docker/docker-compose.yml up
```

Remember to shut the cluster down afterwards:

```
sbt dockerComposeStop
```