Fix pulumi.DependsOn in test-infra-definitions #31393

Conversation
Force-pushed from 786119e to 14b8f05.
Fast Unit Tests Report: on pipeline 51412609 (CI Visibility), the following jobs did not run any unit tests. If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.
Test changes on VM: use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv aws.create-vm --pipeline-id=51412609 --os-family=ubuntu`. Note: this applies to commit 234fbf8.
Regression Detector Results (metrics dashboard)
Baseline: 6fb76d5
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_idle_all_features | memory utilization | +1.62 | [+1.52, +1.73] | 1 | Logs, bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.77 | [+0.71, +0.83] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.08 | [-0.38, +0.54] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.06 | [-0.73, +0.84] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.03 | [-0.02, +0.07] | 1 | Logs, bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.09, +0.13] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | +0.01 | [-0.62, +0.63] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.00 | [-0.70, +0.71] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.00 | [-0.88, +0.88] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.91, +0.88] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.02 | [-0.79, +0.74] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.07 | [-0.88, +0.74] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.29 | [-0.42, -0.16] | 1 | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | -0.37 | [-1.03, +0.29] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -1.45 | [-2.18, -0.72] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.80 | [-4.71, +1.11] | 1 | Logs |
Bounds Checks: ❌ Failed

| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ❌ | file_to_blackhole_0ms_latency | lost_bytes | 9/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | |
| ✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true (see the sketch after this list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
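To make these criteria concrete, here is a minimal sketch of the decision rule in Go. The 5.00% effect size tolerance and 90.00% confidence level come from the values stated above; the type and field names are hypothetical, not the Regression Detector's actual code.

```go
package regression

import "math"

// ExperimentResult mirrors the per-experiment statistics shown in the
// tables above (hypothetical field names).
type ExperimentResult struct {
	DeltaMeanPct float64 // estimated Δ mean %, comparison minus baseline
	CILow        float64 // lower bound of the 90.00% CI on Δ mean %
	CIHigh       float64 // upper bound of the 90.00% CI on Δ mean %
	Erratic      bool    // experiment marked "erratic" in its configuration
}

// IsRegression applies the three criteria listed above: the effect size
// exceeds the tolerance, the confidence interval excludes zero, and the
// experiment is not marked erratic.
func IsRegression(r ExperimentResult) bool {
	bigEnough := math.Abs(r.DeltaMeanPct) >= 5.0 // |Δ mean %| ≥ 5.00%
	significant := r.CILow > 0 || r.CIHigh < 0   // CI does not contain zero
	return bigEnough && significant && !r.Erratic
}
```

Under this rule, for example, quality_gate_idle_all_features (+1.62%, CI entirely above zero) is statistically significant but falls below the 5.00% tolerance, so it is not flagged.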
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
Force-pushed from 0dece34 to 3821b3e, then from 3821b3e to 5c2dbf1.
Package size comparison (comparison with ancestor; diff per package)
Decision: ✅ Passed
…naic/fix_depends_on_in_test-infra
Uncompressed package size comparison (comparison with ancestor; diff per package)
Decision:
/merge

Devflow running:
What does this PR do?
Update test-infra-definitions to use a version that contains DataDog/test-infra-definitions#1261. This new version of test-infra-definitions guarantees that pods that need to go through the admission controller are created only once the Datadog cluster agent is up and running. As a consequence, we can remove the hacky pod deletion that was introduced to guarantee the pods went through the admission controller.
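For context, `pulumi.DependsOn` is the Pulumi resource option that expresses this ordering. The sketch below is illustrative only, assuming a Go Pulumi program and a hypothetical `clusterAgent` resource handle; it is not the actual test-infra-definitions code.

```go
package main

import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/apps/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// deployWorkload creates a Deployment that Pulumi will only create once
// clusterAgent (e.g. the cluster agent's Helm release) exists, so the
// workload's pods cannot race the admission controller.
func deployWorkload(ctx *pulumi.Context, clusterAgent pulumi.Resource) error {
	_, err := appsv1.NewDeployment(ctx, "nginx", &appsv1.DeploymentArgs{
		Metadata: &metav1.ObjectMetaArgs{
			Name: pulumi.String("nginx"),
		},
		// Spec elided for brevity; a real Deployment also needs a
		// selector and a pod template.
	}, pulumi.DependsOn([]pulumi.Resource{clusterAgent}))
	return err
}
```

Without that option, Pulumi is free to create the workload and the cluster agent concurrently, which is exactly the race the deleted pod-deletion hack worked around.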
Motivation
Cleanup: removes the hacky pod-deletion workaround described above.
Describe how to test/QA your changes
Validate that the containers e2e tests are not flaky.
Possible Drawbacks / Trade-offs
Additional Notes