
decouple suspension of propagation and resourcebinding #5974

Merged: 1 commit, Dec 28, 2024

Conversation

Monokaix (Contributor)

What type of PR is this?
/kind cleanup

What this PR does / why we need it:
Decouple the suspension of propagation (PropagationPolicy) and ResourceBinding, because the suspension of a ResourceBinding (rb) and a PropagationPolicy (pp) should be independent.
Which issue(s) this PR fixes:
Fixes #
Part of #5937
Special notes for your reviewer:
This one should be merged first, before #5937.
Does this PR introduce a user-facing change?:


@karmada-bot karmada-bot added the kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. label Dec 25, 2024
@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Dec 25, 2024
@karmada-bot karmada-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 25, 2024
@codecov-commenter commented Dec 25, 2024

⚠️ Please install the Codecov app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 25.00000% with 6 lines in your changes missing coverage. Please review.

Project coverage is 48.26%. Comparing base (7112723) to head (eb2a4bd).
Report is 6 commits behind head on master.

Files with missing lines   Patch %   Lines
pkg/detector/detector.go   14.28%    4 Missing and 2 partials ⚠️

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5974      +/-   ##
==========================================
+ Coverage   48.25%   48.26%   +0.01%     
==========================================
  Files         664      665       +1     
  Lines       54749    54793      +44     
==========================================
+ Hits        26417    26445      +28     
- Misses      26618    26633      +15     
- Partials     1714     1715       +1     
Flag        Coverage Δ
unittests   48.26% <25.00%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@RainbowMango (Member) left a comment:

/assign

@@ -322,6 +322,11 @@ type BindingSnapshot struct {
Clusters []TargetCluster `json:"clusters,omitempty"`
}

// Suspension defines the policy for suspending of propagation and suspension of resource binding itself.
type Suspension struct {
*policyv1alpha1.Suspension `json:",inline"`

RainbowMango (Member) commented:

Can you remind me why a pointer type is used here, in pkg/apis/work/v1alpha2/binding_types.go?

Monokaix (Contributor, Author) replied:
It lets us distinguish between the field not being set at all and the user explicitly setting it to a zero value.
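The distinction above can be sketched in Go. This is a minimal, hypothetical illustration (the type names mirror the PR's `policyv1alpha1.Suspension` and `workv1alpha2.Suspension` but are simplified, not the actual karmada API): a nil embedded pointer means "never set", while a non-nil pointer to a zero-valued struct means "explicitly set to zero".

```go
package main

import "fmt"

// PolicySuspension stands in for policyv1alpha1.Suspension (simplified).
type PolicySuspension struct {
	Dispatching *bool
}

// BindingSuspension stands in for the new workv1alpha2.Suspension,
// which embeds the policy-level suspension by pointer.
type BindingSuspension struct {
	*PolicySuspension
}

// IsSet reports whether the user provided any suspension at all:
// a nil embedded pointer means the field was never set.
func (s BindingSuspension) IsSet() bool {
	return s.PolicySuspension != nil
}

func main() {
	unset := BindingSuspension{}                                     // user said nothing
	zero := BindingSuspension{PolicySuspension: &PolicySuspension{}} // user set an empty suspension

	fmt.Println(unset.IsSet()) // false: not set at all
	fmt.Println(zero.IsSet())  // true: explicitly set, even though zero-valued
}
```

With a value (non-pointer) embed, both cases would serialize identically and the controller could not tell them apart.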

RainbowMango (Member) commented:
One question: if the goal is to remove the coupling, why not re-declare all the fields in the binding? With the current approach, what happens if ScheduleSuspension is also exposed in the PropagationPolicy in the future?

Monokaix (Contributor, Author) replied:
As @RainbowMango mentioned, pp should not expose bindingSuspension now. We can declare an independent struct to express the rb's suspension; that can be done in the future.
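The future direction described above could look like the following. This is a hypothetical sketch, not the actual karmada API: the ResourceBinding declares its own suspension struct, re-declaring the propagation fields and leaving room for rb-only fields (the `Scheduling` field and `FromPolicy` helper are illustrative assumptions).

```go
package main

import "fmt"

// PolicySuspension mimics policyv1alpha1.Suspension (pp side, simplified).
type PolicySuspension struct {
	Dispatching *bool
}

// BindingSuspension is an independent rb-side struct: it re-declares the
// propagation fields and can grow rb-only fields without leaking them
// into the PropagationPolicy API.
type BindingSuspension struct {
	Dispatching *bool
	Scheduling  *bool // hypothetical rb-only field, never exposed on pp
}

// FromPolicy converts a pp-level suspension into the rb-level one,
// copying only the fields the binding shares with the policy.
func FromPolicy(p *PolicySuspension) *BindingSuspension {
	if p == nil {
		return nil
	}
	return &BindingSuspension{Dispatching: p.Dispatching}
}

func main() {
	t := true
	rb := FromPolicy(&PolicySuspension{Dispatching: &t})
	fmt.Println(*rb.Dispatching)     // true: carried over from the policy
	fmt.Println(rb.Scheduling == nil) // true: rb-only field stays unset
}
```

This keeps the two APIs evolving independently, at the cost of a small conversion step when the detector builds the binding.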

{
name: "false for nil dispatching",
args: args{
suspension: &policyv1alpha1.Suspension{Dispatching: nil},
suspension: &workv1alpha2.Suspension{Suspension: &policyv1alpha1.Suspension{Dispatching: nil}},

RainbowMango (Member) suggested a change:
suspension: &workv1alpha2.Suspension{Suspension: &policyv1alpha1.Suspension{Dispatching: nil}},
suspension: &workv1alpha2.Suspension{Suspension: policyv1alpha1.Suspension{Dispatching: nil}},

@RainbowMango RainbowMango added this to the v1.13 milestone Dec 28, 2024
@RainbowMango (Member) left a comment:

/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Dec 28, 2024
@karmada-bot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 28, 2024
@RainbowMango (Member) commented:

/retest

The failing test is unrelated:

• [FAILED] [335.806 seconds]
Multi-Cluster Service testing EndpointSlices change testing [It] Update Deployment's replicas
/home/runner/work/karmada/karmada/test/e2e/mcs_test.go:392

  Captured StdOut/StdErr Output >>
  I1228 03:52:25.323655   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member1)
  I1228 03:52:35.399289   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member2)
  I1228 03:52:35.408423   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member3)
  I1228 03:52:35.416411   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member1)
  I1228 03:52:35.425310   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member2)
  I1228 03:52:35.441565   54560 customresourcedefine.go:72] Waiting for crd present on cluster(member3)
  I1228 03:52:35.464638   54560 mcs_test.go:255] Create Deployment(karmadatest-42lkp/hello-nbkpl) in member1 cluster
  I1228 03:52:35.494617   54560 mcs_test.go:258] Create Service(karmadatest-42lkp/hello-nbkpl) in member1 cluster
  I1228 03:53:00.700020   54560 mcs_test.go:442] Update Deployment's replicas in member1 cluster
  I1228 03:58:00.871904   54560 mcs_test.go:277] Delete Deployment(karmadatest-42lkp/hello-nbkpl) in member1 cluster
  I1228 03:58:00.884286   54560 mcs_test.go:280] Delete Service(karmadatest-42lkp/hello-nbkpl) in member1 cluster
  << Captured StdOut/StdErr Output

  Timeline >>
  STEP: Creating ClusterPropagationPolicy(serviceexports-8t2d8-policy) @ 12/28/24 03:52:25.205
  STEP: Creating ClusterPropagationPolicy(serviceimports-6qtbw-policy) @ 12/28/24 03:52:25.256
  STEP: Check if crd(multicluster.x-k8s.io/v1alpha1/ServiceExport) present on member clusters @ 12/28/24 03:52:25.323
  STEP: Check if crd(multicluster.x-k8s.io/v1alpha1/ServiceImport) present on member clusters @ 12/28/24 03:52:35.416
  STEP: Creating Deployment(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:52:35.464
  STEP: Creating Service(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:52:35.494
  STEP: Wait Service(karmadatest-42lkp/hello-nbkpl)'s EndpointSlice exist in member1 cluster @ 12/28/24 03:52:35.563
  STEP: Create ServiceExport(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:52:50.637
  STEP: Creating PropagationPolicy(karmadatest-42lkp/export-hello-nbkpl-policy) @ 12/28/24 03:52:50.65
  STEP: Wait EndpointSlices collected to namespace(karmadatest-42lkp) in controller-plane @ 12/28/24 03:52:50.669
  STEP: Create ServiceImport(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:52:55.681
  STEP: Creating PropagationPolicy(karmadatest-42lkp/import-hello-nbkpl-policy) @ 12/28/24 03:52:55.685
  STEP: Wait EndpointSlice exist in member2 cluster @ 12/28/24 03:52:55.693
  STEP: Updating Deployment(karmadatest-42lkp/hello-nbkpl)'s replicas to 2 @ 12/28/24 03:53:00.7
  STEP: Wait EndpointSlice update in member2 cluster @ 12/28/24 03:53:00.713
  [FAILED] in [It] - /home/runner/work/karmada/karmada/test/e2e/mcs_test.go:457 @ 12/28/24 03:58:00.714
  STEP: Cleanup @ 12/28/24 03:58:00.846
  STEP: Removing PropagationPolicy(karmadatest-42lkp/export-hello-nbkpl-policy) @ 12/28/24 03:58:00.855
  STEP: Removing PropagationPolicy(karmadatest-42lkp/import-hello-nbkpl-policy) @ 12/28/24 03:58:00.864
  STEP: Removing Deployment(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:58:00.871
  STEP: Removing Service(karmadatest-42lkp/hello-nbkpl) @ 12/28/24 03:58:00.884
  STEP: Removing ClusterPropagationPolicy(serviceimports-6qtbw-policy) @ 12/28/24 03:58:00.904
  STEP: Removing ClusterPropagationPolicy(serviceexports-8t2d8-policy) @ 12/28/24 03:58:00.958
  << Timeline

  [FAILED] Timed out after 300.000s.
  Expected
      <int>: 1
  to equal
      <int>: 2
  In [It] at: /home/runner/work/karmada/karmada/test/e2e/mcs_test.go:457 @ 12/28/24 03:58:00.714

  Full Stack Trace
    github.com/karmada-io/karmada/test/e2e.init.func31.6.1.5()
    	/home/runner/work/karmada/karmada/test/e2e/mcs_test.go:457 +0x257
    github.com/karmada-io/karmada/test/e2e.init.func31.6.1()
    	/home/runner/work/karmada/karmada/test/e2e/mcs_test.go:445 +0xc25

@karmada-bot karmada-bot merged commit 337c27b into karmada-io:master Dec 28, 2024
21 checks passed