Fix github action e2e test. #555
Conversation
Hi @liangyuanpeng. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Currently, the e2e test only runs on one k8s version. I also want e2e to run on multiple k8s versions at the same time, such as v1.27, v1.28, and v1.29.
Maybe that can be completed in a follow-up PR.
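As a rough illustration of that idea (not part of this PR), a workflow step could loop over several kind node images to cover multiple Kubernetes versions. This is only a hedged sketch: the image tags and cluster names are assumptions, not values taken from this repository.

```sh
#!/usr/bin/env bash
# Hypothetical sketch: one kind cluster per Kubernetes version.
# The node image tags and cluster name prefix are assumptions for illustration.
set -euo pipefail

for node_image in kindest/node:v1.27.3 kindest/node:v1.28.0 kindest/node:v1.29.0; do
  version="${node_image##*:}"
  kind create cluster \
    --name "anp-e2e-${version}" \
    --image "${node_image}" \
    --config examples/kind/kind.config
done
```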
Force-pushed from f072aa8 to 25677da.
/ok-to-test
I think EGREE_SELECTOR_CONFIGURATION_PATH should be named EGRESS_SELECTOR_CONFIGURATION_PATH
Force-pushed from 9d1a954 to 2343185.
I have reverted it.
...
pod/coredns-5d78c9869d-k9c2h condition met
pod/coredns-5d78c9869d-xbwc7 condition met
pod/test created
pod/test condition met
Error from server: Get "https://172.18.0.2:10250/containerLogs/default/test/test": No agent available
Error: Process completed with exit code 1.

There is still a problem to be solved. I will come back to work on it tomorrow, so this is not ready to merge yet.
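For reference, a rough debugging sketch for the "No agent available" failure above. Note that kubectl logs itself needs a working konnectivity tunnel, so the sketch only inspects state that goes through the apiserver and then pulls node logs directly via kind; the label selectors follow the ones used elsewhere in this workflow, and the output directory is an arbitrary choice.

```sh
# Pod state only needs the apiserver, not the konnectivity tunnel.
kubectl get pods -n kube-system -o wide -l k8s-app=konnectivity-server
kubectl get pods -n kube-system -o wide -l k8s-app=konnectivity-agent
kubectl describe pods -n kube-system -l k8s-app=konnectivity-agent

# Export node and container logs straight from the kind nodes (no tunnel needed).
kind export logs /tmp/anp-e2e-logs
```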
@aojea and @liangyuanpeng, please note we temporarily disabled the action tests at #559. Hopefully this PR is able to make progress and we can then restore them. Let me know if you get stuck and want a second set of eyes on the "no agent available" problem.
Force-pushed from 2343185 to 42ddc56.
This is a flaky problem when ipFamily is ipv4; I am checking.
Force-pushed from 42ddc56 to 1103099.
Since I will be on vacation next week, I hope to use a temporary method to merge this PR. After my half-month vacation, if the problem still needs to be handled, I can continue to look at it.
@aojea @jkh52 @tallclair PTAL, thanks!
Force-pushed from ffb522e to 1d12f9f.
Signed-off-by: Lan Liang <[email protected]>
Force-pushed from 1d12f9f to 68449bc.
/lgtm
/assign @jkh52

I still don't see this action running 🤔
An admin of this repo needs to approve this GitHub workflow, maybe @jkh52. It seems like the approval is always needed, even though this is not my first contribution; below is a sample screenshot.
/usr/local/bin/kubectl run test --image httpd:2
/usr/local/bin/kubectl wait --timeout=1m --for=condition=ready pods test
/usr/local/bin/kubectl get pods -A -owide
/usr/local/bin/kubectl wait --timeout=1m --for=condition=ready pods --namespace=kube-system -l k8s-app=konnectivity-agent
It may be worth also waiting for -l k8s-app=konnectivity-server to be ready.
(In this 1 control-plane node cluster it is nearly the same, but in a multi control-plane cluster it would be needed.)
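A minimal sketch of that extra wait, mirroring the agent wait above and assuming the server pods carry the k8s-app=konnectivity-server label suggested here:

```sh
# Hedged sketch: also wait for the proxy server pods before running the log test.
/usr/local/bin/kubectl wait --timeout=1m --for=condition=ready pods \
  --namespace=kube-system -l k8s-app=konnectivity-server
```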
I'm assuming you're talking about waiting for the pod of the ANP server with a readiness probe (right now, the ANP server pod does not have a readiness probe).
konnectivity-server's readiness indicates that at least one Konnectivity Agent is connected.
konnectivity-agent's readiness indicates that the client is connected to at least one proxy server.
Therefore, if readiness is added to both at the same time, both will enter an infinite loop.
The reason is that the ANP agent uses the kubernetes svc to connect to the ANP server, and the ANP server runs as a daemonset.
Remind me if I missed something, thanks.
> I'm assuming you're talking about waiting for the pod of the ANP server with a readiness probe (right now, the ANP server pod does not have a readiness probe).
> konnectivity-server's readiness indicates that at least one Konnectivity Agent is connected.
> konnectivity-agent's readiness indicates that the client is connected to at least one proxy server.

Yes to the above.

> Therefore, if readiness is added to both at the same time, both will enter an infinite loop.

I don't expect this, because waiting on agent (or any) readiness does not depend on control plane egress.

> The reason is that the ANP agent uses the kubernetes svc to connect to the ANP server, and the ANP server runs as a daemonset. Remind me if I missed something, thanks.

What really matters (for a given proxy request, like kubectl logs) is whether the apiserver that handles the request has a konnectivity-server with at least one useful agent. Since agent readiness only covers "at least 1", it is insufficient in this case. (There was discussion on that feature about adding an agent readiness mode "connected to all servers", but it is not implemented.) That is the gap, fixed by waiting for all konnectivity-server pods to be ready.
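A rough sketch of that wait, assuming the server readiness probe is added and the daemonset is named konnectivity-server (the name is an assumption; the thread only states that the server runs as a daemonset):

```sh
# Hedged sketch: once the server pods have a readiness probe, waiting on the
# daemonset rollout only succeeds when every konnectivity-server instance
# (one per control-plane node) reports ready, i.e. has at least one agent.
/usr/local/bin/kubectl rollout status --namespace=kube-system \
  daemonset/konnectivity-server --timeout=2m
```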
/lgtm

Hold to allow Lan time to inspect logs, etc.; feel free to un-hold when ready.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jkh52, liangyuanpeng

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
I quickly tested the multi control-plane cluster scenario (with the ANP server pod still lacking a readiness probe) using the current configuration and concluded that the test would fail. There are three ANP servers and two ANP agents. When kubectl logs is routed to ANP serverA and serverA does not have any agent connected to it, the request fails. But the test scenario is necessary and I will create an issue for it later.

/hold cnacel
/hold cancel
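A hedged sketch of how that multi control-plane scenario could be reproduced locally with kind; the node counts follow the comment above, and the config below is illustrative rather than the repository's examples/kind/kind.config.

```sh
# Illustrative only: a kind cluster with three control-plane nodes, matching
# the "three ANP servers and two ANP agents" case described above.
cat <<'EOF' > /tmp/multi-cp-kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: control-plane
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config /tmp/multi-cp-kind.yaml
```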
Hooray, nice coverage @liangyuanpeng and @aojea!
👏 |
This PR fixes the problems with the e2e github action.
The main changes are as follows:
- Fix a wrong shell command
- The kind config used in the github action e2e uses the same file as examples/kind/kind.config (see the sketch below)
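For context, a minimal sketch of how that shared config would be consumed; the exact kind invocation in the workflow may differ.

```sh
# Hedged sketch: create the e2e cluster from the same config as the example.
kind create cluster --config examples/kind/kind.config --wait 5m
```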