This repository has been archived by the owner on Jan 19, 2023. It is now read-only.
What steps did you take and what happened:
[A clear and concise description of what the bug is, and what commands you ran.]
I need to administer multiple Kubernetes clusters. If I switch contexts more than once or twice, Octant either crashes (when launched from the CLI) or hangs (when launched from the GUI). When this happens, my only option is to close the application (if launched from the GUI) and relaunch it to reach the context I need.
What did you expect to happen:
The application should be able to switch contexts without locking up like this.
Here is the console output when launched via the CLI (partial, due to the character limit):
2022-10-07T08:51:30.909-0500 ERROR api/content_manager.go:159 generate content {"client-id": "cdbb5ed8-4646-11ed-a414-f01898e82da4", "err": "generate content: preferred version for StreamTemplate.jetstream.nats.io: unknown version for StreamTemplate.jetstream.nats.io", "content-path": "overview/namespace/apollo-dev"}
github.com/vmware-tanzu/octant/internal/api.(*ContentManager).runUpdate.func1
github.com/vmware-tanzu/octant/internal/api/content_manager.go:159
github.com/vmware-tanzu/octant/internal/api.(*InterruptiblePoller).Run.func1
github.com/vmware-tanzu/octant/internal/api/poller.go:86
github.com/vmware-tanzu/octant/internal/api.(*InterruptiblePoller).Run
github.com/vmware-tanzu/octant/internal/api/poller.go:95
github.com/vmware-tanzu/octant/internal/api.(*ContentManager).Start
github.com/vmware-tanzu/octant/internal/api/content_manager.go:133
E1007 08:51:31.155137 25950 runtime.go:78] Observed a panic: "close of closed channel" (close of closed channel)
goroutine 73582 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x58a9c40, 0xb692470})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x58a9c40, 0xb692470})
runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
E1007 08:51:31.155132 25950 runtime.go:78] Observed a panic: "close of closed channel" (close of closed channel)
goroutine 73556 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x58a9c40, 0xb692470})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x58a9c40, 0xb692470})
runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: close of closed channel [recovered]
panic: close of closed channel
goroutine 73582 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x58a9c40, 0xb692470})
runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: close of closed channel [recovered]
panic: close of closed channel
goroutine 73556 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x58a9c40, 0xb692470})
runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
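For context, the repeated `panic: close of closed channel` in the trace is standard Go runtime behavior: closing a channel that is already closed always panics. The stack suggests an informer shutdown path is being triggered twice during the context switch. A minimal sketch (not Octant code; the `safeChan` type is hypothetical) showing the failure mode and the common `sync.Once` guard:

```go
package main

import (
	"fmt"
	"sync"
)

// safeChan wraps a stop channel so Close may be called from multiple
// goroutines (e.g. overlapping context switches) without panicking.
type safeChan struct {
	ch   chan struct{}
	once sync.Once
}

func (s *safeChan) Close() {
	// sync.Once guarantees close(s.ch) runs at most once; calling
	// close(s.ch) directly a second time would panic with
	// "close of closed channel", as seen in the trace above.
	s.once.Do(func() { close(s.ch) })
}

func main() {
	s := &safeChan{ch: make(chan struct{})}
	s.Close()
	s.Close() // safe: the second call is a no-op
	fmt.Println("closed once, no panic")
}
```

Whatever owns the informer's stop channel would need an idempotent guard of this kind (or a single owner) so that rapid context switches cannot race two shutdowns.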
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Console output posted above ^^^
Environment:
CLI version of Octant:
octant version
Version: 0.25.1
Git commit: f16cbb951905f1f8549469dfc116ca16cf679d46
Built: 2022-02-24T21:59:56Z
GUI application of Octant:
Version: (dev-version)
Git commit: f16cbb9
Built: 2022-02-24T22:39:43Z