
Metals regularly disconnects from build server #1229

Open · kylechadha opened this issue Nov 18, 2022 · 18 comments

@kylechadha commented Nov 18, 2022
Describe the bug

Each time I come back to my machine after it has gone to sleep, I have to reconnect to the metals build server with Metals: Connect to build server

To Reproduce

Steps to reproduce the behavior:

  1. Open an sbt repository, import the build
  2. Everything works great
  3. Laptop sleeps
  4. When I come back, Metals features (go-to-definition, etc.) do not work
  5. Select Metals: Connect to build server, after a minute things start working again

Expected behavior

Metals would remain connected or auto-reconnect to build server

Installation:

  • Operating system: macOS
  • VSCode version:
Version: 1.72.2
Commit: d045a5eda657f4d7b676dedbfa7aab8207f8a075
Date: 2022-10-12T22:16:30.254Z (1 mo ago)
Electron: 19.0.17
Chromium: 102.0.5005.167
Node.js: 16.14.2
V8: 10.2.154.15-electron.0
OS: Darwin x64 21.6.0
Sandboxed: No
  • VSCode extension version: v1.20.0
  • Metals version: 0.11.9
@kpodsiad (Member) commented Nov 19, 2022

Hi @kylechadha, thanks for reporting. I've been using Metals on macOS and I've never encountered such an issue. Does it always happen when you follow the reproduction steps? One thing I can think of: maybe your Mac is killing some background processes before going to sleep?

  1. Could you run the Run Doctor command and paste that piece of information here?

[Screenshot from 2022-11-19]

  2. Could you (see the sketch below):
  • close all VS Code instances and run killall java, so that all unwanted processes are killed
  • open one repository with VS Code
  • run jps to get all Java processes
  • let the machine sleep
  • wake the Mac and run jps again to compare now vs. before
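
In terminal terms, the check would look roughly like this (assuming no other Java applications are running, since killall java stops every JVM):

```sh
# stop every running JVM so no stale Metals/Bloop servers are left behind
killall java

# reopen the repository in VS Code, wait for the build import, then:
jps   # record the Java process IDs while everything works

# ...let the machine sleep, wake it up, then:
jps   # compare: did any of the earlier processes disappear?
```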

@kylechadha (Author)

Hi @kpodsiad, thanks for taking a look! Glad to hear it's not expected behavior.

Here are the results of Metals Doctor:

Metals Doctor
Build server currently being used is Bloop v1.5.4.

Metals Java: 1.8.0_265 from Azul Systems, Inc. located at /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre

Metals Server version: 0.11.9

I performed all the steps in item 2 and see the same results from jps:

➜  home jps
12346 Server
13084 Jps
12221 Main
➜  home jps
13136 Jps
12346 Server
12221 Main

However, I noticed in the output tab that the workspace was recompiled when I came back from sleep. There was a short period where go-to-definition wasn't working, but it's back now, so I guess it doesn't always happen deterministically after sleeping 🤔.

So perhaps it's some combination of sleep + time passing that results in the process getting killed?

@kpodsiad (Member)

The process being killed was just a guess, because I can't see another reason for the disconnecting. @tgodzik, correct me if I'm wrong, but Metals doesn't disconnect from the build server without a reason (like an explicit command from the user).

@tgodzik (Contributor) commented Nov 23, 2022

> The process being killed was just a guess, because I can't see another reason for the disconnecting. @tgodzik, correct me if I'm wrong, but Metals doesn't disconnect from the build server without a reason (like an explicit command from the user).

It doesn't disconnect, and it should try to reconnect automatically if there is an exception. If you try manually running a clean compile, does anything happen in the logs? I remember there was an issue with some bad state in Metals which was preventing things from getting compiled 🤔

@dubaut commented Sep 20, 2024

I have the same issue with macOS 14 and 15 with M2 Pro and M3 Max.

@ritschwumm

I've seen the same in the last few weeks on linux/x64. I'll try to check whether I can find anything in the logs next time.

@dubaut commented Sep 21, 2024

This is the output:

Error: Connection is disposed.
at throwIfClosedOrDisposed (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-jsonrpc/lib/common/connection.js:832:19)
at Object.sendRequest (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-jsonrpc/lib/common/connection.js:1000:13)
at LanguageClient.sendRequest (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-languageclient/lib/common/client.js:331:27)
at async _provideDocumentSymbols (/Users/username/.vscode/extensions/scalameta.metals-1.39.0/node_modules/vscode-languageclient/lib/common/documentSymbol.js:73:38)
at async h.provideDocumentSymbols (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:161:103267)

@kpodsiad (Member)

I've also started to see Metals disconnecting. I think I found yet another issue; this one happened while viewing bulk edit changes, but as you can see there is also an OOM error.

org.eclipse.lsp4j.jsonrpc.RemoteEndpoint handleNotification
WARNING: Notification threw an exception: {
  "jsonrpc": "2.0",
  "method": "metals/didFocusTextDocument",
  "params": [
    "vscode-bulkeditpreview-editor://ddabbe04-0dd0-472e-a541-01120363d78a/path/to/my/scala/file.scala?file%3A%2F%2F%2Fpath%2Fto%2Fmy%2Fscala%2Ffile.scala"
  ]
}
java.nio.file.FileSystemNotFoundException: Provider "vscode-bulkeditpreview-editor" not installed
	at java.base/java.nio.file.Path.of(Path.java:213)
	at java.base/java.nio.file.Paths.get(Paths.java:98)
	at scala.meta.internal.mtags.MtagsEnrichments$XtensionURIMtags.toAbsolutePath(MtagsEnrichments.scala:130)
	at scala.meta.internal.mtags.MtagsEnrichments$XtensionStringMtags.toAbsolutePath(MtagsEnrichments.scala:187)
	at scala.meta.internal.metals.MetalsEnrichments$XtensionString.toAbsolutePath(MetalsEnrichments.scala:773)
	at scala.meta.internal.metals.MetalsEnrichments$XtensionString.toAbsolutePath(MetalsEnrichments.scala:770)
	at scala.meta.internal.metals.WorkspaceLspService.didFocus(WorkspaceLspService.scala:698)
	at scala.meta.metals.lsp.DelegatingScalaService.didFocus(DelegatingScalaService.scala:43)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.lambda$recursiveFindRpcMethods$0(GenericEndpoint.java:65)
	at org.eclipse.lsp4j.jsonrpc.services.GenericEndpoint.notify(GenericEndpoint.java:160)
	at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleNotification(RemoteEndpoint.java:231)
	at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:198)
	at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:185)
	at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:97)
	at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:114)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1583)

Exception in thread "Thread-49" java.lang.OutOfMemoryError: Java heap space
	at java.base/jdk.internal.misc.Unsafe.allocateInstance(Native Method)
	at java.base/java.lang.invoke.DirectMethodHandle.allocateInstance(DirectMethodHandle.java:501)
	at java.base/java.lang.invoke.DirectMethodHandle$Holder.newInvokeSpecial(DirectMethodHandle$Holder)
	at java.base/java.lang.invoke.Invokers$Holder.linkToTargetMethod(Invokers$Holder)
	at snailgun.protocol.Protocol.$anonfun$createHeartbeatAndShutdownThread$1(Protocol.scala:219)
	at snailgun.protocol.Protocol$$Lambda/0x0000008001642960.apply$mcV$sp(Unknown Source)
	at snailgun.protocol.Protocol$$anon$1.run(Protocol.scala:302)
Exception in thread "pool-1-thread-207" java.lang.OutOfMemoryError: Java heap space

@dubaut commented Sep 21, 2024

I'm surprised the developers have not paid more attention to this issue. It's been open since 2022 and renders Metals basically unusable.

@tgodzik (Contributor) commented Sep 22, 2024

An OOM exception might come from a lot of different places, so this is really a catch-all issue. I will take a look at the possible problems here, but you can also try to increase -Xmx in metals.serverProperties, or even use a different GC that can return memory to the system.

If possible, I think you can also set a parameter to dump the heap on OOM, which might be useful to identify your issue in particular.
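
For illustration, a sketch of what that could look like in VS Code's settings.json; the values are examples to tune rather than recommendations, and ZGC is just one collector that can return memory to the OS (it needs a recent JVM):

```jsonc
// settings.json — example values only, adjust -Xmx to your machine
{
  "metals.serverProperties": [
    "-Xmx4G",                            // raise the Metals server heap limit
    "-XX:+UseZGC",                       // a GC that uncommits unused memory
    "-XX:+HeapDumpOnOutOfMemoryError",   // write a heap dump when an OOM hits
    "-XX:HeapDumpPath=/tmp/metals.hprof" // illustrative dump location
  ]
}
```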

@dubaut commented Sep 22, 2024

@tgodzik I will increase xmx and see if this will improve things. Where can I find this file? But to be honest, the reference development environment for Scala should work out of the box.

@tgodzik (Contributor) commented Sep 23, 2024

> @tgodzik I will increase xmx and see if this will improve things. Where can I find this file? But to be honest, the reference development environment for Scala should work out of the box.

As it's an open-source project, we always accept contributions that can help us achieve that ideal goal.

@dubaut commented Sep 23, 2024

@tgodzik Where can I find this setting to increase the memory?

@tgodzik (Contributor) commented Sep 24, 2024

That's under metals.serverProperties

@dubaut commented Sep 24, 2024

> That's under metals.serverProperties

Where can I find this property? Is there a config file?

@tgodzik (Contributor) commented Sep 25, 2024

It's in the usual VS Code settings, where most other extensions and VS Code itself also have their settings.

@dubaut commented Sep 26, 2024

Setting xmx to 8 GiB did not really improve the situation.

@tgodzik (Contributor) commented Sep 27, 2024

So there is no info about OOM errors? It might be a different cause then.
