so unbelievably slow #6658

Open
willjhenry opened this issue Nov 12, 2024 · 23 comments
Labels: needs repro (Issue has not been reproduced yet)

@willjhenry

20% of my dev time is now waiting for pylance to finish "3 files and 0 cells to analyze".

github-actions bot added the "needs repro (Issue has not been reproduced yet)" label on Nov 12, 2024
@heejaechang
Contributor

Sorry to hear that.

If you provide us some profile data as described in https://github.com/microsoft/pylance-release/wiki/Collecting-data-for-an-investigation.#collecting-cpuprofiles, we might be able to dig in and find out what is going on.

Also, you could look at https://github.com/microsoft/pylance-release/wiki/Opening-Large-Workspaces-in-VS-Code to see whether Pylance is properly set up for your code base.

If all you want is a basic notepad-like experience with completion, you can try this: https://github.com/microsoft/pylance-release/blob/main/docs/settings/python_analysis_languageServerMode.md
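For reference, that setting goes in settings.json. A minimal sketch, assuming the reduced "light" mode described in that doc is what you want:

{
    "python.analysis.languageServerMode": "light"
}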

@willjhenry
Author

Thank you, I very much appreciate the help. Does Pylance ignore symbolic links? This may be an issue for us. I see there is an open issue for this (#5158), so I am guessing it does not handle them yet. We may be able to work around this, but I am wondering if that would also break some of the features. Specifically, we have a common directory that, via symbolic links, appears in multiple places in our code base. (The reason for this is how we need to deploy code.) If this directory is ignored when it is a symbolic link, then I am guessing Pylance would also not be able to do things like 'go to definition' for imports from the common directory?

@heejaechang
Contributor

Are you saying the symbolic links are causing the perf issue, or are you talking about something else?

@willjhenry
Author

I am wondering if it might be causing the issue. We have the same files symbolically linked in many places in our workspaces. If Pylance is indexing all of these, I could see it potentially causing a perf issue.

@bschnurr
Member

Try disabling indexing: "python.analysis.indexing": false

@RanitBehera

I am also having the same issue. Just commenting out a line makes me wait about 5 seconds before "1 file and 0 cells to analyze" finishes and the linter updates. Same with the auto-suggestion box. I can't work with my project. This issue started recently, within the last two days. I have tried old versions of Pylance but have the same issue, which was not there earlier. I can't figure out where the problem is. Pylance itself takes about 25 seconds to start, where earlier it was 4-5 seconds.

@heejaechang
Contributor

My guess is some configuration mix-up? A log would probably give us something to look at, but that said, the links I shared above would be a good starting point.

I am wondering if it might be causing the issue. We have the same files symbolically linked in many places in our workspaces. If Pylance is indexing all of these, I could see it potentially causing a perf issue.

Symbolic links can change import paths, so we can't just ignore them blindly in all cases. And most of the time, symbolically linked files are not user files we index; we usually don't index symbolically linked third-party files unless explicitly asked.

As I said above, profile data would be best for us to pinpoint the exact problem; otherwise, at least a log as described in the link above. Without either, it is hard for us to dig in.

@RanitBehera

Thanks for the quick response. Here are log files from when I reload VS Code and from when I comment out a line.

reload.txt
commentout.txt

On my local system Pylance is fast and takes about 300ms to start; however, on the SSH remote server it has been very slow for the last 3-4 days. It takes a long time to start, and then "1 file and 0 cells to analyze" shows up after every keystroke, including spaces. The linter doesn't update and the IntelliSense suggestion box doesn't appear until it's done, which takes about 5-6 seconds.

I don't know if the problem is from the "Python" and/or "Python Debugger" extensions, as they too take a long time to start on the SSH server compared to the local system.

Appreciate the help.

@willjhenry
Author

FWIW, I am also working in a remote Docker container.

@heejaechang
Contributor

@willjhenry Without any actual log or profile data, there is not much we can do to help from our side.

@RanitBehera In your case, it looks like SPM.py takes a long time for us to analyze.

2024-11-16 03:32:07.086 [info] (2808884) [BG(1)] analyzing: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py (35099ms)
 2024-11-16 03:34:00.998 [info] (2808884) [BG(1)]   checking: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py (11510ms)

If you do this and share the log with us, we might be able to see what causes this slowness during analysis.

https://github.com/microsoft/pylance-release/wiki/Collecting-data-for-an-investigation.#collecting-type-evaluation-related-data-in-the-log

Basically, you follow the same steps you used to collect the provided logs, but with these settings on:

set python.analysis.logTypeEvaluationTime to true
set python.analysis.typeEvaluationTimeThreshold to 500
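In settings.json those two lines would look like this (500 is just the starting threshold suggested above; lower it if too few entries show up in the log):

{
    "python.analysis.logTypeEvaluationTime": true,
    "python.analysis.typeEvaluationTimeThreshold": 500
}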

@RanitBehera

@heejaechang Here are the logs.

reload.txt
commentout.txt

Thanks.

@heejaechang
Contributor

So, this is the interesting part in the log:

2024-11-16 14:53:11.591 [info] (2929306) [BG(1)]   analyzing: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py ...
2024-11-16 14:53:11.591 [info] (2929306) [BG(1)]     checking: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py ...
2024-11-16 14:53:11.591 [info] (2929306) [BG(1)]       er [ImportFromAs '"FlatLambdaCDM"  [15:31]'  (..galspec.SPM) [15:31]] (904ms)
2024-11-16 14:53:13.188 [info] (2929306) [BG(1)]       tt ["self.star_bank.append((mass, a <shortened> " (..galspec.SPM) [42:9]] (696ms)
2024-11-16 14:53:15.793 [info] (2929306) [BG(1)]       tt ...
2024-11-16 14:53:15.794 [info] (2929306) [BG(1)]         rt ...
2024-11-16 14:53:15.794 [info] (2929306) [BG(1)]           rt ...
2024-11-16 14:53:15.794 [info] (2929306) [BG(1)]              ...
2024-11-16 14:53:15.794 [info] (2929306) [BG(1)]               rt ["open(self.filepath, "rb")" (..galspec.bpass) [186:14]] (599ms)
2024-11-16 14:53:15.891 [info] (2929306) [BG(1)]              [Function 'Read' (galspec.bpass)] (697ms)
2024-11-16 14:53:15.891 [info] (2929306) [BG(1)]           rt ["BPASSCache("cache/bpass_chab_3 <shortened> " (..galspec.SPM) [116:37]] (698ms)
2024-11-16 14:53:15.891 [info] (2929306) [BG(1)]         rt ["_SPMPixel._spec_cache" (..galspec.SPM) [115:12]] (699ms)
2024-11-16 14:53:15.891 [info] (2929306) [BG(1)]       tt ["_SPMPixel._spec_cache == None" (..galspec.SPM) [115:12]] (699ms)
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]       er ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]          ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]           rt ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]             rt ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                 tr ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                   er ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                     hr ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                       hr ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                         kn ...
2024-11-16 14:53:19.999 [info] (2929306) [BG(1)]                            [Function '_locate' (mpl_toolkits.axes_grid1.axes_divider)] (507ms)
2024-11-16 14:53:20.086 [info] (2929306) [BG(1)]                         kn ["functools.partial(self._locate <shortened> " (..mpl_toolkits.axes_grid1.axes_divider) [197:19]] (595ms)
2024-11-16 14:53:20.086 [info] (2929306) [BG(1)]                       hr ["functools.partial(self._locate <shortened> " (..mpl_toolkits.axes_grid1.axes_divider) [197:19]] (595ms)
2024-11-16 14:53:20.086 [info] (2929306) [BG(1)]                     hr ["functools.partial(self._locate <shortened> " (..mpl_toolkits.axes_grid1.axes_divider) [197:19]] (595ms)
2024-11-16 14:53:20.087 [info] (2929306) [BG(1)]                   er ["locator" (..mpl_toolkits.axes_grid1.axes_divider) [197:9]] (597ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]                 tr ["locator" (..mpl_toolkits.axes_grid1.axes_divider) [197:9]] (597ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]                [Function 'new_locator' (mpl_toolkits.axes_grid1.axes_divider)] (598ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]             rt ["divider.new_locator" (..mpl_toolkits.axes_grid1.axes_divider) [672:15]] (599ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]           rt ["axes.set_axes_locator" (..mpl_toolkits.axes_grid1.axes_divider) [673:5]] (599ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]          [Function 'make_axes_locatable' (mpl_toolkits.axes_grid1.axes_divider)] (599ms)
2024-11-16 14:53:20.088 [info] (2929306) [BG(1)]       er [Assignment (..galspec.SPM) [461:9]] (600ms)
2024-11-16 14:53:22.596 [info] (2929306) [BG(1)]     checking: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py (12410ms)

One thing I noticed is that SPM.py took 12410ms in total, but only a few entries are over 500ms, so 500 might be too high a threshold for your case. We might want to try a lower number like 300 or 250.

That said, it shows some parts of your code where Pylance can benefit if you add type annotations explicitly, since Pylance then doesn't need to spend time inferring those types (hence improving performance on your code).

"_SPMPixel._spec_cache" (..galspec.SPM) [115:12]

What this means is that at line 115, column 12 of the module whose path ends with ..galspec.SPM (the SPM.py file), the expression _SPMPixel._spec_cache took 699ms to resolve its type information. If you explicitly annotate it with a specific type, that ~700ms will go away.

It is the same for "axes.set_axes_locator" (..mpl_toolkits.axes_grid1.axes_divider) [673:5].

Most of the time there are a few offenders, and once you explicitly add type annotations for them, the perf issue will go away. You might be able to identify the common offenders once you lower the threshold to 250 and get the log again.
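As a purely hypothetical sketch (the real _SPMPixel class is not shown in this thread, so the cache type below is an assumption), an explicit annotation on that attribute could look like this:

from typing import Optional

class _SPMPixel:
    # Before: _spec_cache = None
    # Pylance has to infer this attribute's type from every assignment it finds.
    # With an explicit annotation, the type is known up front and the ~700ms
    # spent resolving it at [115:12] is avoided.
    _spec_cache: Optional[dict] = None  # assumed type; use whatever the cache really holds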

...

That said, if you can provide us with the actual code, we might be able to identify patterns that need improvement. This could allow us to enhance performance without requiring users to explicitly add type annotations. If you're able to share your code, please let us know.

@willjhenry
Author

OK, here are my cpuprofile files. One is quite large, ~125 MB. Is that expected? This was a pretty short example: I just deleted a variable definition and then added it back. Here is a link to a Google Drive folder with the files: cpuprofile files

@RanitBehera

@heejaechang I have tested the same code on my local system and Pylance is incredibly fast. It's only slow on my university server, which was still fast a week ago. I will ask the university system admin whether they changed any configuration recently. I will update here if the issue is not from their side.

Thanks for the suggestions on type annotations.

@heejaechang
Contributor

@RanitBehera I was wondering why some of the parsing was so slow, such as these:

parsing: file:///mnt/home/student/cuser/USER/Repo/galspy/src/galspec/SPM.py (5399ms)
parsing: file:///mnt/home/student/cuser/USER/Repo/galspy/console/commands/mpgadget_run/csnap.py [fs read 95ms] (795ms)
parsing: file:///mnt/home/student/cuser/.vscode-server/extensions/ms-python.vscode-pylance-2024.11.2/dist/typeshed-fallback/stdlib/builtins.pyi (3602ms)

and more

I guess slow access to the server might have been the reason why the parsing was so slow.

That said, once the file is read and parsed, it is cached in memory, so the slowness should only occur once (until the cache is dumped). However, the type evaluation performance should remain consistent since we need to re-evaluate after changes (such as typing).

Adding type annotations should improve type eval performance.

@heejaechang
Contributor

@willjhenry I looked at your profile. Most of the slowness was coming from type evaluation.

As you can see in the two attached profile screenshots.

So, the same things I said to Ranit (#6658 (comment) and #6658 (comment)) should apply to you as well.

If you can provide me the log with the settings on, I can help you pinpoint the locations in your code where explicit type annotations would help Pylance. Or, if we can get our hands on your code, we could improve our type evaluator to better handle your case/code pattern.

@willjhenry
Author

willjhenry commented Nov 18, 2024

Got it, I will try that.

Also, to be clear, you are saying explicit type hinting in function signatures would speed up Pylance? E.g.:

def add_numbers(a: int, b: int) -> int:
    return a + b

@heejaechang
Contributor

heejaechang commented Nov 18, 2024

Yes. When they are missing, Pylance will try to figure the types out by analyzing the code, such as the function body. For example, if add_numbers were missing type hints, then for add_numbers(10, 20). (completing after the dot here), Pylance would look at the function body and arguments, determine that the return type is int, and show suggestions for int. Most of the time this is fast, but the function body/expressions could be quite complex, or the pattern might be something we can optimize further. In such cases, adding type hints explicitly will help Pylance analyze the code more effectively.
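A minimal illustration of the difference (the function names here are just for the example):

# Without hints, Pylance has to analyze the body and the call-site arguments
# before it can show int members after the dot in add_numbers_inferred(10, 20).
def add_numbers_inferred(a, b):
    return a + b

# With explicit hints, the signature alone answers completion and hover queries.
def add_numbers(a: int, b: int) -> int:
    return a + b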

@willjhenry
Author

I have these settings on, and will collect more logs:

set python.analysis.logTypeEvaluationTime to true
set python.analysis.typeEvaluationTimeThreshold to 500

@willjhenry
Author

I also note that I am getting a lot of these "Emptying type cache to avoid heap overflow" messages, for example:

2024-11-24 19:01:15.001 [info] [Error - 7:01:15 PM] Request workspace/symbol failed.
2024-11-24 19:01:15.001 [info] Message: Request workspace/symbol failed with message: Maximum call stack size exceeded
Code: -32603
2024-11-24 19:01:47.851 [info] [Info - 7:01:47 PM] (18308) Emptying type cache to avoid heap overflow. Used 1800MB out of 2096MB.
2024-11-24 19:01:48.161 [info] [Info - 7:01:48 PM] (18308) Emptying type cache to avoid heap overflow. Used 1807MB out of 2096MB.

@willjhenry
Author

I apologize if this is the wrong thread, but I am often stuck on this message when trying to save a file:

"Saving 'create_oregon_people_competition_surveys.py': Getting code actions from ''Python', 'Ruff', 'ESLint', 'Jupyter', 'cursor-retrieval." Could this be Pylance?

I did attempt to collect a cpuprofile, but after stopping, I did not see the popup that indicates where the file is. I will try again.

@willjhenry
Author

I reloaded the window and was able to profile the save command. It ran much quicker this time, so it may not show the problem, and it may not even be a Pylance issue.

pylance_2024.8.1_bg_1_id_0.cpuprofile
pylance_2024.8.1_bg_2_id_0.cpuprofile
pylance_2024.8.1_fg_id_0.cpuprofile

@willjhenry
Author

@heejaechang
I have collected another example. Would you mind taking a look?

https://drive.google.com/file/d/1u2b4cPuJKKem7-UvDmh61-JtyH19R5R_/view?usp=drive_link
