
Implement engine="thread" in ChunkRecordingExecutor #3526

Open. Wants to merge 15 commits into base: main

Conversation

@samuelgarcia (Member) commented Nov 8, 2024

@h-mayorquin @zm711 @alejoe91
I have wanted to do this for a long time: implement a thread engine for ChunkRecordingExecutor.

Now we can do:

# this uses threads
job_kwargs = dict(chunk_duration="1s", pool_engine="thread", n_jobs=8, progress_bar=True)
# this uses processes
job_kwargs = dict(chunk_duration="1s", pool_engine="process", n_jobs=8, progress_bar=True)

Maybe this will help Windows users a lot and will avoid using "spawn" for multiprocessing (which is too slow at startup).
My bet is that both the IO (reading data from disk) and the computation inside the compiled libs release the GIL, so threads should be good enough for parallel computing. Let's see.
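For illustration, a minimal usage sketch of how these job_kwargs would be consumed (set_global_job_kwargs, generate_recording and recording.save already exist in spikeinterface; the pool_engine key is what this PR adds, and the folder name is arbitrary):

import spikeinterface.full as si

job_kwargs = dict(chunk_duration="1s", pool_engine="thread", n_jobs=8, progress_bar=True)

# set them globally for all chunked operations ...
si.set_global_job_kwargs(**job_kwargs)

# ... or pass them explicitly to a single chunked call, e.g. saving to binary
rec = si.generate_recording(num_channels=4, durations=[10.0])
rec.save(folder="rec_binary", **job_kwargs)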

Here is a very first draft as a proof of concept (without any real tests).

TODO:

  • add the thread engine
  • add a worker_index mechanism in job_tools.py (this will be useful for multi-GPU)
  • rename max_threads_per_process to max_threads_per_worker. See note below.
  • discuss renaming n_jobs to n_worker, num_worker or max_worker (Alessio will be unhappy)
  • make a get_best_job_kwargs() that will be platform dependent (see the sketch after this list).
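A minimal sketch of what a platform-dependent get_best_job_kwargs() could return, based on the observations in this thread (fork is cheap on Linux, spawn is slow on Windows/macOS); the exact defaults are assumptions, not the final implementation:

import os
import platform

def get_best_job_kwargs():
    # hypothetical heuristics, not the final implementation
    n_cpu = os.cpu_count() or 1
    if platform.system() == "Linux":
        # fork starts workers cheaply, so processes are usually fastest
        return dict(pool_engine="process", mp_context="fork", n_jobs=n_cpu, chunk_duration="1s")
    else:
        # Windows/macOS must spawn, which is slow to start, so prefer threads
        return dict(pool_engine="thread", n_jobs=n_cpu, chunk_duration="1s")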

When using the thread engine we will need to disambiguate the concept of a worker, which can be a thread, from max_threads_per_worker, which relates to the hidden threads used by computing libs like numpy/scipy/BLAS/LAPACK/sklearn.
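For reference, one common way to cap those hidden threads inside each worker is threadpoolctl (assuming it is installed); this only illustrates the concept behind max_threads_per_worker, it is not necessarily how the PR implements it:

from threadpoolctl import threadpool_limits

max_threads_per_worker = 1  # hypothetical value

def process_chunk(chunk):
    # limit BLAS/OpenMP/MKL pools while this worker handles its chunk, so that
    # n_jobs workers x hidden library threads stays under control
    with threadpool_limits(limits=max_threads_per_worker):
        ...  # numpy/scipy heavy computation on the chunk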

EDIT:

  • on Linux, threads are a bit slower than processes with fork, but not by much.

Important notes:
Many ProcessPoolExecutor usages are hard-coded in several places (principal_components, split, merge...).
We should also convert these, but that will be a separate PR, because those parallelizations are not along the recording axis.
Note that the tricky part shared between loops, threads and processes is initializing variables into a private dict per worker. This is done in ChunkRecordingExecutor, but it should be propagated to other use cases that do not need recording slices (see the sketch below).
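A minimal sketch of the per-worker private dict idea for both engines (names like init_func and make_executor are illustrative, not the actual ChunkRecordingExecutor code):

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import multiprocessing
import threading

# thread engine: each worker thread keeps its private dict in a threading.local
_thread_local = threading.local()
# process engine: each worker process keeps its private dict in a module-level global
_worker_ctx = None

def _init_worker(init_func, init_args, use_thread_local):
    global _worker_ctx
    ctx = init_func(*init_args)  # e.g. open the recording, allocate buffers
    if use_thread_local:
        _thread_local.ctx = ctx
    else:
        _worker_ctx = ctx

def make_executor(pool_engine, n_jobs, init_func, init_args):
    if pool_engine == "thread":
        # the initializer runs once per worker thread
        return ThreadPoolExecutor(
            max_workers=n_jobs,
            initializer=_init_worker, initargs=(init_func, init_args, True),
        )
    else:
        # the initializer runs once per worker process
        return ProcessPoolExecutor(
            max_workers=n_jobs,
            mp_context=multiprocessing.get_context("spawn"),  # or "fork" on Linux
            initializer=_init_worker, initargs=(init_func, init_args, False),
        )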

@alejoe91 added the core (Changes to core module) and concurrency (Related to parallel processing) labels Nov 11, 2024
@samuelgarcia (Member, Author)

After a long fight this is now working.
The thread pool was making the progress_bar freeze randomly.
I found a simple trick, but it took a long time.

@samuelgarcia (Member, Author)

@zm711: if you have time, could you run some tests on Windows?
I will also do it.

@zm711 (Collaborator) commented Nov 20, 2024

@zm711: if you have time, could you run some tests on Windows?

Yep. Will do once tests pass :)

@samuelgarcia marked this pull request as ready for review November 20, 2024 13:35
@samuelgarcia (Member, Author)

@alejoe91 @zm711 @chrishalcrow @h-mayorquin: ready for review.

Also, we should discuss changing n_jobs to something else.
"job" is ambiguous; "worker" is better.
This naming comes from joblib, but I think the semantics are unclear.
A "job" is more like something pushed to slurm.
In my mind job = task, but NOT the number of people working through the task queue (they are workers). No?

@alejoe91 (Member)

The failing tests are really weird. The arrays that are reported as not equal look exactly the same value-wise.

@zm711 (Collaborator) left a review comment

A few tweaks as well.

I haven't looked deeply, but since Windows and Mac have a different mp context by default, do we think that could be contributing, or is this just a floating point issue?

(three review comments on src/spikeinterface/core/job_tools.py, outdated and resolved)
@zm711 (Collaborator) commented Nov 20, 2024

The script, in case people want to know:

import spikeinterface.full as si
sorting, rec = si.generate_ground_truth_recording()
analyzer = si.create_sorting_analyzer(rec, sorting)
analyzer.compute(['random_spikes', 'waveforms'])
%timeit analyzer.compute(['principal_components']) 
# or
%timeit analyzer.compute(['principal_components'], n_jobs=2)

Okay, with default settings on this branch, using generate_ground_truth_recording:

n_jobs=1
67 ms ± 4.35 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=2
47.9 s ± 971 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# explicitly setting pool_engine='thread'
48.4 s ± 1.23 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

Compared to going back to main:

n_jobs=1
78.3 ms ± 27.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=2
48.6 s ± 890 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

I'm thinking this might be deeper than our code. Let me know if you want a different type of test, Sam. For real data with n_jobs we often have PCA take 1-2 hours, so I'm not currently testing with real data. Do you want me to play around with the n_jobs number and threads?

@samuelgarcia (Member, Author)

Hi Zach and Alessio. Thanks for testing. PCA is not the best parallelization we have;
for me detect_peaks or save() are better illustrations. But this is OK.

Yes, these failing tests are super strange, because it is a floating point issue that differs between multiprocessing and threads. This is hard to figure out.

As a side note: I modified the waveforms_tools test that estimates templates with both thread and process, and they do not give the same results. I need to set decimals=4 for the almost-equal check, which is very easy to pass. I did not expect this. Maybe there is something deep in the parallelization that can change the result.
Let's discuss this on a call.
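To make the tolerance concrete, this is the kind of check being relaxed (a standalone illustration with made-up arrays, not the actual test code):

import numpy as np

rng = np.random.default_rng(0)
# stand-ins for templates estimated with the process and thread engines
templates_process = rng.normal(size=(10, 90, 4)).astype("float32")
templates_thread = templates_process + 1e-5  # tiny accumulation difference

# passes with decimal=4 even though the arrays are not bit-identical
np.testing.assert_array_almost_equal(templates_process, templates_thread, decimal=4)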

@zm711 (Collaborator) commented Nov 21, 2024

I can test save at some point this week on Windows.

@samuelgarcia (Member, Author)

Tests are passing now.

@zm711 (Collaborator) commented Nov 22, 2024

I forgot... easy testing of save is not really possible on Windows, due to the inability to overwrite files still open in the process, so I can't use timeit. I will need to write a script. I can try detect_peaks though.

@zm711 (Collaborator) commented Nov 22, 2024

Okay, detect_peaks:

On main:

n_jobs=1
43.6 ms ± 335 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=2
3.86 s ± 361 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

With this PR:

n_jobs=2
164 ms ± 3.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=2, and explicitly setting pool_engine='thread'
162 ms ± 515 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

So an improvement but still worse than not using multiprocessing.
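A sketch of the kind of IPython session behind these timings (the detection method and recording parameters here are placeholders, not necessarily what was actually run):

import spikeinterface.full as si
from spikeinterface.sortingcomponents.peak_detection import detect_peaks

rec = si.generate_recording(num_channels=4, durations=[10.0], sampling_frequency=25000.0)

%timeit detect_peaks(rec, method="by_channel", n_jobs=1)
%timeit detect_peaks(rec, method="by_channel", n_jobs=2, chunk_duration="1s")
%timeit detect_peaks(rec, method="by_channel", n_jobs=2, chunk_duration="1s", pool_engine="thread")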

@zm711 (Collaborator) commented Nov 22, 2024

Now trying a longer simulated recording, to see if we get a big difference at a more realistic length.

The object is:

GroundTruthRecording: 4 channels - 25.0kHz - 1 segments - 250,000,000 samples
                      10,000.00s (2.78 hours) - float32 dtype - 3.73 GiB
n_jobs=1
43.6 s ± 329 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=4, pool_engine='thread'
1min 21s ± 821 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

@zm711 (Collaborator) commented Nov 22, 2024

Now checking what happens if we have to deal with more channels:

GroundTruthRecording: 64 channels - 25.0kHz - 1 segments - 100,000,000 samples
                      4,000.00s (1.11 hours) - float32 dtype - 23.84 GiB
n_jobs=1
3min 51s ± 2.79 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
n_jobs=4, pool_engine='thread'
1min 45s ± 564 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So we finally found the spot where threading helps. As channel count increases we even get a benefit from n_jobs>1 with 'thread'. Cool. I guess we could set something up to try to find these kinds of conditions, to help people choose when to use multiprocessing for some of these steps on Windows (and maybe Mac, which I haven't tested yet).

@samuelgarcia (Member, Author)

Hi Zach,
thanks again for your time.

These results are a bit disappointing... I would expect a better speedup.
How many cores do you have on this Windows machine?
I will also do some tests.

@zm711 (Collaborator) commented Nov 22, 2024

Hi Sam,
I have 8 cores on this computer (it's a few years old at this point). Switching to n_jobs=8 with pool_engine='thread' looks like it will take 1 min 14 s to do detect_peaks on a 64-channel, 1-hour simulated recording. So the benefit doesn't seem to be scaling. Do you want me to run some tests changing the number of threads too?

@zm711 (Collaborator) left a review comment

Just a few more comments that can be cleaned up now that the big push for this PR is done.

I still think we should find a way, either in the docs or maybe some sort of CI, to help determine when users of specific OSs should use n_jobs>1. It seems like different functionality will depend on this in different ways (e.g. detect_peaks cares about channel count), so this thought isn't fully formed yet.

EDIT: just ran quick short tests (so I might be missing time effects), but it seems like anything over 8 channels is improved by threading. Again maxing out at roughly a 2.1x improvement, but that's still something.

(seven review comments on src/spikeinterface/core/job_tools.py, outdated and resolved)
@samuelgarcia (Member, Author)

Thanks Zach.

How to document this is a big topic. The parallelism has been quite empirical until now. On a big Linux workstation it is faster with many channels and many cores, but it does not scale: we reach a speed plateau quite soon.
We should have a benchmark script somewhere that can generate some figures from time to time, to be injected into the docs.
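Something along these lines could be a starting point for such a benchmark script (a rough sketch; the parameter grid and the detection method are placeholders):

import time
import spikeinterface.full as si
from spikeinterface.sortingcomponents.peak_detection import detect_peaks

results = {}
for num_channels in (8, 32, 64):
    rec = si.generate_recording(num_channels=num_channels, durations=[60.0], sampling_frequency=25000.0)
    for engine, n_jobs in (("loop", 1), ("process", 4), ("thread", 4)):
        job_kwargs = dict(n_jobs=n_jobs, chunk_duration="1s", progress_bar=False)
        if n_jobs > 1:
            job_kwargs["pool_engine"] = engine
        t0 = time.perf_counter()
        detect_peaks(rec, method="by_channel", **job_kwargs)
        results[(num_channels, engine)] = time.perf_counter() - t0

# these timings could then be turned into figures for the docs
for (num_channels, engine), duration in results.items():
    print(f"{num_channels} channels, {engine}: {duration:.2f} s")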

@yger (Collaborator) commented Nov 27, 2024

@samuelgarcia: I'm testing it and I have a bug:

SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)

File "/media/cure/Secondary/pierre/softwares/spikeinterface/src/spikeinterface/sorters/internal/spyking_circus2.py", line 190, in _run_from_folder
prototype = get_prototype(
^^^^^^^^^^^^^^
File "/media/cure/Secondary/pierre/softwares/spikeinterface/src/spikeinterface/sorters/internal/spyking_circus2.py", line 428, in get_prototype
res = detect_peaks(
^^^^^^^^^^^^^
File "/media/cure/Secondary/pierre/softwares/spikeinterface/src/spikeinterface/sortingcomponents/peak_detection.py", line 133, in detect_peaks
outs = run_node_pipeline(
^^^^^^^^^^^^^^^^^^
File "/media/cure/Secondary/pierre/softwares/spikeinterface/src/spikeinterface/core/node_pipeline.py", line 586, in run_node_pipeline
processor.run(recording_slices=recording_slices)
File "/media/cure/Secondary/pierre/softwares/spikeinterface/src/spikeinterface/core/job_tools.py", line 538, in run
recording_slices2 = [(thread_local_data, ) + args for args in recording_slices]
~~~~~~~~~~~~~~~~~~~~~~^~~~~~
TypeError: unsupported operand type(s) for +: '_thread._local' and 'int'

@yger (Collaborator) commented Nov 27, 2024

It works only if you modify line 537 of job_tools.py to:

recording_slices2 = [(thread_local_data, ) + tuple(args) for args in recording_slices]

(you need to add the tuple())

@chrishalcrow (Collaborator)

Hello, just some quick benchmarking on an M4 Mac, doing detect_peaks on a 96-channel, 20-minute recording (just once per set-up):

THIS PR:
n_jobs = 1 : 22s
n_jobs = 4, "thread" : 26s
n_jobs = 4, "spawn" : 12s

CURRENT MAIN:
n_jobs = 1 : 26s
n_jobs = 4 : 11s

So the "thread" option doesn't seem to be working as expected for this system.

@yger (Collaborator) commented Nov 28, 2024

I did some benchmarks on Linux, and while it is slower than process, this is working and we do get a speedup. However, @samuelgarcia, I think you should also extend the possibilities to get_poolexecutor() in job_tools.py. Some functions in the components rely on it, and maybe we should propagate the thread option there as well.
