Document performance considerations? #125
I think a vectorized version of `bw.values` would be much better, e.g. one that returns a list of numpy arrays without iterating over the intervals in a loop. But I guess this is not implemented yet.
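For context, the per-interval loop the comment hopes to avoid looks roughly like this. The helper name `batch_values` and the interval layout are illustrative; `bw.values(chrom, start, end, numpy=True)` is pyBigWig's per-interval call, and the `numpy=True` keyword is only available when pyBigWig was built with numpy support.

```python
import numpy as np

def batch_values(bw, intervals):
    """Collect per-base values for many genomic intervals from an
    open bigWig handle (hypothetical helper; `bw` is whatever
    pyBigWig.open() returns). Returns a list of numpy arrays,
    one per (chrom, start, end) interval -- but still loops in
    Python, which is the overhead a vectorized bw.values would avoid.
    """
    return [np.asarray(bw.values(chrom, start, end, numpy=True))
            for chrom, start, end in intervals]
```

Usage would be `batch_values(pyBigWig.open("signal.bw"), [("chr1", 0, 100), ...])`, assuming `signal.bw` exists.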
@dpryan79 what is the fastest way to get arrays of values from a bigWig file for each of many genomic intervals (i.e. entries in a BED file)?
For others, I found a better solution for the above-described task was to use the `bigWigAverageOverBed` tool from UCSC.
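A sketch of consuming that tool's output in Python, assuming the documented tab-separated column order (name, size, covered, sum, mean0, mean); the `parse_bigwig_average` helper and `BedAverage` type are illustrative, not part of pyBigWig or the UCSC tools.

```python
from typing import NamedTuple

class BedAverage(NamedTuple):
    name: str      # BED item name
    size: int      # item length in bases
    covered: int   # bases covered by the bigWig
    sum: float     # sum of values over covered bases
    mean0: float   # average, counting uncovered bases as zero
    mean: float    # average over covered bases only

def parse_bigwig_average(lines):
    """Parse bigWigAverageOverBed output lines into BedAverage records,
    skipping blank and comment lines."""
    records = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, size, covered, total, mean0, mean = line.split("\t")
        records.append(BedAverage(name, int(size), int(covered),
                                  float(total), float(mean0), float(mean)))
    return records
```

This trades per-base arrays for per-interval summaries, which is often all a BED-driven analysis needs.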
I'd like to use pyBigWig to collect values at many intervals from many bigWigs, and I'd love to know what's performant.

and

If the former is optimal, is there any advantage to the `intervals` being sorted? Do you know the relative performance of pyBigWig `entries()` queries of bigBed files versus tabix queries of gzipped BED files?
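One way to answer the entries-versus-tabix question empirically is a small timing harness like the sketch below. It times any per-interval query callable; in practice `query` would wrap `bb.entries(chrom, start, end)` on a pyBigWig bigBed handle or `tbx.fetch(chrom, start, end)` on a pysam `TabixFile`, but neither library is invoked here, so those wrappings are assumptions.

```python
import time

def time_queries(query, intervals, repeats=3):
    """Return the best-of-`repeats` wall-clock time (seconds) to run
    `query(chrom, start, end)` over every interval in `intervals`.

    Taking the minimum over repeats reduces noise from caching and
    scheduling, which matters when comparing two query backends.
    """
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for chrom, start, end in intervals:
            query(chrom, start, end)
        best = min(best, time.perf_counter() - t0)
    return best
```

Running it once with the intervals in file order and once sorted by (chrom, start) would also show whether sorting helps a given backend.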