Releases: vanvalenlab/deepcell-tracking
0.6.5
🐛 Bug Fixes
Add epsilon to standard deviation normalization @msschwartz21 (#119)
Cast data to correct type in `get_image_features` @msschwartz21 (#114)
This PR fixes a bug in `get_image_features`. If the X data is passed in with an integer type (instead of float), the output with `crop_mode='fixed'` and `norm=True` is incorrect. In the examples below, the first image is incorrect while the second is correct.
This PR eliminates the bug by casting X data to float32 and y data to int32 to avoid incorrect use of the function.
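The interaction of the two fixes above can be sketched as follows. This is an illustrative `normalize` helper, not the library's implementation: it shows why an integer-typed crop must be cast to float before standardization, and why an epsilon guards against a zero standard deviation.

```python
import numpy as np

def normalize(crop):
    # Without the cast, integer inputs lose fractional information in
    # (crop - mean) / std; an all-constant crop gives std == 0 and a
    # division-by-zero warning.
    crop = crop.astype('float32')   # cast to float first (cf. #114)
    eps = 1e-7                      # epsilon guards std == 0 (cf. #119)
    return (crop - crop.mean()) / (crop.std() + eps)

raw = np.array([[5, 5], [5, 5]], dtype=np.uint8)  # constant integer crop
out = normalize(raw)
```

With the cast and epsilon in place, the constant crop normalizes cleanly to zeros instead of raising a divide-by-zero warning.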
🧰 Maintenance
Bump version to 0.6.5 @msschwartz21 (#120)
Bump default action versions. @rossbar (#116)
Bump action versions to keep CI current, cf. vanvalenlab/deepcell-tf#653
Reuse indexed array to avoid extra array copies. @rossbar (#115)
A minor change that should improve performance. Unfortunately NumPy isn't smart enough to cache indexing results for reuse, so each `app[idx]` invocation of advanced indexing actually creates a new array (as opposed to a view), resulting in both more memory usage and additional computation time for each allocation. There should be a significant performance boost from performing the advanced indexing operation only once and reusing the result.
Not critically important - just something I noticed while reviewing #114!
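The copy-vs-view behavior described above can be demonstrated in a few lines (generic `arr`/`idx` names here, not the library's own variables): integer-array indexing always materializes a new array, so indexing once and reusing the result avoids repeated gathers.

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.random((10000, 64))
idx = rng.integers(0, 10000, size=5000)

# Before: each arr[idx] is a fresh copy, so three gathers happen here.
total = arr[idx].sum() + arr[idx].mean() + arr[idx].max()

# After: perform the advanced indexing once and reuse the result.
sub = arr[idx]  # a single gather (a copy, never a view)
total_fast = sub.sum() + sub.mean() + sub.max()
```

Both forms give the same answer; the second simply allocates one intermediate array instead of three.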
Lint with ruff @rossbar (#113)
Add the ruff linter to the development workflow.
Includes basic configuration as well as some minor tweaks/improvements to pass the linter. The configuration is minimal and reflects the original `pytest-pep8` configuration.
Fix failures related to expired usage patterns in dependencies @rossbar (#112)
Removes the global warnings filter for deprecation warnings in the test suite and fixes the issues that appear as a result.
📚️ Documentation
Add references for the AA and TE scores @msschwartz21 (#118)
0.6.4
🚀 Features
Add support for ISBI inputs to the metrics package @msschwartz21 (#107)
This addition to the metrics package makes it possible to directly input ISBI-style outputs into our metrics pipeline. This simplifies the process of benchmarking against competitor models, which tend to output ISBI-style tracks.
🐛 Bug Fixes
Implement correct usage of crop parameter in CellTracker @msschwartz21 (#108)
- Corrects an error where the crop parameter was not being set during tracking and inference
- Removes deprecated post-processing functions from the `CellTracker`
🧰 Maintenance
0.6.3
🐛 Bug Fixes
Correct keys in `correct_shifted_divisions` @msschwartz21 (#105)
Different key names were used in `correct_shifted_divisions`, which caused issues in some downstream analyses in the model-registry. This PR standardizes all keys in the metrics package to avoid confusion.
0.6.2
🐛 Bug Fixes
Classify divisions that are +/- 1 frame as correct @msschwartz21 (#103)
Recent reviews of tracking predictions have identified a failure mode in the current metrics package. Different segmentation predictions can sometimes lead to a cell dividing one frame before or after the frame assigned to the division in the ground truth. Currently this leads to that division counting as both a false positive and a missed division. This PR introduces a new metrics function that identifies these events and corrects the metrics to classify the division as correct. Additionally, a new metrics class for tracking (`TrackingMetrics`) has been introduced to coordinate running all of the necessary metrics functions.
In the current test split, applying the new metrics pipeline led to the following changes in metrics:
| Metric | Old | New |
|---|---|---|
| Total divisions | 181 | 181 |
| Correct divisions | 139 | 154 |
| False negative divisions | 27 | 13 |
| False positive divisions | 40 | 26 |
| Mismatch divisions | 15 | 14 |
| Division recall | 0.84 | 0.92 |
| Division precision | 0.78 | 0.86 |
| Division F1 | 0.81 | 0.89 |
| Mitotic branching correctness | 0.67 | 0.80 |
| Fraction missed divisions | 0.15 | 0.07 |
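The tolerance rule described above can be sketched as a small classifier. The function name and return labels here are illustrative, not the library's API: a predicted division within one frame of its matched ground-truth division now counts as correct rather than as a false positive plus a missed division.

```python
def classify_division(gt_frame, pred_frame, tolerance=1):
    """Classify a matched division by comparing its predicted frame
    against the ground-truth frame, allowing +/- `tolerance` frames."""
    if pred_frame is None:
        return 'missed'                  # no predicted division at all
    if abs(gt_frame - pred_frame) <= tolerance:
        return 'correct'                 # exact or off-by-one frame
    return 'mismatch'                    # too far from the true frame

assert classify_division(10, 10) == 'correct'   # exact match
assert classify_division(10, 11) == 'correct'   # one frame late: now correct
assert classify_division(10, 13) == 'mismatch'  # outside the tolerance
```

The shift from "correct only on the exact frame" to this windowed comparison is what moves divisions out of the false positive and false negative columns in the table above.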
Fix bug in calculation of fraction missed divisions @msschwartz21 (#102)
Closes #100
🧰 Maintenance
Bump to version 0.6.2 @msschwartz21 (#104)
0.6.1
🧰 Maintenance
Bump to 0.6.1 @msschwartz21 (#99)
Enable option for fixed size crops in get_image_features @vanvalen (#98)
What
Add a flag to `get_image_features` to allow fixed-size crops rather than crop-and-resize.
Why
Crop-and-resize removes information about cell size, which is useful for cell tracking and for learning dynamic representations of cell behavior.
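The difference can be sketched with a hypothetical `fixed_crop` helper (illustrative, not the library function): a fixed-size window centered on the cell keeps a small cell small and a large cell large, whereas resizing every crop to one shape would erase that size information.

```python
import numpy as np

def fixed_crop(image, center, size=32):
    """Take a size x size window centered on the cell,
    zero-padded at the image edges."""
    half = size // 2
    padded = np.pad(image, half)             # pad so edge cells fit
    r, c = center[0] + half, center[1] + half
    return padded[r - half:r + half, c - half:c + half]

img = np.zeros((100, 100), dtype=np.float32)
img[40:50, 40:50] = 1.0                      # a 10 x 10 "cell"
crop = fixed_crop(img, center=(45, 45))
```

Every crop has the same output shape, but the fraction of foreground pixels inside it still reflects the true cell size.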
0.6.0
🚀 Features
Update metrics for evaluating tracking performance @msschwartz21 (#95)
This PR introduces several substantial changes:
- Reorganization of functions with the addition of two new modules: `metrics` and `trk_io`. Backwards compatible imports were maintained whenever possible.
  - `load_trks`, `trk_folder_to_trks`, `save_trks`, `save_trk`, `save_track_data` moved from `utils` to `trk_io`
  - `match_nodes`, `contig_tracks` moved from `isbi_utils` to `utils`
  - `classify_divisions`, `calculate_summary_stats` moved from `isbi_utils` to `metrics`
  - `benchmark_division_performance` deprecated in `isbi_utils` and renamed to `benchmark_tracking_performance` in `metrics`
- Fixes bugs in how we built graphs of tracks and compared between ground truth and predictions
  - Originally we converted lineage data to ISBI format prior to generating a graph. This intermediate step unintentionally removed any discontinuities that were present in a lineage. There is now a new function, `deepcell_tracking.utils.trk_to_graph`, that faithfully converts lineage data to a graph without any intermediate steps.
  - The use of a `node_key` generated by `match_nodes` unintentionally dropped lineages if more than one predicted lineage was mapped onto a single ground truth lineage. Instead of mapping cell ids when we create the graph, we now map cell ids on the fly when comparing graphs, which eliminates the risk of accidentally dropping lineages from consideration.
- Introduces Association Accuracy as a new metric that evaluates how many edges in the tracking graph are correctly assigned. This score discounts edges involved with a division, but does detect discontinuities in lineages.
- Introduces Target Effectiveness as a new metric that evaluates how many cells in a lineage are correctly assigned to the lineage.
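The edge-counting idea behind the first of these scores can be illustrated with plain sets (this is a simplified sketch, not the library's implementation, and the node naming is made up): score the fraction of ground-truth tracking edges, after discounting division edges, that also appear in the prediction.

```python
def association_score(gt_edges, pred_edges, division_edges=()):
    """Fraction of non-division ground-truth edges found in the prediction."""
    scored = set(gt_edges) - set(division_edges)  # discount division edges
    correct = scored & set(pred_edges)
    return len(correct) / len(scored)

# Edges link cell nodes across consecutive frames, e.g. '1_0' -> '1_1'
# is cell 1 from frame 0 to frame 1.
gt = [('1_0', '1_1'), ('1_1', '1_2'), ('2_0', '2_1')]
pred = [('1_0', '1_1'), ('2_0', '2_1')]  # one edge missing: a gap in track 1
score = association_score(gt, pred)
```

A missing edge, i.e. a discontinuity in a lineage, lowers the score even when every cell is otherwise assigned to the right track, which is exactly the behavior the notes describe.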
🧰 Maintenance
Bump version to 0.6.0 @msschwartz21 (#97)
0.5.7
🧰 Maintenance
Drop support for python 3.6 and bump deepcell-toolbox requirement @msschwartz21 (#94)
Updates deepcell-toolbox to ~=0.11.2.
0.5.6
🐛 Bug Fixes
Add additional metrics to report in classify_divisions @msschwartz21 (#92)
Addresses #91 and bumps version to 0.5.6 for the next patch release after this PR is complete. I tested the new functionality in the model-registry and those updates can be seen in this branch: https://github.com/vanvalenlab/model-registry/compare/mrgn/tracking-evaluation. I added rounding after running this test, but decimals will now be truncated to 2 digits.
🧰 Maintenance
Bump copyright to 2022 and improve error message @msschwartz21 (#90)
0.5.5
🐛 Bug Fixes
Update `utils.trk_stats` to return a dictionary of stats as output @msschwartz21 (#89)
The `trk_stats` function originally printed stats, but did not return them to the user. This PR updates the function to return a dictionary of statistics and makes it possible to input `X`, `y`, and `lineages` instead of loading the data from `filename`.
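The shape of the interface change can be sketched as follows. The function body and dictionary keys here are illustrative, not the actual keys `trk_stats` returns: the point is accepting in-memory data and returning a dict rather than printing.

```python
def trk_stats(X=None, y=None, lineages=None, filename=None):
    """Return a dictionary of tracking statistics (simplified sketch)."""
    if filename is not None:
        # The real function can also load X, y, lineages from a .trk file;
        # that path is not sketched here.
        raise NotImplementedError('loading from file not sketched')
    n_divisions = sum(1 for lin in lineages.values() if lin.get('daughters'))
    return {'n_tracks': len(lineages), 'n_divisions': n_divisions}

lineages = {
    1: {'daughters': [2, 3]},  # cell 1 divides into cells 2 and 3
    2: {'daughters': []},
    3: {'daughters': []},
}
stats = trk_stats(lineages=lineages)
```

Returning the dict lets downstream code aggregate or log the numbers instead of scraping printed output.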
0.5.4
🐛 Bug Fixes
Fix bug in `is_valid_lineage`; daughters can be \< parent. @willgraf (#88)
No need to check if the daughters are in `all_cells`, as we check that for each lineage. Just check that the daughter is in the lineage.
Don't return `False` as a shortcut; just warn and continue to the next label. This will enable all warnings to be shown.
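The warn-and-continue pattern described above can be sketched like this (a simplified stand-in, not the library's full validation, and the lineage layout is illustrative): record that the lineage is invalid but keep iterating, so every problem produces a warning instead of only the first one.

```python
import warnings

def is_valid_lineage(lineage):
    """Warn about every missing daughter instead of returning at the first."""
    valid = True
    for label, info in lineage.items():
        for daughter in info.get('daughters', []):
            if daughter not in lineage:
                warnings.warn(f'daughter {daughter} of {label} not in lineage')
                valid = False  # keep checking the remaining labels
    return valid

# Daughter labels smaller than the parent label are perfectly valid.
good = {2: {'daughters': [1, 3]}, 1: {'daughters': []}, 3: {'daughters': []}}
assert is_valid_lineage(good)
```

Accumulating into `valid` rather than returning early is what lets all warnings surface in one pass.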