SmartCount

Repository for collaboration on Celldom computer vision solutions.

Screencasts

  • Basic Usage - Overview of the UI and visualization app
  • Cross Filtering - Visualize growth across an entire array, with drill-down interaction to images and per-hour cell counts for single apartments
  • Heatmaps Over Time - How the time-indexed heatmaps can be used for QC (e.g. identifying apartments that were not counted) or visualizing cell growth rates over time
  • Apartment Time Lapse - Visualize cell counts for individual apartments over time as well as export video time lapses of segmented objects within those apartments

Examples

  • Processing Raw Microscope Images - This example shows how an experiment producing raw images of cell apartments can be processed to accomplish the following:
    • Extract single apartment images from multi-apartment images
    • Extract individual cell images from apartment images
    • Quantify the cells in single apartments (counts, sizes, "roundness", etc.)
    • Interpret the database of information that results from processing (3 tables, one for raw images, apartments, and individual cells)
  • Processing CLI - This example shows how to accomplish the above using the CLI instead of Python, as well as how to run a growth rate analysis
  • Generating Videos (basic | advanced) - These examples show how to get details (such as videos) for specific apartments, using the pre-computed cell counts to help select interesting ones

Installation and Setup

To use the tools in this repo, you need nvidia-docker running on Ubuntu (preferably 16.04, though other versions may work too). Installing nvidia-docker also involves installing the NVIDIA drivers as well as standard Docker.

After that, there isn't much to set up beyond pulling and running the Docker container. All that requires is to first clone this repository somewhere locally (e.g. ~/repos) and then run:

# Pull the latest docker image (can be run anywhere)
nvidia-docker pull eczech/celldom:latest

# Decide which local directory you want to use within the container as
# the main storage directory (otherwise, everything you generate in the container is temporary)
export CELLDOM_DATA_DIR=/data/disk2/celldom

# Run the container, which will show a link to visit in your browser
# Port mappings: 8888 -> JupyterLab, 6006 -> TensorBoard, 8050-8060 -> Dash app
nvidia-docker run --rm -ti -p 8888:8888 -p 6006:6006 -p 8050-8060:8050-8060 \
-v $CELLDOM_DATA_DIR:/lab/data/celldom \
--name celldom eczech/celldom:latest

The primary interface to the container is JupyterLab, which will be available on localhost at port 8888.
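
As a quick sanity check that the container can actually see the GPU, nvidia-smi can be run inside it (using the container name from the run command above):

# Confirm the GPU is visible from inside the running container
docker exec -it celldom nvidia-smi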

Training

The training process for all 3 model types can be found in these notebooks:

  • Cell Model Training - This notebook shows how the Mask-RCNN cell model is trained to identify individual cell objects across several cell lines and chip form factors.
  • Marker Model Training - A "marker" can be any part of a chip apartment that is used to identify common pixel offsets. This can be any visual feature of the apartment, though there is typically a feature printed on the chips specifically for doing this kind of key point identification. See the notebook for example images and how large images containing many apartments are broken into individual apartment images. This model is also based on the Mask-RCNN architecture.
  • Digit Model Training - Digit images are extracted from raw microscope images after the "marker" for each apartment has been identified (using fixed pixel offsets). Exports of many of these images were annotated with the appropriate digit label and a 10-class SVHN classifier was trained in this notebook to recognize each digit.
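
The container run command above maps port 6006 for TensorBoard, so training progress from these notebooks can be monitored from the host. A minimal sketch, assuming the notebooks write TensorBoard logs somewhere under the shared data directory (the log path below is hypothetical):

# Launch TensorBoard inside the running container; point --logdir at wherever
# the training notebooks actually write their logs
docker exec -d celldom tensorboard --logdir /lab/data/celldom/training/logs --host 0.0.0.0

# TensorBoard is then reachable on the host at http://localhost:6006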

Development Notes

Backups

To sync local annotations to Google Storage:

cd /data/disk2/celldom/dataset
gsutil rsync -r training gs://celldom/dataset/training
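
To preview what a sync would transfer, or to restore annotations onto a new machine, the same gsutil command can be run as a dry run or in reverse:

cd /data/disk2/celldom/dataset

# Dry run: list what would be copied without transferring anything
gsutil rsync -n -r training gs://celldom/dataset/training

# Restore: pull annotations from Google Storage back into the local directory
gsutil rsync -r gs://celldom/dataset/training training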

Models

Trained models stored as .h5 files are available at https://storage.googleapis.com/celldom/models.

Currently, both the cell and digit recognition models (saved as cell_model.h5 and single_digit_model.h5, respectively) are agnostic to chip type, which means that selecting a model for a new experiment is as simple as finding the most recently trained one. In other words, the model with the highest "rX.X" designation should be the most recently trained version.

Marker models, on the other hand, have a target outcome that is chip-dependent. The mapping between chip types and the most recently trained marker models is as follows:

  • G1: r0.7/marker_model.h5
  • G2: r0.6/marker_model.h5
  • G3: r0.6/marker_model.h5
  • ML: r0.8/marker_model.h5
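
To pull one of these models down for a new experiment, gsutil (or a plain HTTPS download) works. The commands below are only a sketch: they assume the bucket layout mirrors the URL above and that models are kept under the shared data directory, which may differ in your setup:

# List the available model releases (assumes the bucket layout mirrors the URL above)
gsutil ls gs://celldom/models/

# Example: fetch the G1 marker model into the container's data directory
mkdir -p $CELLDOM_DATA_DIR/models
gsutil cp gs://celldom/models/r0.7/marker_model.h5 $CELLDOM_DATA_DIR/models/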
