
User Guide


Minerva can work in either of two modes:

  • Submit mode
    Submit mode is the main Minerva mode: you replace one step of our pipeline with your own solution, then train and evaluate the modified model.
  • Dry mode
    In dry mode you can run or train the pipeline to make sure that everything is working correctly.

Also, three types of Neptune support are available:

  • No Neptune
    Choose this option if you want to work without Neptune support.
  • Neptune locally
    Choose this option if you want to run the pipeline locally and use Neptune to visualize the results.
  • Neptune's cloud
    Choose this option if you want to run the pipeline in the cloud available through Neptune.

This user guide is organized in the following way:

1. Submit mode
    1.1. No Neptune
    1.2. Neptune locally
    1.3. Neptune's cloud
2. Dry mode
    2.1. Dry eval
        2.1.1. No Neptune
        2.1.2. Neptune locally
        2.1.3. Neptune's cloud
    2.2. Dry train
        2.2.1. No Neptune
        2.2.2. Neptune locally
        2.2.3. Neptune's cloud

1. Submit mode

Submit mode is the main Minerva mode: you replace one step of our pipeline with your own solution, then train and evaluate the modified model.

Choose a task, for example task1.ipynb. Write your implementation for the task by filling in the CONFIG dictionary or the body of the solution function, following the instructions in the notebook:

CONFIG = {}           # hyper-parameters used by your solution
def solution():
    return something  # the object described in the task instructions
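
For instance, a filled-in cell might look like the sketch below. This is purely illustrative: the names batch_size and learning_rate are made up here, and the actual CONFIG keys and the object that solution must return are defined in each task's instructions.

CONFIG = {
    'batch_size': 128,       # hypothetical hyper-parameter
    'learning_rate': 0.001,  # hypothetical hyper-parameter
}

def solution():
    # Build and return whatever object the task instructions ask for;
    # a plain dict stands in for it in this sketch.
    return {'batch_size': CONFIG['batch_size'],
            'lr': CONFIG['learning_rate']}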

You can submit your solution with any of the three types of Neptune support described below.

1.1. No Neptune

Choose this option if you want to work without Neptune support.

  1. Download the data (once):

    For the fashion mnist problem:

    The data is downloaded automatically.

    For the whales problem:

    • Download the file imgs.zip from the Right Whale Recognition challenge site on Kaggle (you must be logged in to Kaggle to do that).
    • Extract imgs.zip to resources/whales/data/.
    • After that, the folder resources/whales/data/ should contain two elements: the file metadata.csv and the folder imgs with the images.
  2. In the neptune.yaml file (see the sketch after this list):

    • Comment out the pip-requirements-file line.

    • Uncomment the Local setup paths and set them as follows:

      For the fashion mnist problem:

      • data_dir: any value (it is not used for this problem),
      • solution_dir: resources/fashion_mnist/solution.

      For the whales problem:

      • data_dir: resources/whales/data/,
      • solution_dir: resources/whales/solution/.

      Note: you can also set a different solution_dir if you previously trained another pipeline instance using the dry train sub-mode.

    • Comment out the Cloud setup paths.

  3. Type:

    For the fashion mnist problem:

    python main.py -- submit --problem fashion_mnist --task_nr 1

    Run time with GPU: TODO

    For the whales problem:

    python main.py -- submit --problem whales --task_nr 1

    Run time with GPU: TODO

    Note: if you want to submit a task using a notebook other than the default one, add --filepath path/to/your/notebook at the end of the command.
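
For reference, after step 2 the relevant fragment of neptune.yaml for the whales problem might look roughly like this. This is a minimal sketch, assuming data_dir and solution_dir are plain top-level entries; the pip-requirements-file value is elided, and your copy of the file determines the exact layout:

# pip-requirements-file: ...   (kept commented out for local runs)

# Local setup paths -- uncommented and filled in
data_dir: resources/whales/data/
solution_dir: resources/whales/solution/

# Cloud setup paths -- commented out
# data_dir: /public/whales
# solution_dir: /public/minerva/resources/whales/solution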

1.2. Neptune locally

Choose this option if you want to run the pipeline locally and use Neptune to visualize the results.

  1. Download the data in the same way as for no Neptune support.
  2. Edit the neptune.yaml file in the same way as for no Neptune support.
  3. Type:

    For the fashion mnist problem:

    neptune run -- submit --problem fashion_mnist --task_nr 1
    Run time with GPU: TODO

    For the whales problem:

    neptune run -- submit --problem whales --task_nr 1
    Run time with GPU: TODO

1.3. Neptune's cloud

Choose this option if you want to run the pipeline in the cloud available through Neptune.

  1. In the neptune.yaml file (see the sketch after this list):

    • Uncomment the pip-requirements-file line.

    • Comment out the Local setup paths.

    • Uncomment the Cloud setup paths and set them as follows:

      For the fashion mnist problem:

      • data_dir: any value (it is not used for this problem),
      • solution_dir: /public/minerva/resources/fashion_mnist/solution.

      For the whales problem:

      • data_dir: /public/whales,
      • solution_dir: /public/minerva/resources/whales/solution.
  2. Type:

    For the fashion mnist problem:

    neptune send \
    --environment keras-2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- submit --problem fashion_mnist --task_nr 1

    Run time with GPU: TODO

    For the whales problem:

    neptune send \
    --environment pytorch-0.2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- submit --problem whales --task_nr 1

    Run time with GPU: TODO

    Note: make sure you typed the correct --environment and --worker. If you omit them, the script won't run.
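
As before, here is a sketch of how the same fragment of neptune.yaml might look for the whales problem in the cloud setup. The requirements file name below is hypothetical; keep whatever value your copy of the file specifies:

pip-requirements-file: requirements.txt   # hypothetical file name

# Local setup paths -- commented out
# data_dir: resources/whales/data/
# solution_dir: resources/whales/solution/

# Cloud setup paths -- uncommented and filled in
data_dir: /public/whales
solution_dir: /public/minerva/resources/whales/solution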

2. Dry mode

In dry mode you can run or train the pipeline to make sure that everything is working correctly. We provide two dry sub-modes:

  • Dry eval
    In dry eval sub-mode you can run an existing pipeline and evaluate it.
  • Dry train
    In dry train sub-mode you can train a new instance of the pipeline and then evaluate it.

2.1. Dry eval

In dry eval sub-mode you can run and evaluate our pipeline to make sure that everything is working correctly.

2.1.1. No Neptune

  1. Download the data in the same way as in submit mode.
  2. Edit the neptune.yaml file in the same way as in submit mode.
  3. Type:

    For the fashion mnist problem:

    python main.py -- dry_eval --problem fashion_mnist
    Run time with GPU: less than 1 minute.

    For the whales problem:

    python main.py -- dry_eval --problem whales
    Run time with GPU: about 3 minutes.

2.1.2. Neptune locally

  1. Download the data in the same way as in submit mode.
  2. Edit the neptune.yaml file in the same way as in submit mode.
  3. Type:

    For the fashion mnist problem:

    neptune run -- dry_eval --problem fashion_mnist
    Run time with GPU: less than 1 minute.

    For the whales problem:

    neptune run -- dry_eval --problem whales
    Run time with GPU: about 3 minutes.

2.1.3. Neptune's cloud

  1. Edit the neptune.yaml file in the same way as in submit mode.
  2. Type:

    For the fashion mnist problem:

    neptune send \
    --environment keras-2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- dry_eval --problem fashion_mnist
    Run time with GPU: less than 1 minute.

    For the whales problem:

    neptune send \
    --environment pytorch-0.2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- dry_eval --problem whales
    Run time with GPU: TODO

2.2. Dry train

In dry train sub-mode you can train a new instance of the pipeline and then evaluate it.

2.2.1. No Neptune

  1. Download the data in the same way as in submit mode.

  2. Edit the neptune.yaml file in the same way as in submit mode, except for solution_dir. Set solution_dir to the local path where the new pipeline instance will be stored, e.g. output/trained_solution (see the sketch after this list).

    Note: make sure you choose a path that doesn't already contain a pipeline; Minerva doesn't overwrite existing models.

  3. Type:

    For the fashion mnist problem:

    python main.py -- dry_train --problem fashion_mnist

    Run time with GPU: TODO

    For the whales problem:

    python main.py -- dry_train --problem whales

    Run time with GPU: TODO
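
As a sketch, the local paths for the whales problem might then read (output/trained_solution is just the example path from step 2):

data_dir: resources/whales/data/
solution_dir: output/trained_solution   # fresh path; must not already contain a pipeline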

2.2.2. Neptune locally

  1. Download the data in the same way as in submit mode.
  2. Edit the neptune.yaml file in the same way as for no Neptune support. Remember to set a new solution_dir.
  3. Type:

    For the fashion mnist problem:

    neptune run -- dry_train --problem fashion_mnist
    Run time with GPU: about 15 minutes.

    For the whales problem:

    neptune run -- dry_train --problem whales
    Run time with GPU: TODO

2.2.3. Neptune's cloud

  1. Edit the neptune.yaml file in the same way as in submit mode, except for solution_dir. Set solution_dir to the path in Neptune's cloud where the new pipeline instance will be stored, e.g. /output/trained_solution; the path must start with /output (see the sketch after this list).
  2. Type:

    For the fashion mnist problem:

    neptune send \
    --environment keras-2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- dry_train --problem fashion_mnist
    Run time with GPU: TODO

    For the whales problem:

    neptune send \
    --environment pytorch-0.2.0-gpu-py3 \
    --worker gcp-gpu-medium \
    -- dry_train --problem whales
    Run time with GPU: TODO
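
Again as a sketch, the cloud paths for the whales problem would then read:

data_dir: /public/whales
solution_dir: /output/trained_solution   # must start with /output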