Running the Code
- Download the code by cloning this repository:
git clone https://github.com/prgumd/EVDodgeNet
You'll need the following dependencies to run our code.
- OpenCV 3.3
- TensorFlow 1.14 (GPU or CPU version)
- Appropriate CUDA and cuDNN versions for your TensorFlow and Ubuntu versions
- Matplotlib
- tqdm
- NumPy
- termcolor
We've tested the code on Ubuntu 16.04 and 18.04 with TensorFlow 1.14 (GPU) and Python 2.7.
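If you use pip, the Python dependencies can be installed roughly as follows. This command is not from the repository; the package names and pins are assumptions based on the versions listed above, and the right TensorFlow build depends on your CUDA/cuDNN setup.

```
# Indicative install of the Python dependencies (pins are assumptions, not prescribed by the repo)
pip install tensorflow-gpu==1.14 opencv-python matplotlib tqdm numpy termcolor
# For a CPU-only machine, install tensorflow==1.14 instead of tensorflow-gpu==1.14
```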
Now, follow the instructions below for training/testing each network.
To train EVDeblurNet, run code/DeblurNetUnsup/TrainEVDeblurNet.py. The pre-trained models can be downloaded from here. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --BasePath: Base path from which images are loaded, e.g., /media/nitin/Research/EVDodge/downfacing_processed.
- --NumEpochs: Number of epochs to train for.
- --DivTrain: Factor by which to reduce the training data per epoch; used for debugging or for very large datasets.
- --MiniBatchSize: Size of the mini-batch to use.
- --LoadCheckPoint: Whether to load the model from the latest checkpoint in CheckPointPath.
- --LogsPath: Path to save logs, e.g., /media/nitin/Research/EVDodge/Logs/.
- --LossFuncName: Choice of loss function; choose from M for Mean or V for Variance.
- --CheckPointPath: Path to save checkpoints.
- --GPUDevice: Which GPU to use; -1 for CPU.
- --LR: Learning rate.
- --SymType: Similarity mapping; choose from L1 and Chab.
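A sample training invocation is sketched below, reusing the example paths from the flag descriptions. The flag values shown are illustrative placeholders, not recommended settings.

```
python code/DeblurNetUnsup/TrainEVDeblurNet.py \
  --BasePath=/media/nitin/Research/EVDodge/downfacing_processed \
  --NumEpochs=200 --MiniBatchSize=32 \
  --CheckPointPath=/media/nitin/Research/EVDodge/CheckpointsDeblurNet/ \
  --LogsPath=/media/nitin/Research/EVDodge/Logs/ \
  --LossFuncName=V --SymType=Chab --GPUDevice=0
```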
To run EVDeblurNet, run code/DeblurNetUnsup/RunEVDeblurNet.py. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --ModelPath: Path to load the model from, e.g., /media/nitin/Research/EVDodge/CheckpointsDeblurNet/199model.ckpt.
- --ReadPath: Path to load images from, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/processed.
- --WritePath: Path to write images to, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred.
- --GPUDevice: Which GPU to use; -1 for CPU.
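A sample inference invocation, reusing the example paths above (all values are placeholders):

```
python code/DeblurNetUnsup/RunEVDeblurNet.py \
  --ModelPath=/media/nitin/Research/EVDodge/CheckpointsDeblurNet/199model.ckpt \
  --ReadPath=/media/nitin/Research/EVDodge/DatasetChethanEvents/processed \
  --WritePath=/media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred \
  --GPUDevice=0
```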
To train EVHomographyNet, run code/HomographyNetUnsup/TrainEVHomographyNet.py. The pre-trained models can be downloaded from here. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --BasePath: Base path from which images are loaded, e.g., /media/nitin/Research/EVDodge/downfacing_processed.
- --NumEpochs: Number of epochs to train for.
- --DivTrain: Factor by which to reduce the training data per epoch; used for debugging or for very large datasets.
- --MiniBatchSize: Size of the mini-batch to use.
- --LoadCheckPoint: Whether to load the model from the latest checkpoint in CheckPointPath.
- --LossFuncName: Choice of loss function; choose from PhotoL1, PhotoChab, or PhotoRobust when TrainingType is US. This parameter is ignored when TrainingType is S.
- --NetworkType: Choice of network type; choose from Small or Large.
- --CheckPointPath: Path to save checkpoints.
- --LogsPath: Path to save logs, e.g., /media/nitin/Research/EVDodge/Logs/.
- --GPUDevice: Which GPU to use; -1 for CPU.
- --LR: Learning rate.
- --TrainingType: Training type; S for supervised, US for unsupervised.
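A sample unsupervised training invocation, assuming the training script is named TrainEVHomographyNet.py as above; the checkpoint directory name and flag values are illustrative placeholders.

```
python code/HomographyNetUnsup/TrainEVHomographyNet.py \
  --BasePath=/media/nitin/Research/EVDodge/downfacing_processed \
  --TrainingType=US --LossFuncName=PhotoChab --NetworkType=Small \
  --NumEpochs=200 --MiniBatchSize=32 \
  --CheckPointPath=/media/nitin/Research/EVDodge/CheckpointsHomographyNet/ \
  --LogsPath=/media/nitin/Research/EVDodge/Logs/ --GPUDevice=0
```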
To run EVHomographyNet, run code/HomographyNetUnsup/RunEVHomographyNet.py. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --ModelPath: Path to load the model from, e.g., /media/nitin/Research/EVDodge/CheckpointsDeblurNet/199model.ckpt.
- --ReadPath: Path to load images from, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/processed.
- --WritePath: Path to write images to, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred.
- --GPUDevice: Which GPU to use; -1 for CPU.
- --CropType: Kind of crop to perform; R for random, C for center.
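A sample inference invocation; the checkpoint and output paths below are placeholders, not paths prescribed by the repository.

```
python code/HomographyNetUnsup/RunEVHomographyNet.py \
  --ModelPath=/media/nitin/Research/EVDodge/CheckpointsHomographyNet/199model.ckpt \
  --ReadPath=/media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred \
  --WritePath=/media/nitin/Research/EVDodge/DatasetChethanEvents/Homography \
  --CropType=C --GPUDevice=0
```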
To train EVSegNet, run code/EVSegNet/TrainEVSegNet.py. The pre-trained models can be downloaded from here. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --BasePath: Base path from which images are loaded, e.g., /media/nitin/Research/EVDodge/downfacing_processed.
- --NumEpochs: Number of epochs to train for.
- --DivTrain: Factor by which to reduce the training data per epoch; used for debugging or for very large datasets.
- --MiniBatchSize: Size of the mini-batch to use.
- --LoadCheckPoint: Whether to load the model from the latest checkpoint in CheckPointPath.
- --CheckPointPath: Path to save checkpoints.
- --LogsPath: Path to save logs, e.g., /media/nitin/Research/EVDodge/Logs/.
- --GPUDevice: Which GPU to use; -1 for CPU.
- --LR: Learning rate.
- --TrainingType: Training type; S for supervised, US for unsupervised.
- --MaxFrameDiff: Maximum frame difference to feed into the network.
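A sample training invocation; the checkpoint directory name and flag values are illustrative placeholders.

```
python code/EVSegNet/TrainEVSegNet.py \
  --BasePath=/media/nitin/Research/EVDodge/downfacing_processed \
  --NumEpochs=200 --MiniBatchSize=32 --MaxFrameDiff=2 \
  --CheckPointPath=/media/nitin/Research/EVDodge/CheckpointsSegNet/ \
  --LogsPath=/media/nitin/Research/EVDodge/Logs/ --GPUDevice=0
```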
To run EVSegNet, run code/EVSegNet/RunEVSegNet.py. The following command line flags are implemented for ease of use; a sample invocation is shown after the list.
- --ModelPath: Path to load the model from, e.g., /media/nitin/Research/EVDodge/CheckpointsDeblurNet/199model.ckpt.
- --ReadPath: Path to load images from, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/processed.
- --WritePath: Path to write images to, e.g., /media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred.
- --GPUDevice: Which GPU to use; -1 for CPU.
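A sample inference invocation; the checkpoint and output paths below are placeholders.

```
python code/EVSegNet/RunEVSegNet.py \
  --ModelPath=/media/nitin/Research/EVDodge/CheckpointsSegNet/199model.ckpt \
  --ReadPath=/media/nitin/Research/EVDodge/DatasetChethanEvents/Deblurred \
  --WritePath=/media/nitin/Research/EVDodge/DatasetChethanEvents/Segmented \
  --GPUDevice=0
```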
The networks are not trained jointly. Each network is trained individually, and the output of one is fed into the next: train EVDeblurNet using the instructions above, run it to obtain predictions, then train EVHomographyNet on those predictions, and so on.
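A rough sketch of this chaining is shown below; all directory names and flag values are placeholders, not paths prescribed by the repository.

```
# 1. Train EVDeblurNet, then run it to produce deblurred event frames
python code/DeblurNetUnsup/TrainEVDeblurNet.py --BasePath=/data/downfacing_processed \
  --CheckPointPath=/data/CkptsDeblur/ --LogsPath=/data/Logs/ --GPUDevice=0
python code/DeblurNetUnsup/RunEVDeblurNet.py --ModelPath=/data/CkptsDeblur/199model.ckpt \
  --ReadPath=/data/downfacing_processed --WritePath=/data/Deblurred --GPUDevice=0

# 2. Train EVHomographyNet on the deblurred outputs, and continue down the pipeline
python code/HomographyNetUnsup/TrainEVHomographyNet.py --BasePath=/data/Deblurred \
  --TrainingType=US --CheckPointPath=/data/CkptsHomography/ --LogsPath=/data/Logs/ --GPUDevice=0
```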