🔔🔔🔔 This project is shut down owing to its poor coding architecture; a new version of this project will be released soon.
Create a virtual environment and upgrade pip:

```bash
python3 -m venv .env
source .env/bin/activate
python -m pip install -U pip
```
Install the required packages:

```bash
cd /path/to/cloned/repo/directory
pip install -r requirements.txt
```
Install the PyTorch packages (Nvidia driver version 535, CUDA 12.1, on an Ubuntu-based system):

```bash
pip3 install torch torchvision torchaudio
```
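To confirm the installation can see the GPU, a quick sanity check (this snippet is just a convenience, not part of the repo):

```python
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version the wheels were built against
print(torch.cuda.is_available())   # True if the driver/CUDA setup is visible
```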
- dataset
  - [ds].py where `ds` is any dataset name
  - getds.py used as a mapping from args to a dataset
- loss
  - [loss].py where `loss` is any loss name
- metrics
  - [metric].py where `metric` is any metric name
- model
  - [model_name]
    - *.py
- utils
- main.py
- mapping.py
- requirements.txt
- trainer.py
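A minimal sketch of the role getds.py plays: mapping a dataset name taken from the args to a concrete dataset class. The function and class names below (`get_dataset`, `DummyDataset`, `DATASETS`) are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of an args-to-dataset mapping like getds.py.
from torch.utils.data import Dataset


class DummyDataset(Dataset):
    """Placeholder standing in for a real [ds].py dataset class."""

    def __len__(self):
        return 0

    def __getitem__(self, idx):
        raise IndexError(idx)


DATASETS = {"dummy": DummyDataset}


def get_dataset(name: str, **kwargs) -> Dataset:
    """Resolve a dataset name supplied via args to an instantiated Dataset."""
    if name not in DATASETS:
        raise ValueError(f"Unknown dataset: {name}")
    return DATASETS[name](**kwargs)
```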
- Data sources and related annotations must be put into the `dataset/source` folder, which is an ignored folder. A custom dataset class should inherit from PyTorch's `torch.utils.data.Dataset` and read all data from `dataset/source`. ⚠️ ⚠️ Do not change the content of any file inside `dataset/`; just create a new `.py` file to contribute your custom dataset (see the dataset sketch after this list).
- Create a new folder (for a model) or a new `.py` file (for a loss) to store your model architecture or loss function and its sub-components; see the loss sketch after this list. ⚠️ ⚠️ Do not change the content of any file inside `model/` and `loss/`.
- For collaborators, create a new branch named `<task>-<name>` (e.g. model_deeplabv3) and then open a pull request to merge into the `main` branch; a shell sketch of this workflow follows the list.
- For outside collaborators, fork this repo and then open a pull request as well.
- To have your pull request reviewed, tag the owner of this repo; the owner will merge it and complete the final steps to fully integrate your contribution.
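As referenced above, a custom dataset is contributed as a standalone `.py` file under `dataset/`. The sketch below is only illustrative: the file name, class name, folder layout under `dataset/source`, and transform interface are assumptions, not the repository's actual conventions.

```python
# dataset/my_dataset.py -- illustrative sketch; names and layout are assumptions.
import os

from PIL import Image
from torch.utils.data import Dataset


class MySegmentationDataset(Dataset):
    """Reads images and annotations from the ignored dataset/source folder."""

    def __init__(self, root="dataset/source", transform=None):
        self.image_dir = os.path.join(root, "images")       # assumed layout
        self.mask_dir = os.path.join(root, "annotations")   # assumed layout
        self.files = sorted(os.listdir(self.image_dir))
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name))
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask
```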
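Similarly, a new loss function goes into its own file under `loss/` as an `nn.Module`. A minimal sketch, assuming a soft Dice loss; the file name, class name, and formulation are hypothetical examples, not part of the repo.

```python
# loss/dice.py -- minimal sketch; the loss choice here is an assumption.
import torch
import torch.nn as nn


class DiceLoss(nn.Module):
    """Soft Dice loss computed from binary segmentation logits."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum(dim=(-2, -1))
        union = probs.sum(dim=(-2, -1)) + targets.sum(dim=(-2, -1))
        dice = (2 * intersection + self.eps) / (union + self.eps)
        return 1 - dice.mean()
```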
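The collaborator workflow above, in shell form; the branch name is just the example given in the guideline:

```bash
git checkout -b model_deeplabv3      # <task>-<name> branch
# ...commit your changes...
git push -u origin model_deeplabv3
# then open a pull request targeting the main branch
```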