The dataset configs are located in `tools/cfgs/dataset_configs`, and the model configs for the different datasets are located in `tools/cfgs`.
- Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows (the road planes, which are optional for data augmentation during training, can be downloaded from [road plane]):
- NOTE: If you already have the data infos from `pcdet v0.1`, you can either keep the old infos and set the `DATABASE_WITH_FAKELIDAR` option in `tools/cfgs/dataset_configs/kitti_dataset.yaml` to `True`, or create the infos and gt database again and leave the config unchanged.
OpenPCDet
├── data
│ ├── kitti
│ │ │── ImageSets
│ │ │── training
│ │ │ ├──calib & velodyne & label_2 & image_2 & (optional: planes)
│ │ │ ├──modes
│ │ │ │ ├──64
│ │ │ │ ├──32
│ │ │ │ ├──32^
│ │ │ │ ├──16
│ │ │ │ ├──16^
│ │ │── testing
│ │ │ ├──calib & velodyne & image_2
├── pcdet
├── tools
- Generate the data infos and extract the point cloud data into the `modes` folder by running the following command:
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
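As a quick sanity check, you can load one of the generated info files from Python. A minimal sketch, assuming the default `kitti_infos_train.pkl` output name:

```python
import pickle

# Load the generated training infos (default output location assumed)
# and confirm they form a list of per-frame dicts.
with open('data/kitti/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print(f'{len(infos)} training frames')
print(sorted(infos[0].keys()))  # typically includes 'annos', 'calib', 'image', 'point_cloud'
```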
- Please download the official NuScenes 3D object detection dataset and organize the downloaded files as follows:
OpenPCDet
├── data
│ ├── nuscenes
│ │ │── v1.0-trainval (or v1.0-mini if you use mini)
│ │ │ │── samples
│ │ │ │── sweeps
│ │ │ │── maps
│ │ │ │── v1.0-trainval
├── pcdet
├── tools
- Install the `nuscenes-devkit` with version `1.0.5` by running the following command:
pip install nuscenes-devkit==1.0.5
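Once installed, you can verify both the devkit and the directory layout above with a minimal sketch (assuming the `v1.0-trainval` split):

```python
from nuscenes.nuscenes import NuScenes

# Loading the metadata exercises the devkit and checks the layout above;
# dataroot is the folder containing samples/, sweeps/, maps/ and v1.0-trainval/.
nusc = NuScenes(version='v1.0-trainval',
                dataroot='data/nuscenes/v1.0-trainval',
                verbose=True)
print(f'{len(nusc.sample)} samples')
```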
- Generate the data infos by running the following command (it may take several hours):
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
--cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
--version v1.0-trainval
- Please download the official Waymo Open Dataset, including the training data `training_0000.tar` ~ `training_0031.tar` and the validation data `validation_0000.tar` ~ `validation_0007.tar`.
- Unzip all the above `xxxx.tar` files to the directory `data/waymo/raw_data` as follows (you should get 798 train tfrecords and 202 val tfrecords; see the quick check after the tree below):
OpenPCDet
├── data
│ ├── waymo
│ │ │── ImageSets
│ │ │── raw_data
│ │ │ │── segment-xxxxxxxx.tfrecord
│ │ │ │── ...
│ │ │── modes
│ │ │ │── 64
│ │ │ │ │── segment-xxxxxxxx/
│ │ │ │ │── ...
│ │ │ │── 32
│ │ │ │── 16^
│ │ │── pcdet_gt_database_train_sampled_xx/
│ │ │── pcdet_waymo_dbinfos_train_sampled_xx.pkl
├── pcdet
├── tools
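To confirm the unzip step, a one-off count of the raw tfrecords (798 train + 202 val should give 1000 in total):

```python
import glob

# All unzipped segments should sit directly under raw_data.
print(len(glob.glob('data/waymo/raw_data/*.tfrecord')), 'tfrecords found')
```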
- Install the official `waymo-open-dataset` by running the following command:
pip3 install --upgrade pip
# tf 2.0.0
pip3 install waymo-open-dataset-tf-2-0-0==1.2.0 --user
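A minimal smoke test for the install, parsing the first frame of one raw segment (the segment filename below is a placeholder):

```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2

# Read a single record and decode it into a Frame proto.
dataset = tf.data.TFRecordDataset('data/waymo/raw_data/segment-xxxxxxxx.tfrecord')
for data in dataset.take(1):
    frame = dataset_pb2.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    print(frame.context.name, 'with', len(frame.lasers), 'lidars')
```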
- Extract point cloud data with different beams from the tfrecords and generate the data infos by running the following command (it takes several hours; to see how many records have been processed, check `data/waymo/modes/64`, e.g. with the snippet after the command):
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
--cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
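A rough progress check, assuming one sub-folder per processed segment:

```python
import os

# Each processed segment gets its own sub-folder under modes/64.
print(len(os.listdir('data/waymo/modes/64')), 'segments processed')
```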
Note that you do not need to install `waymo-open-dataset` if you have already processed the data and do not need to evaluate with the official Waymo metrics.
- Test with a pretrained model:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}
- To test all the saved checkpoints of a specific training setting and draw the performance curve on Tensorboard, add the `--eval_all` argument:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all
- To test with multiple GPUs:
sh scripts/dist_test.sh ${NUM_GPUS} \
--cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
# or
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_GPUS} \
--cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
You can optionally add the extra command-line parameters `--batch_size ${BATCH_SIZE}` and `--epochs ${EPOCHS}` to specify your preferred settings.
- Train with multiple GPUs or multiple machines:
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE}
# or
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_GPUS} --cfg_file ${CONFIG_FILE}
- Train with a single GPU:
python train.py --cfg_file ${CONFIG_FILE}
Take PointPillars on Waymo -> nuScenes as an example:
- Train the 64-beam teacher model:
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-nus_models/pointpillar/pointpillar.yaml \
--batch_size ${BATCH_SIZE} --extra_tag 64 --exp_name ${EXP_NAME}
- Train the 32-beam model whose teacher is the 64-beam model:
sh scripts/dist_train_mimic.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-nus_models/pointpillar/pointpillar.yaml \
--batch_size ${BATCH_SIZE} --extra_tag 32 --teacher_tag 64 --pretrained_model ${64-BEAM MODEL} \
--pretrained_teacher_model ${64-BEAM MODEL} --mimic_weight 1 --mimic_mode roi --exp_name ${EXP_NAME}
- Train the 16*-beam model whose teacher is the 32-beam model:
sh scripts/dist_train_mimic.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-nus_models/pointpillar/pointpillar.yaml \
--batch_size ${BATCH_SIZE} --extra_tag 16^ --teacher_tag 32 --pretrained_model ${32-BEAM MODEL} \
--pretrained_teacher_model ${32-BEAM MODEL} --mimic_weight 1 --mimic_mode roi --exp_name ${EXP_NAME}
Note that you need to select the best model as the teacher for the next stage.
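For intuition, here is a minimal sketch (not the repository's exact implementation) of what an ROI-based feature mimic loss with weight `--mimic_weight` could look like: the student is trained to match the frozen teacher's BEV features inside regions of interest.

```python
import torch

def mimic_loss(student_feat, teacher_feat, roi_mask, mimic_weight=1.0):
    """student_feat, teacher_feat: (B, C, H, W) BEV maps; roi_mask: (B, 1, H, W) in {0, 1}."""
    # The teacher is frozen, so gradients only flow into the student.
    diff = (student_feat - teacher_feat.detach()) ** 2 * roi_mask
    return mimic_weight * diff.sum() / roi_mask.sum().clamp(min=1)
```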
Although our method is designed for the beam-induced domain gap, it can be easily combined with other general 3D UDA methods (e.g. ST3D) to address general domain gaps:
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-nus_models/pointpillar_st3d/pointpillar_st3d.yaml \
--batch_size ${BATCH_SIZE} --extra_tag 16^ --pretrained_model ${16*-BEAM MODEL} --exp_name ${EXP_NAME}
For evaluation, you also need to add `--extra_tag`:
sh scripts/dist_test.sh ${NUM_GPUS} --extra_tag 16^ \
--cfg_file cfgs/da-waymo-nus_models/pointpillar_st3d/pointpillar_st3d.yaml --batch_size ${BATCH_SIZE} --ckpt ${CKPT}