[Feature] support DRN #2668

Merged: 3 commits (Oct 12, 2023)
84 changes: 84 additions & 0 deletions configs/localization/drn/README.md
# DRN

[Dense Regression Network for Video Grounding](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zeng_Dense_Regression_Network_for_Video_Grounding_CVPR_2020_paper.pdf)

<!-- [ALGORITHM] -->

## Abstract

<!-- [ABSTRACT] -->

We address the problem of video grounding from natural language queries. The key challenge in this task is that one training video might only contain a few annotated starting/ending frames that can be used as positive examples for model training. Most conventional approaches directly train a binary classifier using such imbalanced data, thus achieving inferior results. The key idea of this paper is to use the distances between the frames within the ground truth and the starting (ending) frame as dense supervision to improve the video grounding accuracy. Specifically, we design a novel dense regression network (DRN) to regress the distances from each frame to the starting (ending) frame of the video segment described by the query. We also propose a simple but effective IoU regression head module to explicitly consider the localization quality of the grounding results (i.e., the IoU between the predicted location and the ground truth). Experimental results show that our approach significantly outperforms state-of-the-art methods on three datasets (i.e., Charades-STA, ActivityNet-Captions, and TACoS).

<!-- [IMAGE] -->

<div align=center>
<img src="https://github.com/open-mmlab/mmaction2/files/12532583/Fig1.pdf" width="800"/>
</div>

## Results and Models

### Charades STA C3D feature

| feature | gpus | pretrain | Recall@Top1(IoU=0.5) | Recall@Top5(IoU=0.5) | config | ckpt | log |
| :-----: | :--: | :------: | :------------------: | :------------------: | :----------------------------------------------: | :---------------------------------------------: | :--------------------------------------------: |
| C3D | 2 | None | 47.04 | 84.57 | [config](configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_third.py) | [ckpt](https://download.openmmlab.com/mmaction/v1.0/localization/drn/drn_2xb16-4096-10e_c3d-feature_20230809-ec0429a6.pth) | [log](https://download.openmmlab.com/mmaction/v1.0/drn_2xb16-4096-10e_c3d-feature.log) |

For more details on data preparation, you can refer to [Charades STA Data Preparation](/tools/data/charades-sta/README.md).

## Train

The training of DRN has three stages. Following the official paper, the second and third stages load the best checkpoint from the previous stage.

The first stage training:

```shell
bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_first.py 2
```

The second stage training:

```shell
BEST_CKPT=work_dirs/drn_2xb16-4096-10e_c3d-feature_first/SOME.PTH
bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_second.py 2 --cfg-options load_from=${BEST_CKPT}
```

The third stage training:

```shell
BEST_CKPT=work_dirs/drn_2xb16-4096-10e_c3d-feature_second/SOME.PTH
bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_third.py 2 --cfg-options load_from=${BEST_CKPT}
```
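
The three stages can also be chained in a single script. The sketch below is an assumption layered on top of the commands above: it grabs the *latest* checkpoint from each stage's work directory (MMEngine's default `work_dirs/{config_stem}/epoch_*.pth` naming), whereas the paper selects the *best* validation checkpoint, so substitute the paths by hand for faithful results.

```shell
# Sketch: run all three stages back to back. The checkpoint picked here is
# the latest epoch, not necessarily the best one -- check the validation log.
set -e

bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_first.py 2
CKPT1=$(ls -t work_dirs/drn_2xb16-4096-10e_c3d-feature_first/epoch_*.pth | head -n 1)

bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_second.py 2 \
    --cfg-options load_from=${CKPT1}
CKPT2=$(ls -t work_dirs/drn_2xb16-4096-10e_c3d-feature_second/epoch_*.pth | head -n 1)

bash tools/dist_train.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_third.py 2 \
    --cfg-options load_from=${CKPT2}
```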

## Test

Test DRN on Charades STA C3D feature:

```shell
python3 tools/test.py configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_third.py CHECKPOINT.PTH
```
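
If you trained with multiple GPUs, the standard distributed test launcher should work as well; a sketch assuming the same 2-GPU setup as training:

```shell
bash tools/dist_test.sh configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_third.py CHECKPOINT.PTH 2
```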

For more details, you can refer to the **Testing** part in the [Training and Test Tutorial](/docs/en/user_guides/train_test.md).

## Citation

```BibTeX
@inproceedings{DRN2020CVPR,
author = {Zeng, Runhao and Xu, Haoming and Huang, Wenbing and Chen, Peihao and Tan, Mingkui and Gan, Chuang},
title = {Dense Regression Network for Video Grounding},
booktitle = {CVPR},
year = {2020},
}
```

<!-- [DATASET] -->

```BibTeX
@inproceedings{gao2017tall,
title={{TALL}: Temporal activity localization via language query},
author={Gao, Jiyang and Sun, Chen and Yang, Zhenheng and Nevatia, Ram},
booktitle={Proceedings of the IEEE international conference on computer vision},
pages={5267--5275},
year={2017}
}
```
115 changes: 115 additions & 0 deletions configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_first.py
_base_ = ['../../_base_/default_runtime.py']

# model settings
model = dict(
    type='DRN',
    vocab_size=1301,
    feature_dim=4096,
    embed_dim=300,
    hidden_dim=512,
    bidirection=True,
    first_output_dim=256,
    fpn_feature_dim=512,
    lstm_layers=1,
    graph_node_features=1024,
    fcos_pre_nms_top_n=32,
    fcos_inference_thr=0.05,
    fcos_prior_prob=0.01,
    focal_alpha=0.25,
    focal_gamma=2.0,
    fpn_stride=[1, 2, 4],
    fcos_nms_thr=0.6,
    fcos_conv_layers=1,
    fcos_num_class=2,
    # stage flags: each of the three stage configs sets these differently
    is_first_stage=True,
    is_second_stage=False)

# dataset settings
dataset_type = 'CharadesSTADataset'
root = 'data/CharadesSTA'
data_root = f'{root}/C3D_unit16_overlap0.5_merged/'
data_root_val = f'{root}/C3D_unit16_overlap0.5_merged/'
ann_file_train = f'{root}/Charades_sta_train.txt'
ann_file_val = f'{root}/Charades_sta_test.txt'
ann_file_test = f'{root}/Charades_sta_test.txt'

word2id_file = f'{root}/Charades_word2id.json'
fps_file = f'{root}/Charades_fps_dict.json'
duration_file = f'{root}/Charades_duration.json'
num_frames_file = f'{root}/Charades_frames_info.json'
window_size = 16
ft_overlap = 0.5

train_pipeline = [
    dict(
        type='PackLocalizationInputs',
        keys=('gt_bbox', 'proposals'),
        meta_keys=('vid_name', 'query_tokens', 'query_length', 'num_proposals',
                   'num_frames'))
]

val_pipeline = train_pipeline
test_pipeline = val_pipeline

train_dataloader = dict(
    batch_size=16,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    drop_last=True,
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_train,
        data_prefix=dict(video=data_root),
        pipeline=train_pipeline,
        word2id_file=word2id_file,
        fps_file=fps_file,
        duration_file=duration_file,
        num_frames_file=num_frames_file,
        window_size=window_size,
        ft_overlap=ft_overlap),
)

val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    drop_last=True,
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_val,
        data_prefix=dict(video=data_root),
        pipeline=val_pipeline,
        word2id_file=word2id_file,
        fps_file=fps_file,
        duration_file=duration_file,
        num_frames_file=num_frames_file,
        window_size=window_size,
        ft_overlap=ft_overlap),
)
test_dataloader = val_dataloader

max_epochs = 10
train_cfg = dict(
    type='EpochBasedTrainLoop',
    max_epochs=max_epochs,
    val_begin=1,
    val_interval=1)

val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

val_evaluator = dict(type='RecallatTopK', topK_list=(1, 5), threshold=0.5)
test_evaluator = val_evaluator

optim_wrapper = dict(
    optimizer=dict(type='Adam', lr=1e-3),
    # clip gradients to a maximum L2 norm of 5
    clip_grad=dict(max_norm=5, norm_type=2),
)

param_scheduler = [
    # linear warm-up from 0.1x to the full lr over the first 5 epochs
    dict(type='LinearLR', start_factor=0.1, by_epoch=True, begin=0, end=5),
]

# required for DDP: not all parameters receive gradients in every stage
find_unused_parameters = True
110 changes: 110 additions & 0 deletions configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_second.py
_base_ = ['../../_base_/default_runtime.py']

# model settings
model = dict(
    type='DRN',
    vocab_size=1301,
    feature_dim=4096,
    embed_dim=300,
    hidden_dim=512,
    bidirection=True,
    first_output_dim=256,
    fpn_feature_dim=512,
    lstm_layers=1,
    graph_node_features=1024,
    fcos_pre_nms_top_n=32,
    fcos_inference_thr=0.05,
    fcos_prior_prob=0.01,
    focal_alpha=0.25,
    focal_gamma=2.0,
    fpn_stride=[1, 2, 4],
    fcos_nms_thr=0.6,
    fcos_conv_layers=1,
    fcos_num_class=2,
    is_first_stage=False,
    is_second_stage=True)

# dataset settings
dataset_type = 'CharadesSTADataset'
root = 'data/CharadesSTA'
data_root = f'{root}/C3D_unit16_overlap0.5_merged/'
data_root_val = f'{root}/C3D_unit16_overlap0.5_merged/'
ann_file_train = f'{root}/Charades_sta_train.txt'
ann_file_val = f'{root}/Charades_sta_test.txt'
ann_file_test = f'{root}/Charades_sta_test.txt'

word2id_file = f'{root}/Charades_word2id.json'
fps_file = f'{root}/Charades_fps_dict.json'
duration_file = f'{root}/Charades_duration.json'
num_frames_file = f'{root}/Charades_frames_info.json'
window_size = 16
ft_overlap = 0.5

train_pipeline = [
    dict(
        type='PackLocalizationInputs',
        keys=('gt_bbox', 'proposals'),
        meta_keys=('vid_name', 'query_tokens', 'query_length', 'num_proposals',
                   'num_frames'))
]

val_pipeline = train_pipeline
test_pipeline = val_pipeline

train_dataloader = dict(
    batch_size=16,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    drop_last=True,
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_train,
        data_prefix=dict(video=data_root),
        pipeline=train_pipeline,
        word2id_file=word2id_file,
        fps_file=fps_file,
        duration_file=duration_file,
        num_frames_file=num_frames_file,
        window_size=window_size,
        ft_overlap=ft_overlap),
)

val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    drop_last=True,
    dataset=dict(
        type=dataset_type,
        ann_file=ann_file_val,
        data_prefix=dict(video=data_root),
        pipeline=val_pipeline,
        word2id_file=word2id_file,
        fps_file=fps_file,
        duration_file=duration_file,
        num_frames_file=num_frames_file,
        window_size=window_size,
        ft_overlap=ft_overlap),
)
test_dataloader = val_dataloader

max_epochs = 10
train_cfg = dict(
    type='EpochBasedTrainLoop',
    max_epochs=max_epochs,
    val_begin=1,
    val_interval=1)

val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

val_evaluator = dict(type='RecallatTopK', topK_list=(1, 5), threshold=0.5)
test_evaluator = val_evaluator

optim_wrapper = dict(
    # much lower lr than the first stage (1e-5 vs 1e-3) for fine-tuning
    optimizer=dict(type='Adam', lr=1e-5),
    clip_grad=dict(max_norm=5, norm_type=2))

find_unused_parameters = True
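
Before launching a multi-GPU run, it can be worth loading a config to confirm that the stage flags and optimizer resolve as expected. A minimal sketch, assuming an environment with mmengine installed (the printed fields are just examples):

```shell
python3 -c "
from mmengine.config import Config

cfg = Config.fromfile(
    'configs/localization/drn/drn_2xb16-4096-10e_c3d-feature_second.py')
print(cfg.model.is_first_stage, cfg.model.is_second_stage)  # False True
print(cfg.optim_wrapper.optimizer.lr)                       # 1e-05
"
```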