---
title: Generative Modeling for Multi-task Visual Learning
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. In this paper, motivated by multi-task learning of shareable feature representations, we consider a novel problem of learning a shared generative model that is useful across various visual perception tasks. Correspondingly, we propose a general multi-task oriented generative modeling (MGM) framework, by coupling a discriminative multi-task network with a generative network. While it is challenging to synthesize both RGB images and pixel-level annotations in multi-task scenarios, our framework enables us to use synthesized images paired with only weak annotations (i.e., image-level scene labels) to facilitate multiple visual tasks. Experimental evaluation on challenging multi-task benchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework improves the performance of all the tasks by large margins, consistently outperforming state-of-the-art multi-task approaches in different sample-size regimes.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: bao22c
month: 0
tex_title: Generative Modeling for Multi-task Visual Learning
firstpage: 1537
lastpage: 1554
page: 1537-1554
order: 1537
cycles: false
bibtex_author: Bao, Zhipeng and Hebert, Martial and Wang, Yu-Xiong
author:
- given: Zhipeng
  family: Bao
- given: Martial
  family: Hebert
- given: Yu-Xiong
  family: Wang
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: '162'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---