LAVIS - A One-stop Library for Language-Vision Intelligence
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
DeepSeek-VL: Towards Real-World Vision-Language Understanding
Janus-Series: Unified Multimodal Understanding and Generation Models
[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
[TPAMI2024] Code and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset
Official Repository of paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding
[CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
Recognize Any Regions
[MedIA'24] FLAIR: A Foundation LAnguage-Image model of the Retina for fundus image understanding.
Official repository for "CLIP model is an Efficient Continual Learner".
PyTorch implementation of ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation"
This repository provides a comprehensive collection of research papers on multimodal representation learning, all of which are cited and discussed in the recently accepted survey: https://dl.acm.org/doi/abs/10.1145/3617833
Multi-Aspect Vision Language Pretraining - CVPR2024
📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS)
Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral]
[ICLR2024] Code and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023)
Bias-to-Text: Debiasing Unknown Visual Biases through Language Interpretation
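For readers new to the first entry in this list, LAVIS, the following is a minimal image-captioning sketch based on its documented load_model_and_preprocess quickstart; the image path is a placeholder, and the blip_caption / base_coco checkpoint is just one illustrative choice from the library's model zoo.

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

# Use a GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "example.jpg" is a placeholder path; substitute any local image.
raw_image = Image.open("example.jpg").convert("RGB")

# Load a captioning model together with its matching image preprocessors.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

# Preprocess the image, add a batch dimension, and generate a caption.
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))

Note that this pattern is specific to LAVIS; the other repositories listed above ship their own loading and inference interfaces.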