
LLAVA: LARGE LANGUAGE AND VISION ASSISTANT


VISUAL INSTRUCTION TUNING

NEURIPS 2023 (ORAL)

Haotian Liu*, Chunyuan Li*, Qingyang Wu, Yong Jae Lee
▶ University of Wisconsin-Madison ▶ Microsoft Research ▶ Columbia University
*Equal Contribution
[arXiv] [arXiv (LLaVA-1.5)] [Code] [Demo] [Dataset] [Model]

🔥[NEW!] LLAVA-1.5 ACHIEVES SOTA ON 11 BENCHMARKS WITH ONLY SIMPLE
MODIFICATIONS TO THE ORIGINAL LLAVA: IT USES ONLY PUBLIC DATA, COMPLETES
TRAINING IN ~1 DAY ON A SINGLE 8-A100 NODE, AND SURPASSES METHODS THAT USE
BILLION-SCALE DATA.

LLAVA IS A NOVEL END-TO-END TRAINED LARGE MULTIMODAL MODEL THAT COMBINES
A VISION ENCODER AND VICUNA FOR GENERAL-PURPOSE VISUAL AND LANGUAGE
UNDERSTANDING, ACHIEVING IMPRESSIVE CHAT CAPABILITIES IN THE SPIRIT OF THE
MULTIMODAL GPT-4 AND SETTING A NEW STATE-OF-THE-ART ACCURACY ON SCIENCE QA.





ABSTRACT

Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks in
the language domain, but the idea is less explored in the multimodal field.

 1. Multimodal Instruct Data. We present the first attempt to use language-only
    GPT-4 to generate multimodal language-image instruction-following data.
 2. LLaVA Model. We introduce LLaVA (Large Language-and-Vision Assistant), an
    end-to-end trained large multimodal model that connects a vision encoder and
    LLM for general-purpose visual and language understanding.
 3. Performance. Our early experiments show that LLaVA demonstrates impressive
    multimodal chat abilities, sometimes exhibiting the behaviors of multimodal
    GPT-4 on unseen images/instructions, and yields an 85.1% relative score
    compared with GPT-4 on a synthetic multimodal instruction-following dataset.
    When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new
    state-of-the-art accuracy of 92.53%.
 4. Open-source. We make the GPT-4-generated visual instruction tuning data,
    our model, and the code base publicly available.




MULTIMODAL INSTRUCTION-FOLLOWING DATA

Based on the COCO dataset, we interact with language-only GPT-4 and collect
158K unique language-image instruction-following samples in total, including 58K
in conversations, 23K in detailed description, and 77K in complex reasoning.
Please check out "LLaVA-Instruct-150K" on the [HuggingFace Dataset] page.

Data file name               File Size   Sample Size
conversation_58k.json        126 MB      58K
detail_23k.json              20.5 MB     23K
complex_reasoning_77k.json   79.6 MB     77K
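
A minimal sketch for inspecting the released files locally, assuming they have
already been downloaded from the HuggingFace dataset page and that each file is
a single JSON array of sample records (check the files themselves for the exact
per-record fields):

  import json

  # Each released file is expected to be one JSON list of samples, so len()
  # should match the sample counts in the table above.
  for name in ("conversation_58k.json",
               "detail_23k.json",
               "complex_reasoning_77k.json"):
      with open(name, "r", encoding="utf-8") as f:
          samples = json.load(f)
      print(f"{name}: {len(samples)} samples, "
            f"first record keys: {sorted(samples[0].keys())}")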



For each subset, we visualize the root noun-verb pairs for the instruction and
response. For each chart, please click the link for the interactive page to
check out the noun-verb pairs whose frequency is higher than the given number
(a sketch of one way to extract such pairs follows the chart links below).

 * Instruction: Conversation [0, 20, 50]
 * Instruction: Detailed Description [0]
 * Instruction: Complex Reasoning [0, 20, 50]
 * Response: Conversation [0, 20, 50]
 * Response: Detailed Description [0, 20, 50]
 * Response: Complex Reasoning [0, 20, 50]
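
The page does not specify how the root noun-verb pairs are extracted. One
plausible sketch, following the convention of self-instruct-style analyses
(the use of spaCy and the helper name below are assumptions, not stated on the
page), takes the root verb of each sentence and its direct-object noun:

  import spacy
  from collections import Counter

  # Hypothetical root verb / direct-object noun extraction with spaCy;
  # requires: python -m spacy download en_core_web_sm
  nlp = spacy.load("en_core_web_sm")

  def root_verb_noun(text):
      doc = nlp(text)
      for token in doc:
          if token.dep_ == "ROOT" and token.pos_ == "VERB":
              objects = [c.lemma_ for c in token.children if c.dep_ == "dobj"]
              if objects:
                  return (token.lemma_, objects[0])
      return None  # no root verb with a direct object found

  # Toy usage: count pairs over a handful of instructions.
  pairs = Counter(filter(None, map(root_verb_noun, [
      "Describe the scene in detail.",
      "Identify the objects on the table.",
  ])))
  print(pairs.most_common())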


LLAVA: LARGE LANGUAGE-AND-VISION ASSISTANT

LLaVA connects the pre-trained CLIP ViT-L/14 visual encoder and the large
language model Vicuna using a simple projection matrix (a minimal sketch of this
connector appears below). We consider a two-stage instruction-tuning procedure:

 * Stage 1: Pre-training for Feature Alignment. Only the projection matrix is
   updated, based on a subset of CC3M.
 * Stage 2: Fine-tuning End-to-End. Both the projection matrix and the LLM are
   updated for two different use scenarios:
    * Visual Chat: LLaVA is fine-tuned on our generated multimodal
      instruction-following data for daily user-oriented applications.
    * Science QA: LLaVA is fine-tuned on this multimodal reasoning dataset for
      the science domain.

Please check out our [Model Zoo].
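
For intuition, here is a minimal, unofficial sketch of the connector described
above: a single trainable linear layer maps frozen CLIP ViT-L/14 patch features
into the LLM's embedding space. The class name, variable names, and dimensions
(1024 for ViT-L/14, 4096 for Vicuna-7B, 256 patch tokens) are illustrative
assumptions; see the released code for the actual implementation.

  import torch
  import torch.nn as nn

  class VisionLanguageProjector(nn.Module):
      """Illustrative stand-in for LLaVA's projection matrix (not the official code)."""
      def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
          super().__init__()
          # Stage 1 trains only this layer; Stage 2 also unfreezes the LLM.
          self.proj = nn.Linear(vision_dim, llm_dim)

      def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
          # patch_features: [batch, num_patches, vision_dim] from the frozen CLIP encoder
          # output: visual tokens in the LLM embedding space, [batch, num_patches, llm_dim]
          return self.proj(patch_features)

  # Toy usage with random features standing in for CLIP ViT-L/14 outputs.
  projector = VisionLanguageProjector()
  visual_tokens = projector(torch.randn(1, 256, 1024))
  print(visual_tokens.shape)  # torch.Size([1, 256, 4096])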






PERFORMANCE


VISUAL CHAT: TOWARDS BUILDING A MULTIMODAL GPT-4-LEVEL CHATBOT



An evaluation dataset with 30 unseen images is constructed: each image is
associated with three types of instructions (conversation, detailed description,
and complex reasoning). This leads to 90 new language-image instructions, on
which we test LLaVA and GPT-4, and use GPT-4 to rate their responses on a scale
from 1 to 10. The summed score and relative score per type are reported.
Overall, LLaVA achieves an 85.1% relative score compared with GPT-4, indicating
the effectiveness of the proposed self-instruct method in multimodal settings.
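
As a sketch of the aggregation step only (the judging prompts are part of the
released evaluation code, and the helper name below is hypothetical): the judge
scores both the candidate response and the GPT-4 reference from 1 to 10 for each
instruction, and the relative score is the ratio of the summed scores.

  from collections import defaultdict

  def relative_scores(ratings):
      """ratings: iterable of (instruction_type, candidate_score, reference_score)."""
      sums = defaultdict(lambda: [0, 0])
      for qtype, cand, ref in ratings:
          sums[qtype][0] += cand
          sums[qtype][1] += ref
      # Relative score per type = 100 * (sum of candidate scores) / (sum of reference scores).
      return {qtype: 100.0 * c / r for qtype, (c, r) in sums.items()}

  # Toy example with made-up 1-10 judge scores.
  print(relative_scores([("conversation", 8, 9),
                         ("detail", 7, 9),
                         ("reasoning", 9, 10)]))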


SCIENCE QA: NEW SOTA WITH THE SYNERGY OF LLAVA AND GPT-4



LLaVA alone achieves 90.92% accuracy. We use text-only GPT-4 as a judge to
predict the final answer based on its own previous answers and LLaVA's answers.
This "GPT-4 as judge" scheme yields a new SOTA of 92.53%.


EXAMPLES OF VISUAL INSTRUCTION FOLLOWING


VISUAL REASONING ON TWO EXAMPLES FROM OPENAI GPT-4 TECHNICAL REPORT




OPTICAL CHARACTER RECOGNITION (OCR)


 

User


Do you know who drew this painting?



LLaVA


The painting depicts a woman, commonly believed to be Mona Lisa, the famous artwork by Leonardo da Vinci. It is a portrait painting that showcases the woman's enigmatic smile and has become one of the most famous and iconic art pieces in the world. The original work is displayed in the Louvre Museum in Paris, and it is known for its intricate details, use of oil paint, and the artist's innovative techniques that contributed to its enduring appeal and mystery.




BIBTEX


  @misc{liu2023improvedllava,
    author    = {Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
    title     = {Improved Baselines with Visual Instruction Tuning},
    publisher = {arXiv:2310.03744},
    year      = {2023},
  }

  @inproceedings{liu2023llava,
    author    = {Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
    title     = {Visual Instruction Tuning},
    booktitle = {NeurIPS},
    year      = {2023}
  }
  


ACKNOWLEDGEMENT

This website is adapted from Nerfies, licensed under a Creative Commons
Attribution-ShareAlike 4.0 International License. We thank the LLaMA team for
giving us access to their models, and we thank open-source projects including
Alpaca and Vicuna.

Usage and License Notices: The data, code, and checkpoints are intended and
licensed for research use only. They are also restricted to uses that follow the
license agreements of CLIP, LLaMA, Vicuna, and GPT-4. The dataset is CC BY-NC 4.0
(allowing only non-commercial use), and models trained using the dataset should
not be used outside of research purposes.

Related Links: [REACT] [GLIGEN] [Computer Vision in the Wild (CVinW)]
[Instruction Tuning with GPT-4]