---
base_model:
  - HuggingFaceM4/idefics2-8b
language:
  - en
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# Retrospective Learning from Interactions

This repository contains the `lil-lab/respect` model, based on the ACL paper *Retrospective Learning from Interactions*. For more resources, see https://lil-lab.github.io/respect and https://github.com/lil-lab/respect.

## Sample Usage

To get started with the model, follow these steps:

### 1. Setting up the Environment

Prepare your conda environment:

```shell
conda create -n respect python=3.9.18
conda activate respect
pip install -r requirements.txt
pip install -e .
```

### 2. Download Data

```python
from datasets import load_dataset

ds = load_dataset("lil-lab/respect", name="turn", split="train")
```

### 3. Load Model Checkpoints

Download checkpoints and load the model using transformers and peft:

```python
import torch
from transformers import Idefics2ForConditionalGeneration
from peft import PeftModel

checkpoint = "HuggingFaceM4/idefics2-8b"
model_id = "lil-lab/respect"

# Load the Idefics2 base model, then attach the fine-tuned adapter.
model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")
```

## Reproducibility

To generate the plots from the paper, run `analysis/plots.ipynb` in the GitHub repository.