
Quantization made by Richard Erkhov.

Github

Discord

Request more models

llava-phi3-3.8b-lora - bnb 8bits
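
The 8-bit weights can typically be loaded with 🤗 Transformers and bitsandbytes. Below is a minimal sketch, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed and the checkpoint uses a Transformers-compatible architecture; the repository ID shown is a placeholder, not the confirmed name of this upload.

```python
# Minimal sketch: loading an 8-bit bitsandbytes quantization with Transformers.
# The repo ID below is a placeholder; substitute the actual repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "RichardErkhov/llava-phi3-3.8b-lora-8bits"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    # For a checkpoint already saved in 8-bit, the quantization config is usually
    # embedded in config.json; passing it explicitly is harmless but optional.
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```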

Original model description:

base_model: microsoft/Phi-3-mini-128k-instruct
inference: false

Model Card for llava-phi3-3.8b-lora

A LLaVA (Large Language and Vision Assistant) version of microsoft/Phi-3-mini-128k-instruct.

Model Details

Trained on the liuhaotian/LLaVA-Instruct-150K dataset. The model has around 4.1B parameters, of which 3.8B come from the base model.
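
For reference, multimodal inference with a LLaVA-style model looks roughly like the sketch below, assuming the checkpoint is (or has been converted to) the Transformers LLaVA format; the repository ID, image URL, and prompt template are illustrative assumptions, not taken from this card.

```python
# Sketch of LLaVA-style image + text inference, assuming a Transformers LLaVA
# checkpoint (LlavaForConditionalGeneration). Repo ID and prompt are placeholders.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo_id = "org/llava-phi3-3.8b-lora"  # placeholder

processor = AutoProcessor.from_pretrained(repo_id)
model = LlavaForConditionalGeneration.from_pretrained(repo_id, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)
# Prompt format is an assumption based on Phi-3 chat formatting; "<image>" marks
# where the image features are inserted.
prompt = "<|user|>\n<image>\nWhat is shown in this image?<|end|>\n<|assistant|>\n"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```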

Safetensors: ~4B params (tensor types F32, F16, I8)