
Summarization Model (T5): Ready for Hugging Face

This repo contains:

  • data/train.csv: your training data (two columns: text → label)
  • train.py: a compact training script for fine-tuning FLAN-T5 for summarization
  • requirements.txt: minimal dependencies

Quick Start (Locally or on a Hugging Face Space/Notebook)

pip install -r requirements.txt
python train.py --model_name google/flan-t5-small --epochs 3 --batch_size 8 --output_dir checkpoint

Tip: If you have a GPU, add --fp16 for faster training.

Customizing

  • Change --input_col and --target_col if your CSV headers differ.
  • Adjust --max_input_len and --max_target_len based on your data.
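To pick sensible values for --max_input_len and --max_target_len, it helps to look at the token-length distribution of your data. The sketch below is illustrative and not part of train.py; the CSV path and column names match the defaults above, and the helper name `length_percentile` is an assumption.

```python
# Suggest length limits from the 95th-percentile token length of each column.
import csv

def length_percentile(texts, tokenize, percentile=95):
    """Return the token length covering `percentile` percent of `texts`."""
    lengths = sorted(len(tokenize(t)) for t in texts)
    idx = min(len(lengths) - 1, int(len(lengths) * percentile / 100))
    return lengths[idx]

if __name__ == "__main__":
    # Use the model's own tokenizer for exact counts.
    from transformers import AutoTokenizer
    tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
    with open("data/train.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    enc = lambda t: tok(t).input_ids
    print("suggested --max_input_len :", length_percentile((r["text"] for r in rows), enc))
    print("suggested --max_target_len:", length_percentile((r["label"] for r in rows), enc))
```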

Export the Model

The trained model is saved under checkpoint/. Upload that folder to a Hugging Face Model repo. Set your model's Inference widget to Text2Text and use:

Pipeline tag: text2text-generation
Library: transformers
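Uploading can also be done from Python with huggingface_hub. A minimal sketch, assuming you have run `huggingface-cli login`; "your-username/your-model" is a placeholder repo id:

```python
# Push the trained checkpoint folder to a Hub model repo.
from huggingface_hub import HfApi

def push_checkpoint(repo_id, folder="checkpoint"):
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)          # no-op if the repo exists
    api.upload_folder(folder_path=folder, repo_id=repo_id)

# push_checkpoint("your-username/your-model")
```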

Inference (Command Line)

After training:

python checkpoint/inference.py checkpoint < input.txt
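The same inference can be done directly from Python with a transformers pipeline. This is a sketch, assuming the fine-tuned model was saved under checkpoint/ as described above; the `summarize` helper is illustrative:

```python
# Load the fine-tuned checkpoint and generate a summary for one input.
from transformers import pipeline

def summarize(text, model_dir="checkpoint", max_new_tokens=64):
    summarizer = pipeline("text2text-generation", model=model_dir)
    return summarizer(text, max_new_tokens=max_new_tokens)[0]["generated_text"]

# Example usage:
# print(summarize(open("input.txt", encoding="utf-8").read()))
```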

Data Schema

  • Input column: text
  • Target (summary) column: label

If your CSV uses different headers, pass the matching --input_col and --target_col flags to train.py (see Customizing).
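For reference, a minimal CSV matching this schema can be produced with the standard library; the example row and the file name train_sample.csv are illustrative:

```python
# Write a one-row CSV with the expected `text` and `label` columns,
# where `label` holds the reference summary.
import csv

rows = [
    {"text": "The meeting covered budget, hiring, and the Q3 roadmap in detail.",
     "label": "Meeting recap: budget, hiring, Q3 roadmap."},
]
with open("train_sample.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writeheader()
    writer.writerows(rows)
```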
