JSSP LLaMA 8B Fine-tuned Model

Model Description

A LLaMA 8B model fine-tuned for Job Shop Scheduling Problem (JSSP) optimization.

Training Details

  • Base Model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
  • LoRA Rank: 64
  • Epochs: 4
  • Max Sequence Length: 40,000
  • Dataset: ACCORD
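
The settings above correspond to a LoRA fine-tune via Unsloth. A minimal sketch of how they might be wired together is shown below; the LoRA alpha, target modules, and any detail not listed above are assumptions for illustration, not settings taken from this card.

import torch
from unsloth import FastLanguageModel

# Load the 4-bit base model listed above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=40000,
    load_in_4bit=True,
    dtype=torch.bfloat16,
)

# Attach LoRA adapters with rank 64 as listed above; alpha and the
# target module list are illustrative defaults, not confirmed values.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)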

Usage

import torch
from unsloth import FastLanguageModel

# Load the fine-tuned model and tokenizer in 4-bit precision.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HYUNJINI/pfsp_test_1",
    max_seq_length=40000,
    load_in_4bit=True,
    dtype=torch.bfloat16,
)

# Switch Unsloth into its optimized inference mode.
FastLanguageModel.for_inference(model)
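
After loading, inference can follow the standard Llama 3.1 Instruct chat template. The call below is a hypothetical example; the prompt text and generation settings are placeholders, not part of this card.

# Illustrative JSSP prompt; replace with an actual problem instance.
messages = [
    {"role": "user", "content": "Schedule the following JSSP instance: ..."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))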