LlamaTrace (Merged LoRA + Base)

Model Information

  • Base Model: meta-llama/Meta-Llama-3-8B
  • Fine-tuning Method: LoRA (Low-Rank Adaptation), merged into the base weights (see the merge sketch below)
  • Training Objective: Network traffic analysis, anomaly detection, and syslog/pcap summarization
  • Tokenizer: inherited from the base model
  • Weights: 8B parameters, F32, Safetensors format
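
Because the published checkpoint already contains the LoRA weights folded into the base model, no adapter loading is needed at inference time. For reference, here is a minimal sketch of how such a merge is typically performed with the peft library; the adapter repository name below is hypothetical, used for illustration only:

from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Attach the LoRA adapter (hypothetical repo name, for illustration only).
peft_model = PeftModel.from_pretrained(base, "choihyuunmin/LLaMa-PcapLog-adapter")

# Fold the low-rank updates into the base weights and drop the adapter wrappers.
merged = peft_model.merge_and_unload()
merged.save_pretrained("llama-pcaplog-merged")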

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("choihyuunmin/LLaMa-PcapLog")
tokenizer = AutoTokenizer.from_pretrained("choihyuunmin/LLaMa-PcapLog")

# Prompt the model with packet or log text to analyze.
input_text = "Analyze the network packet below:\n"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
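
Since the training objective also covers syslog summarization, the prompt can embed a raw log line directly. A brief illustration reusing the model and tokenizer loaded above; the log line and prompt wording are invented for this example, not taken from the training data:

# Illustrative syslog entry (invented; not from the training data).
log_line = (
    "Jul  3 10:15:02 gateway sshd[2143]: Failed password for invalid user "
    "admin from 203.0.113.7 port 53211 ssh2"
)
prompt = f"Summarize the following syslog entry and flag any anomalies:\n{log_line}\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))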