Laserhun committed
Commit c1af734 · verified · 1 Parent(s): d500238

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +30 -6
  2. app.py +103 -0
  3. requirements.txt +9 -0
README.md CHANGED
@@ -1,12 +1,36 @@
  ---
- title: Gemma 3n Luau Demo
- emoji: 🐨
- colorFrom: purple
- colorTo: gray
+ title: Gemma 3n E4B Luau Generator
+ emoji: 🎮
+ colorFrom: blue
+ colorTo: purple
  sdk: gradio
- sdk_version: 5.44.1
+ sdk_version: 4.44.0
  app_file: app.py
  pinned: false
+ license: apache-2.0
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🎮 Gemma-3n-E4B Luau Code Generator
+
+ This Space hosts the Gemma-3n-E4B model (8B parameters, 4B runtime efficiency) fine-tuned on the Roblox Luau corpus.
+
+ ## Model Details
+ - **Base Model**: google/gemma-3n-E4B
+ - **Architecture**: 8B parameters with the runtime footprint of a 4B model
+ - **Fine-tuning Dataset**: Roblox/luau_corpus
+ - **Task**: Luau code generation for Roblox development
+ - **Model Repository**: [Laserhun/gemma-3n-E4B-luau-finetuned](https://huggingface.co/Laserhun/gemma-3n-E4B-luau-finetuned)
+
+ ## About Gemma-3n-E4B
+ Gemma-3n-E4B uses an architecture that delivers 8-billion-parameter model quality while maintaining the runtime efficiency of a 4-billion-parameter model.
+
+ ## Features
+ - Advanced Luau code generation
+ - Roblox-specific patterns and best practices
+ - Efficient memory usage despite the large parameter count
+ - High-quality code output
+
+ ## Training Details
+ - Fine-tuned on the Roblox Luau corpus
+ - Uses LoRA for efficient adaptation (see the sketch below)
+ - Optimized for code generation tasks
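The Training Details above mention LoRA adaptation but not the configuration used. The following is a minimal sketch of what such a setup could look like with `peft`; the rank, alpha, dropout, and target modules are illustrative assumptions, not values taken from the actual training run.

```python
# Hypothetical LoRA setup for adapting google/gemma-3n-E4B to the Luau corpus.
# All hyperparameters below are illustrative assumptions, not the published config.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-3n-E4B", device_map="auto")

lora_config = LoraConfig(
    r=16,                          # assumed adapter rank
    lora_alpha=32,                 # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```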
app.py ADDED
@@ -0,0 +1,103 @@
+ import gradio as gr
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel, PeftConfig
+
+ # Model configuration - Gemma-3n-E4B fine-tuned
+ MODEL_ID = "Laserhun/gemma-3n-E4B-luau-finetuned"
+ BASE_MODEL_ID = "google/gemma-3n-E4B"
+
+ print("Loading Gemma-3n-E4B fine-tuned model...")
+ try:
+     # Try loading as a PEFT adapter on top of the base model
+     peft_config = PeftConfig.from_pretrained(MODEL_ID)
+
+     # Load base model
+     base_model = AutoModelForCausalLM.from_pretrained(
+         BASE_MODEL_ID,
+         torch_dtype=torch.float16,
+         device_map="auto",
+         trust_remote_code=True,
+         ignore_mismatched_sizes=True
+     )
+
+     # Load PEFT adapters
+     model = PeftModel.from_pretrained(base_model, MODEL_ID)
+     print("Loaded Gemma-3n-E4B as PEFT model")
+ except Exception:
+     # Fall back to loading the repository as a full (non-PEFT) model
+     model = AutoModelForCausalLM.from_pretrained(
+         MODEL_ID,
+         torch_dtype=torch.float16,
+         device_map="auto",
+         trust_remote_code=True
+     )
+     print("Loaded as regular model")
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ if not tokenizer.pad_token:
+     tokenizer.pad_token = tokenizer.eos_token
+
+ def generate_luau_code(prompt, max_length=512, temperature=0.7, top_p=0.95):
+     """Generate Luau code using the Gemma-3n-E4B model."""
+
+     # Format the request with Gemma's turn markers
+     formatted_prompt = f"<start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model\n"
+
+     # Tokenize
+     inputs = tokenizer(formatted_prompt, return_tensors="pt", truncation=True, max_length=512)
+
+     # Move inputs to the model's device
+     inputs = {k: v.to(model.device) for k, v in inputs.items()}
+
+     # Generate
+     with torch.no_grad():
+         outputs = model.generate(
+             **inputs,
+             max_new_tokens=max_length,
+             temperature=temperature,
+             top_p=top_p,
+             do_sample=True,
+             pad_token_id=tokenizer.pad_token_id,
+             eos_token_id=tokenizer.eos_token_id
+         )
+
+     # Decode
+     generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+     # Extract the model's turn from the decoded text
+     if "<start_of_turn>model" in generated_text:
+         response = generated_text.split("<start_of_turn>model")[-1].strip()
+     else:
+         response = generated_text[len(formatted_prompt):].strip()
+
+     return response
+
+ # Create Gradio interface
+ iface = gr.Interface(
+     fn=generate_luau_code,
+     inputs=[
+         gr.Textbox(
+             lines=4,
+             placeholder="Describe the Luau code you want to generate...",
+             label="Enter your Luau code request"
+         ),
+         gr.Slider(minimum=100, maximum=1000, value=512, step=50, label="Max Length"),
+         gr.Slider(minimum=0.1, maximum=1.0, value=0.7, step=0.1, label="Temperature"),
+         gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top P")
+     ],
+     outputs=gr.Code(language="lua", label="Generated Luau Code"),
+     title="🎮 Gemma-3n-E4B Luau Code Generator",
+     description="Generate Roblox Luau code using the Gemma-3n-E4B model (8B parameters, 4B runtime) fine-tuned on the Luau corpus.",
+     examples=[
+         ["Create a smooth part movement function with easing", 512, 0.7, 0.95],
+         ["Write a door script with click interaction and smooth animation", 512, 0.7, 0.95],
+         ["Generate a complete inventory system with add, remove, and display functions", 700, 0.7, 0.95],
+         ["Create a spawning system for objects at random positions", 400, 0.7, 0.95],
+         ["Write a leaderboard system that saves player scores", 600, 0.7, 0.95]
+     ],
+     theme=gr.themes.Soft()
+ )
+
+ if __name__ == "__main__":
+     iface.launch()
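app.py builds the Gemma turn markers by hand. A roughly equivalent prompt can usually be produced with the tokenizer's chat template, assuming the tokenizer shipped with this checkpoint bundles one; a short sketch:

```python
# Sketch: building the same kind of prompt via the tokenizer's chat template.
# Assumes the tokenizer for this checkpoint ships Gemma's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Laserhun/gemma-3n-E4B-luau-finetuned")

messages = [{"role": "user", "content": "Create a smooth part movement function with easing"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the model-turn header so generation continues as the model
    return_tensors="pt",
)
print(tokenizer.decode(input_ids[0]))
```

Manual formatting, as in app.py, avoids depending on a template being present; the template route avoids typos in the turn markers when the template matches the training format.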
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ transformers>=4.35.0
+ torch>=2.0.0
+ gradio>=4.0.0
+ accelerate
+ peft
+ sentencepiece
+ protobuf
+ bitsandbytes
+ timm