How to use luzimu/WebGen-LM-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="luzimu/WebGen-LM-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luzimu/WebGen-LM-7B")
model = AutoModelForCausalLM.from_pretrained("luzimu/WebGen-LM-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use luzimu/WebGen-LM-7B with vLLM:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "luzimu/WebGen-LM-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
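Because the vLLM server speaks the OpenAI chat-completions protocol, it can also be called from Python. A minimal sketch using the official `openai` client (the base URL matches vLLM's default port 8000; the placeholder API key and the example prompt are illustrative assumptions, not part of this card):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server; vLLM ignores the API key,
# but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="luzimu/WebGen-LM-7B",
    messages=[
        {"role": "user", "content": "Generate a simple HTML landing page for a bakery."},
    ],
)
print(response.choices[0].message.content)
```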
How to use luzimu/WebGen-LM-7B with SGLang:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "luzimu/WebGen-LM-7B" \
  --host 0.0.0.0 \
  --port 30000
```
```bash
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Alternatively, run the SGLang server in Docker:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "luzimu/WebGen-LM-7B" \
    --host 0.0.0.0 \
    --port 30000
```
The container exposes the same OpenAI-compatible endpoint on port 30000, so the curl and Python calls above work unchanged.

How to use luzimu/WebGen-LM-7B with Docker Model Runner:
```bash
docker model run hf.co/luzimu/WebGen-LM-7B
```
WebGen-LM was trained on Bolt.diy trajectories generated from a subset of the training set of WebGen-Bench (🤗 luzimu/WebGen-Bench). It was introduced in the paper WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch.
The training data and code are available at WebGen-Bench (GitHub).
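The benchmark data itself can be pulled with the `datasets` library. A minimal sketch (the available splits and column names are not documented in this card, so inspect the loaded dataset):

```python
# pip install datasets
from datasets import load_dataset

# Load the WebGen-Bench dataset referenced above; splits and fields
# are not specified here, so print the dataset to inspect its layout.
ds = load_dataset("luzimu/WebGen-Bench")
print(ds)
```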
The WebGen-LM family of models is as follows:
| Models | HF Links |
|---|---|
| WebGen-LM-7B | 🤗 luzimu/WebGen-LM-7B |
| WebGen-LM-14B | 🤗 luzimu/WebGen-LM-14B |
| WebGen-LM-32B | 🤗 luzimu/WebGen-LM-32B |
You can use this model with the Hugging Face transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "luzimu/WebGen-LM-7B"  # This model card refers to WebGen-LM-7B

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example for website generation
user_prompt = "Generate a simple HTML page with a heading 'Hello, World!' and a paragraph of lorem ipsum text."
messages = [
    {"role": "user", "content": user_prompt}
]

# Apply the chat template for the instruction-following format
text_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate output (pass the attention mask along with the input ids)
model_inputs = tokenizer(text_input, return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.01,
    top_k=50,
    top_p=0.95,
)

# Decode and print the generated code
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)

# Example using the Hugging Face pipeline for simpler inference
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator(user_prompt, max_new_tokens=500, do_sample=True, temperature=0.01, top_k=50, top_p=0.95)
print(result[0]["generated_text"])
```
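For long outputs such as full HTML pages, it can help to stream tokens to the console as they are generated instead of waiting for the whole completion. A minimal sketch using transformers' built-in `TextStreamer`, reusing the `model`, `tokenizer`, and `messages` objects from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

model.generate(**inputs, max_new_tokens=500, streamer=streamer)
```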
If you find our project useful, please cite:
```bibtex
@misc{lu2025webgenbenchevaluatingllmsgenerating,
  title={WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch},
  author={Zimu Lu and Yunqiao Yang and Houxing Ren and Haotian Hou and Han Xiao and Ke Wang and Weikang Shi and Aojun Zhou and Mingjie Zhan and Hongsheng Li},
  year={2025},
  eprint={2505.03733},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.03733},
}

@misc{lu2025webgenagentenhancinginteractivewebsite,
  title={WebGen-Agent: Enhancing Interactive Website Generation with Multi-Level Feedback and Step-Level Reinforcement Learning},
  author={Zimu Lu and Houxing Ren and Yunqiao Yang and Ke Wang and Zhuofan Zong and Junting Pan and Mingjie Zhan and Hongsheng Li},
  year={2025},
  eprint={2509.22644},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.22644},
}
```