
LS-W4-270M-Micro-T1

Model Description

LS-W4-270M-Micro-T1 is the first model in the Web4 Localized Services (W4-LS) series, designed specifically for highly efficient, on-device text generation. As a Micro Language Model (Micro-LM), it features a compact architecture with a total of 540 million parameters (2 × 270 million).
This model is a causal language model specialized in generating social media captions. It prioritizes inference speed and minimal resource usage, making it ideal for client-side execution.

Key Features πŸš€

  • Base Architecture: Built on top of Gemma 3 270M.
  • Micro-LM Architecture: Optimized for low-latency performance on consumer devices.
  • Social Media Specialization: Trained to generate engaging and contextually relevant social media captions.
  • Serverless Operation: A core innovation of this model is its ability to run entirely locally within a web browser or on a client device without requiring a server. This ensures full privacy and offline functionality.

How to Use: Serverless Deployment

The model is designed exclusively for serverless environments and cannot be executed using traditional Hugging Face inference endpoints.

Client-Side/On-Device Deployment Files

To run this model locally in a browser or on a device, download the required .task and .tflite client-side deployment files from:

https://ai.web4.one
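The .task bundle format suggests deployment through Google's MediaPipe LLM Inference API, although the card does not name a specific runtime, so treat the following TypeScript sketch as an assumption rather than the official integration path. The model path, file name, and generation parameters below are illustrative placeholders, not values confirmed by this card.

```ts
import { FilesetResolver, LlmInference } from '@mediapipe/tasks-genai';

async function generateCaption(prompt: string): Promise<string> {
  // Fetch the WASM runtime for MediaPipe GenAI tasks (CDN URL is illustrative).
  const genaiFileset = await FilesetResolver.forGenAiTasks(
    'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm'
  );

  // Load the locally hosted .task bundle downloaded from ai.web4.one.
  // The file name here is a hypothetical placeholder.
  const llm = await LlmInference.createFromOptions(genaiFileset, {
    baseOptions: { modelAssetPath: '/models/LS-W4-270M-Micro-T1.task' },
    maxTokens: 256,   // assumed generation settings; tune per use case
    topK: 40,
    temperature: 0.8,
  });

  // Inference runs entirely in the browser; no server round-trip is made.
  return llm.generateResponse(prompt);
}

generateCaption('Write a short, upbeat caption for a sunrise hiking photo.')
  .then((caption) => console.log(caption));
```

If the MediaPipe runtime is indeed the intended target, generateResponse also accepts an optional progress callback for streaming partial output, which suits interactive caption editors.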

Model Details

  • Model Name: LS-W4-270M-Micro-T1
  • Model Type: Causal Language Model (text generation)
  • Parameters: 540 million (2 × 270 million)
  • Base Model: Gemma 3 270M
  • Primary Task: Social Media Caption Generation (Serverless/Local Inference)
  • License: Same license as the base model, Gemma 3 270M

Training Details πŸ› οΈ

The model was fine-tuned specifically for the task of social media caption generation.

  • Training Data Size: Over 50,000 examples were used for fine-tuning.
  • Training Hardware: Fine-tuning was performed on a T4 GPU with 12 GB of RAM.
