# Glowlytics Skin Analysis Models

On-device ML models for real-time skin health analysis. Part of the Glowlytics platform.

## Models

### 1. Skin Signals (skin_signals.onnx) – 0.6 MB

Unified multi-head EfficientNet-B0 that predicts 4 skin health signals from a single face image.

| Signal | Description | Correlation (r) | Val MAE |
|---|---|---|---|
| Structure | Pore visibility, texture uniformity | 0.913 | ~4.5 pts |
| Hydration | Skin moisture / dryness indicators | 0.889 | ~4.5 pts |
| Sun Damage | UV exposure signs, hyperpigmentation | 0.882 | ~4.5 pts |
| Elasticity | Skin firmness, age-related changes | 0.940 | ~4.5 pts |

**Training:** Knowledge distillation from Claude Sonnet 4 teacher labels on 4,717 images (UTKFace + FFHQ); 30 epochs on an A10G GPU.

**Input:** RGB image, 224×224, ImageNet-normalized
**Output:** 4 float values in [0, 1] (multiply by 100 for a 0-100 score)
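The distillation setup amounts to regressing the four heads onto teacher-assigned scores. A minimal sketch of one training step, assuming an MSE loss against teacher labels scaled to [0, 1] (the stand-in model, loss choice, and optimizer settings here are assumptions, not the actual training code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the multi-head model: maps images to 4 scores in [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 4), nn.Sigmoid())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One distillation step: regress predictions onto teacher-provided soft labels.
images = torch.rand(8, 3, 224, 224)   # batch of normalized face crops
teacher_scores = torch.rand(8, 4)     # teacher-assigned signal scores in [0, 1]

preds = model(images)
loss = nn.functional.mse_loss(preds, teacher_scores)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```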

### 2. Acne Lesion Detector (acne_detector.onnx) – 43 MB

YOLOv8s object detection model for identifying acne lesions (comedones, papules, pustules, nodules).

| Metric | Value |
|---|---|
| mAP50 | 0.473 |
| mAP50-95 | 0.199 |
| Precision | 0.503 |
| Recall | 0.486 |

**Training:** 150 epochs plus a 50-epoch fine-tune with a frozen backbone, on 1,843 images with 18,717 bounding-box annotations. Trained at 1280px resolution on an A10G GPU.

**Input:** RGB image, 640×640
**Output:** Bounding boxes with class labels and confidence scores

## Files

| File | Size | Description |
|---|---|---|
| skin_signals.onnx | 0.6 MB | Unified 4-signal model (ONNX) |
| skin_signals_best.pt | 19.8 MB | PyTorch weights |
| acne_detector.onnx | 43 MB | YOLOv8s lesion detector (ONNX) |
| acne_detector_best.pt | 22.7 MB | PyTorch weights |
| structure_model.onnx | – | Legacy v1 structure-only model |
| hydration_model.onnx | – | Legacy v1 hydration-only model |
| elasticity_model.onnx | – | Legacy v1 elasticity-only model |

## Usage

```python
import onnxruntime as ort
from PIL import Image
from torchvision import transforms

# Skin signals: resize/crop to 224x224 and apply ImageNet normalization,
# matching the preprocessing used during training.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("face.jpg").convert("RGB")
input_tensor = transform(img).unsqueeze(0).numpy()  # shape (1, 3, 224, 224)

sess = ort.InferenceSession("skin_signals.onnx")
scores = sess.run(None, {"image": input_tensor})[0][0]

signals = ["structure", "hydration", "sunDamage", "elasticity"]
for name, score in zip(signals, scores):
    print(f"{name}: {score * 100:.1f}/100")
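The acne detector's raw ONNX output needs decoding before the boxes are usable. A hedged NumPy sketch, assuming the standard Ultralytics YOLOv8 export layout of shape (1, 4 + num_classes, num_anchors) with xywh box centers (the layout, the four-class count, and the thresholds are assumptions — verify against the actual export):

```python
import numpy as np

def decode_yolo(raw, conf_thres=0.25, iou_thres=0.45):
    """Decode raw YOLOv8 output (1, 4 + num_classes, N) into (x1, y1, x2, y2, conf, cls) rows."""
    preds = raw[0].T                                  # (N, 4 + num_classes)
    boxes_xywh, cls_scores = preds[:, :4], preds[:, 4:]
    conf = cls_scores.max(axis=1)
    cls = cls_scores.argmax(axis=1)
    keep = conf >= conf_thres
    boxes_xywh, conf, cls = boxes_xywh[keep], conf[keep], cls[keep]

    # xywh (center) -> xyxy (corners)
    x, y, w, h = boxes_xywh.T
    boxes = np.stack([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=1)

    # Greedy non-maximum suppression, highest confidence first
    order = conf.argsort()[::-1]
    picked = []
    while order.size:
        i = order[0]
        picked.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thres]
    return np.hstack([boxes[picked], conf[picked, None], cls[picked, None]])

# Synthetic smoke test: two heavily overlapping class-0 boxes and one class-1 box
raw = np.zeros((1, 8, 3))
raw[0, :4, 0], raw[0, 4, 0] = [100, 100, 50, 50], 0.9
raw[0, :4, 1], raw[0, 4, 1] = [102, 102, 50, 50], 0.8   # suppressed by NMS
raw[0, :4, 2], raw[0, 5, 2] = [300, 300, 40, 40], 0.7
detections = decode_yolo(raw)
```

In production, the Ultralytics `YOLO` class handles this decoding automatically; a sketch like this is only needed when running the raw ONNX graph through `onnxruntime` directly.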

## Architecture

- **Skin Signals:** EfficientNet-B0 backbone (ImageNet-pretrained) with shared hidden layers (1280 → 512 → 256) and 4 independent signal heads (256 → 64 → 1 with sigmoid). Total: 4.86M parameters.
- **Acne Detector:** YOLOv8s with cosine LR schedule, AdamW optimizer, and heavy augmentation. Phase 1: full training. Phase 2: frozen-backbone fine-tuning.

## License

MIT
