---
title: Wound Analysis V2
emoji: 🩹
colorFrom: yellow
colorTo: pink
sdk: gradio
sdk_version: 5.42.0
app_file: app.py
pinned: false
---
# Wound Analysis LE

## 🩹 Project Overview
Wound Analysis LE is an advanced medical imaging tool for automated wound assessment using deep learning. It provides:
- Wound classification (type identification)
- Depth estimation (3D wound structure)
- Segmentation (precise wound area extraction)
- Severity analysis (quantitative and AI-powered clinical assessment)
The system is built for research and educational purposes, integrating state-of-the-art computer vision models and a user-friendly Gradio interface.
## 🚀 Features & Workflow
- Wound Classification: Identifies wound type using a vision transformer model.
- Depth Estimation: Generates depth maps and 3D visualizations from 2D images using DepthAnythingV2 (DINOv2 + DPT).
- Segmentation: Extracts wound regions using deep learning models (Deeplabv3+, FCN, SegNet, Unet).
- Severity Analysis: Computes wound area, depth, volume, and provides AI-powered medical assessment (Gemini AI integration).
- Interactive Gradio App: Step-by-step workflow with visualization, overlays, and downloadable results.
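The severity metrics listed above (area, depth, volume) reduce to simple aggregations over the segmentation mask and depth map. A minimal NumPy sketch, assuming a binary mask aligned with the depth map and a hypothetical `pixel_area_cm2` calibration constant (the app's actual calibration may differ):

```python
import numpy as np

def severity_metrics(mask, depth_map, pixel_area_cm2=0.01):
    """Compute wound area (cm^2), mean/max depth, and approximate volume."""
    mask = mask.astype(bool)
    area_cm2 = mask.sum() * pixel_area_cm2
    wound_depths = depth_map[mask]          # depth values inside the wound
    mean_depth = float(wound_depths.mean()) if wound_depths.size else 0.0
    max_depth = float(wound_depths.max()) if wound_depths.size else 0.0
    # Volume approximated as the sum of per-pixel depth columns
    volume_cm3 = float(wound_depths.sum()) * pixel_area_cm2
    return {"area_cm2": area_cm2, "mean_depth_cm": mean_depth,
            "max_depth_cm": max_depth, "volume_cm3": volume_cm3}
```

The AI-powered assessment step would then pass these numbers, together with the classification result, to the Gemini prompt.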
## 🏗️ Model Architecture

### Segmentation Models
- Deeplabv3+: Encoder-decoder with atrous convolutions for semantic segmentation.
- FCN (VGG16-16s): Fully convolutional network for pixel-wise segmentation.
- SegNet: Encoder-decoder architecture for efficient segmentation.
- Unet (multiple variants): U-shaped architecture for biomedical image segmentation.
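All four architectures share the same post-processing: threshold the network's per-pixel probability map into a binary mask, then blend a colored overlay onto the input image. A NumPy-only sketch; the function names and the 0.5 threshold are illustrative assumptions, not the repository's exact code:

```python
import numpy as np

def prob_to_mask(prob_map, threshold=0.5):
    """Binarize a per-pixel wound probability map."""
    return (prob_map >= threshold).astype(np.uint8)

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.4):
    """Alpha-blend `color` over `image` (uint8 RGB) wherever `mask` is 1."""
    out = image.astype(np.float32).copy()
    wound = mask.astype(bool)
    out[wound] = (1 - alpha) * out[wound] + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)
```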
### Depth Estimation
- DepthAnythingV2: Combines DINOv2 vision transformer backbone with DPT head for monocular depth prediction.
- DINOv2: Self-supervised vision transformer for feature extraction.
- DPT: Dense Prediction Transformer for pixel-wise depth estimation.
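The DPT head outputs floating-point relative depth, which must be normalized to 8-bit before it can be displayed or colormapped. A small sketch of that step (NumPy only; the app presumably applies a colormap on top):

```python
import numpy as np

def depth_to_uint8(depth):
    """Min-max normalize a float depth map to [0, 255] uint8 for display."""
    depth = depth.astype(np.float32)
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min < 1e-8:              # flat map: avoid divide-by-zero
        return np.zeros_like(depth, dtype=np.uint8)
    return ((depth - d_min) / (d_max - d_min) * 255).astype(np.uint8)
```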
### Classification
- Vision Transformer (ViT): Used for wound type classification (via HuggingFace Transformers).
## 🛠️ Installation & Requirements

- Clone the repository:
  ```bash
  git clone <repo-url>
  cd Wound-Analysis-LE
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
  Key dependencies: `gradio`, `torch`, `tensorflow`, `opencv-python`, `transformers`, `open3d`, `plotly`, `google-generativeai`, etc.
- Download model weights:
  - The app will auto-download required weights (e.g., DINOv2, segmentation models) on first run if not present.
## 💻 Usage Instructions

### Run the Gradio App

```bash
python app.py
```

- Access the app at: http://localhost:7860

### Segmentation Tool (Standalone)

```bash
python temp_files/segmentation_app.py
```
### Workflow

1. Upload a wound image
2. Classify: get wound type and initial AI analysis
3. Depth Estimation: generate depth map and 3D visualization
4. Segmentation: auto-segment the wound area
5. Severity Analysis: quantitative and AI-powered report
6. Download: export masks, overlays, and 3D data
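The steps above chain naturally into a single pipeline. A pure-Python sketch with stub functions standing in for the real models (all names and return values here are hypothetical placeholders, not the app's API):

```python
def classify(image):           # ViT wound-type classifier stands here
    return "pressure_ulcer"

def estimate_depth(image):     # DepthAnythingV2 stands here
    return [[0.2 for _ in row] for row in image]

def segment(image):            # Deeplabv3+/Unet stands here
    return [[1 if px > 0 else 0 for px in row] for row in image]

def analyze(image):
    """Run the full classify -> depth -> segment -> severity workflow."""
    wound_type = classify(image)
    depth = estimate_depth(image)
    mask = segment(image)
    area_px = sum(sum(row) for row in mask)   # crude severity proxy
    return {"type": wound_type, "area_px": area_px,
            "depth": depth, "mask": mask}
```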
## 📊 Training & Evaluation

- Training scripts: see `temp_files/train.py`
- Metrics: Dice coefficient, precision, recall, loss (see `utils/learning/metrics.py`)
- Results: training history and model checkpoints in `training_history/`
- Example: Dice coefficient > 0.98 on the training set (see `2025-08-07_16-25-27.json`)
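These are standard overlap metrics for binary masks. A NumPy sketch matching the usual definitions (the implementation in `utils/learning/metrics.py` may differ in smoothing details):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1e-7):
    """Dice = 2*|A & B| / (|A| + |B|), computed on flattened binary masks."""
    y_true, y_pred = y_true.flatten(), y_pred.flatten()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (y_true.sum() + y_pred.sum() + smooth)

def precision_recall(y_true, y_pred):
    """Pixel-wise precision and recall for binary masks."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```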
## 📁 Code Structure

- `app.py`: main Gradio app (classification, depth, segmentation, severity)
- `models/`: segmentation model definitions (Deeplab, FCN, SegNet, Unet)
- `depth_anything_v2/`: depth estimation (DINOv2, DPT, utility layers)
- `utils/`: data loading, augmentation, metrics, postprocessing
- `temp_files/`: standalone scripts, experiments, and legacy tools
- `training_history/`: model checkpoints and training logs
## 📚 References
- Deeplabv3+ Paper
- DINOv2 (Meta AI)
- DPT: Vision Transformers for Dense Prediction
- HuggingFace Transformers
- Gradio
- Open3D
- Augmentor
- Datasets: Custom wound datasets (not included)
## ⚠️ Disclaimer
This tool is for research and educational purposes only. It does not provide medical advice or diagnosis. Always consult a medical professional for clinical decisions.