# BWSK ViT-base
ViT-base (86M parameters) trained in six variants (3 BWSK modes × 2 experiments) on CIFAR-10, with full-convergence training and early stopping.

This repository contains the model weights, configs, and training results for all six variants.
## What is BWSK?
BWSK is a framework that classifies every neural network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, synchronization point) using combinator logic. This classification enables reversible backpropagation through S-phases to save memory, and CALM-based parallelism analysis.
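A hedged sketch of what such a classification can look like in code. The module-to-type mapping below is purely illustrative and is not the BWSK rule set; the idea is simply that reductions and normalizations act as K-type synchronization points, while (near-)invertible maps are S-type:

```python
import torch.nn as nn

# Illustrative only: these tuples are NOT the official BWSK rules.
# S-type: information-preserving ops (invertible in principle, no reduction).
# K-type: information-erasing ops (reductions, normalizations, sync points).
S_TYPE = (nn.Linear, nn.Identity)
K_TYPE = (nn.LayerNorm, nn.Softmax, nn.Dropout)

def sk_split(model: nn.Module) -> dict:
    """Count leaf modules tagged S vs K; modules outside both tuples are skipped."""
    counts = {"S": 0, "K": 0}
    for module in model.modules():
        if isinstance(module, S_TYPE):
            counts["S"] += 1
        elif isinstance(module, K_TYPE):
            counts["K"] += 1
    return counts
```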
## Model Overview
| Property | Value |
|---|---|
| Base Model | google/vit-base-patch16-224 |
| Architecture | ViT (image classification) |
| Parameters | 86M |
| Dataset | CIFAR-10 |
| Eval Metric | Accuracy |
## S/K Classification
| Type | Ratio |
|---|---|
| S-type (information-preserving) | 72.1% |
| K-type (information-erasing) | 27.9% |
## Fine-tune Results
| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 0.0022 | 97.8% | 97.6% | 3.1 GB | 3.8m | 1 |
| BWSK Analyzed | 0.3425 | 98.0% | 98.2% | 3.1 GB | 8.4m | 2 |
| BWSK Reversible | 0.0019 | 97.7% | 97.3% | 2.0 GB | 4.5m | 1 |
Memory savings (reversible vs conventional): 37.3%
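For reference, the savings figure is the relative reduction in peak memory:

$$\text{savings} = \frac{M_{\text{conventional}} - M_{\text{reversible}}}{M_{\text{conventional}}}$$

With the rounded table values this is (3.1 − 2.0)/3.1 ≈ 35%; the reported 37.3% presumably comes from the unrounded measurements in results.json.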
## From-Scratch Results
| Mode | Final Loss | Val Accuracy | Test Accuracy | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 1.5347 | 37.9% | 37.5% | 3.1 GB | 7.6m | 2 |
| BWSK Analyzed | 1.8406 | 38.0% | 36.9% | 3.1 GB | 4.3m | 1 |
| BWSK Reversible | 1.8934 | 39.6% | 37.8% | 2.0 GB | 6.4m | 2 |
Memory savings (reversible vs conventional): 37.3%
## Repository Structure
```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```
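Each variant folder carries its own `training_results.json`; a minimal sketch for fetching a single file without cloning the repository (assumes `huggingface_hub` is installed):

```python
import json
from huggingface_hub import hf_hub_download

# Download one variant's training summary into the local Hub cache
path = hf_hub_download(
    repo_id="tzervas/bwsk-vit-base",
    filename="finetune-bwsk-reversible/training_results.json",
)
with open(path) as f:
    print(json.load(f))
```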
## Usage
Load a specific variant:
```python
from transformers import AutoModelForImageClassification, AutoImageProcessor

# Load the fine-tuned conventional variant from its subfolder
model = AutoModelForImageClassification.from_pretrained(
    "tzervas/bwsk-vit-base", subfolder="finetune-conventional"
)
# The image processor (resize to 224x224, normalize) comes from the base checkpoint
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```
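Continuing from the snippet above, a quick inference sketch on a CIFAR-10 test image (assumes `torch` and `datasets` are installed; whether `config.id2label` carries the CIFAR-10 class names depends on how each variant's config was saved):

```python
import torch
from datasets import load_dataset

# One CIFAR-10 test image (32x32 PIL image); the processor resizes it to 224x224
image = load_dataset("cifar10", split="test[:1]")[0]["img"]

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(pred, model.config.id2label.get(pred, str(pred)))
```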
## Training Configuration
| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 5e-05 |
| LR (from-scratch) | 3e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping | Patience 3 |
| Batch Size | 16 |
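A hedged sketch of how these settings map onto a standard PyTorch loop (not the BWSK training code). Here `model` is the ViT loaded above, `train_loader` is a hypothetical CIFAR-10 `DataLoader` with batch size 16, device handling is omitted, and the step counts are placeholders since the card does not state warmup length or total steps:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

total_steps, warmup_steps = 10_000, 500        # placeholder values
lr = 5e-5                                      # 3e-4 for the from-scratch runs

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)
scaler = torch.cuda.amp.GradScaler()           # AMP (float16) mixed precision

for pixel_values, labels in train_loader:
    optimizer.zero_grad()
    with torch.autocast("cuda", dtype=torch.float16):
        loss = model(pixel_values=pixel_values, labels=labels).loss
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # max grad norm 1.0
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
```

Early stopping (patience 3 on the validation metric) would wrap this loop at the epoch level and is omitted here.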
## Links

- Code: https://github.com/tzervas/ai-s-combinator
## Citation
```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title  = {BWSK: Combinator-Typed Neural Network Analysis},
  year   = {2026},
  url    = {https://github.com/tzervas/ai-s-combinator},
}
```
## License
MIT