# BWSK Switch-Base-8

Switch-Base-8 (220M parameters) trained in six variants (3 BWSK modes × 2 experiments: fine-tuning and from-scratch) on WikiText-2, trained to convergence with early stopping. This repository consolidates all model weights, configs, and training results.
## What is BWSK?
BWSK is a framework that uses combinator logic to classify every neural network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, a synchronization point), with a gray class for context-dependent operations. The classification enables reversible backpropagation through S-phases to save memory, as well as CALM-based parallelism analysis.
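BWSK's actual classifier is not included in this card. As a minimal sketch of the idea, assuming a simple name-based rule table (the rule sets and function below are illustrative, not the real analyzer):

```python
# Hypothetical sketch of an S/K operation classifier using name-based rules.
# The real BWSK analyzer and its rules are not published in this card;
# the op names and rule sets below are illustrative only.
from enum import Enum

class OpType(Enum):
    S = "information-preserving"   # reversible, coordination-free
    K = "information-erasing"      # synchronization point
    GRAY = "context-dependent"     # depends on the surrounding graph

# Illustrative rules: invertible/elementwise ops as S, reductions as K.
S_OPS = {"add", "permute", "rotary_embed", "mul_nonzero"}
K_OPS = {"softmax", "sum", "mean", "argmax", "dropout"}

def classify(op_name: str) -> OpType:
    if op_name in S_OPS:
        return OpType.S
    if op_name in K_OPS:
        return OpType.K
    return OpType.GRAY

print(classify("softmax"))  # OpType.K: normalization erases information
```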
## Model Overview
| Property | Value |
|---|---|
| Base Model | google/switch-base-8 |
| Architecture | MoE (seq2seq) |
| Parameters | 220M |
| Dataset | WikiText-2 |
| Eval Metric | Perplexity |
## S/K Classification

| Type | Share of Operations |
|---|---|
| S-type (information-preserving) | 52.6% |
| K-type (information-erasing) | 38.7% |
| Gray (context-dependent) | 8.6% |
## Fine-tuning Results
| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 2.9923 | 29.02 | 27.72 | 15.2 GB | 1.5h | 5 |
| BWSK Analyzed | 3.1352 | 29.99 | 28.66 | 15.2 GB | 1.8h | 4 |
| BWSK Reversible | 3.2770 | 29.24 | 27.96 | 15.2 GB | 2.5h | 5 |
Memory savings (reversible vs conventional): 0.0%
## From-Scratch Results
| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 5.5342 | 289.26 | 290.61 | 14.2 GB | 1.8h | 5 |
| BWSK Analyzed | 5.2518 | 288.67 | 288.12 | 14.2 GB | 1.8h | 5 |
| BWSK Reversible | 5.0745 | 297.67 | 299.35 | 14.1 GB | 1.8h | 5 |
Memory savings (reversible vs conventional): 0.5%
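At batch size 1 the reversible mode shows essentially no measured savings here, but the mechanism it relies on is general: if a phase is invertible, its inputs can be recomputed from its outputs during the backward pass instead of being stored. A generic RevNet-style additive coupling illustrates this (an illustration only, not BWSK's implementation):

```python
# Generic RevNet-style reversible block (illustration only; not the BWSK
# implementation). Inputs are exactly recoverable from outputs, so
# activations need not be stored for the backward pass.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim, dim)
        self.g = nn.Linear(dim, dim)

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)  # undo the second coupling
        x1 = y1 - self.f(x2)  # then the first, using the recovered x2
        return x1, x2

block = ReversibleBlock(8)
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
assert torch.allclose(x1, r1, atol=1e-5) and torch.allclose(x2, r2, atol=1e-5)
```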
## Repository Structure

```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```
## Usage

Load a specific variant:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned conventional variant
model = AutoModelForSeq2SeqLM.from_pretrained(
    "tzervas/bwsk-switch-base-8", subfolder="finetune-conventional"
)
tokenizer = AutoTokenizer.from_pretrained(
    "tzervas/bwsk-switch-base-8", subfolder="finetune-conventional"
)

# Load the from-scratch BWSK reversible variant
model = AutoModelForSeq2SeqLM.from_pretrained(
    "tzervas/bwsk-switch-base-8", subfolder="scratch-bwsk-reversible"
)
```
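The reported metric is perplexity. A minimal sketch of how the test numbers above could be reproduced, assuming a hypothetical `test_loader` of 256-token chunks with seq2seq labels (the repo's exact evaluation script and chunking are not shown in this card):

```python
# Minimal perplexity sketch. Assumption: perplexity = exp(mean cross-entropy
# over label tokens); `test_loader` is a hypothetical DataLoader yielding
# dicts with "input_ids" and "labels".
import math
import torch

model.eval()
losses = []
with torch.no_grad():
    for batch in test_loader:
        out = model(input_ids=batch["input_ids"], labels=batch["labels"])
        losses.append(out.loss.item())
perplexity = math.exp(sum(losses) / len(losses))
print(f"test perplexity: {perplexity:.2f}")
```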
## Training Configuration
| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 3e-05 |
| LR (from-scratch) | 2e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping | Patience 3 |
| Batch Size | 1 |
| Sequence Length | 256 |
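A training step consistent with this table might look like the following sketch; `model`, `train_loader`, `num_training_steps`, and the warmup step count are assumptions, not taken from the repo's script:

```python
# Sketch of a training loop matching the table above (not the actual script).
# Assumed from context: `model`, `train_loader` (batch size 1, 256-token
# sequences), and `num_training_steps`; warmup steps are illustrative.
import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # 2e-4 from scratch
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=num_training_steps
)
scaler = torch.amp.GradScaler("cuda")  # AMP (float16)

for batch in train_loader:
    optimizer.zero_grad()
    with torch.amp.autocast("cuda", dtype=torch.float16):
        loss = model(**batch).loss
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # so clipping sees true gradient magnitudes
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
```

Early stopping (patience 3) would wrap this loop at the epoch level, monitoring validation perplexity.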
## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title  = {BWSK: Combinator-Typed Neural Network Analysis},
  year   = {2026},
  url    = {https://github.com/tzervas/ai-s-combinator},
}
```
## License
MIT