# BWSK Pythia-160M

Pythia-160M (160M parameters) trained in six variants (3 BWSK modes × 2 experiments) on WikiText-2, trained to convergence with early stopping.

All model weights, configs, and training results are consolidated in this single repository.

## What is BWSK?

BWSK is a framework that uses combinator logic to classify every neural-network operation as either S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, a synchronization point). The classification enables two things: reversible backpropagation through S-phases, which trades recomputation for activation memory, and CALM-based parallelism analysis.
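
For intuition, here is a minimal sketch of what such a classification can look like at the module level. This is not the repo's actual classifier (which types individual operations via combinator logic); the S/K rule table below is a hypothetical stand-in.

```python
import torch.nn as nn

# HYPOTHETICAL rule table for illustration: treat ops whose inputs can be
# recovered from their outputs (plus parameters) as S-type, and ops that
# discard information as K-type. The real BWSK classifier is finer-grained.
S_TYPES = (nn.Linear, nn.Embedding, nn.LayerNorm)  # assumed S-type here
K_TYPES = (nn.GELU, nn.Softmax, nn.Dropout)        # assumed K-type here

def sk_split(model: nn.Module) -> tuple[int, int]:
    """Count classified leaf modules as (S, K)."""
    s = k = 0
    for m in model.modules():
        if isinstance(m, S_TYPES):
            s += 1
        elif isinstance(m, K_TYPES):
            k += 1
    return s, k
```

The repo's operation-level analysis of Pythia-160M reports the resulting split in the S/K Classification table below.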

## Model Overview

| Property | Value |
|---|---|
| Base Model | EleutherAI/pythia-160m |
| Architecture | Transformer (causal_lm) |
| Parameters | 160M |
| Dataset | WikiText-2 |
| Eval Metric | Perplexity |
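
Since the eval metric is perplexity, note that it is just the exponential of the mean per-token cross-entropy loss; a one-liner makes the scale concrete (the loss value here is an arbitrary example, not taken from the tables below):

```python
import math

# perplexity = exp(mean cross-entropy per token)
val_loss = 3.011  # arbitrary example value
print(f"perplexity = {math.exp(val_loss):.2f}")  # perplexity = 20.31
```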

## S/K Classification

| Type | Ratio |
|---|---|
| S-type (information-preserving) | 67.3% |
| K-type (information-erasing) | 32.7% |

## Fine-tune Results

| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 2.4009 | 20.31 | 19.85 | 5.3 GB | 6.4 min | 4 |
| BWSK Analyzed | 2.6281 | 20.37 | 19.82 | 5.3 GB | 6.5 min | 4 |
| BWSK Reversible | 2.5617 | 20.32 | 19.82 | 4.3 GB | 7.4 min | 4 |

Memory savings (reversible vs conventional): 18.8%

## From-Scratch Results

| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 4.6782 | 221.93 | 228.35 | 5.3 GB | 8.3 min | 5 |
| BWSK Analyzed | 4.5683 | 224.09 | 228.98 | 5.3 GB | 8.1 min | 5 |
| BWSK Reversible | 4.7448 | 216.27 | 219.84 | 4.3 GB | 9.4 min | 5 |

Memory savings (reversible vs conventional): 18.7%

## Repository Structure

```
β”œβ”€β”€ README.md
β”œβ”€β”€ results.json
β”œβ”€β”€ finetune-conventional/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ finetune-bwsk-analyzed/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ finetune-bwsk-reversible/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ scratch-conventional/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
β”œβ”€β”€ scratch-bwsk-analyzed/
β”‚   β”œβ”€β”€ model.safetensors
β”‚   β”œβ”€β”€ config.json
β”‚   └── training_results.json
└── scratch-bwsk-reversible/
    β”œβ”€β”€ model.safetensors
    β”œβ”€β”€ config.json
    └── training_results.json
```
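
Each variant directory ships a training_results.json with its run metrics. The schema isn't documented in this card, so the sketch below just loads one and lists its top-level keys (path taken from the tree above):

```python
import json

with open("finetune-bwsk-reversible/training_results.json") as f:
    results = json.load(f)
print(sorted(results))  # top-level keys; schema not documented here
```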

## Usage

Load a specific variant:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned conventional variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-pythia-160m", subfolder="finetune-conventional"
)
tokenizer = AutoTokenizer.from_pretrained(
    "tzervas/bwsk-pythia-160m", subfolder="finetune-conventional"
)

# Load the from-scratch BWSK reversible variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-pythia-160m", subfolder="scratch-bwsk-reversible"
)
```
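
A loaded variant behaves like any other causal LM; a quick generation sanity check (the prompt is arbitrary):

```python
inputs = tokenizer("The history of Wikipedia", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```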

## Training Configuration

| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 3e-05 |
| LR (from-scratch) | 2e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping Patience | 3 |
| Batch Size | 4 |
| Sequence Length | 512 |
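
For reference, a sketch of a training step wired to these settings (AdamW, cosine schedule with warmup, float16 AMP, gradient clipping at 1.0). This is an assumed reconstruction, not the repository's actual training script; the warmup/total step counts are placeholders, and the early-stopping loop (patience 3 on validation loss) is omitted:

```python
import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # fine-tune LR
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=10_000  # placeholders
)
scaler = torch.cuda.amp.GradScaler()

def train_step(batch):  # batch: input_ids / attention_mask / labels on GPU
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = model(**batch).loss
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # unscale so clipping sees true gradient norms
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
    return loss.item()
```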

## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title = {BWSK: Combinator-Typed Neural Network Analysis},
  year = {2026},
  url = {https://github.com/tzervas/ai-s-combinator},
}
```

## License

MIT
