# Geometric Formula Catalog
## Token Topology & Loss System — AbstractPhil + Claude
*ROSE loss discarded. These are the active formulas.*
---
## 1. Multi-Scale Crystal Loss
Classification through learnable crystal prototypes at multiple projection dimensions. Each class has a crystal centroid at each scale. No softmax — geometric distance IS the classifier.
**Scales:** `[64, 128, 256, 512, 1024]` (each is a projection dimension, not spatial)
### 1.1 Per-Scale Crystal Similarity
```
sim(x, c_k) = (x̂ · ĉ_k) / τ
where:
x̂  = normalize(proj_k(features))   # [B, scale_dim]
ĉ_k = normalize(crystals_k)         # [num_classes, scale_dim]
τ   = temperature (default 0.07)
```
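A minimal NumPy sketch of 1.1, assuming `features` have already been projected to one scale; the function and argument names are illustrative, not the repository's API:

```python
import numpy as np

def crystal_similarity(features, crystals, tau=0.07):
    """Temperature-scaled cosine similarity between projected features
    and class crystal prototypes at a single scale (sketch)."""
    x_hat = features / np.linalg.norm(features, axis=-1, keepdims=True)  # [B, scale_dim]
    c_hat = crystals / np.linalg.norm(crystals, axis=-1, keepdims=True)  # [C, scale_dim]
    return (x_hat @ c_hat.T) / tau                                       # [B, C]
```

The maximum attainable similarity is 1/τ (a feature sitting exactly on its crystal).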
### 1.2 Per-Scale Coherence Loss
Pull features toward their correct class crystal:
```
L_coherence = -mean(log(exp(sim(x, c_y)) / Σ_j exp(sim(x, c_j))))
where y = true class label
```
### 1.3 Per-Scale Separation Loss
Push class crystals apart with margin:
```
L_separation = Σ_{i≠j} max(0, margin - ||ĉ_i - ĉ_j||₂)² / (C(C-1))
where C = num_classes, margin = 1.0
```
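The separation hinge in 1.3 can be sketched as (NumPy; assumes crystals are normalized before measuring distances, as in 1.1):

```python
import numpy as np

def separation_loss(crystals, margin=1.0):
    """Push normalized class crystals at least `margin` apart;
    averages squared hinge violations over the C(C-1) ordered pairs."""
    c = crystals / np.linalg.norm(crystals, axis=-1, keepdims=True)  # [C, D]
    dists = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)   # [C, C]
    C = c.shape[0]
    off_diag = ~np.eye(C, dtype=bool)                                # exclude i == j
    violations = np.maximum(0.0, margin - dists[off_diag]) ** 2
    return violations.sum() / (C * (C - 1))
```

Antipodal crystals (distance 2 > margin) incur zero loss; coincident crystals incur the full margin² penalty.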
### 1.4 Per-Scale Discretization Loss (Cantor Targets)
Cluster crystal Cantor values toward `{0.0, 0.5, 1.0}`:
```
L_discretization = mean(min_t(||cantor(c_i) - t||²))
where t ∈ {0.0, 0.5, 1.0}
```
### 1.5 Per-Scale Crystal Geometry Loss
Maintain target distance from features to class prototypes:
```
L_geometry = mean((||x - c_y||₂ - d_target)²)
where d_target = 1.0
```
### 1.6 Total Multi-Scale Crystal Loss
```
L_crystal = (1/S) Σ_{k=1}^{S} w_k · (
    w_coh  · L_coherence_k +
    w_sep  · L_separation_k +
    w_disc · L_discretization_k +
    w_geom · L_geometry_k
)
Proven weights: w_coh=1.0, w_sep=0.5, w_disc=1.0, w_geom=0.5
```
### 1.7 Crystal Prediction (No Softmax Head)
```
logits = Σ_k w_k · (α · cos_sim_k + β · cantor_coherence_k + γ · crystal_geometry_k)
where prediction = argmax(logits)
```
**Results:** 86% ImageNet (CLIP bigG features), 74.87% CIFAR-100 (393K params), ~92% CIFAR-100 (78KB model)
---
## 2. Geometric Basin Compatibility Loss
Classification through geometric formula satisfaction. Four structural checks produce compatibility scores ∈ [0,1]. No cross-entropy needed.
### 2.1 Triadic Compatibility
```
T(x, c) = exp(-||proj(x) - c||₂² / (2σ²))
where c = class centroid, σ = learned bandwidth
```
### 2.2 Self-Similarity Check
```
S(x) = exp(-Var(cantor_levels(x)))
where cantor_levels extracts per-level Cantor measures
High self-similarity → low variance across levels → high score
```
### 2.3 Cantor Coherence Check
```
C(x, p_y) = exp(-||cantor(x) - p_y||₂²)
where p_y = class Cantor prototype
```
### 2.4 Hierarchical Check
```
H(x) = Σ_{k=1}^{L} 0.5^k · match(level_k(x), expected_k)
```
### 2.5 Combined Compatibility Score
```
compat(x, class_j) = T(x, c_j) · S(x) · C(x, p_j) · H(x)
Product of four factors ∈ [0,1] → output ∈ [0,1]
```
### 2.6 Basin Loss (Three-Term, No Cross-Entropy)
```
L_correct     = -mean(log(compat(x, y) + ε))
L_incorrect   = -mean(log(1 - compat(x, j≠y) + ε))
L_contrastive = NLL(log_softmax(compat / τ), y)
L_basin = L_correct + 0.5 · L_incorrect + 0.5 · L_contrastive
```
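A minimal NumPy sketch of the three-term basin loss, assuming `compat` is a [B, C] matrix of compatibility scores in [0,1]; names are illustrative:

```python
import numpy as np

def basin_loss(compat, labels, tau=0.07, eps=1e-6):
    """Three-term basin loss: reward compat on the true class, penalize
    compat on wrong classes, plus a temperature-scaled contrastive NLL."""
    B = compat.shape[0]
    correct = compat[np.arange(B), labels]
    l_correct = -np.mean(np.log(correct + eps))
    mask = np.ones_like(compat, dtype=bool)
    mask[np.arange(B), labels] = False                  # wrong-class entries
    l_incorrect = -np.mean(np.log(1.0 - compat[mask] + eps))
    logits = compat / tau                               # contrastive NLL term
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    l_contrastive = -np.mean(log_probs[np.arange(B), labels])
    return l_correct + 0.5 * l_incorrect + 0.5 * l_contrastive
```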
**Results:** 67.69% CIFAR-100 with NO attention, NO cross-entropy, NO transformers (geo-beatrix). Beat ViT-beatrix (66.0%).
---
## 3. K-Simplex Channel Formulas
Tokens represented as k-simplices with Cayley-Menger validated geometry. Shape `[B, T, K+1, F]` where K+1 = vertices.
### 3.1 Template + Deformation
```
v_i = v_i^{template} + α · Δv_i
where:
v_i^{template} = regular k-simplex vertices (frozen)
α = deformation scale (0.05 base, per-k scaled)
Δv_i = learned offset from neural network
```
### 3.2 K-Scaled Deformation
Volume scales as `edge^k`, so higher k needs smaller deformation:
```
α_k = α_base / √(k + 1)
k=1: α × 0.71
k=2: α × 0.58
k=3: α × 0.50
k=4: α × 0.45
```
### 3.3 Per-Token Simplex Coordinates
```
coords = proj(token_embedding) # [B, T, edim]
vertex_weights = softmax(route(token_embedding)) # [B, T, K+1]
simplex_state = vertex_weights @ vertices # [B, T, edim]
```
### 3.4 K-Simplex Attention (Proven Superior to K-Simplex Classification)
```
For each token pair (i, j):
d²_ij = ||simplex_i - simplex_j||²   # pairwise simplex distance
attn_ij = softmax(-d²_ij / τ)        # geometric attention weights
Output = attn @ V # standard value projection
```
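The attention step above can be sketched in NumPy for a single sequence (no batch dim; names illustrative):

```python
import numpy as np

def simplex_attention(simplex_states, values, tau=1.0):
    """Geometric attention (Sec. 3.4 sketch): weights come from negative
    squared pairwise distances between per-token simplex states."""
    diff = simplex_states[:, None, :] - simplex_states[None, :, :]  # [T, T, D]
    d2 = (diff ** 2).sum(axis=-1)                                   # pairwise d²_ij
    scores = -d2 / tau
    scores = scores - scores.max(axis=-1, keepdims=True)            # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)                  # row softmax
    return attn, attn @ values                                      # weights, mixed values
```

Geometrically close tokens receive more weight; a distant token is nearly ignored.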
**Results:** 89.13% FMNIST, 84.59% CIFAR-10, 69.08% CIFAR-100 as attention. Entropy decreases through layers (sharpening). Fewer tokens = sharper attention (25 patches > 64 patches).
---
## 4. Cayley-Menger Formulas
The structural invariant. If CM fails, geometry is invalid. Non-negotiable.
### 4.1 Cayley-Menger Matrix
```
CM = | 0  1     1     ...  1     |
     | 1  0     d₀₁²  ...  d₀ₖ²  |
     | 1  d₀₁²  0     ...  d₁ₖ²  |
     | ⋮  ⋮     ⋮     ⋱    ⋮     |
     | 1  d₀ₖ²  d₁ₖ²  ...  0     |
Size: (K+2) × (K+2) for a K-simplex
```
### 4.2 Volume Formula (Corrected)
```
Vol² = (-1)^(K+1) / (2^K · (K!)²) · det(CM)
Validity: Vol² > 0 indicates non-degenerate simplex
```
### 4.3 Gram Determinant Alternative (More Stable)
```
X_translated = X[:, 1:, :] - X[:, 0:1, :] # [B, K, D]
G = X_translated @ X_translated.transpose(-1, -2)   # [B, K, K]
Vol = √(det(G)) / K!
```
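Both routes can be checked against each other on a known shape; a sketch (unbatched, helper names illustrative) using the unit right triangle, whose area is 0.5:

```python
import numpy as np
from math import factorial

def cm_volume_sq(X):
    """Squared K-simplex volume from the Cayley-Menger determinant.
    X: [K+1, D] vertex coordinates."""
    n = X.shape[0]                                         # K+1 vertices
    K = n - 1
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared pairwise distances
    CM = np.ones((n + 1, n + 1))
    CM[0, 0] = 0.0
    CM[1:, 1:] = d2
    return (-1) ** (K + 1) / (2 ** K * factorial(K) ** 2) * np.linalg.det(CM)

def gram_volume(X):
    """Same volume via the Gram determinant (numerically friendlier)."""
    K = X.shape[0] - 1
    V = X[1:] - X[0:1]                                     # [K, D] edges from vertex 0
    G = V @ V.T
    return np.sqrt(np.linalg.det(G)) / factorial(K)
```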
### 4.4 Validity Loss
```
L_validity = mean(ReLU(-Vol²))
Penalizes collapsed simplices (Vol² < 0)
```
### 4.5 Volume Consistency Loss
```
L_vol_consistency = Var(Vol²) across batch
Encourages uniform geometric structure
```
### 4.6 Hierarchical Cell Loss (k=4 pentachoron)
```
5 cells (tetrahedra), each with 4 vertices, 6 edges:
L_cell = mean(ReLU(ε - Vol²_cell_i))
for i = 1..5 cells of the pentachoron
```
### 4.7 Volยฒ Scaling Reference
```
k=1: Vol² ~ 1e+0 (edge length squared)
k=2: Vol² ~ 1e-1 (triangle area squared)
k=3: Vol² ~ 1e-2 (tetrahedron volume squared)
k=4: Vol² ~ 1e-3 (5-cell hypervolume squared)
```
---
## 5. Cantor Lens Formulas
The Devil's Staircase as a hierarchical lens for viewing token relationships.
### 5.1 Devil's Staircase (Beatrix Staircase)
```
C(x) = Σ_{k=1}^{levels} bit_k × 0.5^k
where:
y_k = x × 3^k                       # scale to level k
p = softmax(-d²/τ) over centers [0.5, 1.5, 2.5]
bit_k = p_right + α × p_middle      # soft ternary assignment
α = learnable middle-third fill (default 0.5)
τ = softmax temperature (default 0.25)
```
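A NumPy sketch of the soft staircase. One assumption on my part: the scaled value y_k is folded back into [0, 3) with a mod-3 before comparing against the ternary centers, which 5.1 leaves implicit; fixed α and τ stand in for the learnable parameters:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def beatrix_staircase(x, levels=5, alpha=0.5, tau=0.25):
    """Soft Devil's Staircase: soft ternary digit at each level,
    accumulated with binary weights 0.5^k (Sec. 5.1 sketch)."""
    centers = np.array([0.5, 1.5, 2.5])              # left / middle / right thirds
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for k in range(1, levels + 1):
        y = (x * 3 ** k) % 3.0                       # ASSUMED fold into level-k cell
        d2 = (y[..., None] - centers) ** 2
        p = softmax(-d2 / tau, axis=-1)
        bit = p[..., 2] + alpha * p[..., 1]          # right + alpha * middle
        out += bit * 0.5 ** k
    return out
```

Outputs stay in [0,1], and inputs deep in the right third map above inputs deep in the left third.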
### 5.2 Branch Path Extraction
```
branch_path(x) = [argmax(p_1), argmax(p_2), ..., argmax(p_L)]
Each level: L (left third), M (middle third), R (right third)
```
### 5.3 Hierarchical Alignment (NOT Distance)
**CRITICAL: Distance is meaningless on Cantor set.**
```
alignment(i, j) = Σ_{k=1}^{L} 0.5^k · 𝟙(path_i[k] == path_j[k])
Level weights: [0.5, 0.25, 0.125, 0.0625, 0.03125]
```
Coarse matches = routing highways (wormholes).
Fine matches = local structure only.
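The alignment score in 5.3 is a few lines of plain Python (paths as 'L'/'M'/'R' strings, as in 5.2):

```python
def alignment(path_i, path_j):
    """Cantor-native similarity: level-k agreement of ternary branch paths,
    weighted 0.5^k so coarse matches dominate fine ones (Sec. 5.3 sketch)."""
    return sum(
        0.5 ** k
        for k, (a, b) in enumerate(zip(path_i, path_j), start=1)
        if a == b
    )
```

Note the asymmetry the text describes: one coarse match (0.5) outweighs two fine matches (0.25 + 0.125).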
### 5.4 Euclidean Bridge (Lossy but Necessary)
```
distance(i, j) = |C(x_i) - C(x_j)|
Use ONLY when interfacing with Euclidean systems (optimizers, standard losses).
Alignment is the Cantor-native metric.
```
### 5.5 Cantor Routing Bias (for Attention)
```
bias[i,j] = alignment(i, j) # precomputed [S, S] matrix
attn_scores = (Q @ K.T / √d) + λ · bias
where λ = learnable routing weight
```
### 5.6 Alpha Modulation
```
α → 0.0: Pure ternary (Cantor dust, maximally disconnected)
α → 0.5: Triadic equilibrium (proven stable zone: 0.44-0.50)
α → 1.0: Filled (continuous, no fractal structure)
```
---
## 6. Cantor Topological Ropes
Position encodings that encode structural hierarchy, not just sequence order.
### 6.1 Standard RoPE (Baseline)
```
θ_i = 10000^(-2i/d)
R(m) = [cos(mθ_i), -sin(mθ_i); sin(mθ_i), cos(mθ_i)]
for dimension pair (2i, 2i+1) at position m
```
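The baseline rotation in 6.1 can be sketched for a single vector and position (NumPy; pairing convention (2i, 2i+1) as stated):

```python
import numpy as np

def rope_rotate(x, m, base=10000.0):
    """Standard RoPE: rotate each (2i, 2i+1) pair of x by angle m * theta_i.
    x: [d] with even d, m: scalar position (sketch)."""
    d = x.shape[0]
    i = np.arange(d // 2)
    theta = base ** (-2.0 * i / d)
    cos, sin = np.cos(m * theta), np.sin(m * theta)
    x_even, x_odd = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x_even * cos - x_odd * sin
    out[1::2] = x_even * sin + x_odd * cos
    return out
```

Position 0 is the identity, and every rotation preserves the vector norm, which is why BeatrixRoPE/CantorRoPE can warp positions without rescaling activations.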
### 6.2 BeatrixRoPE (Devil's Staircase Warping)
```
pos_beatrix(m) = C(m / seq_len) # Cantor function of normalized position
R_beatrix(m) = R(pos_beatrix(m) × seq_len)
```
Tokens in same ternary branch get **similar** positions → attend easily.
Creates hierarchical plateaus.
### 6.3 CantorRoPE (Wormhole Shortcuts)
```
pos_cantor(m) = trend × m + deviation × wormhole(m)
where:
trend = 1.0 (aligns macro slope with standard RoPE)
deviation = learnable perturbation scale
wormhole(m) = branch_path_alignment signal
```
Tokens with aligned branch paths can shortcut regardless of sequential distance.
### 6.4 Aligned Triad (Proven Configuration)
```
Standard: linear baseline "this comes after that"
Beatrix: hierarchical plateaus "these belong together"
Cantor: wormhole perturbations "these can shortcut"
All share same macro slope (trend=1.0), different micro structure.
```
### 6.5 Tower Assignment
```
Tower_positive = BeatrixRoPE(...) # hierarchical reasoning
Tower_negative = CantorRoPE(...) # wormhole reasoning
Signed pairs create differential forces in oscillator fusion.
```
---
## 7. Beatrix Oscillation Formulas (GeoFractal Router)
Physics-based fusion replacing static weighted sums. Tower outputs are force fields, not opinions to average.
### 7.1 Covariant Dynamics
```
dx/dt = v
dv/dt = -2β(t)·v - ω²·Log_x(x_ref) + κ(t)·u_towers + γ(t)·ξ_guide
where:
x = position on manifold
v = velocity in tangent space
β(t) = damping schedule
ω = spring frequency
x_ref = conditioning anchor
κ(t) = tower coupling strength
u_towers = force from tower opinions
γ(t) = guidance strength
ξ_guide = external guidance (DINO, text, etc.)
```
### 7.2 Manifold Operations
```
Log_x(y) = y - x # tangent vector from x toward y
Exp_x(v) = x + v # move along tangent vector
PT_{x→y}(v) = v   # parallel transport (flat approx)
```
### 7.3 Tower Force Generation
```
For N towers with signed pairs:
force_i = proj_i(tower_output_i) # [B, manifold_dim]
u_towers = Σ_i w_i · force_i          # weighted combination
Positive towers push toward structure.
Negative towers push away from collapse.
```
### 7.4 Tesla 3-6-9 Schedule
```
β(t) = β_base + resonance(t)
resonance(t) = 0.1·sin(3πt) + 0.05·sin(6πt) + 0.025·sin(9πt)
All three harmonics share zero crossings at t = 1/3, 2/3, 1.0
Energy doesn't flow linearly — it oscillates.
```
### 7.5 Schedule Types
| Schedule | Formula |
|----------|---------|
| Constant | `s(t) = start` |
| Linear | `s(t) = start + (end - start) · t` |
| Cosine | `s(t) = end + (start - end) · 0.5(1 + cos(πt))` |
| Sigmoid | `s(t) = start + (end - start) · σ(12(t - 0.5))` |
| Tesla 3-6-9 | `s(t) = linear(t) + resonance(t)` |
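The schedule table translates directly to code; a sketch (NumPy, with illustrative `kind` names and default endpoints):

```python
import numpy as np

def resonance(t):
    """Tesla 3-6-9 harmonic perturbation (Sec. 7.4)."""
    t = np.asarray(t, dtype=float)
    return (0.1 * np.sin(3 * np.pi * t)
            + 0.05 * np.sin(6 * np.pi * t)
            + 0.025 * np.sin(9 * np.pi * t))

def schedule(kind, t, start=1.0, end=0.1):
    """Evaluate one of the Sec. 7.5 schedules at time t in [0, 1]."""
    t = np.asarray(t, dtype=float)
    if kind == "constant":
        return np.full_like(t, start)
    if kind == "linear":
        return start + (end - start) * t
    if kind == "cosine":
        return end + (start - end) * 0.5 * (1 + np.cos(np.pi * t))
    if kind == "sigmoid":
        return start + (end - start) / (1 + np.exp(-12 * (t - 0.5)))
    if kind == "tesla369":
        return schedule("linear", t, start, end) + resonance(t)
    raise ValueError(kind)
```

At t = 1/3, 2/3, 1 all three resonance harmonics vanish, so the Tesla schedule touches the linear baseline there.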
### 7.6 Intrinsic Tension ฯ„
```
τ = σ(gain · (Σ_i w_i · invariant_i - equilibrium))
where:
invariant_i = geometric invariants (Vol², edge stats, etc.)
w_i = learned per-invariant weights
gain = steepness of sigmoid response
equilibrium = learned bias
τ → 0: Pure spring (geometric constraint dominates)
τ → 1: Pure control (tower forces dominate)
```
### 7.7 Stability Criterion
```
Eigenvalues of linearized system:
λ = -β ± √(β² - (1-τ)ω²)
Overdamped:  β² > (1-τ)ω² (stable, no oscillation)
Underdamped: β² < (1-τ)ω² (oscillatory)
Critical:    β² = (1-τ)ω² (fastest convergence)
```
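The criterion reduces to the sign of the discriminant; a small sketch (function name illustrative):

```python
def damping_regime(beta, omega, tau):
    """Classify the linearized oscillator from Sec. 7.7 via the sign of
    the discriminant beta^2 - (1 - tau) * omega^2."""
    disc = beta ** 2 - (1 - tau) * omega ** 2
    if disc > 0:
        return "overdamped"    # two real eigenvalues, no oscillation
    if disc < 0:
        return "underdamped"   # complex pair, oscillatory
    return "critical"          # repeated root, fastest convergence
```

Note how τ → 1 shrinks the effective spring term, pushing any damped system toward the overdamped regime, consistent with "pure control" in 7.6.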
### 7.8 Energy Tracking
```
E_kinetic = 0.5 · ||v||²
E_potential = 0.5 · ω² · ||Log_x(x_ref)||²
E_total = E_kinetic + E_potential
Healthy training: E_total decreases over integration steps.
```
---
## 8. K-Simplex Linear (Near-Zero Params)
Replaces `nn.Linear` with geometric routing through simplex structure.
### 8.1 Architecture
```
Input (B, input_dim)
→ chunk into (B, num_simplices, K+1) groups
→ per-scalar entry into vertex (K+1 options)
→ private hidden projection per vertex (depth = K+1)
→ pairwise signal passages between all vertex pairs
→ attenuation gates on pairwise influence
→ exit: weighted sum of vertex states
Output (B, output_dim)
```
### 8.2 Parameter Count
```
Per simplex (K+1 inputs):
Entry:     (K+1) × (K+1) × hidden
Vertex:    (K+1) × hidden
Pairwise:  C(K+1, 2) × 3 × hidden
Attenuate: C(K+1, 2) × 2
Exit:      (K+1) × hidden + (K+1)
For K=4, input_dim=512:
103 simplices × 300 params = 30,900
vs nn.Linear: 262,656
Ratio: 0.118x (11.8% of linear params)
```
### 8.3 Structural Comparison
```
Structure size per simplex: (K+1) × (K+1) × C(K+1,2)
K=2: 3×3×3 = 27
K=4: 5×5×10 = 250
K=6: 7×7×21 = 1029
```
### 8.4 Results
```
Fashion-MNIST:
KSimplex-k4: 85.94% with 8,511 params
MLP baseline: 89.00% with 101,770 params
Ratio: 11.5× more parameter-efficient
Epoch 1: 84.28% test (instant useful signal)
Epoch 19: 85.94% test (stable convergence)
```
---
## 9. K-Simplex Deformation Limitations
Critical stability boundaries from extensive geometric explorer experiments.
### 9.1 Stability Zones by Configuration
| Configuration | Differentiation Zone | Collapse Threshold |
|---------------|---------------------|-------------------|
| k=1-4, edim=16 | 0.15 - 0.35 | ~0.50 |
| k=1-4, edim=32 | 0.15 - 0.50 | >2.0 |
| k=1-6, edim=16 | 0.35 - 0.45 | ~0.50 |
| k=1-6, edim=32 | 0.25 - 0.60 | >2.0 |
### 9.2 Embedding Dimension Safety Ratio
```
stability_ratio = edim / k_max
ratio ≥ 8× → Very stable, deform up to 2.0
ratio ≥ 4× → Comfortable margin
ratio ≥ 2× → Tight but functional
ratio < 2× → Dangerous, frequent invalidity
```
### 9.3 Deformation Behavior
```
Low deform (0 - 0.15):
Clear k-level hierarchy
Vol² decreases exponentially with k
Conservative but safe
Medium deform (0.15 - 0.35): ← OPTIMAL ZONE
Distinct geometric signatures per k
Maximum useful differentiation
Training should target this range
High deform (> 0.5):
Noise dominates
k-levels converge (lose meaning)
Geometric structure destroyed
```
### 9.4 Late-Stage K-Simplex Invalidity
```
As k increases:
- CM determinant computation becomes numerically unstable
- More edge configurations become geometrically impossible
- Deeper layers produce invalid simplex configurations
k=4 in 32D: stable with wide margin
k=5 in 32D: functional but tighter
k=6 in 32D: approaching invalidity ceiling
Recommendation: k=4 (pentachoron) as primary, k≤3 for tight budgets
```
### 9.5 Cross-Entropy Degeneracy Problem
```
Cross-entropy applied directly to simplex features:
→ Vertices converge (minimizing distance to class boundary)
→ Volume → 0 (simplex collapses)
→ α diverges from triadic equilibrium
→ Geometric structure destroyed after sufficient epochs
Solution: Use crystal loss or basin loss, NOT cross-entropy on geometric features.
```
---
## 10. Cross-Contrast Capacity Tests
Validating that geometric structure survives training and provides meaningful classification signal.
### 10.1 Geometric Cross-Contrastive Loss
```
sim_matrix = (x̂ @ x̂.T) / τ   # [B, B] embedding similarity
cantor_positives = (|C(i) - C(j)| < θ_cantor) AND (|Vol(i) - Vol(j)| < θ_vol)
L_cross = -log(Σ_{j∈positives} exp(sim_ij) / Σ_{j∈all} exp(sim_ij))
where positives are defined by geometric proximity, not class labels
```
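A NumPy sketch of 10.1, assuming per-sample Cantor values and volumes are available as vectors; the thresholds and function name are illustrative:

```python
import numpy as np

def geometric_cross_contrast(emb, cantor, vol,
                             theta_cantor=0.1, theta_vol=0.1, tau=0.07):
    """InfoNCE where positives are geometric neighbours (close Cantor value
    AND close volume), not class labels (Sec. 10.1 sketch)."""
    x = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
    sim = (x @ x.T) / tau                                   # [B, B]
    pos = (np.abs(cantor[:, None] - cantor[None, :]) < theta_cantor) \
        & (np.abs(vol[:, None] - vol[None, :]) < theta_vol)
    np.fill_diagonal(pos, False)                            # no self-pairs
    np.fill_diagonal(sim, -np.inf)
    exp_sim = np.exp(sim)
    num = (exp_sim * pos).sum(axis=1)
    den = exp_sim.sum(axis=1)
    valid = pos.any(axis=1)                                 # rows with >= 1 positive
    return -np.mean(np.log(num[valid] / den[valid] + 1e-12))
```

When geometric neighbours are also embedding neighbours the loss is near zero; mismatched geometry and embedding drives it up.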
### 10.2 Capacity Invariants to Monitor
```
1. Vol² > 0 for all simplices (validity)
2. α ∈ [0.44, 0.50] (triadic equilibrium)
3. Edge length variance < threshold (structural uniformity)
4. Cantor prototype separation > margin (class distinctness)
5. Crystal distance to prototype ~ d_target (geometric alignment)
```
### 10.3 Differential Cross-Contrast (Tower Pairs)
```
For positive/negative tower pairs:
Δ_force = force_positive - force_negative
L_differential = -log(σ(Δ_force · direction_to_correct_class))
               + log(σ(Δ_force · direction_to_incorrect_class))
Signed pairs create differential forces, not just different opinions.
```
### 10.4 Cross-Scale Consistency
```
For scales s₁, s₂:
features_s1 = proj_s1(backbone_features)
features_s2 = proj_s2(backbone_features)
L_consistency = ||rank_order(sim_s1) - rank_order(sim_s2)||₂
Ensures geometric relationships are preserved across crystal scales.
```
### 10.5 OOD Detection via Geometric Violation
```
In-distribution: Vol² > 0, α stable, Cantor coherent
Out-of-distribution: Violations of above
OOD_score = (1 - σ(Vol² · 10⁶)) + (|α - 0.5|) + (1 - compat_max)
```
### 10.6 Scaling Limitation (Known)
```
Cross-contrastive loss across full vocabulary:
O(V²) pairwise comparisons
V=100 (CIFAR-100): 10K pairs → feasible
V=1000 (ImageNet): 1M pairs → expensive
V=50000 (tokenizer): 2.5B pairs → infeasible
Solution: Hierarchical contrastive within Cantor branches.
Only contrast within same coarse branch (routing highways).
Fine branches → local contrast only.
```
---
## Appendix A: Proven Results Summary
| Model | Task | Accuracy | Params | Key Innovation |
|-------|------|----------|--------|----------------|
| David | ImageNet (CLIP bigG) | 86% | ~120K | Multi-scale crystal |
| David | CIFAR-100 | 74.87% | 393K | Crystal prototypes |
| David | CIFAR-100 | ~92% | 78KB | Extreme compression |
| geo-beatrix | CIFAR-100 | 67.69% | — | NO attention, NO CE |
| KSimplex Attention | FMNIST | 89.13% | — | Geometric attention |
| KSimplex Attention | CIFAR-10 | 84.59% | — | Conv stem + geo attn |
| KSimplex Attention | CIFAR-100 | 69.08% | — | Multi-layer sharpening |
| KSimplex Linear | FMNIST | 85.94% | 8,511 | 11.5× efficiency |
| KSimplex LLM | Shakespeare | PPL 113 | 54M | 100% geo validity |
| Beeper v5 | Ethics | Coherent | Random | Architecture IS intelligence |
## Appendix B: Formula Dependencies
```
        ┌──────────────┐
        │ Cayley-Menger│ ← structural invariant
        └──────┬───────┘
               │
  ┌────────────┼────────────┐
  ▼            ▼            ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ K-Simplex│ │ Crystal  │ │ Basin    │
│ Channel  │ │ Loss     │ │ Compat   │
└────┬─────┘ └────┬─────┘ └────┬─────┘
     │            │            │
     ▼            ▼            ▼
┌──────────────────────────────────┐
│           Cantor Lens            │
│ (Staircase + Alignment + Bias)   │
└───────────────┬──────────────────┘
                │
       ┌────────┼────────┐
       ▼        ▼        ▼
  ┌─────────┐ ┌──────┐ ┌──────────┐
  │ Topo    │ │ Osc  │ │ KSimplex │
  │ Ropes   │ │ Fuse │ │ Linear   │
  └─────────┘ └──────┘ └──────────┘
```
## Appendix C: What Kills Geometry (Known Failure Modes)
1. **Cross-entropy on geometric features** → simplex collapse
2. **Distance on Cantor set** → meaningless (use alignment)
3. **Deformation > 0.35 at edim/k < 4** → invalidity
4. **k > 4 without edim ≥ 8k** → numerical instability
5. **Uniform Cantor level weights** → hides 8× routing significance difference
6. **Resizing crystal anchors across scales** → destroys pentachoron geometry (use separate init per scale)
7. **Dropout scaling with √dim** → inconsistent information flow across scales