This entry posts the active full outline from info-geo/full_paper_outline.md as part of the running work trail.
Paper Outline v2: Proxy Gap, Transport Geometry, and Falsifiable Mechanism Tests
Target Venues
- ICML 2026
- NeurIPS 2026
- ICLR 2027
Working Title Options
- When Reconstruction Looks Solved but Behavior Disagrees
- Proxy Failure in SAE Evaluation Across Scale
- Transport-Informed Diagnostics for Sparse Autoencoders
Submission-Safe Abstract (Template)
Sparse autoencoders (SAEs) are commonly evaluated with reconstruction metrics such as $R^2$, yet behavior-preservation metrics can disagree with them sharply in practice. We document a robust disagreement regime in mid-layer residual streams where model scaling improves reconstruction while worsening cross-entropy recovery at a fixed sparsity budget. We propose a falsifiable mechanism class: Euclidean reconstruction error underweights behavior-sensitive directions induced by downstream geometry. Motivated by recent work on entropic-transport interpretations of attention, information geometry of softmax, and task-relevant dimensionality, we test whether sensitivity- and geometry-aware diagnostics outperform standard reconstruction proxies after controlling for optimization budget. We provide a pre-registered experimental ladder that separates optimization artifacts from intrinsic residual effects, and report which claims are supported, contradicted, or unresolved.
Claim Ladder (Core)
C1 (Observed, current evidence)
Proxy mismatch exists in budgeted low-$k$ mid-layer settings:
- $\Delta R^2 > 0$ while $\Delta CE_{rec} < 0$ across scale in key cells.
C2 (Must test next)
Optimization-vs-intrinsic discrimination:
- Does low-$k$ CE gap vanish with higher SAE token budget?
C3 (Mechanism test)
Geometry-aware proxies (SWD / pullback approximations) outperform $R^2$ in predicting behavior.
C4 (Optional extension)
Task-relevant dimensional mismatch contributes to where proxy failure is strongest.
Section-by-Section Outline (v2)
1. Introduction (1.5-2 pages)
- Why SAE evaluation quality matters.
- Empirical anomaly at scale.
- Three-gap decomposition:
  - metric-space gap,
  - optimization gap,
  - dimensionality gap.
- Contributions: empirical regime map + falsification framework.
2. Experimental Setup and Data (1-1.5 pages)
- Models/layers/SAE configs.
- Metric definitions ($R^2$, $CE_{rec}$, rate, usage proxies).
- Current artifact provenance and caveats (single-seed, 10M-token budget).
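For concreteness, the two headline metrics can be sketched as below. This is an illustrative sketch, not the paper's codebase: the exact CE-recovery convention (comparing clean, reconstruction-patched, and ablated losses) is an assumption here, and the function names are hypothetical.

```python
# Illustrative metric definitions; the paper's exact conventions may differ.
import numpy as np

def r_squared(x, x_hat):
    """Fraction of activation variance explained by the SAE reconstruction.

    x, x_hat: arrays of shape (n_tokens, d_model).
    """
    ss_res = np.sum((x - x_hat) ** 2)
    ss_tot = np.sum((x - x.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

def ce_recovered(ce_clean, ce_patched, ce_ablated):
    """Share of the CE degradation recovered when the reconstruction is patched in.

    ce_clean: LM loss with original activations; ce_patched: loss with the SAE
    reconstruction spliced in; ce_ablated: loss with the site ablated.
    """
    return (ce_ablated - ce_patched) / (ce_ablated - ce_clean)
```

The proxy-mismatch claim C1 is exactly that the first quantity can improve while the second degrades.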
3. Theory as Hypothesis Generator (1.5-2 pages)
- Attention-EOT interpretation (bounded claim).
- Output-space information geometry and pullback concept.
- MI task-relevant dimensionality framing.
- Clear non-claims.
4. Discriminating Experiments (1.5-2 pages)
- Exp-A: low-$k$ token sweep for H0 vs H1.
- Exp-B: SWD vs reconstruction proxies.
- Exp-C: anisotropy controls.
- Exp-D/E optional second wave.
5. Results (2-3 pages)
Structure this section as:
- Observed now (proxy mismatch).
- After new runs (H0/H1 decision).
- Mechanistic lift (proxy leaderboard with uncertainty).
6. Discussion (1 page)
- What is now supported.
- What remains unresolved.
- Practical implications for SAE evaluation/steering.
7. Reproducibility and Limits (0.5-1 page)
- Seed vs bootstrap uncertainty.
- Compute constraints and confounds.
- Open-source artifact checklist.
Pre-Registered Decision Rules
D1: Optimization gate
Fit: $$ \Delta CE(T,k) = \Delta_\infty(k) + A_k T^{-\beta_k}. $$
Interpretation:
- $\Delta_\infty(k=8)$ near 0 with tight CI -> optimization-dominant.
- $\Delta_\infty(k=8)$ negative with CI excluding 0 -> residual intrinsic component.
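A minimal implementation of the D1 gate could look like the following, assuming SciPy-style curve fitting and a case-resampling bootstrap for the CI on $\Delta_\infty$; the function names and the 95% interval choice are illustrative, not pre-registered specifics.

```python
# Sketch of the D1 optimization gate: fit Delta CE(T, k) = Delta_inf + A * T^{-beta}
# for one sparsity level k, then bootstrap a 95% CI on the asymptote Delta_inf.
import numpy as np
from scipy.optimize import curve_fit

def ce_gap_model(T, delta_inf, A, beta):
    """CE gap decaying as a power law in training tokens toward an asymptote."""
    return delta_inf + A * np.power(T, -beta)

def fit_asymptote(tokens, ce_gaps, n_boot=500, seed=0):
    """Fit the model and bootstrap a 95% CI on delta_inf."""
    popt, _ = curve_fit(ce_gap_model, tokens, ce_gaps,
                        p0=(0.0, 1.0, 0.5), maxfev=20000)
    rng = np.random.default_rng(seed)
    boots = []
    n = len(tokens)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample (token budget, gap) pairs
        try:
            b, _ = curve_fit(ce_gap_model, tokens[idx], ce_gaps[idx],
                             p0=popt, maxfev=20000)
            boots.append(b[0])
        except RuntimeError:
            continue  # drop resamples where the fit fails to converge
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return popt[0], (lo, hi)
```

D1 then reads off whether the CI on `delta_inf` includes zero (optimization-dominant) or excludes it from below (intrinsic residual component). With only three token budgets the bootstrap is fragile, which is one more reason to add budgets or seeds on anchor cells.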
D2: Mechanism gate
Support for geometry-aware mechanism requires:
- SWD/pullback proxy out-of-sample predictive lift over $R^2$.
- Lift stable under resampling/seed variation.
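The SWD proxy from D2 can be estimated with random one-dimensional projections; this is a standard Monte Carlo sliced Wasserstein-1 sketch over activation point clouds, with illustrative defaults (128 projections) rather than the paper's settings.

```python
# Minimal sliced Wasserstein-1 distance between original and SAE-reconstructed
# activation clouds, as a geometry-aware proxy candidate.
import numpy as np

def sliced_wasserstein(X, Y, n_projections=128, seed=0):
    """Monte Carlo SW-1 distance between point clouds X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # random unit directions
    px = X @ theta.T  # (n, n_projections) projected samples
    py = Y @ theta.T
    # 1D Wasserstein-1 between equal-size samples = mean |sorted diff|
    px.sort(axis=0)
    py.sort(axis=0)
    return float(np.mean(np.abs(px - py)))
```

Unlike pointwise $R^2$, this compares the *distributions* of activations and reconstructions, so it can penalize distortions of behavior-sensitive directions even when the mean squared error is small.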
D3: Dimensionality gate (optional)
Support requires:
- MI-derived dimensional indicators explain additional CE variance after controlling for basic capacity proxies.
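One way to operationalize "explains additional CE variance after controlling for capacity proxies" is an incremental-$R^2$ comparison between nested OLS models; the sketch below assumes per-cell scalar features, and all names (`capacity_proxies`, `mi_indicator`) are illustrative.

```python
# Sketch of the D3 gate: incremental CE-gap variance explained by an MI-derived
# dimension indicator after controlling for basic capacity proxies.
import numpy as np

def r2_of_fit(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    tot = y - y.mean()
    return 1.0 - (resid @ resid) / (tot @ tot)

def incremental_r2(capacity_proxies, mi_indicator, ce_gaps):
    """R^2 gained by adding the MI indicator to the capacity-only model."""
    base = r2_of_fit(capacity_proxies, ce_gaps)
    full = r2_of_fit(np.column_stack([capacity_proxies, mi_indicator]), ce_gaps)
    return full - base
```

Because in-sample $R^2$ can only increase with added regressors, a support decision should pair this with the held-out prediction rule in the statistical plan, or with a resampled null for the increment.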
Statistical Plan (Minimum)
- Distinguish eval bootstrap uncertainty from training-seed uncertainty.
- Mark single-seed findings as descriptive.
- Use held-out cell prediction for proxy comparisons.
- Avoid fixed rho-threshold claims without uncertainty context.
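The held-out cell prediction rule can be sketched as a leave-one-cell-out comparison: fit a simple predictor of the CE gap from each candidate proxy, then score it on the held-out cell. The one-feature linear predictor here is an illustrative choice, not the pre-registered estimator.

```python
# Leave-one-cell-out check of how well a scalar proxy (e.g., R^2 or SWD)
# predicts the behavioral CE gap across (model, layer, k) cells.
import numpy as np

def loo_mse(proxy_vals, ce_gaps):
    """Leave-one-out MSE of a one-feature linear predictor of the CE gap."""
    proxy_vals = np.asarray(proxy_vals, dtype=float)
    ce_gaps = np.asarray(ce_gaps, dtype=float)
    n = len(proxy_vals)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        a, b = np.polyfit(proxy_vals[mask], ce_gaps[mask], deg=1)  # slope, intercept
        errs.append((a * proxy_vals[i] + b - ce_gaps[i]) ** 2)
    return float(np.mean(errs))
```

"Predictive lift" in D2 is then a lower held-out error for the SWD/pullback proxy than for $R^2$, with stability checked under the bootstrap and seed repeats listed above.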
Figure Plan (v2)
- Fig 1: Core mismatch panel ($R^2$ vs $CE_{rec}$ by scale at fixed $k$).
- Fig 2: Token budget vs $\Delta CE$ (low-$k$ asymptote fit).
- Fig 3: Proxy leaderboard ($R^2$, cosine, SWD, optional pullback metric).
- Fig 4: Layer anisotropy summaries vs mismatch magnitude.
- Fig 5 (optional): MI dimension indicators vs proxy-failure regimes.
Coauthor Work Breakdown (Execution-Ready)
Workstream A: Runs
- Launch low-$k$ token sweep (70M/410M models, mid layer, $k=8,16$; 10M/50M/100M tokens).
- Add 2-3 seeds on anchor cells.
Workstream B: Analysis
- Implement SWD extraction.
- Build standardized table generator with uncertainty fields.
Workstream C: Writing
- Maintain two claim tables:
  - supported now,
  - contingent on pending experiments.
- Keep abstract/conclusion conditional until D1 and D2 are settled.
Timeline (ASAP realistic)
- Week 1: Exp-A minimal + Exp-B first pass + draft figures 1-3.
- Week 2: seed repeats + anisotropy controls + claims update.
- Week 3: optional pullback/MI second wave if D1 supports intrinsic residual.
- Week 4: full manuscript freeze for internal review.
“Do Not Overclaim” Checklist (Pre-Submission)
- No “intrinsic geometry proven” language without D1 support.
- No theorem labels for unproved approximations in this setting.
- No direct “replace PR with MI k*” claims.
- Every major mechanism claim tied to one explicit falsification result.
Target Length
- Main paper: 9-10 pages (conference format).
- Appendix: 6-10 pages (methods, extra tables, ablations).