monogate.dev

The Phantom

Train an EML tree to find π. It won't. Unless you know the trick.

[Interactive demo: a regularization slider (λ, critical threshold 0.001) with live counters for runs that found π and runs trapped by the phantom. With λ below 0.001, the phantom attractor dominates and both counters start at 0.]

Phantom attractors are precision-dependent saddle points in the EML optimization landscape. They arise because finite-precision gradient computation in nested exp/ln chains cannot distinguish the saddle from a true minimum. At float64, the saddle is escapable and training finds π directly. At float32, the gradient signal is too coarse — the phantom traps every run. Adding L1 regularization (λ ≥ 0.001) steepens the landscape enough to escape even at float32. This phenomenon is specific to EML trees — Taylor and Padé bases don't exhibit it. Data: 20 seeds × 10 λ values (experiments/gen_attractor_data_v2.py).
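A minimal sketch of the precision effect described above, assuming a hypothetical EML-style node of the form exp(w · ln x) (the actual node structure and training code are not shown here): a central-difference gradient through even this single exp/ln chain loses most of its significant digits to cancellation at float32, while float64 recovers the analytic gradient to high accuracy.

```python
import numpy as np

def eml_node(w, x, dtype):
    # Hypothetical EML-style node: exp(w * ln(x)) == x**w,
    # with every intermediate held in the given precision.
    w, x = dtype(w), dtype(x)
    return np.exp(w * np.log(x))

def fd_grad_w(w, x, dtype, eps=1e-6):
    # Central finite difference d/dw, all arithmetic in `dtype`.
    w, eps = dtype(w), dtype(eps)
    return (eml_node(w + eps, x, dtype) - eml_node(w - eps, x, dtype)) / (dtype(2) * eps)

w, x = 1.5, 3.0
exact = (x ** w) * np.log(x)  # analytic: d/dw exp(w ln x) = x^w ln x

g64 = fd_grad_w(w, x, np.float64)
g32 = fd_grad_w(w, x, np.float32)

err64 = abs(g64 - exact) / exact
err32 = abs(g32 - exact) / exact
print(f"relative error  float64: {err64:.2e}   float32: {err32:.2e}")
```

The float32 gradient is dominated by rounding noise from the subtraction of two nearly equal exp/ln evaluations, which is the coarse-gradient regime the paragraph above attributes to the phantom trap; this sketch does not reproduce the L1-regularization escape itself.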
