
Precision Race Visualization

Watch FP8, FP16, FP32, and FP64 compete to converge to the true dominant eigenvalue. Each precision format shows different convergence characteristics: lower-precision formats stall at a higher residual floor, set by their machine epsilon.

Space: Play/Pause | R: Reset | Esc: Reset Zoom | ←/→: Step | Ctrl+Wheel: Zoom | Shift+Drag: Pan
[Chart: live convergence race for FP8 (E4M3), FP16, FP32, and FP64, with per-format readouts]

Experiment Conditions

Matrix Properties

Size: -- × -- SPD
Condition number κ: --
Frobenius norm: --
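Both statistics above can be computed directly with NumPy. A minimal sketch; the 2×2 matrix here is an illustrative stand-in for the page's randomly generated SPD test matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # small SPD example
kappa = np.linalg.cond(A, 2)               # condition number κ = λ_max / λ_min for SPD
fro = np.linalg.norm(A, "fro")             # Frobenius norm: sqrt of sum of squared entries
print(f"kappa = {kappa:.4f}, ||A||_F = {fro:.4f}")
```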

Eigenvalue Properties

λ₁ (dominant): --
λ₂ (second): --
Gap ratio λ₂/λ₁: --
Convergence rate: --% per iteration
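The rate readout follows from the gap ratio: the power method's error contracts by a factor of λ₂/λ₁ per iteration, so the percent reduction per step is (1 - λ₂/λ₁) × 100. A minimal sketch (the formula is standard power-method theory, assumed rather than read out of the page's code):

```python
# Assumed formula: error shrinks by a factor of λ₂/λ₁ each iteration,
# so the residual drops by (1 - λ₂/λ₁) * 100 percent per step.
def convergence_rate(lam1: float, lam2: float) -> float:
    return (1.0 - lam2 / lam1) * 100.0

print(round(convergence_rate(10.0, 9.0), 6))  # gap ratio 0.9 → 10.0% per step
```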

Experiment Setup

Random seed: --
Convergence type: --
Initial vector: random, normalized
Algorithm: Power Method
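The setup above can be sketched in a few lines of NumPy. The matrix size, seed, and spectrum here are illustrative stand-ins, not the page's actual experiment parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
eigs = np.linspace(1.0, 10.0, n)                  # known spectrum, λ₁ = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                       # SPD by construction

v = rng.standard_normal(n)
v /= np.linalg.norm(v)                            # random, normalized initial vector

for _ in range(500):                              # FP64 budget from the table below
    w = A @ v
    v = w / np.linalg.norm(w)

lam = v @ A @ v                                   # Rayleigh quotient estimate of λ₁
residual = np.linalg.norm(A @ v - lam * v)
print(f"lambda ~= {lam:.6f}, residual = {residual:.2e}")
```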

Precision Characteristics

Format       Mantissa   Machine ε     Expected Floor   Iteration Budget
FP8 (E4M3)   3 bits     1.25×10⁻¹     ~10⁻²            3,000
FP16         10 bits    9.77×10⁻⁴     ~10⁻³            2,000
FP32         23 bits    1.19×10⁻⁷     ~10⁻⁷            1,000
FP64         52 bits    2.22×10⁻¹⁶    ~10⁻¹⁵           500 (ref)

Iteration budgets are scaled by throughput multipliers so that all formats run for equivalent compute time (500 effective FP64 iterations).
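The normalization can be reproduced from the table: dividing each raw budget by its throughput multiplier (inferred from the budgets themselves, 6×/4×/2×/1×; real hardware ratios vary by device and kernel) yields the same 500 effective FP64 iterations for every format:

```python
# Throughput multipliers inferred from the iteration budgets above.
throughput = {"FP8 (E4M3)": 6, "FP16": 4, "FP32": 2, "FP64": 1}
raw_budget = {"FP8 (E4M3)": 3000, "FP16": 2000, "FP32": 1000, "FP64": 500}

effective = {fmt: raw_budget[fmt] / mult for fmt, mult in throughput.items()}
print(effective)  # every format maps to 500.0 effective FP64 iterations
```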

Key Observations

  • Normalized by throughput: the x-axis shows "effective FP64 iterations"; each precision runs for equivalent compute time (FP8 runs 6× more raw iterations but its curve appears aligned with FP64's)
  • FP8 (3000 raw iterations = 500 effective): stagnates around 10⁻² residual due to ε ≈ 0.125
  • FP16 (2000 raw = 500 effective): achieves ~10⁻³ residual before hitting precision floor
  • FP32 (1000 raw = 500 effective): converges to ~10⁻⁷ residual, sufficient for most applications
  • FP64 (500 raw = 500 effective): achieves machine precision convergence beyond 10⁻¹⁵
  • All formats follow the same initial trajectory when aligned by compute cost, but diverge at their precision floors
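The precision floors described above can be emulated by casting every intermediate back to a narrower dtype. A sketch using NumPy's float16 as the low-precision lane (NumPy has no FP8 dtype; E4M3 would need hand-rolled mantissa rounding):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T  # SPD test matrix

def power_residual(A, iters, cast=lambda x: x, seed=1):
    """Power method with every intermediate rounded through `cast`."""
    v0 = np.random.default_rng(seed).standard_normal(len(A))
    v = cast(v0 / np.linalg.norm(v0))
    for _ in range(iters):
        w = cast(A @ v)                  # round the matvec result
        v = cast(w / np.linalg.norm(w))  # round the normalized iterate
    lam = float(v @ A @ v)
    return float(np.linalg.norm(A @ v - lam * v))

r64 = power_residual(A, 500)                                      # full precision
r16 = power_residual(A, 500, cast=lambda x: x.astype(np.float16)) # ~FP16 floor
print(f"FP64 residual: {r64:.2e}, FP16 residual: {r16:.2e}")
```

The full-precision run converges to a small residual, while the float16 run stalls at a floor several orders of magnitude higher, mirroring the divergence at the precision floors described above.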