What if the invisible force pushing our universe apart isn't constant — but changing? What would that mean for everything we thought we knew about the cosmos?
Welcome to FreeAstroScience. We're glad you stopped by. Here, we turn complex scientific ideas into something anyone can grasp — because we believe knowledge shouldn't have a locked door. Whether you're a physics student, a curious stargazer, or someone who just saw a headline about "evolving dark energy" and wondered what on earth it meant — this article was written for you.
In February 2026, a fierce debate erupted in cosmology. The Dark Energy Spectroscopic Instrument (DESI) released its second massive batch of data — over 14 million galaxies and quasars mapped across billions of light-years. And when scientists compared those maps with predictions from our best model of the universe, something didn't quite match. Some say dark energy itself is evolving. Others, like Dr. Slava Turyshev at NASA's Jet Propulsion Laboratory, think the answer might be simpler: our measurements aren't precise enough yet.
This debate sits at the very heart of modern cosmology. Stay with us. By the time you finish reading, you'll understand both sides of the argument — and why the answer shapes the future of physics.
Dark Energy on Trial: What 14 Million Galaxies Reveal About Our Universe's Hidden Force
What Is Dark Energy — And Why Should You Care?
Let's start from the ground up. In 1998, two teams of astronomers — led by Saul Perlmutter, Brian Schmidt, and Adam Riess — published findings that turned physics on its head. They expected the universe's expansion to be slowing down. Gravity, after all, pulls things together. A decelerating cosmos was the sensible bet.
Instead, the expansion was speeding up.
Something invisible and powerful was pushing galaxies apart — faster and faster with every passing billion years. We named it dark energy. And it makes up roughly 68% of everything that exists.
Read that again. More than two-thirds of the cosmos is made of something we can't see, can't touch, and can't directly detect. We only see its fingerprints — the accelerating stretch of space itself.
The simplest explanation? A cosmological constant (Λ). Einstein first introduced it in 1917, then called it his "biggest blunder." In the standard ΛCDM model, dark energy has a fixed value. It doesn't change. It just is.
But here's the uncomfortable truth. When physicists try to calculate that constant from quantum field theory, the predicted value overshoots by a factor of roughly 10¹²⁰. That's 120 orders of magnitude — the most embarrassing mismatch in all of physics, often called the vacuum energy naturalness problem.
So the question has always whispered in the background: is dark energy truly constant? Or is it something more dynamic — something that shifts over cosmic time?
New data from DESI has made that whisper a shout.
The Math Behind Cosmic Expansion
At the core of cosmology sits the Friedmann equation. It governs how the universe grows.
The Friedmann Equation (Eq. 2 — Turyshev 2026)
\[ H^{2}(z) \;=\; H_{0}^{2}\!\left[\,\Omega_{\rm m}(1{+}z)^{3} \;+\; \Omega_{\rm r}(1{+}z)^{4} \;+\; \Omega_{k}(1{+}z)^{2} \;+\; \Omega_{\rm DE}\,f_{\rm DE}(z)\,\right] \]

In everyday language: the universe's expansion rate (H) depends on everything inside it — ordinary matter, radiation, curvature, and dark energy. Each ingredient contributes differently as the cosmos ages.
For dark energy, the critical variable is w — the equation of state. If w = −1 exactly, dark energy is a cosmological constant. Unchanging. Eternal. But if w drifts over time, we're dealing with something entirely different — and the fate of the universe might need a rewrite.
The dark energy density evolves as:
Dark Energy Density Evolution (Eq. 3)
\[ f_{\rm DE}(z) \;=\; \exp\!\left(3\int_{0}^{z}\frac{1+w(z')}{1+z'}\,dz'\right) \]

When w = −1, that integral vanishes. The dark energy density stays flat. Any deviation from −1, and things start to change.
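For readers who like to see the machinery run, here is a minimal Python sketch of Eqs. 2 and 3: a flat universe with CPL dark energy. The parameter values are the Planck-like fiducials quoted later in this article (Ωm = 0.315, h = 0.674); the small radiation density is a standard value we assume for illustration.

```python
import numpy as np
from scipy.integrate import quad

H0, Om, Or = 67.4, 0.315, 9.2e-5   # km/s/Mpc; Planck-like fiducials
Ode = 1.0 - Om - Or                # flatness (Omega_k = 0) fixes Omega_DE

def f_de(z, w0=-1.0, wa=0.0):
    """Eq. 3: dark-energy density evolution for a CPL w(z)."""
    integrand = lambda zp: (1 + w0 + wa * zp / (1 + zp)) / (1 + zp)
    return np.exp(3 * quad(integrand, 0, z)[0])

def H(z, w0=-1.0, wa=0.0):
    """Eq. 2, flat case: expansion rate in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4
                        + Ode * f_de(z, w0, wa))

print(H(0.0))                      # 67.4 by construction
print(H(1.0))                      # ~120.7 for the cosmological constant
print(H(1.0, w0=-0.9, wa=-0.5))    # ~119.7: a slightly different history
```

Swapping in different (w₀, wₐ) values is all it takes to explore how an evolving equation of state reshapes the expansion history.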
What Did DESI's Second Data Release Actually Find?
14 Million Galaxies and One Nagging Mismatch
The Dark Energy Spectroscopic Instrument sits atop the Nicholas U. Mayall 4-meter telescope at Kitt Peak, Arizona. Its mission: measure the positions and distances of millions of cosmic objects, building the most detailed 3D map of the universe ever assembled.
In its second data release — known as DESI DR2 — the instrument delivered distance measurements with percent-level precision across redshifts from 0 to roughly 2.5. That corresponds to looking back over 10 billion years of cosmic history. It even included a high-redshift anchor from the Lyman-alpha forest at an effective redshift of z = 2.33.
Now here's where things get compelling.
Scientists combined DESI's galaxy maps with data from the Cosmic Microwave Background (CMB) — the faint afterglow of the Big Bang, best measured by the Planck satellite in 2018. The CMB tells us what the early universe looked like. DESI tells us what the late universe looks like. And in the standard flat ΛCDM model, they should agree.
They don't perfectly agree. The tension sits at about 2.3σ.
In particle physics, you need 5σ to claim a discovery. But in cosmology — where datasets are harder to replicate — 2.3σ is enough to make everyone sit up straight.
The Numbers Behind the Headlines
When researchers allowed dark energy's equation of state to change over time — using a two-parameter model called w₀wₐCDM (also known as CPL, after Chevallier, Polarski, and Linder) — the combined fit improved. The preference for evolving dark energy over the cosmological constant reached:
- 3.1σ when combining DESI BAO with CMB data alone
- 2.8 to 4.2σ when Type Ia supernovae were added — but the exact number depends heavily on which supernova catalog was used
The best-fit solutions fell in a specific quadrant: w₀ slightly above −1 and wₐ slightly below 0. Translation: dark energy looks like a cosmological constant in the distant past but appears to gradually weaken toward the present.
That's a tantalizing pattern. But — and this is a big "but" — the signal's strength shifts depending on which supernova dataset you plug in.
| Probe / Dataset | Key Measurement | Selected Result |
|---|---|---|
| Planck 2018 (ΛCDM baseline) | CMB anisotropies | H₀ ≈ 67.4 ± 0.5 km/s/Mpc; Ωm ≈ 0.315 ± 0.007 |
| DESI DR2 BAO | BAO distances, 0 ≲ z ≲ 2.5 | ~2.3σ mismatch with CMB in flat ΛCDM; 3.1σ preference for w₀wₐCDM (BAO+CMB) |
| DESI DR2 Ly-α BAO | DM/rd, DH/rd at zeff = 2.33 | DH/rd = 8.632 ± 0.098 ± 0.026; DM/rd = 38.99 ± 0.52 ± 0.12 |
| Pantheon+ (SNe Ia) | SN Hubble diagram to z ≈ 2.26 | Ωm = 0.334 ± 0.018 (flat ΛCDM); w₀ = −0.90 ± 0.14 (flat wCDM) |
| DES-SN5YR (SNe Ia) | 1,635 DES SNe + low-z anchor | Ωm = 0.352 ± 0.017 (flat ΛCDM) |
| DES Year 6 (3×2pt) | Shear + clustering + lensing | S₈ = 0.789 ± 0.012; w = −1.12 (+0.26/−0.20) in wCDM |
*Data compiled from DESI DR2 publications and complementary survey results. See Sources.*
Is This Real Physics — Or Are We Just Measuring Wrong?
As the famous saying goes: extraordinary claims require extraordinary evidence. And that's precisely where Dr. Slava Turyshev enters the picture.
Dr. Turyshev — a physicist at NASA's Jet Propulsion Laboratory, and the most vocal advocate for the Solar Gravitational Lens mission — published a detailed pre-print on arXiv in February 2026. His message is cautious but clear: before we rewrite the textbooks, let's make sure our data is clean.
The Supernova Problem: When 0.02 Magnitudes Shakes the Universe
Type Ia supernovae are our best "standard candles" for measuring cosmic distances. They all explode with roughly the same brightness. By comparing how bright they should be with how bright they appear, we can calculate how far away they are.
Think of it like judging the distance to a streetlamp by how dim it looks. Simple in principle. Tricky in practice.
Each supernova needs corrections — for light-curve shape, color, and host-galaxy properties. The corrected brightness is then converted into a distance modulus μ(z), defined as:
Distance Modulus (Eq. 11)
\[ \mu(z) \;=\; m_B - M_B \;=\; 5\,\log_{10}\!\left(\frac{D_L(z)}{10\;\text{pc}}\right) \]

If those corrections are off — even by a tiny 0.02 magnitudes — they can shift our inferred dark energy parameters by a startling amount.
Here's the math. A coherent offset of 0.02 mag translates to a fractional luminosity-distance shift of roughly 0.92%. At cosmological precision, that's not noise — it's a landmine.
Dr. Turyshev's linear-response analysis shows exactly how these small errors propagate. At redshift z ≈ 1, a residual of just +0.02 mag can shift the dark energy equation of state by δw₀ ≈ −0.065 (holding wₐ fixed) or δwₐ ≈ −0.28 (holding w₀ fixed).
Those shifts are comparable in size to the "evolving dark energy" signals being reported from DESI.
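Both numbers are easy to verify. The short check below assumes nothing beyond Eq. 11 and the z ≈ 1 sensitivity quoted in the table that follows.

```python
import numpy as np

# Claim 1 (from Eq. 11): a coherent 0.02 mag offset is a ~0.92% shift
# in luminosity distance, since delta_D / D = ln(10) / 5 * delta_mu.
delta_mu = 0.02                          # mag
print(np.log(10) / 5 * delta_mu)         # 0.00921..., i.e. ~0.92%

# Claim 2: first-order bias on w0 at z ~ 1, using the sensitivity
# d(Delta mu)/d(w0) = -0.310 from the table below.
print(delta_mu / -0.310)                 # -0.0645..., i.e. delta_w0 ~ -0.065
```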
| Redshift (z) | ∂Δμ/∂w₀ | ∂Δμ/∂wₐ | δw₀ (wₐ fixed) | δwₐ (w₀ fixed) |
|---|---|---|---|---|
| 0.3 | −0.143 | −0.016 | −0.14 | −1.2 |
| 0.5 | −0.230 | −0.036 | −0.087 | −0.56 |
| 1.0 | −0.310 | −0.071 | −0.065 | −0.28 |
| 2.0 | −0.298 | −0.089 | −0.067 | −0.22 |
*Values evaluated at a flat ΛCDM fiducial (Ωm = 0.315, h = 0.674) with zref = 0.1. Adapted from Turyshev (2026), Table I.*
That table is a wake-up call. The "signal" for new physics and the "noise" from calibration errors occupy the same territory. Before we declare a revolution, we need to be sure we can tell them apart.
As a practical target: keeping the bias on w₀ below 0.05 means controlling coherent relative modulus residuals to roughly (1–2) × 10⁻² mag across the critical redshift range of z ~ 0.5 to 1.0.
The Sound Horizon: Can We Trust Our Cosmic Ruler?
There's a second potential source of confusion, and it's just as sneaky.
When we measure distances with baryon acoustic oscillations (BAO), we rely on the sound horizon — the distance that pressure waves traveled through the hot plasma of the early universe before the cosmos cooled enough for atoms to form, about 380,000 years after the Big Bang.
That distance is frozen into the distribution of galaxies like a cosmic fingerprint. It acts as a standard ruler: compare the apparent size of the pattern at different epochs, and you trace out how the universe has expanded.
But here's the catch. The ruler itself is calibrated by early-universe physics. It depends on assumptions about the baryon density, photon temperature, and expansion rate before recombination:
Sound Horizon at the Drag Epoch (Eq. 27)
\[ r_d \;=\; \int_{z_d}^{\infty}\frac{c_s(z)}{H(z)}\,dz \]

If our assumptions about the pre-recombination universe are slightly wrong — if rd differs from the Planck-calibrated value — our entire distance ladder shifts. And that shift can masquerade as evolving dark energy when it's really an early-universe calibration artifact.
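As a rough illustration of Eq. 27, here is a sketch that evaluates the integral with textbook approximations. The baryon and photon densities and the drag redshift (zd ≈ 1060) are Planck-like values we supply as assumptions; a production calculation would use a full recombination code.

```python
import numpy as np
from scipy.integrate import quad

c, H0 = 299792.458, 67.4        # km/s, km/s/Mpc
Om, Or = 0.315, 9.2e-5          # radiation incl. massless neutrinos
R0 = 0.75 * 0.0224 / 2.47e-5    # baryon-to-photon loading at z = 0

def c_s(z):
    """Sound speed of the baryon-photon plasma, km/s."""
    return c / np.sqrt(3 * (1 + R0 / (1 + z)))

def H(z):
    """Expansion rate; dark energy is negligible before recombination."""
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4)

r_d, _ = quad(lambda z: c_s(z) / H(z), 1060, np.inf)
print(r_d)   # ~147 Mpc, close to the Planck-calibrated ruler
```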
This is exactly why Dr. Turyshev's next diagnostic tool is so powerful.
Can We Test Expansion Without Any Ruler at All?
The Alcock-Paczynski Diagnostic: A Ruler-Free Window Into Expansion
Dr. Turyshev proposes a mathematical technique called the Alcock-Paczynski (AP) diagnostic that elegantly sidesteps the ruler problem.
The concept is beautiful in its simplicity. Instead of measuring absolute distances (which depend on rd), you look at the shape of the universe — specifically, the ratio of transverse and radial distances to the same objects:
The Alcock-Paczynski Parameter (Eq. 29)
\[ F_{\rm AP}(z) \;\equiv\; \frac{D_M(z)}{D_H(z)} \;=\; \frac{D_M(z)/r_d}{D_H(z)/r_d} \]

See what happened there? Both H₀ and the sound horizon rd cancel in the ratio. What remains is a pure measurement of how the expansion history's shape has changed over time — completely free from early-universe assumptions.
This gives us a clean test:
- If the DESI mismatch is caused by a problem with rd (an early-universe issue), then FAP should still match ΛCDM predictions.
- If FAP itself deviates from ΛCDM, that points to genuine late-time changes in the expansion rate.
So what does the data say?
Using DESI DR2's Lyman-alpha measurements at zeff = 2.33:
- Observed: FAP = 4.518 ± 0.095 (stat) ± 0.019 (sys)
- ΛCDM prediction: FAP ≈ 4.55
- Difference: −0.03 ± 0.10 — consistent at ≤ 0.3σ
That's a reassuring result for the standard model. With current data, the ruler-free test doesn't show any clear deviation from ΛCDM.
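You can reproduce the ΛCDM prediction yourself. In a flat model, H₀ drops out of the ratio, so a single assumed parameter (Ωm ≈ 0.315) fixes the answer. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

Om = 0.315                                   # flat LCDM; H0 cancels

def E(z):
    """Dimensionless expansion rate H(z) / H0."""
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def F_AP(z):
    # D_M is proportional to the integral of dz'/E; D_H to 1/E.
    # Both H0 and r_d drop out of the ratio (Eq. 29).
    I, _ = quad(lambda zp: 1.0 / E(zp), 0, z)
    return E(z) * I

print(F_AP(2.33))                            # ~4.55
print((F_AP(2.33) - 4.518) / 0.095)          # ~0.3 sigma from the DESI value
```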
This doesn't kill the idea of evolving dark energy. A single high-redshift data point can't discriminate between smooth late-time models. But it does tell us to stay level-headed. We need FAP measurements at multiple redshifts — and they're on the way as DESI publishes more of its anisotropic BAO data.
What If Dark Energy Really Is Evolving?
Let's flip the coin. Suppose the signal is real. Suppose supernovae are calibrated perfectly and rd is spot-on. What physical models could explain a dark energy that changes over time?
The CPL Framework: Putting Numbers on Change
The most common way to describe evolving dark energy is the CPL parametrization (Chevallier–Polarski–Linder):
CPL Equation of State (Eq. 46)
\[ w(a) \;=\; w_0 + w_a\,(1 - a) \;=\; w_0 + w_a\,\frac{z}{1+z} \]

Here, w₀ is the equation of state today, and wₐ tells you how fast w changes with cosmic time. If w₀ = −1 and wₐ = 0, you recover the cosmological constant. Any other values mean dark energy is dynamic.
The effective phantom crossing — the moment when w crosses −1 — happens at:

\[ z_{\times} \;=\; \frac{-1 - w_0}{w_a + 1 + w_0} \]
DESI's combined fits prefer w₀ slightly above −1 and wₐ slightly below 0. This pattern has a specific name: "thawing". Dark energy was frozen near −1 in the early universe and has slowly begun to drift away from it.
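To make the crossing formula concrete, here is a tiny check with illustrative values of our own choosing (w₀ = −0.9, wₐ = −0.5), not a DESI best fit:

```python
w0, wa = -0.9, -0.5          # illustrative only, not a fitted result

# Phantom-crossing redshift from the formula above:
z_cross = (-1 - w0) / (wa + 1 + w0)
print(z_cross)               # 0.25

# Sanity check against the CPL w(z):
w = lambda z: w0 + wa * z / (1 + z)
print(w(0.0), w(z_cross), w(2.0))   # -0.90, -1.00 (crossing), -1.23
```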
Quintessence: A Field That Slowly Wakes Up
The simplest physical picture? Quintessence — a scalar field slowly rolling down a potential energy hill, much like a marble rolling across a very shallow bowl.
For billions of years, Hubble friction held the field nearly still (mimicking a cosmological constant). As the universe expanded and that friction weakened, the field began to move. Its equation of state crept above −1.
This is elegant. It fits the thawing pattern naturally. And its sound speed is approximately 1, meaning dark energy clustering is weak on sub-horizon scales — so it doesn't drastically change how matter clumps together.
But quintessence has a hard limit: it can never cross the w = −1 barrier. It always sits at w ≥ −1. If future data firmly shows a crossing, quintessence — as a single canonical scalar field — is ruled out.
The LTIT Model: When Dark Sectors Start Talking to Each Other
Dr. Turyshev introduces an inventive model: the Late-Transition Interacting Thawer (LTIT).
Think of it this way. Imagine dark energy and dark matter are neighbors who've been ignoring each other for billions of years. Then, at some point in cosmic history, they start exchanging energy — like one neighbor lending the other sugar, except the "sugar" is energy-momentum .
The LTIT model has three ingredients:
- A canonical scalar field (like quintessence) with a potential V(ϕ)
- A coupling to cold dark matter that's switched off in the early universe
- A trigger function (a hyperbolic tangent) that turns the coupling on only at late cosmic times
The interaction creates an effective equation of state that can look like it crosses −1 — even though the underlying physics never requires a "phantom" field with pathological negative kinetic energy:
LTIT Effective Equation of State (Eq. 68)
\[ w_{\rm eff}^{\rm LTIT}(z) \;=\; w_\varphi(z) \;-\; \frac{Q_{\rm LTIT}}{3\,H\,\rho_\varphi} \]

The LTIT model is testable. It predicts specific changes in how matter clusters together — measurable through redshift-space distortions and lensing. It leaves the early universe (and the sound horizon) untouched. And because it only modifies late-time physics, FAP diagnostics should track its predictions.
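To see how an interaction can fake a crossing, consider the illustrative sketch below. The coupling form Q = β g(z) H ρφ is a simplifying assumption we make for this example; with it, the drag term in Eq. 68 reduces to β g(z)/3. All parameter values are toy numbers.

```python
import numpy as np

def g_trigger(z, z_t=0.5, dz=0.2):
    """tanh switch: ~0 in the early universe, ~1 at late times."""
    return 0.5 * (1 + np.tanh((z_t - z) / dz))

def w_eff(z, w0=-0.9, wa=-0.1, beta=0.6):
    a = 1 / (1 + z)
    w_phi = w0 + wa * (1 - a)        # thawing field: never below -1
    # Assumed coupling Q = beta * g(z) * H * rho_phi turns the drag
    # term of Eq. 68 into beta * g(z) / 3.
    return w_phi - beta * g_trigger(z) / 3

for z in (0.0, 0.5, 2.0):
    print(z, round(w_eff(z), 3))     # -1.099, -1.033, -0.967
# w_eff dips below -1 at late times even though w_phi >= -1 everywhere.
```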
Phantom Crossing: When Physics Breaks Its Own Rules
If w genuinely crosses −1 at some point in cosmic history, things get strange. Standard single-field models can't achieve this without encountering ghost instabilities — fields with negative kinetic energy whose vacuum would decay catastrophically.
A "no-go theorem" by Vikman (2005) makes this explicit: for a broad class of single-field models, smooth crossing of w = −1 is blocked under standard stability assumptions.
To make phantom crossing work, you'd need at least one of:
- Multiple fields — a "quintom" scenario with two scalar fields
- Dark sector interactions — like the LTIT model, where the effective w crosses −1 while the real one doesn't
- Modified gravity — where the "w" reconstructed from distances isn't a real equation of state at all
Each of these makes different predictions. And each is testable.
| Model Class | Background Signature | Perturbation / Gravity Signature | Key Cross-Check |
|---|---|---|---|
| ΛCDM | w = −1 constant; ρDE = const. | GR growth; μ = η = 1 | Geometry + growth consistency; standard sirens give D_L^GW = D_L^EM |
| Smooth Quintessence | w(z) > −1; no phantom crossing | c²s ≈ 1; weak clustering; GR growth | No persistent w < −1; growth index near GR |
| Clustering DE (k-essence) | w(z) > −1 possible | c²s ≪ 1; modified lensing/growth | 3×2pt vs. distances; scale dependence in growth |
| Interacting Dark Sector | weff(z) can mimic crossing | Modified growth + momentum transfer | RSD + 3×2pt closure; bias & velocity field consistency |
| Early-Time rd Shift (EDE-like) | Background fits partly absorbed by rd | Changes early-time structure; often linked to σ₈ shifts | CMB lensing + LSS + BAO consistency; direct rd prior tests |
| Modified Gravity (scalar-tensor) | Effective w(z) from distances may cross −1 | μ ≠ 1, η ≠ 1; GW friction possible | Standard sirens: D_L^GW ≠ D_L^EM; EFT stability; growth/lensing |
*Adapted from Turyshev (2026), Table III. Models distinguished by perturbation-level closure tests.*
How Do We Compare Competing Models of the Universe?
Saying "model X fits better than model Y" isn't enough. You need to quantify how much better — and whether the improvement justifies the extra complexity.
Statistical Tools That Keep Cosmologists Honest
The standard approach is a likelihood ratio test. You compare the best-fit chi-squared values between two nested models — say, ΛCDM (simpler) and w₀wₐCDM (more complex). The difference, Δχ², follows a known statistical distribution:
Likelihood Ratio & Gaussian-Equivalent Significance (Eq. 70)
\[ p \;=\; 1 - F_{\chi^2_k}(\Delta\chi^2), \qquad N_\sigma \;=\; \Phi^{-1}(1 - p/2) \]

A common mistake people make: claiming that Nσ ≈ √(Δχ²). That shortcut only works when you've added one extra parameter. The w₀wₐCDM extension adds two (w₀ and wₐ), so the conversion is different.
For k = 2 extra parameters (the sketch after this list checks these values):
- 1σ → Δχ² ≈ 2.30
- 2σ → Δχ² ≈ 6.18
- 3σ → Δχ² ≈ 11.83
- 4σ → Δχ² ≈ 19.33
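Those thresholds are quick to verify with scipy. A minimal sketch of Eq. 70, nothing more:

```python
from scipy.stats import chi2, norm

def n_sigma(delta_chi2, k):
    """Eq. 70: Gaussian-equivalent significance for k extra parameters."""
    p = chi2.sf(delta_chi2, df=k)     # survival function = 1 - CDF
    return norm.isf(p / 2)            # two-sided Gaussian equivalent

def delta_chi2_for(n_sig, k):
    return chi2.isf(2 * norm.sf(n_sig), df=k)

print(n_sigma(11.83, k=2))            # ~3.0 sigma
print(delta_chi2_for(3, k=1))         # 9.0: the naive sqrt rule, k = 1 only
print(delta_chi2_for(3, k=2))         # ~11.83
```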
Then there are information criteria — penalties for adding complexity:
- AIC adds a cost of 2 per extra parameter. Relatively lenient.
- BIC adds ln(Ndata) per extra parameter. For supernova datasets with thousands of data points, that penalty is roughly 7 per parameter — much harsher.
For SN-dominated datasets, BIC demands Δχ² > 14–18 to favor the two-parameter extension. So a "3.1σ" detection under a likelihood ratio might not survive the stricter BIC test. This gap between metrics is exactly why reporting multiple comparison tools — likelihood ratio, AIC, BIC, and ideally Bayesian evidence — matters for honest science.
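To put numbers on that gap, here is a quick comparison of the two penalties. The sample size N ≈ 1,700 is an illustrative, Pantheon+-sized assumption:

```python
import numpy as np

k, N = 2, 1700            # two extra parameters; assumed SN sample size
print(2 * k)              # AIC penalty: 4
print(k * np.log(N))      # BIC penalty: ~14.9; the fit must improve by
                          # more than this before BIC prefers the extension
```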
Growth, Lensing, and Gravitational Waves — Beyond Distance
Distance measurements alone can't distinguish all these models. Two very different theories might predict the same expansion history yet disagree wildly on how matter clumps together.
That's where growth measurements come in.
Redshift-space distortions (RSD) track how fast galaxies fall toward overdense regions. Weak gravitational lensing maps the total mass distribution by measuring how background galaxy shapes are subtly stretched by foreground gravity. Together — in what's called a 3×2pt analysis (combining shear-shear, galaxy-galaxy lensing, and galaxy clustering) — they test whether a model that fits distances also fits perturbations.
DES Year 6 recently reported exactly this kind of analysis. Their results: S₈ = 0.789 ± 0.012 in ΛCDM, and w = −1.12 (+0.26/−0.20) in a constant-w extension — consistent with the cosmological constant.
And then there's the gravitational wave option. When neutron stars merge (like GW170817 in August 2017), the gravitational waves provide a "standard siren" — an independent distance measurement that doesn't depend on supernova calibration or the sound horizon. As more events accumulate, standard sirens will offer a powerful cross-check.
Some modified gravity theories even predict that gravitational-wave luminosity distances differ from electromagnetic ones:
GW Luminosity Distance in Modified Gravity (Eq. 62)
\[ D_L^{\rm GW}(z) \;=\; D_L^{\rm EM}(z)\;\exp\!\left[\frac{1}{2}\int_0^z \frac{\nu(z')}{1+z'}\,dz'\right] \]

If D_L^GW ≠ D_L^EM, that's a direct signature of modified gravity — and a decisive way to tell it apart from simple dark-energy evolution.
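For a feel of the effect, here is a toy calculation assuming a constant friction term ν(z) = ν₀, an arbitrary illustrative value rather than a measurement; for constant ν₀ the integral is analytic:

```python
import numpy as np
from scipy.integrate import quad

nu0 = 0.1   # toy constant friction; real data would constrain this

def ratio(z):
    """Eq. 62: D_L^GW / D_L^EM for nu(z) = nu0."""
    I, _ = quad(lambda zp: nu0 / (1 + zp), 0, z)
    return np.exp(0.5 * I)

print(ratio(1.0))               # ~1.035, a ~3.5% distance mismatch at z = 1
print((1 + 1.0) ** (nu0 / 2))   # same number: the integral is analytic
```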
What's Coming Next in This Cosmic Detective Story?
The data pipeline is far from empty. Several game-changing developments are expected soon:
DESI DR3 — The third data release will cover the first three full years of the main survey. It's expected later in 2026 and should dramatically sharpen BAO precision across multiple redshift bins. Every new bin gives us another FAP data point — building the ruler-free test into a genuine discriminator.
Euclid — The European Space Agency's Euclid telescope has already released its first dataset. By combining galaxy shape measurements with spectroscopic data, Euclid will test both expansion and growth with remarkable precision.
More Standard Sirens — As the LIGO, Virgo, and KAGRA gravitational-wave detectors improve, the number of neutron-star mergers with electromagnetic counterparts will grow. Each event gives us a calibration-free distance measurement — immune to the supernova and sound-horizon systematics at the center of this debate.
Full 3×2pt + BAO Joint Analyses — Combining weak lensing, galaxy clustering, and shear from DES, KiDS-Legacy (reporting S₈ = 0.815 +0.016/−0.021), and HSC Year 3 with DESI BAO will test perturbation-level consistency. This kind of test can distinguish between background-only modifications and genuine new physics.
As Dr. Turyshev writes, the dominant limitation is no longer statistical precision of distance ratios. It's the coherent propagation of a small set of calibration directions — the BAO ruler (via rd) and redshift-dependent supernova systematics. We've entered an era where knowing how well we measure matters as much as what we measure.
A Moment of Perspective
Let's step back and breathe.
We're living through an extraordinary chapter of cosmology. The DESI instrument has given us the most detailed 3D map of the universe ever assembled — over 14 million galaxies and quasars stretched across 10 billion years of cosmic history. That map hints, gently and provocatively, that dark energy might not be the simple cosmological constant we've assumed for a quarter century.
But hints aren't proof.
And as Dr. Turyshev's careful analysis demonstrates, tiny measurement errors in supernova calibration or in our assumed cosmic ruler can mimic the signal of evolving dark energy. The difference between "new physics" and "we need better calibration" currently comes down to a few hundredths of a magnitude. That's how fine the line is.
Here's what we know for certain:
- DESI DR2 data shows a real tension with the standard ΛCDM model when combined with CMB measurements — at about 2.3σ significance.
- Allowing dark energy to evolve improves the fit — but the strength of that improvement depends on which supernova dataset is used and how calibration systematics are handled.
- The ruler-free Alcock-Paczynski test at z = 2.33 shows no clear deviation from ΛCDM, limiting the evidence for genuine late-time expansion changes at this single redshift.
- Multiple physical models can explain the observed patterns — from quintessence to interacting dark sectors to modified gravity — and they make different predictions for growth, lensing, and gravitational waves.
- The next two years of data — from DESI DR3, Euclid, DES, and gravitational-wave observatories — will either strengthen the case for new physics or reveal that our standard candles and standard rulers needed a tune-up.
Either outcome is a win. Confirming ΛCDM to unprecedented precision would be a triumph of scientific methodology. Discovering that dark energy evolves would open an entirely new frontier — forcing us to rethink fundamental forces, the fate of the universe, and possibly physics itself.
We don't know which door will open. And honestly? That uncertainty is what makes this moment thrilling.
This article was written specifically for you by FreeAstroScience.com, where we explain complex scientific principles in clear, accessible language — because we believe every curious mind deserves a seat at the table of knowledge. At FreeAstroScience, we strive to keep your mind active, engaged, and questioning. As Francisco Goya warned us centuries ago: the sleep of reason breeds monsters. Stay awake. Stay curious. And come back soon — there's always more universe to explore.
Written by Gerd Dani — President, FreeAstroScience Science and Cultural Group
Sources
- Andy Tomaswick, "Is Dark Energy Actually Evolving?" Universe Today, February 16, 2026.
- Slava G. Turyshev, "Dark Energy After DESI DR2: Observational Status, Reconstructions, and Physical Models," arXiv:2602.05368v1 [astro-ph.CO], February 6, 2026. Jet Propulsion Laboratory, California Institute of Technology. Carried out under contract with NASA. ©2026 California Institute of Technology.
We support peace and science for all. Knowledge builds bridges — not walls.
