MatterSpace Lattice Passes Blind Rediscovery on Live Production Deployment
MatterSpace Lattice has passed blind rediscovery on a live hosted production deployment. Not a research prototype. Not a carefully controlled lab environment. Not a script running on a researcher's workstation. A fully operational cloud-hosted product, running on standard GPU hardware, accessed through production API surfaces, with every telemetry check, health probe, and product validation test active during the campaign. Two sealed targets—Re₁@Ni(111)+CH₄ and Ir₁@Ni(111)+CH₄ single-atom alloy catalysts for methane cracking—were rediscovered through the same deployment that serves commercial customers.
This distinction matters more than it might first appear. The gap between a research result and a production product is where most AI systems quietly fail. A model that works in a notebook does not necessarily work in a production service. A generation pipeline that runs on a researcher's GPU cluster does not necessarily survive the constraints of real infrastructure—health checks, timeout policies, memory limits, SDK contract enforcement, and the full weight of product-grade surface validation. MatterSpace Lattice passed all of it. Blind rediscovery was achieved not despite the production environment but through it.
The sealed targets
The two sealed targets were Re₁@Ni(111)+CH₄ and Ir₁@Ni(111)+CH₄—single-atom alloy catalysts where a single rhenium or iridium atom is embedded in a Ni(111) surface, evaluated for methane (CH₄) adsorption and C–H bond activation. These are the same catalyst systems that MatterSpace blindly rediscovered in the original research benchmark, now used as sealed targets for a hosted production validation campaign. The target structures were completely firewalled from the generation pipeline. No target information was available to the engine during candidate generation.
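One simple way to make that kind of firewall auditable is sketched below; the file layout and function names are hypothetical, not MatterSpace's actual internals. The idea is to hash the sealed reference before any generation starts and keep the reference readable only by a separate evaluator process.

# Illustrative only: one way a sealed target can be made auditable. The
# module layout and names are hypothetical, not MatterSpace internals.

import hashlib
import json
from pathlib import Path

SEALED_DIR = Path("sealed_targets")   # readable by the evaluator process only

def seal_target(reference_structure: Path) -> str:
    """Record a cryptographic hash of the reference structure before any
    candidate generation starts, so reviewers can later verify that the
    target was fixed in advance and never visible to the generator."""
    SEALED_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(reference_structure.read_bytes()).hexdigest()
    manifest = {"file": reference_structure.name, "sha256": digest}
    (SEALED_DIR / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return digest

# The generation pipeline runs in a separate process with no read access to
# SEALED_DIR; only the post-hoc evaluator ever loads the reference structure.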
Results: Re₁@Ni(111)+CH₄
The rhenium target campaign generated 50 candidates on the hosted deployment. All 50 passed Level A validation—a 100% pass rate on the adsorption energy threshold, confirming that every generated candidate fell within the catalytically relevant performance envelope. At Level B, 26 of the 50 candidates achieved structural fingerprint matches against the sealed Re₁@Ni target, a 52% hit rate for independent fingerprint rediscovery. At Level C—the strictest test, full atomic-level RMSD comparison—the best candidate achieved 655 milliangstrom (0.655 Å) RMSD against the known Re₁@Ni crystal structure. This is below the 1.00 Å threshold that defines a successful hosted structural rediscovery. Pass on all three levels.
Results: Ir₁@Ni(111)+CH₄
The iridium target campaign generated 50 candidates on the same hosted deployment. Again, all 50 passed Level A validation—100% pass rate on the adsorption energy threshold. At Level B, 30 of the 50 candidates achieved fingerprint matches, a 60% hit rate. At Level C, the best candidate achieved 508 milliangstrom (0.508 Å) RMSD against the known Ir₁@Ni crystal structure—also below the 1.00 Å threshold. Pass on all three levels. The iridium result is notably tighter than the rhenium result, with a structural precision that approaches the sub-half-angstrom range achieved in the original research benchmark.
Two sealed targets. Two clean passes across all three validation levels. Both on a live hosted deployment with full product surface enforcement. MatterSpace Lattice is a production product, not a research artifact.
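For readers who want a concrete sense of what a Level C comparison involves mechanically, the sketch below computes an all-atom RMSD after optimal rigid-body superposition via the Kabsch algorithm. It is a minimal NumPy illustration that assumes the candidate and reference structures already share an atom ordering; it is not MatterSpace's evaluation code.

import numpy as np

def kabsch_rmsd(candidate: np.ndarray, reference: np.ndarray) -> float:
    """All-atom RMSD (in the units of the inputs, e.g. angstroms) after
    optimal rigid-body superposition via the Kabsch algorithm.
    Both arrays are (N, 3) and assumed to use the same atom ordering."""
    # Center both coordinate sets on their centroids.
    P = candidate - candidate.mean(axis=0)
    Q = reference - reference.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix P^T Q.
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))   # guard against reflections
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    # RMSD between the rotated candidate and the reference.
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Conceptually, a Level C pass against the hosted threshold is then:
# kabsch_rmsd(candidate_coords, reference_coords) < 1.00   # angstroms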
Hardware and timing
Both campaigns ran on NVIDIA A10G GPUs, as recorded in the campaign telemetry—a standard cloud GPU tier, not the A100 used in the original research benchmark. The A10G represents the kind of hardware available through standard cloud computing providers at commodity pricing, reinforcing that MatterSpace does not require exotic compute infrastructure to achieve scientifically meaningful results.
The Re₁@Ni campaign completed in 5.15 hours of generation time plus 1.42 hours of evaluation time—a total wall-clock time of approximately 6.6 hours for the full blind rediscovery pipeline. The Ir₁@Ni campaign completed in 5.37 hours of generation time plus 0.49 hours of evaluation time—approximately 5.9 hours total. These timings include all hosted infrastructure overhead: API request handling, health monitoring, telemetry recording, and product surface validation. They represent real production performance, not idealized research benchmarks stripped of operational costs.
82/82 product surface checks
During both campaigns, MatterSpace Lattice passed 82 out of 82 hosted product surface checks. These checks validate that the production deployment is operating correctly across its full surface area: API endpoint availability, SDK contract compliance, MCP tool surface functionality, authentication and authorization flows, candidate serialization and deserialization, result persistence, campaign state management, and telemetry integrity. A failure on any of these checks would indicate that the hosted deployment was not operating at product grade—that some aspect of the production infrastructure was degraded or misconfigured. All 82 passed. The blind rediscovery results were produced by a fully healthy, fully operational production deployment.
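As an illustration of the shape such a check suite can take (the host, endpoint paths, and check names below are invented; the real 82-check suite is not public), a minimal surface-check runner might look like this:

# Illustrative surface-check runner. Endpoints and check names are
# hypothetical; they do not describe the actual MatterSpace Lattice checks.

import requests

BASE_URL = "https://api.example-matterspace-host.com"   # placeholder host
CHECKS = {
    "api_health":     "/v1/health",
    "auth_flow":      "/v1/auth/verify",
    "campaign_state": "/v1/campaigns/_selftest",
    "telemetry":      "/v1/telemetry/_selftest",
}

def run_surface_checks(token: str) -> dict:
    """Hit each check endpoint and record pass/fail. A product-grade run
    requires every check to pass, not just a majority."""
    results = {}
    headers = {"Authorization": f"Bearer {token}"}
    for name, path in CHECKS.items():
        try:
            resp = requests.get(BASE_URL + path, headers=headers, timeout=30)
            results[name] = resp.status_code == 200
        except requests.RequestException:
            results[name] = False
    return results

if __name__ == "__main__":
    outcome = run_surface_checks(token="REDACTED")
    print(f"{sum(outcome.values())}/{len(outcome)} surface checks passed")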
Why production deployment matters
The materials science AI field is filled with impressive research results that exist only as research results. Models that achieve strong benchmarks in controlled environments but have never been deployed as production services. Generation pipelines that work on the original author's hardware but have never survived the discipline of production infrastructure—containerization, resource limits, API contracts, monitoring, and the expectation that the system will produce correct results reliably, not just once for a paper but continuously for paying customers.
MatterSpace Lattice exists as a production product. It has a full REST API surface for programmatic access. It has an SDK for integration into research workflows and automated pipelines. It has an MCP (Model Context Protocol) tool surface for agent-driven discovery workflows. All of these surfaces were operational during the blind rediscovery campaigns. The results were produced through the same code paths, the same infrastructure, and the same validation checks that every customer interaction uses. There is no separate research mode that bypasses production constraints. The product is the research system, and the research system is the product.
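To make the programmatic surfaces concrete, the snippet below sketches what launching a blind rediscovery campaign over a REST API could look like. The host, endpoint path, payload fields, and response shape are all assumptions for illustration, not the published MatterSpace API contract.

# Hypothetical example of launching a campaign through a REST surface.
# Endpoint, payload fields, and response shape are assumptions; consult the
# actual API documentation for the real contract.

import requests

API = "https://api.example-matterspace-host.com/v1"   # placeholder host

payload = {
    "mode": "blind_rediscovery",   # one of the four campaign modes
    "domain_pack": "catalysts",
    "candidates": 50,
}

resp = requests.post(
    f"{API}/campaigns",
    json=payload,
    headers={"Authorization": "Bearer REDACTED"},
    timeout=60,
)
resp.raise_for_status()
campaign_id = resp.json()["id"]
print("launched campaign", campaign_id)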
Operational scope
At the time of the hosted blind rediscovery campaigns, MatterSpace Lattice had 9 domain packs available—covering materials classes including batteries, catalysts, superconductors, magnets, photovoltaics, thermoelectrics, high-entropy alloys, electrolytes, and coatings. Each domain pack supplies the domain-specific physics, constraints, objective functions, and validation criteria for its materials class, while the core engine handles landscape navigation, candidate generation, constraint enforcement, diversity maintenance, and provenance tracking.
Four campaign modes were operational: open discovery for exploring novel materials without predefined targets, blind rediscovery for validation benchmarks against sealed reference structures, targeted optimization for refining candidates toward specific property objectives, and property scanning for systematic exploration of property landscapes across compositional or structural axes. All four modes were available through the production API during the rediscovery campaigns.
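A rough sketch of that division of responsibilities, using invented class and field names rather than MatterSpace's actual interfaces, might look like this:

# Illustrative only: how a domain pack's responsibilities could be separated
# from the core engine's. Names are invented for this post.

from dataclasses import dataclass
from typing import Callable

@dataclass
class DomainPack:
    """Everything specific to one materials class."""
    name: str
    constraints: list[Callable]   # e.g. charge neutrality, phase stability
    objective: Callable           # e.g. CH4 adsorption energy for catalysts
    validators: list[Callable]    # Level A/B/C style acceptance criteria

@dataclass
class Campaign:
    """The core engine only needs a pack plus a mode; landscape navigation,
    diversity maintenance, and provenance tracking stay domain-agnostic."""
    pack: DomainPack
    mode: str                     # "open_discovery" | "blind_rediscovery"
                                  # | "targeted_optimization" | "property_scanning"
    n_candidates: int = 50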
From research benchmark to product validation
The original MatterSpace blind rediscovery—documented in the February 2026 research paper—was a research benchmark. It ran on an A100 GPU, generated 600 candidates across 23 dopant elements, and achieved sub-half-angstrom structural precision on both Re₁@Ni and Ir₁@Ni targets. That result established the science: MatterSpace's physics-first approach can blindly rediscover known catalytic materials at crystallographic precision.
The hosted blind rediscovery documented here establishes something different: that the science survives productionization. The A10G hardware is less powerful than the A100. The candidate count per campaign was 50 rather than 600. The production environment imposes overhead that a research script does not. And yet the engine passed all three validation levels for both targets. The structural agreement is looser than in the research benchmark—0.655 Å and 0.508 Å RMSD versus 0.466 Å and 0.408 Å—but both results sit well within the 1.00 Å hosted threshold, and the Ir₁@Ni result in particular demonstrates that near-research-grade precision is achievable in a production deployment.
What this means for customers
For organizations evaluating MatterSpace Lattice as a materials discovery platform, the hosted blind rediscovery result provides a specific, verifiable claim: the production deployment can generate candidates that match known materials at sub-angstrom structural precision, with 100% Level A pass rates, majority Level B fingerprint matches, and Level C structural RMSD well below the validation threshold. These results were achieved on standard cloud GPU hardware, through production API surfaces, with full product surface validation active. This is the evidence that the platform works as advertised—not in a lab, not in a paper, but in the deployment that customers actually use.
The 82/82 product surface check result provides additional assurance. It demonstrates that the platform's full surface area—API, SDK, MCP, authentication, persistence, telemetry—was operating correctly during a scientifically demanding campaign. Product reliability and scientific capability are not traded off against each other. They are achieved simultaneously, because the architecture was designed from the beginning to treat both as non-negotiable requirements.
The research paper
The full research paper documenting the hosted blind rediscovery results—including complete validation data, infrastructure specifications, timing breakdowns, and product surface check details—was authored by Faruk Guney at Vareon, Inc. in March 2026. The full text will be available at vareon.com/research.
MatterSpace Lattice is not a research prototype that might someday become a product. It is a product that passes the same scientific validation benchmarks that established its research credibility—on live infrastructure, through production APIs, with every product check active. That is what it means to ship a discovery engine, not just publish one.