Show HN: A new way to verify remote AI model execution (no TEEs, no ZK)
We’ve published a whitepaper proposing a simple method to verify that a remote AI node is executing a specific model — without using trusted execution environments (TEEs) or zero-knowledge proofs.
The method relies on a small set of precomputed reference outputs used as spot checks on the node's behavior, so a verifier can gain confidence that the claimed model is running without re-executing it in full.
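
To make the idea concrete, here is a rough Python sketch of what reference-output spot checking could look like. The function names, the linear-map stand-in model, and the tolerance value are illustrative assumptions, not taken from the whitepaper:

    # Minimal sketch of reference-output spot checking (not the paper's
    # exact protocol). The verifier holds (input, expected_output) pairs
    # precomputed on a trusted copy of the model; the remote node is asked
    # to run those inputs and its answers are compared within a tolerance.
    import numpy as np

    def verify_node(run_remote, reference_pairs, atol=1e-3):
        # run_remote: callable that sends an input to the remote node and
        #   returns its claimed model output (e.g. logits).
        # reference_pairs: list of (input, expected_output) computed
        #   locally in advance on a trusted copy of the model.
        for x, expected in reference_pairs:
            claimed = np.asarray(run_remote(x))
            if not np.allclose(claimed, np.asarray(expected), atol=atol):
                return False  # output diverges from the reference model
        return True  # all spot checks passed

    # Toy example with a stand-in "model": a fixed linear map.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))
    honest_node = lambda x: W @ x                # runs the agreed model
    dishonest_node = lambda x: (W * 0.9) @ x     # runs something else

    refs = [(x, W @ x) for x in rng.normal(size=(3, 8))]
    print(verify_node(honest_node, refs))     # True
    print(verify_node(dishonest_node, refs))  # False

The tolerance is there because exact bit-for-bit matches generally can't be expected across different hardware and kernels; how tight it can be made is one of the questions the actual protocol would have to address.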
Curious to hear thoughts from folks working in ML, distributed systems, or trustless compute.