Show HN: A new way to verify remote AI model execution (no TEEs, no ZK)

1 point by lospoy 14 hours ago

We’ve published a whitepaper proposing a simple method to verify that a remote AI node is executing a specific model — without using trusted execution environments (TEEs) or zero-knowledge proofs.

The method relies on a minimal set of reference outputs to probe model behavior, so a verifier can gain confidence without re-executing the full model.
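To make the idea concrete, here's a minimal sketch of what reference-output spot-checking could look like. This is illustrative only, not the paper's protocol: all names (`verify_node`, `query_remote`, `REFERENCE_PAIRS`) are hypothetical, and it assumes deterministic decoding (e.g. greedy, temperature 0) so that the honest model reproduces the reference outputs exactly.

```python
import hashlib

def digest(text: str) -> str:
    """Hash an output so the verifier can store compact references."""
    return hashlib.sha256(text.encode()).hexdigest()

# Hypothetical reference set: (challenge input, digest of expected output),
# precomputed by running the claimed model locally on a few inputs.
REFERENCE_PAIRS = [
    ("2+2=", digest("4")),
    ("capital of France?", digest("Paris")),
]

def verify_node(query_remote, reference_pairs) -> bool:
    """Spot-check a remote node against precomputed reference outputs.

    query_remote: callable that sends a prompt to the remote node and
    returns its raw output. Returns True only if every reference matches.
    """
    for prompt, expected_digest in reference_pairs:
        if digest(query_remote(prompt)) != expected_digest:
            return False
    return True
```

A node running a different model (or a cheaper quantized variant) would be expected to diverge on at least some challenges, which is why even a small reference set can be informative.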

Curious to hear thoughts from folks working in ML, distributed systems, or trustless compute.

https://arxiv.org/abs/2504.13443