
Revenue Search: Inside Bittensor Subnet Session with Roger & Patrick from Vericore: Subnet 70
Mar 5, 2026

Conversation covers geopolitical risk, oil shocks, and how conflict can drive misinformation on social platforms. Deep dive into a decentralized semantic fact‑checking system that produces auditable evidence trails and bias scoring. Discussion of incentives that surface diverse sources, API integrations for agent tooling, and monetization ideas like paid calls, a personalized signal product, and prediction‑market mechanics.
Episode notes
Vericore Provides Auditable Semantic Fact Checking
- Vericore is an AI-driven, auditable fact-checking engine that produces semantic evidence for claims, scoring each source for support or refutation, confidence, and political leaning.
- Miners search for varied or conflicting sources and the subnet aggregates semantic scores to surface a holistic, contextual view rather than echo-chamber top results.
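The per-source scoring and aggregation described above can be sketched roughly as follows. The field names, score ranges, and the confidence-weighted average are illustrative assumptions, not Vericore's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One source a miner found for a claim (hypothetical schema)."""
    url: str
    stance: float      # -1.0 = refutes the claim, +1.0 = supports it
    confidence: float  # 0.0..1.0: how reliable the semantic match is
    leaning: float     # -1.0 = left, +1.0 = right political leaning

def aggregate(evidence: list[Evidence]) -> dict:
    """Confidence-weighted summary across all miner-submitted sources,
    so low-confidence matches pull the verdict less than strong ones."""
    total = sum(e.confidence for e in evidence)
    if total == 0:
        return {"verdict": 0.0, "leaning": 0.0, "sources": 0}
    verdict = sum(e.stance * e.confidence for e in evidence) / total
    leaning = sum(e.leaning * e.confidence for e in evidence) / total
    return {"verdict": verdict, "leaning": leaning, "sources": len(evidence)}

# An auditable evidence trail keeps both supporting and refuting sources:
trail = [
    Evidence("https://example.org/a", stance=0.9, confidence=0.8, leaning=-0.2),
    Evidence("https://example.org/b", stance=-0.6, confidence=0.5, leaning=0.4),
]
summary = aggregate(trail)
```

Keeping the raw `trail` alongside the `summary` is what makes the result auditable: a user can inspect each source rather than trusting a single top-ranked answer.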
Incentives Make Miners Hunt Contrarian Evidence
- Miners are rewarded for finding diverse and even conflicting evidence, so the system incentivizes escaping echo chambers rather than reinforcing them.
- Vericore exposes full research traces so users can audit sources and see both supporting and contradicting evidence.
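One simple way to implement the contrarian incentive described above is a novelty bonus: evidence whose stance diverges from what the pool has already seen earns more. This is a minimal sketch under that assumption, not the subnet's actual reward function:

```python
def diversity_reward(pool_stances: list[float], new_stance: float,
                     base: float = 1.0, bonus: float = 0.5) -> float:
    """Reward a new piece of evidence: flat base payment plus a bonus
    proportional to how far its stance sits from the pool's mean stance,
    so evidence that contradicts the prevailing view pays more."""
    if not pool_stances:
        return base  # first submission: nothing to diverge from yet
    mean = sum(pool_stances) / len(pool_stances)
    return base + bonus * abs(new_stance - mean)  # distance is in [0, 2]

# A pool already leaning "supports" (mean stance 0.8):
pool = [0.8, 0.9, 0.7]
contrarian = diversity_reward(pool, -0.5)  # refuting source earns a bonus
redundant = diversity_reward(pool, 0.8)    # echoing the mean earns only base
```

Under this scheme a miner maximizes payout by hunting for credible sources the pool does not yet reflect, which is the anti-echo-chamber behavior the bullet describes.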
Speed Optimization Causes LLM Hallucinations
- Roger described how consumer LLMs can hallucinate because they are optimized for UX and fast responses rather than accuracy.
- He relayed an OpenAI engineer's comment that speed-focused products often output information that isn't traceable to reliable sources.
