Trusting information
Spinning swarm sensing from comment threads
February 9, 2017 — November 25, 2019
Designing infrastructure for assessing people’s trustworthiness, insofar as such a quantity exists.
A flip side to epistemic communities is the problem of verifying that the information you have is good. If we got to design all the agents in society, we might be able to ensure that the overall system acquires good information. But when we are dealing with real people, how do we know that what they tell us is true? At scale? Do we do this via some kind of social trust graph? Some other mechanism?
1 Proof-of-identity systems
A simple case: is the message you received from me really from me?
Web of Trust is troublesome for all the usual reasons that encryption is troublesome.
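A toy sketch of the Web of Trust idea, using a made-up endorsement graph and ignoring exactly the things that make it troublesome in practice (key-signing ceremonies, revocation, graded trust, actual signature checks): keys sign other keys, and I believe the sender’s key if some short chain of endorsements connects it to a key I already trust.

```python
"""Toy web-of-trust reachability check (hypothetical data; ignores revocation,
real signature verification and graded trust levels)."""
from collections import deque

# Who has signed whose key -- an invented endorsement graph.
signed_by = {
    "alice": {"bob", "carol"},   # alice has signed bob's and carol's keys
    "bob": {"dave"},
    "carol": {"eve"},
}

def trust_path_exists(my_key: str, sender_key: str, max_hops: int = 3) -> bool:
    """Breadth-first search for a chain of key endorsements within max_hops."""
    frontier, seen = deque([(my_key, 0)]), {my_key}
    while frontier:
        key, hops = frontier.popleft()
        if key == sender_key:
            return True
        if hops < max_hops:
            for endorsed in signed_by.get(key, ()):
                if endorsed not in seen:
                    seen.add(endorsed)
                    frontier.append((endorsed, hops + 1))
    return False

print(trust_path_exists("alice", "dave"))     # True: alice -> bob -> dave
print(trust_path_exists("alice", "mallory"))  # False: no endorsement chain
```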
Notary aims to make the internet more secure by making it easy for people to publish and verify content. We often rely on TLS to secure our communications with a web server, but that is inherently flawed: any compromise of the server enables malicious content to be substituted for the legitimate content.
With Notary, publishers can sign their content offline using keys kept highly secure. Once the publisher is ready to make the content available, they can push their signed trusted collection to a Notary Server.
Consumers, having acquired the publisher’s public key through a secure channel, can then communicate with any Notary server or (insecure) mirror, relying only on the publisher’s key to determine the validity and integrity of the received content.
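A minimal sketch of the trust relationship Notary relies on, not its actual TUF metadata machinery: the publisher signs a digest of the content with a key kept offline, and a consumer holding only the publisher’s public key, obtained out of band, can accept the bytes from any mirror at all. The `cryptography` package’s Ed25519 primitives stand in here for the real key hierarchy.

```python
"""Offline-sign / untrusted-mirror sketch (not Notary's real TUF machinery)."""
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side, offline: sign a digest of the content with a key kept off any server.
signing_key = Ed25519PrivateKey.generate()
content = b"the published artifact"
signature = signing_key.sign(hashlib.sha256(content).digest())
publisher_public_key = signing_key.public_key()  # distributed through a secure channel

# Consumer side: accept whatever a mirror serves only if the publisher's signature checks out.
def verify_from_mirror(blob: bytes, sig: bytes, public_key) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(blob).digest())
        return True
    except InvalidSignature:
        return False

print(verify_from_mirror(content, signature, publisher_public_key))                    # True
print(verify_from_mirror(b"tampered by the mirror", signature, publisher_public_key))  # False
```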
Keybase has an interesting solution here. TBC.
2 Proof-of-truth
🏗
Civil’s attempt to build blockchain-backed proof-of-truth for journalism. Others?
Is a reputation system perhaps sufficient?
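One minimal reading of “reputation system”, purely illustrative: track how often each source’s claims later check out, and score the source by the posterior mean of a beta distribution, so an unknown source starts at 0.5 and drifts with evidence.

```python
"""Beta-reputation sketch: one possible minimal meaning of 'reputation system'."""
from dataclasses import dataclass

@dataclass
class Reputation:
    confirmed: int = 0  # claims later verified as true
    refuted: int = 0    # claims later verified as false

    def record(self, claim_was_true: bool) -> None:
        if claim_was_true:
            self.confirmed += 1
        else:
            self.refuted += 1

    @property
    def score(self) -> float:
        # Posterior mean under a Beta(1, 1) prior: an unknown source scores 0.5.
        return (self.confirmed + 1) / (self.confirmed + self.refuted + 2)

source = Reputation()
for outcome in (True, True, False, True):
    source.record(outcome)
print(round(source.score, 2))  # 0.67 after three confirmations and one refutation
```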
Robin Hanson once again has a framing I want, asking which information is verifiable. He would like it to be verifiable in the sense that we can write a contract about the outcome, with obvious applications to blockchains, prediction markets and mechanism design in general.
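One concrete version of that framing is Hanson’s own logarithmic market scoring rule, which only makes sense for claims verifiable enough that the contract can eventually be settled. A minimal sketch, with an arbitrary liquidity parameter and a binary outcome space (claim settles true vs. settles false):

```python
"""Logarithmic market scoring rule (LMSR) sketch; b = 100 is an arbitrary liquidity choice."""
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous price of each outcome share; prices sum to 1."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

# Two outcomes: the claim settles true, or settles false.
q = [0.0, 0.0]
print(lmsr_prices(q))  # [0.5, 0.5] before any trades

# A trader buys 50 shares of "true"; their cost is the change in C(q).
q_after = [50.0, 0.0]
print(round(lmsr_cost(q_after) - lmsr_cost(q), 2))   # what the trader pays
print([round(p, 3) for p in lmsr_prices(q_after)])   # the market now leans toward "true"
```

If the underlying claim can never be adjudicated, there is nothing to settle the final payout against, which is exactly why the verifiability question comes first.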