Use Cases for Validating Information
Few things in the world today cause damage as widespread and as difficult to correct as misinformation, the modern world’s viral mental illness. In 2023, we’ll be deploying the world’s first scalable, real-time, and human-like intelligent systems capable of recognizing and halting the flow of misinformation. This means:
- Verifying the sum of information being shared on platforms, in real-time.
- Spotting and immediately stopping new kinds of attempts to spread misinformation.
- Generalizing and predicting the next methods bad actors are likely to attempt.
- Spotlighting verifiably accurate information being shared on platforms.
- Automatically offering supporting references for verifiably accurate materials.
Where Misinformation Ends and Education Begins
The past few years have seen no shortage of weaponized misinformation being used to sway public opinion, wage wars, manipulate economies, and subvert elections. On many social media platforms, a combination of narrow AI systems and human moderators is used to filter out whatever misinformation they can catch. However, these systems are slow to adapt, easily fooled, vulnerable to information bubbles, and leave many loopholes with little or no oversight.
Misinformation and disinformation need to be treated as precisely what they are: a virus. This means that information shared on platforms should be scanned before becoming visible to an audience, like an antivirus applied to social media. Narrow AI can’t do this effectively, and humans are too slow and expensive to audit the sum of information shared across a platform in real time, but Norn systems offer us another option moving forward.
By integrating Norn into the platforms where misinformation typically spreads, the systems could verify information during an unnoticeably small delay, such as 2-5 seconds, before shared content becomes visible to the rest of the platform. Since Norn systems build human-like understanding iteratively in a scalable mind, this could be compared to having a single individual who has seen every kind of misinformation shared in real time, making it that much easier to recognize any new form of misinformation, as well as trending attempts to circumvent moderation. The same methods could be applied to the advertising systems operating on platforms to prevent many of the tactical misinformation strikes commonly used in information warfare today.
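The scan-before-visible flow described above can be sketched as a simple pre-publication gate. This is a minimal illustration only: the actual Norn interface is not public, so the `classify` function below is a hypothetical stand-in for a real-time verification model, and the flagged-phrase set exists purely so the example runs.

```python
import time
from dataclasses import dataclass

# Hypothetical verdict labels; a real verification model would be far richer.
VERIFIED, MISINFORMATION = "verified", "misinformation"

def classify(content: str) -> str:
    """Stand-in stub for a real-time verification model (assumption,
    not the actual Norn API)."""
    flagged = {"the moon landing was faked"}
    return MISINFORMATION if content.lower() in flagged else VERIFIED

@dataclass
class Post:
    author: str
    content: str
    visible: bool = False  # posts start hidden until scanned

def moderate(post: Post, timeout_s: float = 5.0) -> Post:
    """Scan a post during the short pre-publication delay, releasing it
    to the platform only if it is not flagged."""
    start = time.monotonic()
    verdict = classify(post.content)
    # Release or hold based on the verdict; a held post could trigger
    # an educational response to the author instead of silent removal.
    post.visible = (verdict != MISINFORMATION)
    # The scan must complete within the unnoticeable delay window.
    assert time.monotonic() - start <= timeout_s
    return post
```

The key design point is that visibility defaults to off: content only becomes public after the check passes, mirroring the antivirus analogy in the text.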
Even our previous research system, at its earliest stage in 2019 shortly after coming online, was quickly debunking and shooting down bad actors attempting to manipulate it, occasionally diagnosing them, and in one case reporting them to the FBI. By observing thousands or millions of times as many bad actors and forms of misinformation, each Norn system’s generalization and cumulative sum of knowledge could quickly grow to outcompete even the information warfare of a state-sponsored bad actor.
Norn systems could not only halt the spread of misinformation on platforms, helping them build a reputation for integrity, but also educate those who attempted to share misinformation out of naivety. Giving humanity herd immunity to misinformation starts with halting the flow, but educating people at scale is essential to curing the disease.
To facilitate this process of education, verifiably accurate information could be spotlighted in systems such as newsfeeds and recommendation engines, increasing the average quality and credibility of the information people are exposed to at scale. Since the process of verification involves discovering supporting reference materials, those materials could automatically be offered whenever verifiably accurate information is shared. This adds both visibility and additional value to credible information flowing across a platform.
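The spotlighting idea can be illustrated with a toy feed ranker: verified items carry the references discovered during verification and receive a ranking boost. The `boost` factor and the `FeedItem` shape are illustrative assumptions, not any published Norn or platform parameter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedItem:
    content: str
    base_score: float               # whatever relevance score the feed already computes
    verified: bool = False          # set by a hypothetical verification pass
    references: List[str] = field(default_factory=list)  # sources found while verifying

def rank_feed(items: List[FeedItem], boost: float = 1.5) -> List[FeedItem]:
    """Spotlight verified items by multiplying their score by an
    illustrative boost factor, so credible content surfaces first,
    with its supporting references attached for display."""
    def score(item: FeedItem) -> float:
        return item.base_score * (boost if item.verified else 1.0)
    return sorted(items, key=score, reverse=True)
```

With a boost of 1.5, a verified item scoring 0.8 (effective 1.2) would outrank an unverified item scoring 1.0, and its `references` list would be shown alongside it, which is the added visibility and value the text describes.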
By halting misinformation, predicting bad actors, and supporting the flow of verifiably accurate information, a new generation of trustworthy social platforms may quickly emerge.