by Mitch Kokai
Senior Political Analyst, John Locke Foundation
Margot Cleveland of The Federalist details disturbing government documents.
Our government is preparing to monitor every word Americans say on the internet—the speech of journalists, politicians, religious organizations, advocacy groups, and even private citizens. Should those conversations conflict with the government’s viewpoint about what is in the best interests of our country and her citizens, that speech will be silenced.
While the “Twitter Files” offer a glimpse into the government’s efforts to censor disfavored viewpoints, what we have seen is nothing compared to what is planned, as the details of hundreds of federal awards lay bare. Research by The Federalist reveals our tax dollars are funding the development of artificial intelligence (AI) and machine-learning (ML) technology that will allow the government to easily discover “problematic” speech and track Americans reading or partaking in such conversations.
Then, in partnership with Big Tech, Big Business, and media outlets, the government will ensure the speech is censored, under the guise of combating “misinformation” and “disinformation.”
The federal government has awarded more than 500 contracts or grants related to “misinformation” or “disinformation” since 2020. One predominant area of research pushed by the Department of Defense involves the use of AI and ML technology to monitor or listen to internet “conversations.”
The technology was originally a marketing tool that let businesses track discussions about their brands and products and keep tabs on competitors. Now the DOD and other federal agencies are paying for-profit public relations and communications firms to convert it into tools for the government to monitor speech on the internet.
The areas of the internet the companies monitor differ somewhat, and each business offers its own unique AI and ML proprietary technology, but the underlying approach and goals remain identical: The technology under development will “mine” large portions of the internet and identify conversations deemed indicative of an emerging harmful narrative, to allow the government to track those “threats” and adopt countermeasures before the messages go viral.