The advances in artificial intelligence over the past couple of years may well turn out to be a watershed moment in history. Generative AI, a technology that identifies patterns and structures in input data to create original work, is expected to experience a market boom over the next decade, growing from $8.65 billion in 2022 to more than $188 billion by 2032. Even North Carolina-based SAS has committed $1 billion toward “AI-powered industry solutions.”

Wall Street is taking notice. It took ChatGPT only two months to reach 100 million users, decisively breaking the record previously held by TikTok, which took nine months to reach the same user base. Apple CEO Tim Cook has said his company “views AI as huge” and will thoughtfully begin weaving it into its products.

But aside from investor excitement over a new tech boom, what are the potential policy implications of AI? In a recent interview with Piers Morgan, philosopher and historian Yuval Noah Harari warned that the advent of AI could spell the ‘end of democracy,’ given the risk of bad government actors weaponizing it in ways that would make even Orwell’s Big Brother blush.

At this point, government agencies do not appear to have much of a grasp on what to do with this emerging technology or how to use it in a way that respects privacy and ensures security. Recently, the Environmental Protection Agency sent a directive to its employees banning the use of AI tools such as ChatGPT, citing potential legal, information security, and privacy concerns. Couple this with the way Congress continues to botch its self-proclaimed war on “Big Tech,” and it’s apparent that policymakers are not leading this revolution.

To add insult to injury, a Senate hearing on AI oversight did little to inspire confidence that the government is competent or constitutionally qualified to regulate this emerging technology. As a result, lawmakers resort to fearmongering.

What we should be wary of are AI abuses by government actors or others with an anti-democratic agenda. We know that powerful technological and biological tools in the hands of the wrong people and nations can pose existential threats to individuals and communities. Look no further than China’s persecution of the Uighurs, a Muslim minority group repeatedly abused by the Chinese Communist regime. Chinese tech companies have secured patents that the CCP can then use to identify minorities and track their movements.

A principle that should keep both American investors and policymakers grounded as they think through the implications of AI in the free world is that the impact of a new technology is often overblown at the outset while being underestimated in the long term. This is known as Amara’s Law, coined by Roy Amara, an American futurist and scientist. In a Wall Street Journal interview, Michael Green, an asset management strategist, echoed this principle, saying artificial intelligence is “almost certainly overhyped in its initial implementation. But the longer-term ramifications are probably greater than we can imagine.”

Make no mistake, AI will cause disruptions, just like the technologies that came before it. There will be questions about privacy, copyright, safety, and more. We’re going to have to wrestle with these ‘what ifs.’ Luckily, that has long been the American way. We’re a curious people with a strong preference for freedom from government interference. We have never let the fact that something is hard stop us from discovery and innovation, and we shouldn’t this time either.

I recently sat down with Glen Hodgson of Free Trade Europa to discuss the future of work and the impact our digital world is having on the job market. You can listen to the entire interview here (or watch it here).