The use of generative artificial intelligence (AI) in K–12 schooling has taken the education community by storm.

Since the release of ChatGPT in November 2022, the seeming ubiquity of AI chatbots and services, along with their potential to transform how education is delivered, has made them a frequently trending topic. In North Carolina, a representative from Khan Academy recently gave members of the General Assembly’s Joint Legislative Education Oversight Committee a live demonstration of Khanmigo, the organization’s made-for-education AI model.

At its monthly meeting earlier this March, the North Carolina State Board of Education also discussed the use of AI in education. A presentation to the board gave an overview of guidelines on AI implementation released by the Department of Public Instruction in January 2024 and highlighted the need for districts to adopt their own policies and procedures governing the responsible use of AI in their schools.

The rapid proliferation of AI-powered technology, including in the education space, has highlighted a host of challenges, including the potential for bias in AI outputs. It’s a real concern, so how can the problem be addressed?

A piece recently published in City Journal puts forth a promising solution: Let the market work free from “government interference and centralization.”

As the article explained:

Google insists that it will “do better,” but answers to its left-wing AI product may already be on the horizon. According to a report by data scientist David Rozado, machine-learning models such as Anthropic’s Claude, X’s Grok, and Zephyr 7B Beta are almost politically neutral. That developers are creating more centrist alternatives makes sense, given the incentives. A centrist model, after all, will align with more users’ beliefs (not to mention with objective reality) than will one built by Google’s “Responsible AI” team.

As long as AI remains relatively free of government interference and centralization, those who produce machine-learning models will have an incentive to produce a less ideological product. Provided those incentives remain intact, engineers will be able to produce large language models, and AI systems of all kinds, that reflect the majority’s views.

Kevin O’Leary, the well-known businessman, entrepreneur, and star of the television show “Shark Tank,” expressed a similar view during a recent interview with Fox Business. According to O’Leary, “The answer is always less government, more innovation because the market always solves for these programs. If you think Google is biased, you’ll use the Microsoft product. Or if you don’t like that, you’ll use something else, because they’ll innovate their way into market acceptance.”

Bias in AI outputs is a complicated problem that won’t be solved overnight. Allowing the free market to address it, however, would be a step in the right direction.