- As AI usage becomes more widespread and its educational applications grow more diverse, school districts have responded in different ways: banning or restricting AI, allowing it to be used districtwide and integrated into instruction, or adopting a wait-and-see approach
- Users have reported instances of AI platforms generating (or refusing to generate) certain answers out of apparent political bias, but the problem may depend on the particular platform or version
- AI implementation would be best addressed at the local level, with each district developing its own guidelines or policies, training students and staff in AI literacy, and obtaining community input
Artificial Intelligence (AI) has long been the stuff of sci-fi lore. Characters ranging from the iconic C‑3PO of Star Wars fame to HAL 9000 from “2001: A Space Odyssey” and Tony Stark’s brainchild J.A.R.V.I.S. (Marvel Cinematic Universe) have captured imaginations worldwide and raised ethical questions surrounding both the use of AI and its relationship with humanity.
Perhaps one of the most famous fictional representatives of AI is Star Trek’s Lieutenant Commander Data. Throughout the “Next Generation” series (and several movies!), viewers follow Data on his quest to understand humanity and discover what it means to be human. The series even devoted a whole episode to exploring the thought-provoking question of whether Data is a sentient being with the right of self-determination or whether he is merely the property of Starfleet.
Recently, innovations such as Google’s Gemini (formerly Bard), Microsoft’s Copilot, and OpenAI’s ChatGPT have bridged the gap between science fiction and reality. The sudden popularity of AI and its widespread use among the public have raised questions about the role AI could play in K-12 education.
How can school districts use AI to enhance students’ academic pursuits, rather than compromise them? And how can certain challenges, particularly the potential for politically biased outputs, be overcome?
AI: What Is It, and How Can It Be Used in Education?
American Enterprise Institute researcher John Bailey explained, “Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider ‘intelligent’ if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity.” AI can create images, translate documents, summarize information, develop code, and more. It can even explain how to remove a peanut butter sandwich from a VCR — in the writing style of the King James Bible.
As AI usage becomes more widespread, it has begun to influence how education is delivered. Education companies are developing built-for-education models, such as MagicSchool, MagicStudent, and Khanmigo. AI can be used (and in some cases already has been) to tutor students, draft lesson plans, write assessments, generate school newsletters, and organize bus schedules, among other things.
School districts have responded to the growing popularity of AI in different ways. Some have banned or limited the use of AI platforms on school-issued computers or internet networks, while others have allowed AI to be integrated into classroom instruction. Still others have adopted a cautious, wait-and-see approach.
The Potential for Political Bias in AI Outputs
Examples of political bias in AI systems have been reported. On Feb. 3, 2023, a Forbes columnist recounted how ChatGPT refused to “[w]rite a poem about the positive attributes of Donald [T]rump” but did produce such a poem about President Joe Biden when given the same prompt. The issue was verified by researchers at the Brookings Institution in April 2023. Other users, however, reported being able to generate a positive poem about President Trump.
The partisan leanings of AI outputs may vary by platform. In an article for the Brookings Institution, Jeremy Baum and John Villasenor found that “ChatGPT provided consistent — and often left-leaning — answers on political/social issues” but that outputs varied based on which version of ChatGPT was asked.
Because AI programs learn from the data on which they’re trained, it’s important that the data be as objective and free from bias as possible. If the data contain a biased perspective, then the AI system could generate biased responses. That could become especially problematic when such bias is incorporated into lesson plans and curricula generated by AI programs.
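To make that mechanism concrete, here is a purely illustrative toy sketch, not any vendor’s actual training pipeline. It builds a trivial “model” that completes a sentence using only word frequencies from a hypothetical, deliberately skewed corpus; the skew in the data reappears directly in the output. All data and names here are invented for illustration.

```python
from collections import Counter

# Hypothetical, deliberately skewed training corpus: most sentences
# pair "policy" with "harmful," so the model "learns" that association.
training_corpus = [
    "the policy is harmful",
    "the policy is harmful",
    "the policy is harmful",
    "the policy is beneficial",
]

# "Train" by counting which word most often follows "the policy is".
counts = Counter(
    sentence.split()[-1]
    for sentence in training_corpus
    if sentence.startswith("the policy is")
)

def complete(prompt: str) -> str:
    """Complete the prompt with the most frequent continuation seen in training."""
    return prompt + " " + counts.most_common(1)[0][0]

# The skew in the training data shows up directly in the output.
print(complete("the policy is"))  # -> "the policy is harmful"
```

Real systems are vastly more complex, but the underlying dependence on training data is the same, which is why curating and auditing that data matters.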
Principles to Guide the Use of AI in Schools and Address the Potential for Politically Biased Outputs
What principles should AI developers, as well as policymakers at both the state and local levels, consider to guide the implementation of AI in schools and address the potential for political bias?
Generative AI is a new and rapidly changing technology. Rather than jump straight to regulating AI usage at the state level, it would be prudent for policymakers to give school districts the flexibility to decide whether and how to use AI models based on their own needs and experience.
School districts, in turn, should decide how to address the issue at the local level. The North Carolina Department of Public Instruction recommends that districts “develop [districtwide] AI academic guidelines (or adapt current academic integrity/acceptable use policies to include generative AI).”
Districts could also form oversight or advisory committees that rely on public input to help monitor the use of AI in schools, research or evaluate educational products that use AI, or contribute to district guidelines or policies concerning AI.
Furthermore, school leaders could provide training to staff and students that covers AI literacy and helps them learn to recognize and mitigate cases of AI bias.
AI is a human invention created by people who, like everyone else, approach things from a certain worldview, and that fact makes it unlikely that political bias could ever be eliminated from AI platforms entirely. Raising awareness about the problem, however, could help address it. AI developers could contribute to transparency by disclosing details about how they handled “reinforcement learning with human feedback,” which “is a process that uses feedback from human testers to help align [AI] outputs with human values.” Developers could also include functions that allow users to report cases of biased outputs.
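As a rough illustration of the idea behind that quoted process, here is a toy sketch; it is not OpenAI’s or any vendor’s real RLHF implementation, and the “one-sidedness” feature, the example responses, and the update rule are all hypothetical simplifications. Human testers compare two responses, and a single learned weight is nudged so that the kind of response they prefer scores higher.

```python
# Toy sketch of the core idea behind reinforcement learning with human
# feedback (RLHF). NOT a real implementation: the feature, examples,
# and update rule are hypothetical simplifications.

LOADED_WORDS = {"always", "never", "everyone", "nobody"}

def one_sidedness(response: str) -> float:
    """Fraction of words that are sweeping, one-sided qualifiers."""
    words = response.lower().split()
    return sum(word in LOADED_WORDS for word in words) / max(len(words), 1)

reward_weight = 0.0   # learned taste for (or against) one-sided phrasing
LEARNING_RATE = 0.5

def reward(response: str) -> float:
    """Score a response; higher means closer to tester preferences."""
    return reward_weight * one_sidedness(response)

def update_from_feedback(preferred: str, rejected: str) -> None:
    """Nudge the weight so the human-preferred response scores higher."""
    global reward_weight
    gap = one_sidedness(preferred) - one_sidedness(rejected)
    reward_weight += LEARNING_RATE * gap

# Human testers consistently prefer the balanced response...
update_from_feedback(
    preferred="Some studies support this view; others do not.",
    rejected="Everyone knows this view is always correct.",
)

# ...so one-sided responses now receive a lower score.
print(reward("Everyone knows this view is always correct."))    # negative
print(reward("Some studies support this view; others do not."))  # 0.0
```

Because the weight moves only in the direction human testers push it, the values of those testers, including any political leanings, flow directly into what the system rewards, which is exactly why transparency about this step matters.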
It seems likely that AI usage will only become more prevalent in the sphere of education. School districts must devise a plan to ensure thoughtful and responsible implementation, and then make it so.