
When Maryland lawmakers gathered in December 2024 to discuss artificial intelligence (AI) in schools, they weren’t just debating tech specs; they were responding to reports of students using AI-powered apps to manipulate classmates’ images — deepfakes created in seconds, sometimes for bullying or blackmail.
“This technology can empower, but it can also exploit,” one delegate cautioned during the public hearing. It’s a stark reminder: the AI revolution in education isn’t just about smarter tools; it’s about smarter policies.
The Promise: Personal Tutors, Powered by Algorithms
AI tools like BaxterBot are already appearing in classrooms as virtual teaching assistants, helping teachers tailor lesson plans and monitor student progress in real time. At a Florida middle school, BaxterBot was credited with catching early signs of learning delays in several students, allowing for quick intervention; spotting those signs manually might have taken teachers weeks. According to Bay News 9, the bot collects learning metrics and provides instant feedback to educators, streamlining workloads and amplifying outcomes.
The National Science Foundation puts the potential plainly: with the right safeguards, AI can level the educational playing field. Its report highlights AI-facilitated tutoring systems, which adapt to students’ strengths and weaknesses. One pilot project showed a 17% improvement in standardized test scores for students using AI-enhanced platforms. That’s a noticeable leap, and it’s catching lawmakers’ attention.
The Problem: Privacy, Bias, and Digital Ethics
But these benefits come with thorny challenges. As AI systems gather massive amounts of student data — from test results to behavioral patterns — questions loom: Who owns that data? Can students opt out? How are algorithmic decisions being audited?
Privacy is a key concern for states like Alabama and Wyoming, both of which have rolled out detailed AI policies for schools. As outlined on AI for Education’s guidance hub, these policies limit what data can be collected and require transparency in how AI decisions are made. What might surprise you: only 14 states currently have formal AI ethics policies in education, according to a 2025 National Association of State Boards of Education analysis. That leaves millions of students exposed to tools that may reinforce bias or make opaque decisions based on flawed algorithms.
Even well-intentioned apps can behave poorly if fed skewed data, a caution echoed in the U.S. Department of Education’s 2024 AI Report. One disturbing stat from the report: 62% of educators surveyed said they didn’t fully understand how their school’s AI tools made decisions — meaning teachers are relying on platforms they can’t explain or troubleshoot.
Building Guardrails: Policy in Action
Organizations like the National Education Association are now calling for federal oversight to ensure equitable use of AI in schools. They argue that without cohesive national policies, district-level implementations may favor well-funded schools and exacerbate achievement gaps.
Some of their recommendations include:
- Mandatory transparency about data collection
- Clear opt-out policies for students and parents
- Public repositories of AI tools used in classrooms
These ideas are gaining steam — especially as stories circulate of students using AI to write essays, predict test questions, or, more concerningly, manipulate digital images of peers.
Globally, institutions like UNESCO are also advancing ethical frameworks for AI in education, urging governments to see students not just as data points, but as rights-bearing individuals entitled to fair treatment in automated systems.
Looking Ahead: Education’s AI Crossroads
AI isn’t going away. From ChatGPT to BaxterBot, it’s already embedded in how your kids — and their teachers — approach learning. But without smart policy, schools may be trading chalkboards for black boxes.
So, can AI boost learning? Absolutely — if it’s guided by ethics as rigorously as by code. As lawmakers debate the rules of engagement, the classroom could become a proving ground not just for smart machines, but for smart policymaking. The goal isn’t simply to teach students with AI — it’s to teach them about AI, preparing them to shape the very tools that are shaping their futures.
The question isn’t whether AI belongs in education. It’s whether we have the vision — and the values — to use it wisely.
Conclusion
If a machine can adapt to a student’s pace, track emotional cues, and even flag learning delays — yet does so behind an algorithm only a few can explain — are we educating, or outsourcing parts of our humanity? The promise of AI in classrooms isn’t just faster feedback or personalized lessons — it’s a mirror reflecting how much we’re willing to trade control for convenience, and insight for automation.
As schools rush to integrate these tools, we face a deeper question: what kind of intelligence do we value most — artificial or human?
At this crossroads, it’s not enough to ask whether AI can improve education. We need to ask:
- Who gets to decide how it’s used?
- Whose values are embedded in its code?
- How does its presence shape not just learning outcomes, but the very idea of learning itself?
The answers won’t come from tech specs or software updates — they’ll come from us.