When should ChatGPT be taught in K-12? Let’s get into it.

EduLab Capital
13 min read · Feb 14, 2024

An interview with a few students, an English teacher, and the CEO of a writing company. Plus, my own point of view, as a former teacher, Googler, school founder, management consultant, and now, EdTech investor.

by Sally Sorte

Image created with DALL-E using the prompt “Create an image for ChatGPT in K-12”

How some students are using AI

Violet Westburg, a sophomore at Falmouth High School in Maine, is excited about AI.

“School is made to prepare you for your future, why would you not teach what you think is the future, which is AI?” Westburg says.

“I think the real question is how do we work with AI and not against it.”

She thinks teachers should be given the training to integrate AI effectively into their courses.

Westburg cites examples of how she uses AI to further her own learning: seeking alternate perspectives as she prepares for Socratic seminars (her favorite), after she has already finished her own analysis and preparation, or working through a math problem she is stuck on when she cannot reach the teacher for help outside of class.

As our Spring 2024 class of Venture Fellows reflected on productive use cases for AI last week, they described using it to synthesize data, digest study guides, and help with ideation.

Kayser Raei, an ECP Venture Fellow and computer science master’s student at Northeastern University, says that when he is stuck on a complex coding problem, he turns to AI.

“I rely on chatbots like OpenAI’s GPT-4 and GitHub’s Copilot… I am able to deconstruct intricate problems into manageable steps and approach them with confidence, often leading to innovative solutions.”

The dangers inherent in the idea that AI is simply a tool

Brad Schiller, Founder and CEO of Prompt, an ECP portfolio company focused on writing, is concerned that the upper echelons of society will see AI as a tool while forgetting that 86% of K-12 students are still at a basic writing level, where AI could get in the way of developing critical thinking skills.

On the 2022 PISA test, 14% of students in the United States scored at Level 5 or above in reading, compared with an OECD average of 7%. According to OECD.org, Level 5 means “students can comprehend lengthy texts, deal with concepts that are abstract or counterintuitive, and establish distinctions between fact and opinion, based on implicit cues pertaining to the content or source of the information.”

Those all sound like pretty important skills for conscientiously deciphering and leveraging ChatGPT.

“How can you have [students] use a tool, when they have not been taught how to evaluate evidence base and credibility — then integrate it with their own thoughts?” asks Schiller.

“When you give these tools without adequate instruction, these tools become worthless and a crutch and do not actually improve writing abilities at all.”

“Google search is a skill. AI is a skill, then to evaluate the veracity of that info…” states Schiller.

As a former 9th grade English teacher (with Teach For America, at Campbell High School on Oahu), I remember teaching the Common Core standards aligned with Google search. These standards included finding credible sources, using evidence to support a main claim or thesis statement, citing sources in-text, and accurately formatting a bibliography.

Similar standards need to be articulated to guide students through the use of ChatGPT. Underlying bias will be an important consideration. And assessing credibility will be far more nuanced than scrolling to the bottom of a Wikipedia page to look up primary sources when you are working with an LLM that may hallucinate, making things up outright.

Further, Schiller references the ‘Gell-Mann Amnesia Effect’: when people read about a subject they know well, they apply scrutiny and notice that the article is superficial, imperfect, and misses several important points. But when they read about something they know very little about, they place a high degree of trust in the source, forgetting to apply the same level of scrutiny or to bear the source’s limitations in mind.

Taylor Kanzler, a veteran English teacher at Falmouth High School in a high-performing district in Maine, sees a positive use case for AI on redundant or lower-level tasks, like adapting a curriculum to fit a particular framework or aligning it with the latest jargon.

However, Kanzler shares Schiller’s concerns about introducing AI too early in a student’s development.

“I think that most adults are going to AI with knowledge and thinking already in their head, and they’re using AI to codify that knowledge or thinking, or to organize it in a certain way that is more communicable, or that is tailored to a specific task or audience or purpose,” she says.

“What worries me about teenagers and young people is that they are going to AI for the idea, they don’t have an idea, they don’t have the thinking, they are getting the thinking and the idea from a machine; they are outsourcing the thinking process to AI.”

While Kanzler acknowledges that there are, of course, problematic elements to adult use of AI, she says, “I think that adults today have had enough time believing that our thoughts matter and ideas matter, so we can utilize AI as a tool for efficiency. I’m worried that kids have not had that time and this is a way to completely hijack their thinking” and disrupt the practice of formulating original thought.

“I think that AI, and specifically ChatGPT, is just another step further down the road of creating a substance that is really dangerous for underdeveloped humans to be managing,” says Kanzler.

Kanzler likens AI to social media: “Yes, there are all these positives we can point to, yet we have all the data and research to show that it is a harmful and addictive substance, particularly for adolescents. Even many adults lack the ability to self-regulate social media use in a healthy way.”

“We objectively know that with teenagers – and pre-teens, who are increasingly on social media – we may as well be handing them a bottle of whiskey or a pack of cigarettes and telling them ‘Good luck! Be smart!’ And to me, AI is just another step down that same road of just wildly unregulated technology that kids are not savvy enough or sophisticated enough to understand.”

That said, Kanzler believes in the role of a teacher to inspire students and connect them to their place and power in society, which is why she recently launched a podcasting course at her school to help students bring the stories of their community to life.

Kanzler introduced me to one of her bright students who is productively leveraging ChatGPT: the aforementioned Violet Westburg.

Westburg likens AI use to junk food, describing how she may overindulge when she goes to a friend’s house where the parents are more permissive. Forbidding AI, in other words, does not really teach students anything.

If junk food and AI and social media and sex and drugs and rock and roll are all alluring elements of the teenage experience, abstinence is likely not the answer. Or at least, it’s not realistic.

How does AI impact teaching writing?

Kanzler’s concerns run much deeper than students being lazy and cheating with AI. Pre-ChatGPT, she already saw blatant plagiarism, and she says the concept of intellectual ownership is very ambiguous among Gen Z.

“They already don’t understand plagiarism in its most simplistic and traditional form. I am working with 16-year-olds right now who still struggle to make sense of why plagiarism is bad or wrong, and what exactly qualifies as plagiarism. This is in a really overt way, copy and pasting something from an article and then passing it in as your own work.”

“That is wrong. There are a lot of kids where that doesn’t register for them. They have grown up in a world where it’s like, ‘Who cares, nobody’s idea is anybody’s idea, this is just the world’s ideas, like you don’t own ideas.’”

Copyleaks’ report, available on the company’s website, finds that “nearly 60% of GPT-3.5 outputs contain some form of plagiarized content.”

Kanzler describes how the question of plagiarism becomes “even murkier when you’re talking about AI generated stuff” because students feel like, “I entered a couple parameters and hit the buttons and then my AI robot friend spit this out. It’s not mine, but whose is it? Is it anybody’s? It wouldn’t exist if it weren’t for me plugging things in and hitting submit.”

Kanzler says that Millennial and older generations “have a very righteous, purist sense of what plagiarism is, and what qualifies as an original idea, and even if we can get into a nuanced discussion about it…” there is still clarity around ownership and intellectual property rights.

This sort of anonymized ‘collective consciousness’ in the cloud sits oddly alongside the “Me, me, me, look at me, I am on social media, I have a brand, I am selling myself” mindset attributed to the TikTok generation, Kanzler acknowledges.

“[Students] think that sitting around thinking stuff up is a waste of time. That is what really scares me.”

Kanzler says that the advent of ChatGPT has a lot of teachers in the humanities wrestling with the question of “‘What is the purpose of giving kids tasks that can be immediately generated by AI?’”

“It begs the question of what is the purpose of us teaching writing anymore?”

“I think there is value in teaching writing … I personally think the only writing that we should be assigning kids is writing that really can’t be done by AI because it is so personal. I think giving students choice and teaching them to use their voice… which all ties to this idea of originality… is the most critical thing that they need in the 21st century.”

“We need to break [students] of the idea that nothing they can do is original. Because the most original thing that they have is their voice. There is nobody else in the world who has their voice except them. Even if they are talking about the same idea that someone else has already put out there, they are doing it with their voice.”

Kanzler says that this is a “total script flip” from how teachers have been teaching writing, “where it was like we tried to squash their voice and make them sound robotic… don’t use ‘I’, it should be this objective, omniscient, third person… That’s the exact opposite of how we should be teaching writing now.”

“Cool, you’re not the first person in the world who thinks that Gatsby is a brilliant rumination on wealth in America, but nobody has ever talked about it from your perspective… so what we actually need is a lot of *you* in the writing. It should be all about what you think based upon your experience.”

Kanzler thinks that AI is the “call to action that we need to finally say, oh yeah, that was a really bad way of teaching English.”

Kanzler says it is time to move away from the formulaic, “detached, soulless, voiceless” approach to writing.

“The 5-paragraph essay is dead.”

Instead, it is time to awaken students to their voice. “We should help them hone that voice and help them become confident in it… so that they know how to use it to connect with people and compel people” and share their “unique experience on planet earth.”

“That’s what we need to be teaching kids to write, and that cannot be outsourced to AI.”

AI use in college admissions essays

Brad Schiller, Founder and CEO of ECP portfolio company Prompt, also emphasizes the role of student voice.

A core component of Prompt’s business is a coaching platform for college admissions essays, with the slogan: “There’s no one like you. We’ll help you prove it.” In a post-Covid era where admissions has become increasingly ‘test optional,’ emphasis on standardized tests has decreased, which in turn has increased the importance of qualitative factors like the essay.

Schiller notes that in admission essays, “It is way more obvious what is AI generated than for any other type of writing. The primary reason for that is that the content is about you as a person and is not like — I’m writing about the civil war — where there is so much available data and facts that AI can take to integrate to write about a certain prompt.”

AI writing has telltale characteristics. What it lacks in personal experience or ‘knowledge’ of a given topic, it fills in with whatever it predicts will fulfill the prompt.

“It is buzzword central,” says Schiller. “It’s so cringey. It’s all over the place because it is having to fill in these gaps with what it thinks admissions officers want to see and hear.”

The intimate details of your life, and the creative voice and detailed imagery that bring your story to life, are also inaccessible to an LLM.

That said, Schiller’s team has seen quite a low rate of AI-generated admissions essays.

“I’ve been really surprised at how little obvious AI we have seen. It’s really, really rare. I thought we would maybe see 1–2%. But we reviewed over 50,000 essays last year, and we detected less than 100 cases.”

Even so, these cases may have been less about ‘laziness’ and more about a language barrier.

“Most of [the cases] were actually international students,” says Schiller. He surmises that in these cases, students likely wrote the essay in their native language, ran it through Google Translate, saw that the result was still pretty choppy, and then used AI to try to clean it up.

That said, Schiller is aware that Prompt’s writing coaches are working with a self-selecting population that is highly motivated to get into competitive schools. These students also know that they will be receiving support and feedback from a coach, which lowers the stakes.

Schiller also points out that “there’s a difference between AI generated and AI aided.”

Schiller notes that AI-aided is harder to detect. So, if AI is truly used as a research tool or a writing feedback tool, it may provide value. This may be less common in college admissions essays, but is certainly relevant in a more research-based context.

He cautions that students have to be careful: if an admissions officer thinks an essay is AI-generated, they will just throw it out. It’s a fine line.

Looking beyond admissions to writing as a whole, Schiller has concerns.

“The students who are using AI on their papers are not using it to get an A, they are just trying to pass.”

This points to AI being used as a crutch by learners at the lower end of the mastery spectrum, completing the bare minimum of requirements, as opposed to the best-case scenario — a tool to advance learning.

The ability to assess credibility also becomes increasingly murky. As a former Googler, I am familiar with how hard search engines have worked to maintain credible results, particularly in the ‘pay to play’ areas of the results page (the sponsored positions above your organic Google search results, or along the right side, when viewed on a desktop).

Schiller sees AI-generated content as far more rampant in Prompt’s writing courses, and beyond that, just straight plagiarism — copying from Google and other resources.

“It’s a problem, it’s a real problem, the prevalence is extremely high,” Schiller says. “The founder of Copyleaks says that 99% of plagiarism is unintentional because kids do not know what it is and don’t know how to paraphrase.”

Schiller points to high-profile cases of plagiarism, even among higher-education leaders, as examples of the same phenomenon.

Schiller says, “Maybe you can teach [AI] in the classroom, but it should be after a certain point. But we have proven as a society that we are not capable of doing that. It’s not going to happen.”

Mindi Chen, a second-year student at MIT’s Sloan business school and a Spring 2024 Venture Fellow with ECP, agrees that there needs to be an intentional sequence to covering AI in schools.

“Just as children early in their education need to learn the order of operations in math before being able to properly use a calculator, another tool that people thought would hinder learning, we need to make sure we are introducing ChatGPT as a tool only after students understand its inputs, operations (prompts), and outputs (along with potential biases).”

What is at risk with the increasing prevalence of AI?

Violet Westburg, the high school sophomore and AI proponent, did raise questions about how AI might get in the way of the process of discovery.

She spoke of seeking out teachers between classes, reaching out to people to learn about areas of interest, and collaborating with peers, and she worries that ChatGPT may erode these habits.

She notes that much of her best learning takes place with teachers outside of class, as well as when she has to think on the spot.

“During Covid you could just mute yourself and look things up, you can’t do that in real life,” Violet says.

She has a point regarding creativity and improvisation.

I told Violet that when I studied abroad in Madrid during college, it was before smartphones and Google maps. I told her that friends and I have wondered what it means now that we never have to get lost.

We mused, “What have we lost from never having to get lost?”

A chance to practice resourcefulness, linger in the unknown, initiate conversations to ask strangers for help, and perhaps, to embrace serendipity when you do not find what you are looking for, but find something else altogether.

Like the day I got turned around on my way to the printing office and ended up at the Palacio de Cristal in Parque del Retiro, admiring the gleaming sunlight prisming through a structure entirely made of glass.

Violet imagines an idealized version of school where you “go to get something out of it, to collaborate and understand things and prepare for the future.” There would be no grades; you would come into school and be measured based upon participation and how much you really get out of it.

“Everyone comes in and genuinely wants to get something out of school. Plenty of sleep and happy and we collaborate with each other, build off each other’s successes and learn from mistakes and have free expression and learn something that helps them in the future.”

Learning for learning’s sake — certainly the type of learning community that I would like to attend or entrust my daughters to. But the educational system will need to become more sophisticated to strike the right balance of productively and artfully integrating screens, social media, and AI.

It all goes back to the question of the ‘job to be done’ of schools themselves: is it childcare, learning, social-emotional development, all of the above, or something else entirely? And what is in the best interest of the learner, their family, and society at large?

You can always ask Jeeves (if he is still around), or Google, or ChatGPT.


EduLab Capital

EduLab Capital Partners is a VC fund that invests in early stage education and workforce technology companies to scale measurable impact in our communities.