Rethinking How We Use AI for Writing

By admin · September 24, 2025


We keep talking about AI in schools like it is either a miracle or a menace, and that either-or mindset steers us into two dead ends. One path hands the wheel to the tool and mistakes novelty for real learning. The other path locks everything down and keeps students from learning how to use AI wisely. In this article, I step past the binary and outline a human-driven, blended approach where learning leads, teachers decide, and AI serves the goals we set for our students.

I’ve received so many instant messages and emails asking me what I think about the new MIT study, “Your Brain on ChatGPT,” and that’s ultimately why I am writing this article. I feel like we are seeing this binary approach of AI being either good or bad, either regressive or transformative, either the best or worst thing to happen to our society. But I worry that this framing ultimately leads us down two dangerous dead ends. It’s an idea I first explored in this sketch video when ChatGPT first came out:

The first dead end is Techno-Futurism. This is what happens when we start with the question, “What can AI do that humans can’t?” and then scrap every part of teaching that a machine can do instead. This dead end sees the promise of transformation but sadly places technology in the driver’s seat. I’ve seen this in the bold claims that AI will replace the essay or that we won’t even need teachers because students will simply sit alone with a chatbot and work through self-paced content. Students will have personalized learning at their fingertips.

This dead end mistakes teaching for content delivery and fails to recognize that learning is deeply human and dynamic. It is often collaborative and creative. We end up mistaking adaptive learning systems for personalized learning. Furthermore, Techno-Futurism mistakes novelty for innovation and we end up chasing shallow trends rather than sustainable change.

But there’s a second dead end that takes us in the opposite direction. This is the Lock It and Block It approach. Here, schools block all forms of AI outright. They often shift toward paper and pencil and ban technology altogether. In some cases, schools use AI detection software (which creates massive legal challenges and a high likelihood of falsely accusing students of cheating). Meanwhile, students never get the opportunity to learn how to use AI in a way that is ethical and intentional.

Fortunately, there’s a third approach that avoids the extremes of Techno-Futurism and Lock It and Block It. This is a human-driven approach that lets the learning drive the AI usage while remaining open to ways that generative AI can transform the learning process. This approach is inherently blended. It’s yes/and. (Image: a Venn diagram showing AI and the human voice overlapping, with the word “blended” in the middle.)

 

As I think about this MIT study, it’s important that we don’t fall into the previously mentioned traps of techno-futurism and lock it and block it. Instead, we need to step back and ask, “What does this study actually demonstrate and what does that mean for our students?”

It helps to examine where we are actually getting our information regarding this study. As much as I love reading journal articles (I swear I’m not being sarcastic) I tend to hear about these big studies through popular journalism first. A slew of dystopian headlines pop up all around warning of the worst case scenarios. In this case, it’s the idea that AI leads to “brain rot.”

From there, social media influencers summarize the summaries of the research by simplifying the research even more and offering their own commentary. For the last few weeks, I have seen so many Instagram Reels offer quick summaries of the research followed by a short, provocative takeaway from the influencer.

So, we end up with layers. At the core we have solid, nuanced research. In this case, the research is specifically about over-reliance on chatbots for writing in a way that leads to passive learning. The article includes limitations, recognition of variance, and an attempt to contextualize findings. The goal here is scientific discovery.

Layer two is where journalists interpret these findings for a popular audience. The goal here is to explain the science in a way that is easy to understand. Journalists have to convey complex scientific research in a way that people outside that community can understand. This leads to some necessary oversimplification to make the research more accessible to someone like me (an educator, not a social scientist or software engineer). However, there is a secondary motive at this layer. Media corporations have a profit motive, which means writers must also craft content that stands out. It needs to be somewhat entertaining and even emotionally appealing.

Layer three happens when the headline writers use bold, binary, and emotionally loaded language to grab your attention so you click on the article. The motive isn’t clarity. It’s pure reach. Every news outlet is vying for your attention in a crowded field. This is where we often see a shift from conveying information to tapping into one’s emotions. Social media algorithms tend to reward outrage over nuance, so headline writers often skew toward provocative titles that don’t represent the original research at all.

Layer four happens when a social media influencer summarizes the article (often just the headline) in a way that will capture your attention. The social media algorithms play up emotions like fear, surprise, and anger. So, those rise to the top. Even if you want to create an Instagram Reel offering nuance, our feeds will be flooded with videos that spark the kind of outrage and binary thinking that lead to clicks, shares, and comments.

What ensues is a conversation about a video in response to a headline that is already far removed from research. People become more entrenched in their positions and more polarized in their views. So, we end up with deeper polarization on the AI debate. Note the term debate. I’d love for it to be a discussion but discussions are slower, explanatory, and nuanced. Social media rewards binary debates.

I realize that most of us don’t have the time to pore over a highly technical article. But I do think we can cultivate a daily habit that helps us avoid the outrage cycles that lead to oversimplification and polarization (in this case, being pro- or anti-AI).

The first is to use the process described by LaGarde and Hudgins in Developing Digital Detectives and to start with an emotional pulse check. What is this information trying to make me feel?

The second is to approach the information from a place of curiosity rather than judgment. When I first read the headlines, I immediately asked, “How exactly were students using it? What guardrails did they have? What aspects of the writing process were AI-integrated? Who were the students? How motivated were they to write? Were the prompts in any way AI-resistant? How is this similar or different to the studies of pilots and auto-pilot or to the idea of “cognitive debt” and the use of mapping software for drivers?”

When we respond to curiosity, we admit that we don’t have all the answers. It creates a delay in judgment and pushes us toward intellectual humility. This can then allow us to explore the nuances of the facts and examine complexity and context. Curiosity is inherently slower and messier, but it’s also what leads to a deeper understanding of the topic.

So, what does the study actually say when we approach it with curiosity and nuance? The MIT study wasn’t exactly anti-AI. It was more about cognitive atrophy (a concern I’ve shared frequently on this blog), or as they put it “cognitive debt.” Their study suggests that while AI tools like ChatGPT can make writing easier, they may also create a kind of “cognitive debt.” I had actually never heard the term before but it makes sense. Cognitive debt is essentially the cost of relying on shortcuts that save effort in the moment but leave behind a kind of deficit, where deeper understanding, memory, or mental engagement is weakened and must be “paid back” later through extra work or lost learning.

Participants who used the AI showed weaker brain engagement, less ownership of their work, and even had trouble recalling what they had just written compared to those who wrote without help. In other words, the convenience of outsourcing ideas came at the cost of deeper mental processing. Although the findings are still preliminary and limited to essay writing, the study raises important questions about how to balance the efficiency of AI with the need for students to wrestle with ideas, build memory, and strengthen their own thinking.

As an educator, I’m curious about what these findings suggest for how we approach integrating AI tools for writing.

 

As I read the article, I was struck by a few things. First, it seemed to point to a real danger of using technology in a way that leads to autopilot. We assume that we are still doing the thinking when we have actually outsourced it to a machine. This points toward the need to use AI in a way that promotes slower, deeper learning instead of simple task completion. In other words, students need to use AI in a way that develops a depth advantage.

My second takeaway is that we need to set specific guidelines and guardrails around how students use AI in writing. We don’t need to go full-on Lock It and Block It but we can’t fall into the Techno-Futurist approach, either. We need to take a blended approach that is ethical and intentional.

Here we ask, “What does it mean to use AI ethically?” and from there, it recognizes that we will likely change our use of AI based on the content. Instead of being pro-AI or anti-AI, this approach sees AI as a powerful tool that we need to use wisely with a hefty dose of humility.

Sometimes it helps to think of AI use as a continuum from rejecting to embracing its use.

As we navigate the rapid evolution of generative AI in education, it’s helpful to think in terms of a continuum rather than a hierarchy. As educators, we will need to move between this continuum as we think about students using AI in writing.

 

Level 1: AI-Resistant

Many educators are feeling frustrated with AI and so they’re leaning into a more AI-resistant approach. They feel exhausted by student cheating and academic dishonesty. They feel demoralized when they craft a high-interest, critical thinking prompt only to get back a sea of chatbot-generated writing. For them, the MIT article confirmed their greatest fear about AI. If left to their own devices, students will grow overly reliant on AI and lose the ability to write.

Some teachers have responded by requiring paper and pencil for all writing assignments. I’m actually a fan of paper and pencil. It can help with long-term memory and information retention. It allows us to use visuals and sketch-noting techniques. A handwritten approach is a great option when we are doing a single draft as a “learn through writing” exercise. However, there’s a cost. If we are taking more of a “demonstrate what you are learning in writing” approach rather than a “learn through writing” approach, the handwritten process can be laborious and time-consuming. Most of us, as educators, would feel frustrated if we couldn’t type our drafts. Why would our students feel any different?

A second approach has been to lean into AI checkers. While some of these checkers claim to be 98% accurate, I have found that most of the tools are closer to 80%. If a coin flip is 50%, I don’t find 30% higher all that impressive. But suppose we really do have 98% accuracy. For a high school English teacher grading hundreds of essays in a semester, a 2% error rate could still easily mean 4 students getting away with cheating and another 4 being falsely accused.
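To make that base-rate arithmetic concrete, here is a minimal sketch (mine, not the study’s or any vendor’s) of how a detector’s error rate turns into counts of falsely accused students and missed cheaters. The essay count, the share of essays actually AI-written, and the symmetric 2% error rate are all illustrative assumptions.

```python
# Back-of-the-envelope model of a "98% accurate" AI detector.
# All numbers below are illustrative assumptions, not data from the article.

def detector_outcomes(essays: int, ai_written_rate: float, accuracy: float):
    """Return (falsely_accused, missed_cheaters), assuming the detector
    is wrong on (1 - accuracy) of essays in each group."""
    ai_written = essays * ai_written_rate
    human_written = essays - ai_written
    error_rate = 1 - accuracy
    falsely_accused = human_written * error_rate  # honest work flagged as AI
    missed_cheaters = ai_written * error_rate     # AI work that slips through
    return falsely_accused, missed_cheaters

# 400 essays in a semester, 20% actually AI-written, 98% accuracy
accused, missed = detector_outcomes(400, 0.20, 0.98)
print(f"Falsely accused: {accused:.1f}")   # 320 honest essays * 0.02 = 6.4
print(f"Missed cheaters: {missed:.1f}")    # 80 AI essays * 0.02 = 1.6
```

Notice that because most essays are honest under these assumptions, even a small error rate produces more false accusations than caught cheaters, which is exactly why the trust cost is so high.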

And when the accusations happen, they tend to fall on certain groups whose writing relies on consistent patterns and verb tenses. This means high-achieving students who mimic academic writing styles well end up being flagged. It means students on the spectrum (who tend to use consistent verb tenses and formulaic approaches) are also falsely accused. It means a multilingual student using sentence stems and verb tense formulas is likely the most at risk. True, this can lead to huge lawsuits for schools, but it can also destroy the trust-based relationship between a teacher and a student.

A different approach would be to design prompts to be AI-resistant by focusing on the elements that humans do well that generative AI tends to struggle with. The following are some of the ways we can make our writing prompts AI-resistant:

  • Personal connections: Students answer a question that requires personal reflection connected to their lived experiences. An example would be, “Write about a time when you faced a challenge that connects to the theme of resilience in the novel we just read.”

  • Local and community context: Prompts that live in the school, neighborhood, or region force students to use knowledge that isn’t available to AI. Most AI chatbots have limited contextual knowledge about your local geography.  An example might be, “Interview someone in our community about how our town has changed in the last 10 years. Connect their perspective to what we’ve studied about urban development.” Or in writing about The Great Gatsby, it might be, “How does this novel relate to the American Dream? How do you see this theme play out in our school and in your neighborhood?”

  • Multimodal evidence: When students sketch, diagram, or capture their own photos, the work becomes rooted in what they actually created. An example would be, “Take two photos that show how energy is used in everyday life around you. Write a short explanation of each one and connect them to the science concepts we’ve studied.”

  • Conversation and interviews: Having students pull in dialogue from real people adds a human element that AI can’t invent authentically. You might have students cite quotes from your classroom Socratic Seminar or discussion. An example would be, “Ask three classmates what they think the biggest challenge is in solving climate change. Summarize their answers and explain which one you agree with most and why.”

  • Shifting perspectives: Asking students to compare how their ideas evolve over time requires reflection across drafts or discussions. While they might still use AI to amplify this, it’s something most chatbots struggle with. An example would be, “Look back at your first journal entry on the Civil War. How has your perspective changed after our debates and readings? Be specific about what shifted and why.”

With this approach, the goal is not simply AI-resistance. It’s about centering the writing process on our human experience. It’s a focus on the aspects of writing that humans do well that chatbots will always struggle with – including empathy and contextual thinking.

As students begin to write, we can take an approach that focuses on trust and transparency. Here, we take a “show your work” approach centered on soft accountability. We can have students do parts of the writing process in person and ask them to turn in their outline, first draft, and revisions. We might also ask them to take notes on the peer feedback they receive.

Then, they can explain one change they made for each step. We might ask them to write their full drafts in Google Drive and watch for large chunks of copied-and-pasted text while they are drafting. This is admittedly harder in classes that are asynchronous and online. But when coupled with AI-resistant prompts, this approach makes it harder to use a chatbot to answer a writing prompt.

 

Level 2: AI-Assisted

With this approach, teachers tend to use the same AI-resistant approaches described in Level 1 with students, but on the back-end they use generative AI to design materials and supports for their students. In crafting writing prompts, they might use a chatbot to convert former questions into AI-resistant prompts that include personal reflection and contextual knowledge. They might use AI-generated images for visual prompts or infographics.

Early on, teachers can use generative AI to design some of the graphic organizers, sentence stems, vocabulary banks, and exemplars that students would use to guide them in their writing process. They can use a tool like Notebook LM to synthesize multiple resources and create an AI-generated podcast to help build prior knowledge.

As students engage in research, teachers might use generative AI to craft high-interest informational texts or to change the reading level of specific texts to match each student’s Lexile level. Teachers can even use text-to-speech AI systems to create audio versions of these readers.

I recently worked with teachers to design rotating reading activities that students can use to build background knowledge before doing their own independent research. We edited the readings to add small details that the teacher knew her class would love, then used the AI to generate discussion questions for each station and sentence stems for the peer discussion. Note that this teacher still actively edited the AI-generated content and made it her own based on her personal knowledge of the class. In this way, she took the vanilla and created her own unique flavor.

As students work through outlining and planning, teachers might create AI-generated tutorials, instructions, and exemplars that they can access. So, a student might use a visually-oriented bank of transition words or an explanation of verb tenses with their corresponding formula. As students begin revising their work, teachers can use AI tools to create rubrics, checklists, peer feedback protocols, and self-assessment tools.

Again, all of this occurs on the back-end but the goal is to save time and make differentiation more feasible.

 

Level 3: AI Integrated

With this approach, students use AI in a way that’s transparent and ethical. I love Ethan Mollick’s notion of the cyborg and the centaur here. With the centaur model, a person and the AI split tasks, each doing separate parts of the work. In the cyborg model, the human and the AI work together in a more integrated, back-and-forth way. When it comes to writing, a blended approach fits best with the cyborg model because the writer stays actively engaged, treating the chatbot as a tool and a thought partner. This kind of collaboration keeps the writer in control, but it also allows them to use AI as a creative partner rather than just a tool for outsourcing pieces of the process.

Teachers might provide students with this color-coded system to learn about what aspects of the writing process can be human-generated and modified by AI versus AI-generated and modified by a human.

In the past, I’ve had students use the color-coded system on the initial draft, so I can see how they are using AI. Let’s explore what a blended, integrated approach might look like.

After getting the writing prompt, a student might use a chatbot to clarify the instructions or to look at exemplars. They might also use a chatbot to set up a plan for a long-term writing assignment (including time estimations). They can then modify the instructions based on their knowledge of their weekly schedule.

Before doing research, students might do a question and answer process back and forth with a chatbot using the FACTS Prompt Engineering cycle. The goal here would be to build up background knowledge on the topic.

As they do research, students might generate their own questions using a set of sentence stems that the teacher created with AI. They might do a rotating reading activity or annotate a curated set of documents. They might fill out a graphic organizer with questions, answers, and sources. As a teacher, you might set specific chatbot parameters, like “it’s okay to ask a chatbot clarifying questions, but you can’t ask it to summarize information.” During this phase, students might also use AI for accessibility (like voice-to-text, changing the language, or adding visuals). They might go back to a chatbot briefly to build conceptual understanding of the topic as well.

In this phase, you might have students using notecards and sketchnotes. They might do a Socratic Seminar or small group discussion. So, here, you might find yourself moving into moments of AI-resistance in a way that embraces hands-on, synchronous, human interaction.

Next, as students outline, you might require students to create an initial outline by hand. It could be on a whiteboard, on a paper, or in a Google Document. But then, they can ask a chatbot to create a similar outline. Afterward, they compare and contrast the two outlines and ultimately make modifications to their original outline.

As students begin drafting, you might have a rule that they must write their own words and only use AI for specific moments of clarification. But you could also have students modify AI-generated text to make it their own. Students could take their initial outline and ask the chatbot to generate the actual text. They would take an initial screenshot with a timestamp and then copy and paste the text into a shared document (like a Google Doc).

From here, students would modify the text to add their own voice. They would need to add additional sentences and perhaps even break up paragraphs. Using their research chart, students would add facts and citations that they then explain. The initial chatbot text would be black but the human text would be a color of the students’ choice.

In the revision phase, students might do something like the twenty-minute peer review process. They could use checklists and rubrics to assess their own work. However, they might run a feedback simulation with multiple avatars that represent their ideal audience. They might ask the chatbot to take on the role of a writing coach, where they can ask their own questions and get feedback.

 

Level 4: AI-Driven

At this level, the AI generates nearly all aspects of the writing process. It can brainstorm ideas, craft outlines, draft full essays, and even simulate multiple rounds of feedback. The human role shifts into something closer to an editor, where the focus is on reviewing, revising, and refining.

On the surface, this feels efficient and even transformative. A student can produce polished text in minutes. But the danger is that students may skip over the deep thinking that happens when you wrestle with words, structure arguments, and make connections for yourself. In this sense, the writing becomes a product without the process and I think we begin to see some of the real dangers brought up in the MIT article.

Still, there are ways to reclaim the process even in an AI-driven model. Students might use AI to generate a full draft but then:

  • Reorganize the structure. Here they move paragraphs around, cut entire sections, or change the flow to reflect their own priorities.

  • Add personal voice. Again, they can heavily modify the text to fit their style. They can insert anecdotes, reflections, or opinions that only they could contribute.

  • Strengthen evidence. Students can integrate research, interviews, or classroom discussions that the AI would not know.

  • Revise for clarity and tone. Students can rewrite sentences so they match their own speaking style or audience expectations. They might take some of the overly verbose writing of AI and simplify it.

  • Layer in context. Students can connect the draft to classroom experiences, local issues, or personal knowledge that grounds the writing in their lived reality.

Here, the human role is less about generating text from scratch and more about transforming the draft into something distinctively theirs. In other words, even if AI does the heavy lifting at the start, the most meaningful learning still happens in how students reshape, reframe, and re-own the writing.

 

So many schools are looking for a single policy regarding AI and writing. But it’s not that simple. AI usage is inherently contextual and complex. And that complexity is precisely what our students need to learn to navigate. Ultimately, our learning outcomes should drive the use of AI. We begin with our goals and standards, then choose when AI is most appropriate. Teachers know their learners, their context, and the purpose of each lesson they teach. They are best positioned to set guardrails and define when AI is appropriate. In practice, this means teachers can select the tools that serve their learning outcomes in a way that develops deeper learning.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

Fill out the form below to access the FREE eBook:

 


