Offloading vs. Outsourcing in the College Classroom
And classroom examples that help us think more deeply about this issue
Screens & Sanity is a weekly newsletter designed to help educators make sense of today’s digital noise and turn it into meaningful learning experiences. Each issue blends research, critical media literacy, and concrete teaching tools to help you support students’ media and AI literacy while fostering stronger critical thinking skills so you can maintain your own sanity in the process. It’s a space for educators who want clarity, depth, and humane approaches to technology in higher education.
Natalie Wexler recently posed a critical question on her Substack: when students rely on AI, are they engaging in cognitive offloading or cognitive outsourcing? Drawing on cognitive psychologist Paul Kirschner, she argues that the distinction isn’t merely semantic; it’s foundational. “Tools that provide offloading support cognition,” Wexler writes, echoing Kirschner. “Tools that provide outsourcing replace it.”
That difference matters enormously in higher education, where students are still developing not just disciplinary knowledge, but the cognitive habits that make learning durable: sustained attention, synthesis, argumentation, and self-regulation. Yet on many campuses, conversations about AI collapse into a single question:
Is it cheating or not?
Rather than the more useful one:
What kind of thinking is this tool asking students to do?
I’ve been thinking about this distinction a lot as well. And while I appreciate the intellectual conversations that come from arguing over these terms, I feel lost when it comes to the concrete “where” of how this matters for students. Where does AI support student thinking, and where does it substitute for it? In the constant fight for praxis, I feel like our online conversations around this topic have become much more about theory and much less about how these situations actually play out in the classroom. To be honest, that’s probably because we haven’t had much time to see things “play out” quite yet. However, I want to spend at least some time in this post adding to the conversation about practice: which assignments, activities, and moments may differentiate our students’ experience between offloading and outsourcing.
Offloading: When AI lightens the load but keeps the thinking intact
Cognitive offloading, as Wexler explains, involves externalizing thoughts we’ve already generated so we can free up working memory. In college contexts, this can look like AI functioning as a scaffold rather than a surrogate.
I’ll use a personal example for this section as well. I have a LOT of students. If you’re not a college instructor, ask an instructor you know “what is a 5:5 teaching load?” and you’ll see what I mean. I’ve always struggled to keep up with student feedback. With over 150 essays to respond to every three weeks, it’s overwhelming. I tended to leave only brief comments or suggestions throughout the text, highlighting as I went, usually with a small “good work overall with BLANK” at the end. Now, I throw all my comments into ChatGPT and ask it to turn them into one feedback sandwich with examples of how to improve. It’s extremely helpful. This small addition to my routine hasn’t reduced my workload; I may have even added an extra minute or two to plug in the prompt. However, the feedback I give students now is more approachable, succinct, and (I think) more helpful.
For students, let’s take brainstorming as an example. A student staring at a blank page might use AI to generate questions about a topic rather than answers: “What are the major debates around campus free speech?” Used this way, AI doesn’t supply the argument—it helps the student orient themselves in the intellectual terrain. The student still has to decide which questions matter, which sources to pursue, and what position to take.
Or consider revision. A student might paste in a draft paragraph and ask, “Where does my claim become unclear?” or “What assumptions am I making here?” The AI isn’t rewriting the paragraph; it’s acting like a slightly over-eager peer reviewer. The cognitive work—judging relevance, revising prose, clarifying logic—remains firmly with the student.
In my own classes, I’ve seen students use AI to offload low-stakes cognitive friction: creating study schedules, turning messy notes into structured outlines after class, or generating practice questions to quiz themselves before an exam. These uses align with what Wexler describes as storing information students have already generated themselves, perhaps akin to making a grocery list rather than relying on memory alone.
Importantly, these uses still require students to engage with the material. AI helps them manage complexity; it doesn’t eliminate it.
Outsourcing: When AI does the work for the student
Cognitive outsourcing begins when students skip the generative struggle altogether. As Wexler puts it, outsourcing “doesn’t just mean finding storage space for thoughts we’ve produced through our own mental effort; it means we don’t make the effort ourselves.”
In college writing, this line is crossed most often not with full essay generation—though that happens too—but with subtler forms of substitution.
Consider the AI-generated outline (in a previous post, I argued against encouraging students to use AI for outlining). On the surface, asking ChatGPT to produce an outline for a paper on, say, climate justice might seem harmless. But outlining is thinking. It’s where students decide what counts as evidence, how claims relate, and what belongs together. When AI produces that structure, students inherit a logic they didn’t build and often can’t defend.
The same is true for summaries. Many students now routinely ask AI to summarize readings they haven’t done. Wexler warns that this practice is especially troubling, because writing and reading are not just vehicles for information but engines of understanding. When students bypass close reading, they lose the chance to wrestle with ambiguity, nuance, and argumentation. The result is what teachers in the Brookings report describe as “digitally induced ‘amnesia,’” where students cannot recall or explain work they’ve ostensibly completed.
I’ve seen this play out in class discussions. A student submits a polished AI-assisted response online, then struggles to articulate even the basic premise of the reading in person. The work looks complete, but cognitively it’s hollow.
One way I have been addressing this in my classes is by turning more to video responses and in-class mini-presentations. For each reading, students are expected to submit a “Video Response” as opposed to a written discussion board post. In my AI workshop last week, someone raised the point that students may still be asking AI to generate the script, to which another colleague noted, “At least they’re reading the prompt.”
I kind of agree. Sometimes simply being forced to read an AI-generated output out loud for peers may lead to hesitancy. Did other people actually read? Should I? Is what I’m reading actually accurate? These kinds of questions are learning! Maybe in a weird backwards way, but I’ll take what we can get right now.
The danger of “easy” thinking
One of the most striking student quotes Wexler includes captures the risk succinctly: “It’s easy. You don’t need to use your brain.”
That ease is precisely what makes cognitive outsourcing so seductive in college. Writing a ten-page paper, synthesizing multiple sources, or sustaining an argument over time is genuinely hard. AI offers students a way around that difficulty, but also around the learning embedded in it.
As Wexler notes, “the process of writing itself strengthens our retention of information, deepens our understanding, and helps develop our analytical abilities.” When students outsource that process, they may earn acceptable grades while quietly eroding the very skills higher education is meant to cultivate.
The problem isn’t that students are using AI. Rather, it’s how and when they’re using it.
Teaching the difference
One of Wexler’s most important points is that students can’t be expected to navigate this distinction on their own. If we want AI to function as cognitive offloading rather than outsourcing, we have to teach students what productive struggle looks like and why it matters.
That starts with transparent AI policies that distinguish between support and substitution. Instead of vague prohibitions, instructors can specify which stages of an assignment are AI-optional (brainstorming questions, formatting citations) and which are not (drafting claims, outlining arguments, writing analysis).
It also means, as Wexler argues, teaching writing explicitly. When students know how to build sentences, paragraphs, and outlines, they’re less likely to hand those tasks over to a machine. AI becomes a tool they consult, not a crutch they lean on.

Finally, we need to model reflective AI use. Asking students to document how they used AI—and why—invites metacognition. Did this tool help you think more clearly, or did it think for you? That question alone can shift AI use from unconscious outsourcing to intentional offloading.
Applying the brakes
Wexler ends with a warning: we are “on an AI train that appears to be heading off a cliff if someone doesn’t apply the brakes—and soon.” But applying the brakes doesn’t mean banning AI or pretending it doesn’t exist. It means insisting that learning—not convenience—remain the goal.
For college students, the difference between offloading and outsourcing is the difference between graduating with polished artifacts and graduating with practiced minds. AI can help lighten cognitive load, clarify thinking, and support learning—but only if we are clear-eyed about where support ends and substitution begins.
The terminology matters because the thinking matters. And in higher education, that’s a distinction we can no longer afford to blur.
Why We Have to Make the Limits of AI Visible
One final reason this distinction matters is that students often believe AI works better than it actually does, especially for writing and critical thinking. If we don’t explicitly show them where it fails, they have very little reason not to outsource.
From a student’s perspective, AI looks confident, fluent, and fast. It produces clean paragraphs, tidy transitions, and something that sounds like academic writing. What students don’t always see—because they haven’t yet developed expert eyes—is what’s missing. AI output is often filled with weak claims, flattened complexity, vague evidence, and an absence of genuine insight. However, AI doesn’t struggle or hesitate. And because of that, it hides the intellectual labor that good writing requires.
If we want students to understand why writing matters, we can’t just tell them that AI is bad for learning. We have to design moments where we can prove it to them.
In other words, we need to stop treating AI as a temptation students must resist and start treating it as a text students can interrogate.
Showing, not telling: letting AI fail in public
One of the most effective strategies I’ve seen is asking students to compare AI-generated writing with their own emerging ideas and then analyze the difference.
Here’s a prompt instructors can use almost verbatim:
AI Comparison Reflection Prompt
Write a short response (250–300 words) to the following question without using AI:
What is one claim you find compelling or troubling in this week’s reading, and why?
Next, ask an AI tool to respond to the same question.
Compare the two responses and reflect on the following:
Where does the AI sound confident but say very little?
What ideas, examples, or nuances appear in your writing that are missing from the AI’s?
Where does the AI avoid taking a clear position?
Which response would you trust more in a real academic conversation and why?
Conclude by explaining one thing this comparison taught you about thinking, not just writing.
Where might this assignment fail? The obvious answer might be that students don’t value their own writing. But valuing your writing and valuing your thinking aren’t the same thing. And honestly, we as instructors have done a lot of work reinforcing standards of what “good” writing looks like—sometimes even above what good thinking looks like. Students see polished writing and immediately assume “good thinking.”
I believe our role in this activity is to establish in our classrooms that this simply isn’t true. We don’t value writing over thinking. Yes, we want both. But you can’t have good writing without good thinking. Polished? Sure. But not good.
Making critical thinking unavoidable
We also need to be honest about this: AI struggles most precisely where higher education claims to add value—argumentation, synthesis, and judgment. It can summarize existing conversations, but it cannot decide what matters. It can mimic academic tone, but it cannot care about the consequences of a claim. It can generate structure, but it cannot explain why one idea deserves emphasis over another.
When students outsource those decisions, they aren’t just saving time—they’re skipping the very experiences that build intellectual confidence. Being explicit about this isn’t punitive. It’s actually an act of care. Many students genuinely believe that “good writing” means “polished writing,” and AI reinforces that misconception. Our job is to help them see that good writing is evidence of thinking in motion. Writing that looks uncertain, provisional, and shaped by human judgment.
Ultimately, the question isn’t whether students will use AI. They will. The more important question is whether they understand when it helps them think and when it quietly replaces that thinking altogether. If we want students to choose cognitive offloading over outsourcing, we have to make the learning visible. We have to design assignments where thinking can’t be skipped, where AI’s limits are exposed, and where students experience the value of doing the work themselves.
—Dr. Sydney


