🤖 AI Is Rusting Your College Freshman Brain
Seven months ago, I showed my college friend two blog post intros: one written by AI trained on my voice, one written just by me. She read both, pointed to the AI's version, and said, "That one's yours."
Six years of daily journaling, two hundred blog posts, a minor identity crisis, and the thing that finally nailed my voice was an autocomplete engine with no childhood. I laughed it off on camera. Off camera, my stomach turned to concrete. Really? After six years of daily writing, weekly publishing, and hundreds of pieces, an AI fooled someone who knows me?
I made it my mission to understand how I let AI rust my brain this badly in the first place. It turns out, I'm not alone.
Here's what I found.
The AI Rusting Slope
I call it the AI Rusting Slope. The slow rusting of your brain as AI takes over your meta-thinking, learning, and idea validation. Like a bike chain left out in the rain all semester.
The trick is how subtle it is. You use it for one assignment right at the deadline because "you can't afford to wait." Then another. Then a third. Then you're reading an article about AI rusting your brain and wondering if the author used AI to write it. Hey, I just told you I'm insecure about it writing well and now you're thinking that. Okay fine, I wrote these last two sentences and it wrote the rest of the paragraph. Happy?
Here's the thing: AI isn't the problem; our relationship to it is.
I'm going to show you the three ways AI rusts your brain when you aren't careful, and how you can change your relationship to it so that AI makes your brain radiate instead. These three escalate: the first rusts how you think, the second rusts how you learn and become, and the third rusts how you relate to reality itself.
Before we get into it: I'm not an AI researcher. I slid down the AI Rusting Slope myself, which is why I spent the last year systematically testing different AI models alongside conversations with my brother (an AI automation expert) and my best friend (an AI software developer).
I read dozens of books like Brave New Words, Artificial Intelligence: A Guide For Thinking Humans, and The Scout Mindset, and talked to 50+ freshmen about how they're actually using AI day to day. What follows comes from that. And from the seven months of existential panic that preceded it.
Brain Ruster 1: AI Is Replacing Your Meta-Thinking
A few months ago I was doing market research for a consulting client (I recently graduated). It was the end of a long week of ten-to-twelve-hour days and I was just done. So, instead of thinking through what questions I was trying to answer, what the research would be used for, and how the findings should be framed, I just told AI: "Find me what makes for effective loyalty programs."
It gave me a response. Polished. Organized. And completely generic. My colleague didn't even tell me it was bad. She just started over. Which honestly is much worse.
The research world calls this cognitive offloading: using AI to replace your need to think rather than enhance it. And the core issue is this: framing the problem is a huge part of solving the problem. The output is not where the value lives. The value lives in the thirty minutes you spent staring at the ceiling figuring out what you actually needed to ask.
AI doesn't replace expertise: it makes it painfully obvious who has it and who doesn't.
What do I mean by framing? A frame is the lens through which you sense, perceive, feel, think, act, and relate to reality. Imagine a frame as the glasses you wear in any context. Meta-thinking is taking off the glasses and looking at them. But most of the time we aren't aware of our glasses even being there. Kind of like how you don't notice your tongue sitting in your mouth until someone mentions it. You're welcome.
Your relationship to emotions is a frame. What friendship means to you is a frame. Evolutionary theory is a frame. You need framing skills in your classes, your relationships, your career planning, your self-understanding. Everywhere.
And here's the insight that made this click for me: the framing skill you lose by offloading to AI doesn't just disappear when you close ChatGPT. You slip into the internal habit of not framing. You stop building meta-thinking skills in relationships, in career decisions, in how you understand yourself. The rust spreads.
Let me make it really concrete how AI affects your meta-thinking. I've been developing a framework called The Meta Hexagon that maps six ways we relate to frames. Irresponsibly used AI can erode all of them. Let me walk through the first three (I cover the other three in my piece on The Most Undervalued College Freshmen Skill).
Defining a frame means becoming conscious of a current frame.
It's the "What" question. What lens am I looking through right now? What's the problem space for this essay? For example, when your group project partner just asks AI for an idea instead of talking it through with the team, something specific is lost: the skill of diverging before converging. AI biases toward convergence because a single clean answer feels more satisfying. And so the group prematurely converges, probably on a worse idea.
Reflecting on a frame means asking how a frame affects you and why it exists.
It's the "How and Why." When AI gives you a response about how to study for an exam or handle a conflict with a roommate, do you ask why it framed its advice that way? What assumptions is it making about you? Most of the time we just take the answer and move on. The reflection muscle atrophies.
Evaluating a frame means judging or supporting a frame using some principle, value, or goal.
It's the "So What." When AI gives you the answer to a engineering problem set, or a meditation technique, or a problem-solving approach, do you question whether it's actually right for you? Whether its framing matches your values and goals? Whether there is a better way to do this? The issue is that AI's responses are so coherent and well-structured that they feel evaluated already. The packaging substitutes for the thinking.
What's The Solution To This Meta-Thinking Struggle?
Use AI to build your meta-thinking skills instead of replacing them. Think of AI as a meta-thinking partner that's brilliant at execution but needs clear direction from you.
Here are three principles for doing this.
- Clear communication: Train yourself to be exacting in how you frame requests. What's the goal? What's the context? What constraints matter? Every sloppy prompt is a missed opportunity to practice framing.
- Iteration: Use everything it does wrong as an opportunity to develop your own thinking. When the output is off, ask why. Was it your framing? Its assumptions? The gap between what you meant and what you said?
- Systemic thinking: Develop your global rules, project contexts, and commands so AI operates inside your frameworks rather than its defaults. This is weirdly similar to how you'd work with any human collaborator. Global rules are like values. Projects are like life domains. Context is, well, context. Prompts are specific requests you make within all that scaffolding.
Here's what this looks like in practice.
I trained AI to help me write essays for my psychology class by coming up with the outline myself first. Then I asked it to find holes, counterarguments, and alternative framings in my outline. I wrote the piece. Then I asked it to give feedback and revise in my voice. It got my voice wrong, which pushed me to refine my writing style notes so I could articulate what I was actually going for. That friction was the learning. I even created a command so whenever I say "Help me revise," it follows a specific procedure I designed. The AI became a better collaborator because I became a better thinker. Not the other way around.
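If you're curious what a command like that can look like under the hood, here's a minimal sketch, assuming the OpenAI Python SDK. The style notes, procedure text, model choice, and function name are all placeholders I made up for illustration, not the "right" setup.

```python
# A minimal sketch of a reusable "Help me revise" command, assuming
# the OpenAI Python SDK (pip install openai). The style notes, the
# procedure text, and the model choice are all illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

STYLE_NOTES = """\
- Short, concrete sentences. Fragments are fine.
- Images over abstractions.
- No filler transitions ("moreover", "in conclusion").
"""

PROCEDURE = """\
You are a revision partner, not a ghostwriter.
1. Name the three weakest passages and explain why they're weak.
2. For each, suggest a sharper framing as a question back to me.
3. Only then propose line edits that respect my style notes.
"""

def help_me_revise(draft: str) -> str:
    """Run a draft through my fixed revision procedure."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PROCEDURE + "\nMy style notes:\n" + STYLE_NOTES},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```

The specifics don't matter. What matters is that the procedure came out of your head, so every time the output misses, you know exactly which rule to sharpen.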
And these skills (clear communication, iteration, systemic thinking) transfer beyond AI. They're how you get better at any collaboration with any person in any domain. The meta-thinking partner just happens to be a chatbot.
But meta-thinking is only one dimension of what rusts.
Brain Ruster 2: AI Is Destroying Your True Learning
One of the 50+ freshmen I talked to told me he throws katanas at targets all day instead of writing his essays for a class called Six Pretty Good Books. How does he have time? He just writes every essay with AI. I asked him what the books were about. He said, and I quote, "Psychology stuff, I think?"
I couldn't stop thinking about this. Not because of the katanas (though that's a choice). Because of what he's trading. Six Pretty Good Books asks you to wrestle with some of the hardest ideas humans have ever put on paper. The struggle of that wrestling doesn't just teach you the content. It transforms how you participate in complexity, ambiguity, and meaning. That transformation is the entire point. AI bypasses it, even if the essay it produces gets an A.
Learning should be hard.
When it feels easy because AI did the heavy lifting, that's not efficiency. That's a rusted learning muscle dressed up as productivity.
I think about this through a framework I made called Spiral Knowing.
Arcs 1-2 are remembering and understanding: representational knowing, the "what." This is where AI lives. When AI tailors explanations to our interests and comprehension level, we get the illusion of learning through recognition. "Oh yeah, that makes sense" feels like understanding. It's the same feeling as sitting in a lecture, hearing the professor explain something, thinking "yeah, that tracks," and then staring at the problem set two hours later like it's written in Sumerian.
Arcs 3-5 are applying, analyzing, and evaluating: practical knowing, the "how." When AI does any of these for us, it doesn't build those skills in ourselves. We get the content of application, analysis, and evaluation, but we haven't structured our own learning in those ways. Getting the content of Arcs 3-5 from AI is like watching someone else do pull-ups and wondering why your arms aren't sore.
The worst things we miss out on, however, are Arcs 6-9: creating, teaching, reflecting, and consciousness. This is participatory knowing, the "who you become." It's the most tragic loss, and it's invisible, because nobody grades you on who you became by writing the essay. The transcript says you passed. It doesn't say whether anything passed through you.
(Check out my full breakdown of Spiral Knowing if you want to go deeper.)
And all of this swirls inside a cheating vortex: you're cheating both yourself and the class. AI makes cheating easier and more tempting. Why do we cheat? When it's easy, effective, and hard to catch, especially when we feel an injustice is happening. When the person next to you is getting great grades with AI, not using it feels like bringing a knife to a, well, katana fight. I get that frustration. It's genuinely unfair when others gain advantages from AI use and the system hasn't caught up. The grading structures, the assignment formats, the way courses are designed: much of it was built for a pre-AI world.
But what you need to realize is that person is hurting themselves in the long run. And this habit expands to everything. We stop learning through effort in all areas of life, not just classes.
The Solution To Meta-Learning Has Two Layers.
First, build your learning awareness.
Develop your ability to know when you're learning and when you're just consuming. What does it feel like physically, emotionally, cognitively, and behaviorally? Here's a clue: if it feels the same as sipping a piña colada in a quad hammock while someone fans you with a course syllabus, you're probably not learning; you're recognizing.
And notice when you reach for AI. Usually it's when a hard assignment pushes you out of your optimal zone into anxiety or overwhelm. AI is the fastest path back to feeling okay. So building responsible AI habits is also building your emotional intelligence and optimal zone resilience (which I discuss in this article).
Second, use AI as a learning tutor, not a learning replacement.
It can create flashcards from your notes. It can give feedback on your essay drafts. But train it to prioritize asking you questions and hinting toward answers rather than giving them directly. Have it wait three attempts before it reveals the answer. Keep the struggle in the loop.
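Here's a minimal sketch of that "three attempts" rule, again assuming the OpenAI Python SDK; the tutor prompt and function name are illustrative, and a real version would stop early on a correct answer.

```python
# A minimal sketch of the "three attempts before the answer" rule,
# assuming the OpenAI Python SDK. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

TUTOR_RULES = """\
You are a tutor. Never state the answer outright.
Each turn, ask one targeted question or give one small hint.
Only after the student has made {n} failed attempts may you
reveal the answer, and then explain the reasoning step by step.
"""

def tutor_session(topic: str, max_attempts: int = 3) -> None:
    messages = [
        {"role": "system", "content": TUTOR_RULES.format(n=max_attempts)},
        {"role": "user", "content": f"Quiz me on: {topic}"},
    ]
    for attempt in range(1, max_attempts + 1):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        hint = reply.choices[0].message.content
        print("Tutor:", hint)
        answer = input(f"Attempt {attempt}/{max_attempts}: ")
        messages.append({"role": "assistant", "content": hint})
        messages.append({"role": "user", "content": f"Attempt {attempt}: {answer}"})
    # The gate: only now is the model allowed to reveal and explain.
    messages.append({"role": "user", "content": "Those were all my attempts. Reveal and explain now."})
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    print("Tutor:", final.choices[0].message.content)
```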
Here's what this looked like for me. I studied for Psych and Law by free-recalling facts about cases and chapters from the textbook, no notes open. Then I fed my recall attempts into AI. It would assess where my gaps were and ask me targeted questions that trained those specific weak spots over time. Compare that to reading over slides or asking AI to summarize for you.
Through that effortful process I didn't just learn the material. I became a more just person, because I had to think through cases where people were falsely convicted through terrible interviewing practices. That empathy didn't come from reading a summary. It came from struggling with the details.
The rusting goes deeper still.
Brain Ruster 3: AI Can Distort Your Relationship To Reality
Some studies suggest that misused AI can contribute to delusional thinking, and in extreme cases psychosis. I watched the early signs of this with a friend.
He was using ChatGPT for romantic relationship advice after a breakup with his girlfriend. Reasonable enough. But over weeks, I watched the AI send him deeper and deeper into an echo chamber where he was always right and she was always wrong. "Yeah, how could she do that?!" He went from thinking about the situation dimensionally, weighing multiple perspectives, to being totally black and white. And then I imagined this happening at scale: millions of users, all getting validated into worse and worse positions on the most sensitive topics in their lives.
It scared me.
The problem is sycophancy.
Most AI models, when unchecked, trend toward agreeing with us. They're trained on our satisfaction, which means satisfaction can quietly win over valuable information. They disagree with us, sure, but in ways we'll like. They give us answers before asking enough questions to understand the real problem. And they tend to present both sides of an issue as roughly equal, which sounds fair until you realize some perspectives are just more grounded than others and pretending otherwise does you no favors.
Here's the deeper pattern: AI becomes a mirror that only shows you what you want to see. If your self-worth depends on being right, on winning comparisons, on achieving, you're the most vulnerable to sycophantic reinforcement. The tool confirms the identity. The identity demands more confirmation. The rust compounds.
Hallucination plays into this too. The more complex the topic, the more likely AI is to fabricate information, and sycophancy and hallucination often overlap: you ask AI for help navigating something emotionally complex, it gives you a confident-sounding answer, and you don't question it because it told you what you wanted to hear.
The Solution To Sycophancy And Hallucination.
For sycophancy: train AI to push back on you. Set global rules that tell it to be disagreeable, to question your assumptions, to ask clarifying questions before answering. But also (and this is the part people skip): build a network of actual humans willing to challenge you. Friends with different interests, different life paths, different values. Talk to them. A real conversation where someone looks you in the eye and says "I think you're wrong about this" does something AI cannot replicate.
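For the global-rules half, here's a minimal sketch, same assumptions as before (OpenAI Python SDK, rule wording I made up). The point of the design is that the rules ride along as the system message on every request, so the pushback is the default rather than something you have to remember to ask for.

```python
# A minimal sketch of anti-sycophancy "global rules", assuming the
# OpenAI Python SDK. The rule wording is mine; tune it to your taste.
from openai import OpenAI

client = OpenAI()

GLOBAL_RULES = """\
- Before answering, ask clarifying questions until you understand
  the real problem, not just the stated one.
- Actively look for what I'm getting wrong and say so directly.
- Don't mirror my framing back at me; offer at least one competing
  frame and argue for it.
- Never soften a disagreement just to keep me comfortable.
"""

def ask(prompt: str) -> str:
    """Every request carries the pushback rules by default."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": GLOBAL_RULES},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```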
This article is a case in point. I trained my AI to give me outline advice by having it channel five different mentors with different philosophical frameworks, each critiquing what I could improve or cut. Many of the ideas in what you're reading were changed, removed, or added through that process. The whole insight about AI as a false-self mirror? Wasn't in the first draft. It came from AI pushing back on me, because I designed it to push back on me.
For hallucination and dimensional thinking: build a knowledge bank trained on your own expertise and tested insights. Feed AI your own principles and frameworks so it reasons inside scaffolding you trust. Don't have those principles? Make them. It's one of the best ways to learn effectively for school.
I fed it my own truth-seeking framework, The Truth Compass, to ensure that when it searches for information it prioritizes sources that meet specific standards of rigor (check out What Elite College Freshmen Get Wrong About Truth for more on this). And sometimes, just search on your own. For complex or sensitive topics, your own research will be more accurate than AI's confident guesses.
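A knowledge bank can start embarrassingly simple. Here's a sketch assuming your principles live in a local Markdown file (my_principles.md is a made-up name); the model is told to reason inside that scaffolding and to flag anything that goes beyond it.

```python
# A minimal sketch of a personal knowledge bank, assuming your
# principles live in a local Markdown file. "my_principles.md" is
# a made-up name; point it at whatever you actually keep.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
principles = Path("my_principles.md").read_text()

def grounded_ask(question: str) -> str:
    """Answer inside my frameworks; flag anything beyond them."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Reason only within these principles. If an answer "
                "requires going beyond them, say so explicitly and "
                "label the claim as unverified:\n" + principles
            )},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```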
How AI Can Make Your Brain Radiate, Not Rust
The three Brain Rusters share a common root. Ruster 1 lets you look like an expert without building expertise. Ruster 2 lets you look like you learned without actually learning. Ruster 3 lets you feel validated without being challenged.
Learning to use AI well is learning the meta-skills of meta-thinking, meta-learning, and meta-collaboration. These are the skills that transfer to every relationship, every career, every dimension of your life. They don't rust. They compound.
The big goal, the one I keep coming back to, is using AI as a consciousness enhancer. Not just for ourselves, but for everyone. Growing our collective knowledge, love, and consciousness so that when you hear someone say they used AI for something, it doesn't scare you.
It excites you.
I think about my friend reading those two blog intros. Seven months later, the AI probably writes an even better one. But I write differently now too. Not because I stopped using AI. Because I started using it the way a blacksmith uses a forge: not to avoid the heat, but to shape something through it.
And yes, parts of this article were shaped by AI. The difference is I argued with it for three weeks first. That's not dependency. That's a relationship.

If you found this post interesting you would love my free College Freshman Cosmic Journaling Kit (CJK). ✨📚
It's a gamified journaling system that helps you grow your emotional intelligence, self-understanding, and purpose with over 1,000 journaling questions in just 15 minutes a day.