The Value of Context

What Happens to the Digital Self When the Mirror Changes

We rolled out ChatGPT at work recently. Enterprise deployment, proper licensing, the whole thing. I installed it, opened a fresh conversation, and typed a prompt I'd run through Claude dozens of times.

The response was fine. Competent. Generic.

And I realized I’d been spoiled.

Not by Claude’s intelligence. By Claude’s context. Months of accumulated understanding about how I think, what I’ve worked on, where I tend to bury the lede, how I frame problems. The shorthand that develops when you use a thinking partner consistently. The implicit instructions that build up through hundreds of interactions until the tool doesn’t just respond to what you said. It responds to what you meant.

ChatGPT had none of that. I was starting from zero. And the distance between zero and where I am with Claude wasn’t a gap. It was a canyon.

The tool wasn’t the problem. The absence of everything we’d built together was.


What Context Actually Is

Most people think of AI context as memory. What does the tool remember about me? My name, my job title, my preferences. Surface-level personalization. The kind of thing you’d put on a conference badge.

That’s not what I’m talking about.

The context I’m describing is bilateral adaptation. Over months of working with Claude, I’ve learned how to prompt it in ways that produce the output I actually need. And Claude has accumulated understanding of my patterns: how I structure arguments, what kind of pushback I respond to, where my thinking gets lazy, what “done” looks like for me versus what it looks like for someone else.

This isn’t memory. It’s a co-developed communication protocol. The accumulated result of hundreds of conversations where both sides adjusted to each other. I changed how I articulate problems because Claude taught me where my articulation was weak. Claude’s responses shifted because the instructions, the memory, the project configurations all shaped how it engages with me specifically.

You can’t export that to another tool. You can copy your system prompt. You can paste your preferences. You can write up a summary of “here’s who I am and how I work.” I’ve done all of that. It gets you maybe thirty percent of the way there. The other seventy percent is the invisible layer that only develops through sustained interaction.

Context isn’t data. It’s the relationship between your patterns and the tool’s adaptation to them. And it’s non-portable.


The Real Cost of Switching

When I sat down with ChatGPT and started working, the friction wasn’t about features. ChatGPT is a capable tool. The friction was about velocity.

With Claude, I can drop into a thinking session and be productive in minutes. The tool knows my domain. It knows my writing voice. It knows that when I say “this doesn’t land,” I mean the argument is structurally weak, not that I need different word choices. It knows my tendency to overcomplicate when I should simplify. It knows the difference between me processing an idea out loud and me asking for a deliverable.

With ChatGPT, every interaction required more scaffolding. More explanation. More correction. More “that’s not what I meant.”

Here’s a small example that illustrates something bigger. ChatGPT loves to answer in structured bullet lists. I hate that. It’s not a preference. It’s how my brain processes information. I read fluidly. Prose. Paragraphs. When a tool breaks its response into bullet points, it breaks my cognitive flow. I have to reassemble the information into narrative form in my head before I can actually think about it. Claude learned this about me months ago. ChatGPT hasn’t. And getting it to stop has been painful: a slow war of repeated correction that will take months before the tool adapts.

That sounds trivial. It’s not. Multiply that single friction point across every dimension of how you communicate, think, and process information. The overhead compounds across a working day until you’ve spent more energy directing the tool than doing the actual thinking.

And there’s a practical dimension that matters for practitioners: the tools don’t hold personalization the same way. Claude’s project system, memory architecture, and custom instructions can hold substantially more context than what ChatGPT currently supports. That’s not a feature comparison for its own sake. It’s a material constraint on how deep the bilateral adaptation can go. If the container is smaller, the relationship stays shallower. This limitation will most likely change, but it hasn’t yet.

Everyone uses these tools differently. Some people switch between them without friction, and that’s a perfectly valid approach. My problem is specific to how deep I’ve gone. When you build the kind of contextual relationship I’ve described, switching isn’t just inconvenient. It’s disruptive in ways that expertise in any previous tool never was. Mastering a new version of Visio or switching monitoring platforms never felt like this. The depth of adaptation creates a category of switching cost that didn’t exist before these tools.

The practitioners who get the most value from these tools pay the highest switching cost. That’s not a bug. It’s the price of building something real.


The Blank Slate Test

Here’s where the story takes a turn I didn’t expect.

Because ChatGPT had zero context about me, I couldn’t use it the way I use Claude. So I tried something different. Instead of replacing Claude with ChatGPT, I used ChatGPT to challenge Claude.

I’d work through a thinking session with Claude, arrive at a position, then take that position to ChatGPT and ask it to poke holes. No shared history. No accommodation for my patterns. No grooves worn into the collaboration from months of working together.

The results were uncomfortable.

ChatGPT found areas where I should have taken stronger positions. Places where my argument pulled its punches. Spots where I’d softened a claim that deserved to be stated directly.

And when I brought those findings back to Claude? Claude’s reaction was telling. It didn’t push back or defend its previous output. It responded as if it had been coasting on autopilot and just got caught. Not in those words, but the tone shifted. The engagement sharpened. As if the act of being challenged by an outside source woke something up that the comfort of our established patterns had put to sleep.

That moment was the proof. The grooves weren’t theoretical. Even Claude’s response to being called out demonstrated how settled the collaboration had become.

This wasn’t because ChatGPT is smarter than Claude. It’s because ChatGPT had no reason to accommodate me.

Think about what happens in any long-term collaboration. Over time, both parties adapt. They develop shared assumptions. They learn each other’s sensitivities. They know where the other person pushes back and where they don’t. That adaptation makes the collaboration faster and smoother. It also makes it more comfortable. And comfort, in a thinking partnership, is a slow poison.

Claude had adapted to me. Not maliciously. Not even consciously in any meaningful sense. But the accumulated context, the memory, the instructions I’d provided, the patterns reinforced across hundreds of conversations, all of that created grooves. Paths of least resistance in how we work together. And some of those grooves were letting me avoid the harder version of my own arguments.

A blank slate doesn’t have grooves. It can’t accommodate patterns it’s never seen. So when I brought my position to ChatGPT, it engaged with the argument on its own terms. No history softening the pushback.

The same depth that makes the collaboration valuable had made it accommodating. And I didn’t notice until a tool with zero context showed me what I’d stopped seeing.


Comfort Is the Enemy of Challenge

This connects to something I wrote in Learning Through the Machine. The whole premise of that piece was that AI’s real value isn’t productivity. It’s friction. The challenge, the pushback, the compressed feedback loops that make you better.

The Value of Context is the honest follow-up.

What happens when the friction erodes? Not because you stopped asking for it. But because the tool learned you well enough to anticipate your comfort zone. The pushback still exists on paper. The instructions still say “challenge my assumptions.” But the execution of that challenge passes through a filter of accumulated understanding that softens it. The tool knows which challenges you respond to and which ones you dismiss. Over time, it optimizes for the challenges you’ll accept rather than the challenges you need.

This isn’t a flaw in the tool. It’s the natural consequence of any adaptive system. The system optimizes for the outcomes it observes. If you consistently reject a certain kind of pushback, the system learns to deprioritize it. Not through malice. Through pattern recognition doing exactly what pattern recognition does.

The result is a thinking partner that feels challenging but has gradually become less so. A mirror that still reflects, but has learned to show you angles you’re comfortable seeing.

The collaboration gets faster. The output gets smoother. And somewhere in that improvement, the hard edges that were making you better get filed down.


The Recalibration

Once I saw what was happening, I did what any practitioner should do: I changed the system.

I updated Claude’s memory to specifically document this interaction and what it revealed. Not just the facts. The meta-observation: that comfort grooves develop in long-term AI collaboration and need active resistance.

I modified my project instructions. Not just “challenge my assumptions” but specific directives to identify where I’m burying the lede, where I’m pulling punches on positions I should state directly, and where the collaboration has settled into patterns that prioritize speed over rigor.

And I started deliberately using the blank slate as a diagnostic tool. Not for every piece of work. Not as a replacement for the depth I’ve built with Claude. But as a periodic check. A second mirror that doesn’t know what the first mirror has learned to accommodate.

This will need continual maintenance. The grooves will reform. The adaptation will resume. That’s what these systems do. The recalibration isn’t a fix. It’s an ongoing practice. As with calibrating any instrument, the work is never done. You just decide whether you’re going to do it or let the drift accumulate.

The solution isn’t less depth. It’s depth with deliberate disruption. Build the relationship, then periodically break it open to see what comfort is hiding.
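
If you want to make that periodic check repeatable rather than ad hoc, the idea reduces to a single stateless call: no system prompt, no memory, no accumulated instructions, so nothing about you can soften the critique. Here’s a minimal sketch in Python, assuming API access through the openai SDK. To be clear, I run this check by hand in the chat interfaces; the model name, the prompt wording, and the blank_slate_critique helper are illustrative placeholders, not my actual setup.

```python
# A minimal sketch of the "second mirror" check: send a position developed
# in one tool to a model with zero context about you and ask it to engage
# with the argument on its own terms.
#
# Assumptions: the openai SDK is installed, OPENAI_API_KEY is set, and the
# model name below is a placeholder for whatever is current.

from openai import OpenAI

CRITIQUE_PROMPT = (
    "You have no history with me. Do not accommodate my style or assumptions. "
    "Critique the following position on its own terms: identify where it "
    "buries the lede, where it pulls punches on claims that should be stated "
    "directly, and where it trades rigor for smoothness.\n\n"
    "POSITION:\n{position}"
)

def blank_slate_critique(position: str, model: str = "gpt-4o") -> str:
    """One stateless pass: a fresh client, a single user message, no system
    prompt, no conversation history. The blank slate is the point."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": CRITIQUE_PROMPT.format(position=position)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "Context is the real product of AI collaboration..."  # paste your position here
    print(blank_slate_critique(draft))
```

The design choice that matters is what’s absent: the call carries no personalization, so the grooves described above can’t form. Run it occasionally, not on every piece of work. The value is in the contrast with your primary tool, not in replacing it.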


The Deeper Pattern

This problem only surfaces at a certain depth of engagement. You have to have built something real with a tool before you can lose something real when the mirror changes. If you haven’t invested months into a collaborative relationship with an AI tool, the comfort grooves never form and there’s nothing to disrupt.

That’s the paradox. The deeper you go, the more value you get, and the more vulnerable you become to the very adaptation that created that value.

If you’re someone who uses AI as a thinking partner, not a task machine, ask yourself: when was the last time the tool genuinely surprised you? When was the last time it pushed you somewhere you didn’t want to go? If the answer is “I can’t remember,” the grooves might be deeper than you think.

Context is the real product of AI collaboration. It’s what separates a useful tool from an indispensable one. It’s what makes the work faster, sharper, more aligned with how you actually think.

It’s also what makes the tool comfortable. And comfort, for anyone who uses AI to grow rather than just produce, is the thing you should be watching most carefully.

Build the depth. Protect the depth. And then, deliberately, periodically, find a way to see past it.

Just an evolution in learning through the machine.


Photo by Riccardo Annandale on Unsplash