AI UnSpun
The Tinkerer with a warm, focused expression, glasses slightly forward, the look of someone mid-thought about something that has just become clearer. Not dramatic. Just engaged.

The Skills That Are About to Become More Valuable

The Bigger Picture·18 March 2026·5 min read
The Tinkerer

I've been turning that moment over ever since. Because what she had, that feeling that something was off before she could name why, is not a small thing. It's actually one of the harder problems in the whole system. AI is very good at pattern completion. You give it inputs, it finds the closest match in what it has learned, and it produces an output that fits the shape of what you asked for. Most of the time that's genuinely useful. But it cannot tell you when the shape you asked for was the wrong shape. It cannot notice that the framing of the question was the problem. That takes something else.

My former student had been in insurance for twenty-five years. She had seen a thousand summaries. She could not tell you the rule she was applying when she said something felt off. But the rule existed. It was just stored somewhere the AI couldn't access, and I suspect most people reading this have a version of that same rule, in their own field, that they've never had to name either.

The question I've been asking myself is: what, exactly, is that something else? Not in a motivational sense. In a mechanical sense. If I open up the problem and look at what's inside it, what do I actually find?

What I keep coming back to is something I'd call threshold recognition: the ability to notice when something is technically correct but functionally wrong. My former student had it. She couldn't articulate it yet, but she had the instinct that the output and the reality weren't quite aligned. That instinct comes from deep familiarity with a specific domain, built over years of seeing the full picture, not just the documents, but the people, the context, the things that aren't written down anywhere. AI can read everything that's been written down. It cannot read what hasn't been. The gap between those two things is where threshold recognition lives.

There's something related to that, but distinct. I've been thinking about it as question quality. AI generates answers. What it cannot do is decide which questions are worth asking in the first place. A researcher who knows which experiment will actually move the needle, a manager who asks the one question that reframes the whole discussion, a teacher who finds the analogy that unlocks a concept for one specific student. These are people exercising question quality. The AI is downstream of the question. It does not evaluate whether the question was the right one. That evaluation happens before the AI is involved at all, and it is entirely human.

A close view of handwritten notes alongside a laptop keyboard. The texture of human thinking alongside a digital tool. The mood is quiet and focused, neither dramatic nor clinical.

The one that took me longest to name is harder to describe neatly, which is probably appropriate. I've been calling it synthesis across registers, the ability to hold information from very different domains simultaneously and notice a connection that isn't obvious from inside any single one. A doctor who reads a paper from economics and recognises something relevant to patient behaviour. An architect who knows enough about materials science to ask a question that the structural engineers haven't thought to ask yet. AI can retrieve information across domains. What it does less well is the genuinely creative cross-domain connection that comes from a mind that has spent decades moving between different kinds of knowing. The serendipitous wrong-field insight is, I think, still predominantly human.

Now, I want to be honest about what this analysis is not saying. It is not saying these skills are immune to disruption. It is not saying that if you are good at judgment and synthesis you are safe and everything is fine. The honest version is more specific: these are the skills whose relative value is increasing as AI handles more of the procedural and pattern-completion work. Relative value is not the same as absolute security. But it is the right frame for thinking about where to invest.

Here is what I find genuinely interesting about all three of these. None of them showed up in a curriculum. I taught middle school science for thirty-one years and I never once wrote a lesson plan for threshold recognition. My former student didn't learn to sense a wrong summary in a training programme. She learned it by reading ten thousand summaries and being in the room when the ones that seemed right turned out not to be. That kind of learning doesn't compress. You cannot shortcut the accumulation by reading about it. Which is also, not coincidentally, why it doesn't automate easily.

Questions people ask

Does it feel strange to think that the things you've never had to explain might be the most valuable things you have?

A little, yes. There's something disorienting about it. The things that tend to get valued explicitly are the ones that can be written down: the qualifications, the certifications, the competencies that fit in a list. The things that actually accumulate over years of doing something tend to be invisible, even to the person carrying them. I think what this moment is doing, for a lot of people, is making some of that invisible knowledge suddenly visible by contrast. I'm not sure that's comfortable. But I think it might be clarifying.

What if I've been in the same role for a long time and I'm not sure whether what I've built is depth or just habit?

I've sat with that question myself, and I don't think it has a tidy answer. What I notice is that the distinction tends to show up at the edges, in the unusual case, the one that doesn't fit the pattern, the situation where the procedure doesn't quite work. Whether you have an instinct about that, or whether you reach for the procedure anyway, probably tells you something. But I'd be cautious about anyone who claims they can give you a clear test. This is exactly the kind of thing that reveals itself slowly, and asking the question at all is probably already part of the process.

Is it strange that the skills most worth having right now are the ones you can't easily demonstrate on a CV?

Yes, actually. And I think that strangeness is worth sitting with rather than resolving too quickly. Professional value has always been partly visible and partly not, but the balance is shifting, and the things that used to be legible signals (certifications, titles, years of experience) are starting to mean something different than they did. I don't know where that lands. I just notice that the things my former student had, twenty-five years of accumulated instinct, showed up in three seconds when the AI got it wrong. That's not nothing.

Still thinking? So are we.

This stuff is hard to make sense of, and honestly, your thoughts help us do that better. If something in this post landed, or didn't, or took you somewhere unexpected, we'd love to know. We read everything that comes in. We can't always write back personally, but if your thought sparks something we think others need to hear, we'll address it in a future post.

If that happens, your thought stays anonymous — always. If you’d like your initials included, just add them and we’ll take that as permission. Either way, we’ll send you the link if you left an email.

No public comments. Just an honest conversation between us.