AI UnSpun
I keep being asked whether AI will replace writers, doctors, and lawyers. I find the question rather less interesting than the one hiding behind it.

Is AI Going to Replace Writers, Doctors, and Lawyers?

Good Questions · The Gentleman · 18 March 2026 · 2 min read

Not entirely. Not soon. And not in the way the question implies, which matters, because the way you frame this determines whether the answer is useful or merely frightening.

We looked at the research most often cited in these headlines (a series of large economic studies from Oxford, McKinsey, and the World Economic Forum) and traced what they actually measured, which is not always what the coverage said they measured.

The honest difficulty is this: every job is a collection of different tasks, and they do not all sit in the same place. A doctor diagnosing from an imaging scan is doing something quite different from a doctor sitting across from a patient who has just been given difficult news. A lawyer reviewing a standard commercial contract is doing something quite different from a lawyer in a negotiation where the other side is not acting in good faith. A writer producing content to a brief is doing something quite different from a writer working out what to say and why it matters. The question "will AI replace them" treats each of these as one thing. They are not.

The research, when you read it carefully, almost always says tasks rather than jobs. When McKinsey estimated that roughly 30% of work activities globally could be automated with current technology, they meant tasks within existing jobs, not 30% of jobs disappearing. That distinction rarely survives the headline. The result is a class of coverage that is technically sourced and practically misleading.
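The compression that headline makes can be shown with a toy sketch. The numbers and job breakdowns below are invented for illustration, not taken from the studies; the point is only that "share of tasks that are automatable" and "share of jobs that disappear" are different calculations over the same data.

```python
# Toy illustration (invented tasks and flags): why "30% of tasks
# could be automated" does not mean "30% of jobs at risk".
jobs = {
    "lawyer": [("document review", True),
               ("negotiation", False),
               ("client management", False)],
    "doctor": [("discharge summaries", True),
               ("diagnostic conversation", False),
               ("bedside communication", False)],
    "writer": [("standard briefings", True),
               ("original argument", False),
               ("editorial judgment", False)],
}

# What the studies measure: average share of automatable tasks per job.
task_share = sum(
    sum(auto for _, auto in tasks) / len(tasks) for tasks in jobs.values()
) / len(jobs)

# What the headlines imply: share of jobs whose *every* task is automatable.
jobs_fully_automatable = sum(
    all(auto for _, auto in tasks) for tasks in jobs.values()
) / len(jobs)

print(f"automatable tasks: {task_share:.0%}")               # prints 33%
print(f"fully automatable jobs: {jobs_fully_automatable:.0%}")  # prints 0%
```

Same data, two honest numbers: a third of the tasks, none of the jobs.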

In law, this already has a shape. AI drafts contracts, reviews documents for inconsistencies, and searches legal precedent faster than a team of junior associates. Tools like Harvey and CoCounsel are in active use at major firms now. Entry-level document review work is genuinely under pressure, not as a distant possibility but as a current reality. Senior judgment, courtroom advocacy, complex negotiation, and the management of a client who is frightened and needs someone to own the outcome alongside them: these are not things AI does convincingly.

In medicine, AI reads radiology scans with accuracy that matches or exceeds specialists in controlled studies. It flags drug interactions, generates discharge summaries, identifies patterns across patient populations that no individual clinician could see. Radiologists and pathologists are watching this closely, and sensibly so. But diagnosis is not only pattern recognition. It is also conversation, probability-weighting under uncertainty, and the particular competence of saying to someone "here is what I believe is happening and why" while looking at them directly.

In writing, AI generates text at volume. But most professional writing, the kind that changes something, requires a specific perspective, a lived experience, or a judgement that cannot be assembled from a training set. The AI industry has, with some consistency, generated the kind of writing that could have been written by anyone. That remains a meaningful limitation.

The honest editorial observation here is that the professions most genuinely at risk are the ones built predominantly around high-volume procedural work, regardless of what they are called. A lawyer who spends most of their time on routine document review is in a different position than one who spends most of their time on judgment. A radiologist reading standard scans is in a different position than a physician managing a case that does not fit the pattern.

[Table: specific tasks within law, medicine, and writing that AI currently handles well versus those it handles poorly, illustrating that job replacement is a task-level question, not a wholesale one.]

If you are a solicitor who spends two days a week reviewing routine contracts, or a GP who writes broadly similar referral letters twenty times a month, or a journalist who produces standard briefings on a tight deadline, those are the specific tasks worth watching. Not the job, not the title. The task. The parts of your week that look most like a repeatable procedure with a predictable output are the parts that will change first. The parts that require you to hold judgment, uncertainty, and responsibility for an outcome at the same time are considerably harder to hand to a machine, and considerably more worth building toward.



If you're wondering how your specific work maps onto this, the AI Prescription can help you think it through. It's built for exactly this kind of question.

Questions people ask

What does the research actually say about AI and job displacement, and why does the coverage keep getting it wrong?

The most-cited studies measure the automatable proportion of tasks within jobs, not the probability of entire jobs disappearing. The coverage tends to compress "30% of tasks in a job could be automated" into "30% of jobs at risk," which is a meaningful difference. The short version: the research is more nuanced than the headlines, and the nuance is what determines whether the answer is useful.

Are entry-level positions in these fields more exposed than senior ones?

Generally, yes, not because junior professionals are less capable, but because junior roles are disproportionately built around the procedural, high-volume tasks that AI handles most reliably. Document review, routine report generation, first-draft work: these are tasks that junior lawyers, junior doctors, and junior writers have traditionally done at volume as part of learning the profession. That part of the learning path is changing, which raises a different and more complicated question about how you develop expertise without the procedural foundation.

What should someone actually do if they work in one of these professions right now?

The most practical thing is to get specific, not "will AI affect my industry" but "which tasks in my week could a good AI tool do reasonably well in two years." That list is your early signal. The tasks that remain off the list, the ones that require judgment, accountability, or a specific kind of human presence, are where professional investment makes most sense right now.