An Incomplete List of Jobs That AI Was Supposed to Kill but Actually Created

Everybody knows AI is coming for your job. It’s the most popular take on the internet. Type “AI will replace” into any search bar… and autocomplete finishes the sentence before you do.

Here’s what actually happened. The World Economic Forum ran the numbers in its 2025 Future of Jobs Report. By 2030, AI will displace 92 million jobs. It will also create 170 million new ones, most of which didn’t exist before. That’s a net gain of 78 million.

The List (So Far)

Prompt Engineer

We automated programming and accidentally invented a new kind of programming that’s just typing English very carefully. Average salary: $130K. The job didn’t exist in 2022. By 2025, postings had grown 135%. The skill is knowing that “write me a function” and “write me a function that handles edge cases, returns typed errors, and doesn’t hallucinate an API that doesn’t exist” are very different sentences.
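The gap between those two sentences is mostly explicit constraints. A minimal sketch of the idea, where the helper function and the constraint list are hypothetical, not anyone’s actual playbook:

```python
# Hypothetical helper: turn a vague request into a careful prompt by
# attaching explicit requirements. The constraints below are illustrative.
def careful_prompt(task: str, constraints: list[str]) -> str:
    lines = [task, "", "Requirements:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = careful_prompt(
    "Write me a function that parses ISO 8601 timestamps.",
    [
        "Handle edge cases (empty input, timezone offsets).",
        "Return typed errors instead of raising bare exceptions.",
        "Use only the standard library; do not invent APIs.",
    ],
)
print(prompt)
```

The $130K part is knowing which constraints to put in that list for a given task. The string formatting is free.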

Vibe Coder

This is a real job title. There are 372 open positions on ZipRecruiter right now. Upwork has a dedicated marketplace category. One company posted a listing for “Vibe Coder, Applied AI & Rapid MVP Builder” paying $75K to $95K. There is an entire job board called VibeCodeCareers.com. Second Talent published an official Vibe Coder Job Description Template. I checked. Twice. We are living in a simulation.

Context Engineer

A Prompt Engineer got promoted. Context Engineers design systems that give AI the right information at the right time. Not the prompt. The stuff around the prompt. The job is making sure the AI knows what it’s talking about before it starts talking. You’d think that would be automatic. It is not.

AI Trainer (Formerly “Intern”)

Same tasks as before. Writing code, solving problems, answering questions. But the output trains a model instead of building a career. Pay ranges from $15/hr for basic annotation to $200/hr for specialized domain work. AI labs are spending over a billion dollars a year on human training data. Someone has to be the human.

Agent Manager

Harvard Business Review wrote about this role in February. Microsoft calls it “agent boss.” The job is tracking quality, refining prompts, managing handoffs, and figuring out why the agent just told a customer something completely made up. A third of managers now plan to hire people whose primary job is managing AI agents. Not building them. Babysitting them.

Hallucination Auditor

Reads AI output and checks whether the facts are real. The citations. The statistics. The API endpoints. The version numbers the agent invented for a library that’s still on 1.4. A Deloitte team skipped this step and delivered a $440,000 government report with at least twenty fabricated sources. Then they did it again on a million-dollar report. Full-time hallucination auditing is not overkill. It is, apparently, not enough.

AI Red Teamer

Gets paid to break AI systems on purpose. Salary: $60K to $160K depending on how good you are at tricking a language model into saying things it shouldn’t. Only 14% of organizations believe they have enough AI security talent. That means 86% of organizations are hoping nobody tries.

GEO Strategist

SEO, but for AI. GEO stands for Generative Engine Optimization. The job is making sure your company shows up when someone asks an AI a question instead of Googling it. Same game, different referee. The referee hallucinates sometimes.

Data Labeler (Specialized)

Simple labeling pays $15/hr. Medical annotation pays $50 to $100/hr. The gap tells you everything about where AI actually struggles. Labeling a picture of a dog is easy. Labeling a radiology scan is hard. The harder the task is for humans, the more humans get paid to teach the machine. The job AI was supposed to kill (repetitive classification) turned into the job AI can’t live without.

Lore Engineer

Not an official title yet. Give it six months. This person maintains documentation so AI agents don’t develop false beliefs about the codebase. When an agent keeps rebuilding a feature you deleted because it found a stale Confluence page from 2023, someone has to hunt down the ghost and kill it. We used to call this “technical writing.” Now it pays more because the reader is a bot that takes everything literally.
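The ghost hunt itself is mostly a filesystem walk. A minimal sketch, assuming the docs live in a local directory and “stale” just means “untouched for two years” — both of which are assumptions, not a standard:

```python
import time
from pathlib import Path

# Hypothetical staleness sweep: list docs older than a cutoff so a human
# can decide whether they describe features that no longer exist.
def find_stale_docs(root: str, max_age_days: int = 730) -> list[str]:
    cutoff = time.time() - max_age_days * 86400
    return sorted(
        str(p) for p in Path(root).rglob("*.md")
        if p.stat().st_mtime < cutoff
    )
```

Deciding which of those files is the ghost that keeps resurrecting deleted features is the part that still pays like engineering.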

AI Ethics Officer

Someone whose job is making sure the AI doesn’t do the things that humans were already doing. The role requires balancing privacy, fairness, transparency, and the company’s desire to ship fast and worry later. The “worry later” part keeps the Ethics Officer employed indefinitely.

Model Whisperer

The unofficial title for the person on every team who’s just… better at talking to AI than everyone else. Nobody trained them. They don’t have a certificate. They just know that adding “think step by step” to the end of a prompt makes the output 40% better, and they can’t really explain why. Every team has one. Nobody put it in their job description. Yet.

The Pattern

Every one of these jobs exists because AI is good enough to be useful and bad enough to need supervision.

That’s the whole thing. AI can write code, but someone has to check it. AI can answer questions, but someone has to verify the answers. AI can make decisions, but someone has to make sure those decisions aren’t insane. AI can generate content, but someone has to make sure it doesn’t sound like it was generated by AI. (Hi.)

The jobs didn’t disappear. They shapeshifted. The new work is weirder, more specialized, and occasionally involves losing an argument to software about whether a function is deprecated. It usually isn’t. But sometimes it is. That “sometimes” is what keeps things interesting.
