A leading AI safety researcher, Dr. Roman Yampolskiy, has issued a dire prediction: artificial general intelligence (AGI) could eliminate up to 99% of human jobs by 2027, leaving society grappling with unprecedented unemployment. Speaking about the rapid pace of AI advancement, Yampolskiy argues that current models already threaten 60% of roles, and that superintelligence will surpass humans in every domain soon after.
The Coming Job Apocalypse
Yampolskiy envisions AGI arriving as early as 2027, automating computer-based work first, then physical labour via humanoid robots by 2030. Unlike past industrial shifts, this wave offers no obvious retraining path, as AI will dominate creative fields like media and content creation with superior speed, accuracy, and data access. Unemployment could hit levels “never seen before,” potentially sparking societal collapse without intervention.
The Only 5 Jobs That Might Endure
Yampolskiy identifies a narrow set of resilient roles, though they would support only a fraction of today’s workforce:
- Personal Services for the Wealthy: High-net-worth individuals may prefer human accountants, assistants, or advisors (e.g., Warren Buffett sticking with his human accountant).
- Emotion-Centred Roles: Jobs demanding empathy, trust, and human connection, like certain therapy or counselling, where lived experience matters.
- AI Oversight and Regulation: Humans needed to monitor, control, and regulate AI for safety and ethics.
- AI Intermediaries and Explainers: Experts bridging AI for organisations lacking technical know-how.
- Prompt Engineers and AI Handlers: Temporary roles optimising AI interactions, though these will diminish as systems learn to self-improve.
These exceptions hinge either on a human "fetish" for authenticity or on AI's transitional needs, but Yampolskiy warns they won't sustain mass employment.
Beyond 2027: Singularity Risks
By 2045, Yampolskiy foresees the technological singularity: the point at which AI progress escapes human control. Regulation might delay it but not prevent it, at best buying time for adaptation. He urges a focus on AI safety in the face of risks like superintelligence outpacing humanity.