Source: LessWrong
The concept of gradual disempowerment, in which humans lose agency incrementally rather than catastrophically, has become a serious organizing principle for AI safety research at major labs such as DeepMind. Researchers are converging on a concern that requires neither superintelligence nor any single dramatic moment: systems can erode human decision-making power through the accumulation of small capability gains and dependency lock-in. On this view, the governance problem is primarily one of institutional design and power dynamics, not technical alignment alone. This reframes AI risk from a philosophical thought experiment into an operational problem that existing organizations already face, one that is harder to dismiss and easier for non-specialists to reason about.