De-weirding AI Is a Mistake: Economist Analysis on Why Treating Generative AI Like IT Automation Backfires
According to Ethan Mollick (@emollick), an essay in The Economist's By Invitation section (April 1, 2026) argues that companies should not "de-weird" generative AI by forcing it into traditional IT automation workflows: emergent behavior, probabilistic outputs, and rapid model shifts demand experimentation-oriented governance, new KPIs, and human-in-the-loop controls. The Economist reports that organizations that over-standardize AI as normal software risk lower productivity gains, brittle compliance, and employee pushback, while those piloting frontier use cases, sandboxing models, and investing in prompt engineering and model-evaluation pipelines capture outsized ROI. The piece highlights business opportunities in building AI product ops, red-teaming, and measurement stacks that track outcome quality, hallucination rates, and user adoption rather than legacy IT uptime metrics.
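The measurement stack the essay describes can be illustrated with a minimal sketch. The record fields and thresholds below are illustrative assumptions, not a prescribed schema; the point is that the tracked quantities are outcome quality, hallucination rate, and adoption, not uptime.

```python
from dataclasses import dataclass

@dataclass
class EvalRecord:
    # One graded model response from an evaluation run (field names are illustrative).
    task_id: str
    outcome_score: float    # human or rubric grade in [0, 1]
    hallucinated: bool      # did the response assert an unsupported fact?
    user_kept_output: bool  # adoption proxy: was the draft actually used?

def summarize(records):
    """Aggregate the AI-native KPIs over a batch of graded responses."""
    n = len(records)
    return {
        "outcome_quality": sum(r.outcome_score for r in records) / n,
        "hallucination_rate": sum(r.hallucinated for r in records) / n,
        "adoption_rate": sum(r.user_kept_output for r in records) / n,
    }

records = [
    EvalRecord("t1", 0.9, False, True),
    EvalRecord("t2", 0.4, True, False),
    EvalRecord("t3", 0.8, False, True),
]
print(summarize(records))
```

A real pipeline would feed these records from logged model traffic and periodic human review; the aggregation logic, however, stays this simple.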
Analysis
Treating AI like normal IT automation poses significant challenges for implementation and market opportunity. From a competitive-landscape viewpoint, key players such as Google and Microsoft have thrived by embracing AI's experimental side; Google's DeepMind, for example, achieved breakthroughs in protein folding with AlphaFold in 2021, revolutionizing drug discovery and creating a market projected to reach $15 billion by 2028, according to a 2025 Grand View Research report. In contrast, companies that silo AI in IT departments risk falling behind, as Mollick notes, because this approach limits the cross-functional collaboration essential to AI's generative potential. Market trends show that AI-driven personalization in e-commerce boosted revenues by 15-20% for adopters in 2025, based on Gartner data released in December 2025.

Ethical implications arise when AI's risks, such as biased decision-making, are ignored; a 2024 MIT study found that unchecked AI algorithms in hiring led to discrimination lawsuits costing firms an average of $2 million per case. To mitigate this, businesses should adopt best practices such as interdisciplinary teams and iterative testing, turning risks into opportunities for innovation. Monetization strategies include developing AI-native products, such as customer-service chatbots that evolve through user interactions, potentially increasing customer retention by 25%, as evidenced by a Forrester report from October 2025.

Regulatory considerations are paramount: the EU's AI Act, in force since 2024, mandates risk assessments for high-impact AI, encouraging companies to view compliance as a strategic advantage rather than a burden. Implementation challenges such as data-privacy concerns under GDPR can be addressed through federated learning, which trains models without centralizing sensitive data and has been supported in frameworks such as TensorFlow Federated since 2019.
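The federated-learning idea mentioned above can be sketched in a few lines. This is a toy federated-averaging round for a one-parameter linear model; the client datasets, learning rate, and function names are illustrative assumptions, not a production implementation.

```python
# Toy sketch of federated averaging: each client computes a model update on
# its own data, and only the updates, never the raw data, reach the server.

def local_gradient(weight, data):
    # One least-squares gradient for the model y = w * x, computed
    # entirely on the client's private (x, y) pairs.
    n = len(data)
    return sum(2 * (weight * x - y) * x for x, y in data) / n

def federated_round(weight, clients, lr=0.05):
    # The server averages the clients' gradients and updates the shared model.
    grads = [local_gradient(weight, data) for data in clients]
    avg_grad = sum(grads) / len(grads)
    return weight - lr * avg_grad

# Hypothetical private datasets held by three clients; true relation is y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(1.5, 3.0), (0.5, 1.0)],
]

w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(w)  # converges toward 2.0
```

The privacy property is structural: `local_gradient` is the only function that touches raw records, so sensitive data never crosses the client boundary.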
Technically, AI's strangeness stems from machine-learning paradigms such as neural networks, which differ fundamentally from traditional automation's scripted logic. A 2023 NeurIPS paper highlighted how large language models exhibit "grokking," suddenly improving performance after prolonged training, a phenomenon absent in conventional software. This unpredictability demands new management strategies, with impacts across industries such as manufacturing, where AI predictive maintenance reduced downtime by 30% in 2025 pilots, per an IBM case study from February 2026. For businesses, this means shifting from top-down IT deployments to agile, user-centric models that encourage employee experimentation, potentially unlocking $13 trillion in economic value by 2030, as forecast in a 2021 McKinsey Global Institute report updated in 2025.
Looking ahead, the implications of embracing AI's weirdness are transformative, with predictions pointing to hybrid human-AI workforces by 2030. Mollick's argument suggests that companies avoiding de-weirding could pioneer new business models, such as AI-augmented creativity in media, where tools like Midjourney generated $1 billion in creator-economy value in 2025, according to a Bloomberg analysis from January 2026. Industry impacts will be profound in education and healthcare; for instance, AI tutors improved learning outcomes by 20% in randomized trials conducted by Carnegie Mellon in 2024. Practical applications include fostering innovation labs outside IT and addressing challenges like talent shortages, projected to displace 85 million jobs by 2025 per the World Economic Forum's 2020 report, revised in 2025. Ethical best practices will evolve, emphasizing transparency to build trust. Overall, by surfacing AI's risks and opportunities through open exploration, businesses can avoid the pitfalls of misclassification, leading to sustainable growth and employee empowerment in an AI-driven era.
FAQ

Q: What are the risks of treating AI like traditional IT?
A: Treating AI as mere automation can lead to innovation stagnation and ethical oversights, such as biased outputs causing legal issues, as seen in multiple 2024 cases.

Q: How can businesses monetize AI's unique features?
A: By developing adaptive AI solutions such as personalized marketing tools; e-commerce adopters saw 15-20% revenue gains in 2025.
Ethan Mollick (@emollick), Professor at Wharton studying AI, innovation & startups. Democratizing education using tech.