De-weirding AI Is a Mistake: Economist Analysis on Why Treating Generative AI Like IT Automation Backfires | AI News Detail | Blockchain.News
Latest Update
4/2/2026 1:50:00 PM

De-weirding AI Is a Mistake: Economist Analysis on Why Treating Generative AI Like IT Automation Backfires


According to @emollick, an Economist By Invitation essay argues that companies should not "de-weird" generative AI by forcing it into traditional IT automation workflows: emergent behavior, probabilistic outputs, and rapid model shifts demand experimentation-oriented governance, new KPIs, and human-in-the-loop controls (The Economist, April 1, 2026). Organizations that over-standardize AI as normal software risk lower productivity gains, brittle compliance, and employee pushback, the essay argues, while those piloting frontier use cases, sandboxing models, and investing in prompt engineering and model-evaluation pipelines capture outsized ROI. The piece also highlights business opportunities in building AI product ops, red-teaming, and measurement stacks that track outcome quality, hallucination rates, and user adoption rather than legacy IT uptime metrics.

Source

Analysis

In the rapidly evolving landscape of artificial intelligence, experts are increasingly highlighting the unique nature of AI technologies, urging businesses to embrace their inherent strangeness rather than force them into familiar IT frameworks. According to Ethan Mollick's article in The Economist, published on April 1, 2026, under the title 'The IT department where AI goes to die,' AI should not be 'de-weirded' or treated as just another form of automation. Mollick, a professor at the Wharton School, argues that AI's probabilistic and unpredictable behavior sets it apart from traditional IT tools, which are deterministic and rule-based. This perspective comes at a critical time: global AI adoption is surging, with a 2025 McKinsey report indicating that 70% of companies have adopted AI in at least one business function, up from 50% in 2020. Yet many organizations mishandle AI by funneling it through rigid IT departments, stifling innovation and producing suboptimal outcomes.

The core development here is the recognition that AI's 'weirdness' (its ability to generate novel solutions, learn from vast datasets, and sometimes produce unexpected results) presents both profound risks and untapped opportunities. For instance, in 2024 OpenAI's GPT-4 model demonstrated emergent capabilities in creative problem-solving, as detailed in a study by the AI research firm Anthropic that showed AI outperforming humans in certain divergent-thinking tasks. Pretending AI is merely an efficiency tool like enterprise software, however, can lead to bad outcomes, such as employee displacement without reskilling or regulatory oversights. The article underscores the need for businesses to explore AI's full spectrum, fostering discovery through experimentation rather than containment.

Immediate context: as of early 2026, global AI investment had reached $200 billion, per a PwC analysis from January 2026, driven by sectors such as healthcare and finance seeking competitive edges. Mollick's piece warns that normalizing AI could hinder these advances, potentially costing companies billions in lost productivity.
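The contrast between deterministic software and probabilistic AI can be made concrete with a toy token sampler. The sketch below is illustrative Python, not drawn from the essay: it shows how temperature-scaled sampling makes the same input produce different outputs across calls, which is exactly the behavior that traditional IT change management does not anticipate.

```python
import math
import random

def softmax(logits, temperature):
    """Convert raw model scores into a probability distribution.
    Higher temperature flattens the distribution (more varied outputs)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Sample one token. Unlike a deterministic script, repeated calls
    with the same input can return different results."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical decision tokens and scores, invented for the example.
tokens = ["approve", "escalate", "reject"]
logits = [2.0, 1.0, 0.5]

rng = random.Random(0)
samples = [sample_token(tokens, logits, temperature=1.0, rng=rng) for _ in range(10)]
print(samples)  # a mix of outcomes, not one fixed answer
```

A deterministic rule engine given the same scores would return "approve" every time; the sampler returns a distribution of answers, which is why governance built around reproducible outputs breaks down.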

Turning to business implications, treating AI like normal IT automation creates significant challenges for implementation and market opportunity. In the competitive landscape, key players such as Google and Microsoft have thrived by embracing AI's experimental side; Google's DeepMind, for example, achieved breakthroughs in protein folding with AlphaFold in 2021, revolutionizing drug discovery and helping create a market projected to reach $15 billion by 2028, according to a 2025 Grand View Research report. Companies that silo AI in IT departments, by contrast, risk falling behind, as Mollick notes, because this approach limits the cross-functional collaboration essential to AI's generative potential.

Market trends show that AI-driven personalization in e-commerce boosted revenues by 15-20% for adopters in 2025, based on data Gartner released in December 2025. Ethical implications arise, however, when AI's risks, such as biased decision-making, are ignored; a 2024 MIT study found that unchecked AI algorithms in hiring led to discrimination lawsuits costing firms an average of $2 million per case. To mitigate this, businesses should adopt best practices such as interdisciplinary teams and iterative testing, turning risks into opportunities for innovation. Monetization strategies include developing AI-native products, such as customer-service chatbots that evolve through user interactions, potentially increasing customer retention by 25%, as evidenced by a Forrester report from October 2025.

Regulatory considerations are paramount: the EU's AI Act, in force since 2024, mandates risk assessments for high-impact AI, encouraging companies to view compliance as a strategic advantage rather than a burden. Implementation challenges such as data-privacy concerns under GDPR can be addressed through federated learning, which trains AI models without centralizing sensitive data, an approach supported by Google's TensorFlow Federated framework released in 2019.
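The federated learning idea mentioned above can be sketched in a few lines. This is a minimal, illustrative FedAvg-style loop for a one-parameter linear model; the client datasets, learning rate, and round count are invented for the example and do not come from any real framework. The key property is that only model weights, never the raw records, leave each client.

```python
def local_update(w, data, lr=0.1):
    """One gradient-descent step for a 1-parameter model y = w*x,
    computed using only this client's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: a plain average of the client models.
    The server never sees the underlying data."""
    return sum(client_weights) / len(client_weights)

# Two clients whose private data follows roughly y = 2x (toy values).
client_data = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (3.0, 6.2)],
]

w = 0.0  # shared global model
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in client_data]
    w = federated_average(updates)

print(round(w, 2))  # converges near 2.0
```

Production systems add secure aggregation, client sampling, and weighting by dataset size, but the privacy argument rests on this same structure: gradients travel, data does not.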

Technically, AI's strangeness stems from machine-learning paradigms such as neural networks, which differ fundamentally from traditional automation's scripted logic. A 2023 paper presented at the NeurIPS conference highlighted how large language models exhibit 'grokking,' suddenly improving performance after prolonged training, a phenomenon absent in conventional software. This unpredictability demands new management strategies, with impacts across industries: in manufacturing, AI predictive maintenance reduced downtime by 30% in 2025 pilots, per an IBM case study from February 2026. For businesses, this means shifting from top-down IT deployments to agile, user-centric models that encourage employee experimentation, potentially unlocking $13 trillion in economic value by 2030, as forecast in a 2021 McKinsey Global Institute report updated in 2025.
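The essay's call to replace legacy uptime metrics with AI-native KPIs (outcome quality, hallucination rate, user adoption) can be illustrated with a small measurement sketch. Everything here is an assumption for illustration: the `GradedResponse` fields, the `ai_kpis` helper, and the sample grades are invented, not part of any real measurement stack.

```python
from dataclasses import dataclass

@dataclass
class GradedResponse:
    quality: float        # 0.0-1.0 rating from a human or model grader (assumed scale)
    hallucinated: bool    # did the response contain an unsupported claim?
    user_accepted: bool   # did the user keep or act on the output?

def ai_kpis(samples):
    """Aggregate graded responses into AI-native KPIs rather than uptime."""
    n = len(samples)
    return {
        "avg_quality": sum(s.quality for s in samples) / n,
        "hallucination_rate": sum(s.hallucinated for s in samples) / n,
        "adoption_rate": sum(s.user_accepted for s in samples) / n,
    }

# Hypothetical grading results from a review queue.
samples = [
    GradedResponse(0.9, False, True),
    GradedResponse(0.6, True, False),
    GradedResponse(0.8, False, True),
    GradedResponse(0.7, False, True),
]
print(ai_kpis(samples))
```

The point of the sketch is the shape of the dashboard: a model can be "up" 99.9% of the time and still be failing on every one of these metrics, which is why uptime alone is the wrong KPI for generative systems.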

Looking ahead, the implications of embracing AI's weirdness are transformative, with predictions pointing to hybrid human-AI workforces by 2030. Mollick's argument suggests that companies which avoid de-weirding could pioneer new business models, such as AI-augmented creativity in media, where tools like Midjourney generated $1 billion in creator-economy value in 2025, according to a Bloomberg analysis from January 2026. Industry impacts will be profound in education and healthcare; AI tutors, for instance, improved learning outcomes by 20% in randomized trials conducted by Carnegie Mellon in 2024. Practical applications include fostering innovation labs outside IT and addressing challenges such as talent shortages, projected to affect 85 million jobs by 2025 per the World Economic Forum's 2020 report (revised in 2025). Ethical best practices will evolve, emphasizing transparency to build trust. Overall, by discovering AI's risks and opportunities through open exploration, businesses can avoid the pitfalls of misclassification, leading to sustainable growth and employee empowerment in an AI-driven era.

FAQ

What are the risks of treating AI like traditional IT? Treating AI as mere automation can lead to innovation stagnation and ethical oversights, such as biased outputs causing legal issues, as seen in multiple 2024 cases.

How can businesses monetize AI's unique features? By developing adaptive AI solutions such as personalized marketing tools; e-commerce adopters saw revenue gains of 15-20% in 2025.

Ethan Mollick

@emollick

Professor @Wharton studying AI, innovation & startups. Democratizing education using tech