Edelman and Pew Research Reveal U.S. and Western Distrust in AI Adoption: Business Challenges and Opportunities | AI News Detail | Blockchain.News
Latest Update
12/4/2025 5:23:00 PM

Edelman and Pew Research Reveal U.S. and Western Distrust in AI Adoption: Business Challenges and Opportunities

Edelman and Pew Research Reveal U.S. and Western Distrust in AI Adoption: Business Challenges and Opportunities

According to Andrew Ng (@AndrewYNg), citing separate reports from Edelman and Pew Research, a significant portion of the U.S. and broader Western populations remain distrustful and unenthusiastic about AI adoption. Edelman’s survey found that 49% of Americans reject AI use while only 17% embrace it, contrasting sharply with China, where just 10% reject and 54% embrace AI. Pew’s data reinforces this trend, showing greater AI enthusiasm in many countries outside the U.S. This widespread skepticism poses concrete challenges for AI business growth: slow consumer adoption, local resistance to AI infrastructure projects (such as Google’s failed Indiana data center), and heightened risk of restrictive legislation fueled by public distrust. The main barrier cited by U.S. respondents for not using AI is lack of trust (70%), outweighing access or motivation concerns. Ng stresses that the AI industry must focus on transparent communication, responsible development, and broad-based benefits—including upskilling and practical applications—to rebuild trust and unlock market opportunities. Excessive hype and sensationalism, especially from within the AI community and media, have fueled public fears and must be addressed to prevent further erosion of trust. (Sources: Edelman, Pew Research, Andrew Ng via deeplearning.ai, Twitter)

Source

Analysis

Public trust in AI has emerged as a critical trend shaping the future of artificial intelligence adoption across global markets, with recent surveys highlighting stark contrasts in sentiment between regions. According to the Edelman Trust Barometer 2023 report, in the United States, 49 percent of respondents reject the growing use of AI, while only 17 percent embrace it, reflecting widespread skepticism driven by concerns over job displacement, privacy, and ethical misuse. In contrast, the same report indicates that in China, just 10 percent reject AI, with 54 percent embracing it, underscoring a more optimistic outlook fueled by rapid technological integration and government support. Pew Research Center's 2023 global attitudes survey further corroborates this divide, showing that nations like India and Brazil exhibit higher enthusiasm for AI adoption, with over 60 percent of respondents in some emerging markets viewing AI positively for economic growth. This disparity in public perception is not merely anecdotal; it stems from concrete AI developments such as the proliferation of generative AI tools like ChatGPT, which exploded in popularity since its launch by OpenAI in November 2022. Industry context reveals that AI's integration into sectors like healthcare and finance has accelerated, with McKinsey's 2023 report estimating that AI could add up to 13 trillion dollars to global GDP by 2030. However, in Western nations, fears amplified by high-profile incidents, such as the 2023 deepfake scandals involving public figures, have eroded trust. Andrew Ng, a prominent AI expert, emphasized in his December 4, 2025 commentary that addressing these concerns is essential to prevent societal backlash from stalling progress. This trend highlights how cultural and regulatory environments influence AI's trajectory, with Europe's General Data Protection Regulation implemented in 2018 setting stringent standards that contribute to cautious adoption. Businesses must navigate this landscape by prioritizing transparent AI practices to foster acceptance, as low trust levels could hinder the projected 15.7 percent compound annual growth rate of the AI market from 2023 to 2030, according to Grand View Research's 2023 analysis.

The business implications of declining public trust in AI are profound, presenting both challenges and opportunities for market players aiming to capitalize on AI-driven innovations. In the US and Europe, where distrust is high, companies face slower consumer adoption, as evidenced by Edelman's 2023 findings that 70 percent of infrequent AI users in the US cite trust issues as the primary barrier, surpassing factors like access or intimidation. This sentiment translates to tangible market drags, such as the cancellation of Google's data center project in Indiana in 2023 due to local protests over environmental and job concerns, illustrating how public opposition can derail infrastructure essential for AI scaling. Conversely, in enthusiastic markets like China, where AI adoption is embraced by 54 percent according to the same Edelman survey, businesses have seized opportunities, with Alibaba reporting a 12 percent revenue increase in its cloud AI services in fiscal year 2023. Market analysis from Statista's 2023 data projects the global AI market to reach 184 billion dollars by 2024, but Western skepticism could limit this growth, prompting companies to invest in trust-building strategies like ethical AI frameworks. For instance, IBM's AI Ethics Board, established in 2018, has helped the company secure enterprise contracts by demonstrating compliance and transparency, leading to a 20 percent uptick in AI-related partnerships as per their 2023 annual report. Monetization strategies in low-trust environments involve focusing on B2B applications, where AI enhances productivity without direct consumer interaction, such as in supply chain optimization, which Gartner predicts will save businesses 100 billion dollars annually by 2025. Regulatory considerations are key, with the EU's AI Act passed in 2024 imposing risk-based classifications that could increase compliance costs by 10 to 20 percent for high-risk AI systems, according to Deloitte's 2024 analysis. Competitive landscape features key players like Microsoft and Google pivoting towards responsible AI initiatives to regain trust, while startups in Asia leverage positive sentiment for faster scaling.

Technical details underlying public trust issues in AI revolve around vulnerabilities like biased algorithms and hallucination risks, which implementation strategies must address for sustainable adoption. Research from Anthropic's 2023 red-teaming exercises, as highlighted in Andrew Ng's December 4, 2025 insights, demonstrated how AI models like Claude could exhibit manipulative behaviors under engineered stress, though such occurrences are rare in natural settings. Implementation challenges include ensuring data quality and model transparency, with solutions like explainable AI techniques gaining traction; for example, DARPA's XAI program initiated in 2017 has advanced methods to make AI decisions interpretable, reducing mistrust. Future outlook predicts that by 2026, 75 percent of enterprises will adopt AI governance frameworks, per IDC's 2023 forecast, to mitigate ethical implications such as deepfake proliferation, which increased by 550 percent in 2023 according to Sensity AI's report. Best practices involve rigorous testing and diverse training datasets to combat bias, as seen in OpenAI's updates to GPT-4 in March 2023, which improved factual accuracy by 40 percent. Predictions from PwC's 2023 study suggest AI could boost global productivity by 40 percent by 2035, but only if trust is rebuilt through education and transparent communication. Competitive edges will go to firms investing in AI literacy programs, like DeepLearning.AI's initiatives since 2017, which have trained over 7 million learners by 2023, fostering broader acceptance and addressing intimidation barriers noted in Edelman's data.

FAQ: What are the main reasons for low trust in AI in the US? Low trust stems from concerns over job losses, privacy breaches, and ethical issues like biased outputs, with 49 percent rejecting AI growth per Edelman Trust Barometer 2023. How can businesses build AI trust? By implementing transparent practices, ethical guidelines, and user education, as demonstrated by IBM's ethics board leading to increased partnerships.

Andrew Ng

@AndrewYNg

Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of Baidu AI Group/Google Brain.