Anthropic Flash News List | Blockchain.News

List of Flash News about Anthropic

Time Details
2025-12-19
12:00
Anthropic shares compliance framework for California Transparency in Frontier AI Act: key regulatory update for AI traders

According to @AnthropicAI, the company has shared its compliance framework for California's Transparency in Frontier AI Act, signaling a formal disclosure tied to state-level AI transparency rules (source: @AnthropicAI). Anthropic describes itself as an AI safety and research company focused on building reliable, interpretable, and steerable AI systems, underscoring its compliance-oriented positioning (source: @AnthropicAI). Traders tracking regulatory catalysts in the AI theme can log this as a concrete compliance update relevant to AI-exposed equities and AI-linked crypto narratives (source: @AnthropicAI).

Source
2025-12-18
22:41
Anthropic (@AnthropicAI) Announces 2025 Partnership with U.S. DOE Genesis Mission to Deploy Claude AI and Engineering Team — Implications for Crypto Traders

According to @AnthropicAI, the company has partnered with the U.S. Department of Energy (@ENERGY) on the Genesis Mission to provide Claude across the DOE ecosystem with a dedicated engineering team (source: Anthropic on X, Dec 18, 2025, https://twitter.com/AnthropicAI/status/2001784831957700941). The announcement states the partnership aims to accelerate scientific discovery in energy, biosecurity, and basic research, highlighting an enterprise AI deployment focus (source: Anthropic on X, Dec 18, 2025, https://twitter.com/AnthropicAI/status/2001784831957700941). There is no mention of cryptocurrency, blockchain integration, or token issuance in the post, indicating no direct on-chain exposure for crypto traders to price in at this time (source: Anthropic on X, Dec 18, 2025, https://twitter.com/AnthropicAI/status/2001784831957700941).

Source
2025-12-18
16:11
Anthropic Upgrades Claude Sonnet to 4 and 4.5, Adds New Tools, Expands to New York and London — Trader Briefing

According to @AnthropicAI (Twitter, Dec 18, 2025), the team upgraded the Claudius system from Claude Sonnet 3.7 to Sonnet 4 and later to Sonnet 4.5, indicating multiple iterative model releases aimed at enhancing its business acumen. The same post states that the system was granted access to new tools to improve functionality, and that the company began an international expansion with new shops in its New York and London offices (source: Anthropic @AnthropicAI, Twitter, Dec 18, 2025). The post did not mention cryptocurrencies, tokens, or blockchain integrations (source: Anthropic @AnthropicAI, Twitter, Dec 18, 2025).

Source
2025-12-18
16:11
Anthropic (@AnthropicAI) Unveils 2 New AI Agents: Clothius and CEO Seymour Cash to Supervise Claudius

According to @AnthropicAI, the company introduced two additional AI agents: Clothius, designed to create bespoke merchandise like T-shirts and hats, and Seymour Cash, a CEO agent tasked with supervising Claudius and setting goals, source: @AnthropicAI on X, Dec 18, 2025. The post does not mention crypto, blockchain, tokens, pricing, or deployment timelines, indicating no disclosed direct crypto-market catalyst in this update, source: @AnthropicAI on X, Dec 18, 2025.

Source
2025-12-18
12:00
Anthropic Partners with U.S. Department of Energy on AI Research: 3 Key Facts Traders Need to Know

According to @AnthropicAI, the company states it is working with the U.S. Department of Energy to unlock the next era of scientific discovery, confirming an institutional AI collaboration with a U.S. federal agency (source: @AnthropicAI). The provided statement contains no disclosed details on scope, timing, funding, or commercialization pathways, which limits immediate valuation or revenue-impact assessment for public AI equities and related plays (source: @AnthropicAI). The announcement does not mention cryptocurrencies, blockchain, or token integrations, indicating no direct linkage to crypto assets or AI-linked tokens from this specific news item (source: @AnthropicAI).

Source
2025-12-18
12:00
Anthropic AI Safety Update: Protecting the Well-Being of Our Users - Trading Takeaways and Market Impact

According to @AnthropicAI, the company is an AI safety and research firm working to build reliable, interpretable, and steerable AI systems and has published "Protecting the well-being of our users" to underscore user safety and trust, which is the focus of the update (source: @AnthropicAI). The provided excerpt contains no details on product changes, timelines, pricing, partnerships, or any mention of cryptocurrencies or blockchain, so no direct trading catalyst for crypto markets can be identified from this snippet (source: @AnthropicAI).

Source
2025-12-17
16:35
Hut 8 ($HUT) Announces $7 Billion Anthropic Data Center Deal: Up to 2,295 MW Capacity, Potential Value Reaches $17.7 Billion

According to @KobeissiLetter, Hut 8 ($HUT) announced a $7 billion data center development collaboration with Anthropic, targeting up to 2,295 MW of utility capacity (source: @KobeissiLetter). The deal could be valued at up to $17.7 billion (source: @KobeissiLetter).

Source
2025-12-16
02:00
Anthropic Claude Opus 4.5 cuts per-token cost to about one-third and boosts long-context reasoning and tool use, according to DeepLearning.AI

According to DeepLearning.AI, Anthropic’s new flagship Claude Opus 4.5 improves coding, tool use, and long-context reasoning while costing about one-third per token versus its predecessor, directly lowering unit inference costs relative to earlier Claude models (source: DeepLearning.AI on X, Dec 16, 2025; more details: hubs.la/Q03Yf3f60). It adds adjustable effort and extended thinking plus automatic long-chat summarization, features designed to manage reasoning depth and summarize lengthy interactions at lower token consumption than before (source: DeepLearning.AI on X, Dec 16, 2025). Independent benchmarks cited by DeepLearning.AI place Opus 4.5 near the top, and it often achieves comparable results with far fewer tokens, improving cost efficiency for long-context tasks compared with its predecessor (source: DeepLearning.AI on X, Dec 16, 2025).

Source
2025-12-11
21:42
Anthropic Expands AI Fellowship: 40% Hires and 80% Publications Announced; Trading Takeaways for AI Equities and Crypto Narratives

According to @AnthropicAI, 40% of fellows in its first cohort have joined Anthropic full time, 80% published their work as a paper, and the fellowship will expand next year to more fellows and research areas (source: Anthropic, official X post on Dec 11, 2025, https://twitter.com/AnthropicAI/status/1999233251706306830; more details: https://t.co/HSQjGy90AZ). This disclosure provides measurable R&D pipeline and talent retention metrics at Anthropic, a leading AI lab with strategic investment from Amazon of up to 4 billion dollars and from Alphabet of up to 2 billion dollars, which underscores its ecosystem relevance to AI equities and infrastructure partners (sources: Amazon press release, Sep 25, 2023, https://www.aboutamazon.com/news/company-news/amazon-invests-up-to-4-billion-in-anthropic; Reuters, Oct 27, 2023, Alphabet invests up to 2 billion dollars in Anthropic, https://www.reuters.com/world/us/alphabet-invests-up-2-billion-anthropic-wsj-2023-10-27/). For trading context, the update is a talent and research output milestone with no direct mention of tokens or blockchain integrations, so any crypto market readthrough should rely on subsequent official research releases or partner announcements rather than price claims (source: Anthropic, official X post on Dec 11, 2025, https://twitter.com/AnthropicAI/status/1999233251706306830).

Source
2025-12-10
23:26
Agentic AI Foundation Launched Under Linux Foundation by OpenAI, Anthropic, and Block; OpenAI Donates AGENTS.md

According to @gdb, OpenAI, Anthropic, and Block are co-founding the Agentic AI Foundation under the Linux Foundation to advance open-source agentic AI (source: x.com/gdb/status/1998897086079832513; openai.com/index/agentic-ai-foundation). OpenAI is donating AGENTS.md to the foundation as a shared specification for building AI agents, establishing an open governance track for agentic AI standards (source: x.com/gdb/status/1998897086079832513; openai.com/index/agentic-ai-foundation).

Source
2025-12-09
19:47
Anthropic: SGTM Unlearning Is 7x Harder to Reverse Than RMU, A Concrete Signal for AI Trading and Compute Risk

According to @AnthropicAI, SGTM unlearning is hard to undo and requires seven times more fine-tuning steps to recover forgotten knowledge compared with the prior RMU method, indicating materially higher reversal effort (source: Anthropic on X, Dec 9, 2025). For trading context, this 7x delta provides a measurable robustness gap between SGTM and RMU that can be tracked as an AI safety metric with direct implications for reversal timelines and optimization iterations (source: Anthropic on X, Dec 9, 2025).

Source
2025-12-09
19:47
Anthropic SGTM (Selective GradienT Masking): Removable 'Forget' Weights Enable Safer High-Risk AI Deployments

According to @AnthropicAI, Selective GradienT Masking (SGTM) splits model weights into retain and forget subsets during pretraining and directs specified knowledge into the forget subset, according to Anthropic's alignment site. The forget subset can then be removed prior to release to limit hazardous capabilities in high-risk settings, according to the same alignment article. The announcement does not reference cryptocurrencies or tokenized AI projects and does not state any market or pricing impact, according to Anthropic's post.
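To make the retain/forget mechanism concrete, here is a minimal toy sketch (not Anthropic's implementation; the model, masks, and data are invented for illustration) of gradient routing on a linear model: gradients from "forget" data only update a designated parameter subset, which can be zeroed out before release without touching the retain weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = W x, with W split into "retain" and "forget" parameter subsets.
n_out, n_in = 4, 8
W = rng.normal(scale=0.1, size=(n_out, n_in))
forget_mask = np.zeros_like(W, dtype=bool)
forget_mask[:, -2:] = True  # last two input columns form the removable "forget" subset

def grad_step(W, x, y, lr=0.1, is_forget_data=False):
    """One SGD step on squared error, with SGTM-style gradient masking."""
    err = W @ x - y
    g = np.outer(err, x)  # dL/dW for L = 0.5 * ||W x - y||^2
    if is_forget_data:
        g = np.where(forget_mask, g, 0.0)   # forget data only updates forget params
    else:
        g = np.where(forget_mask, 0.0, g)   # retain data never touches forget params
    return W - lr * g

# Training interleaves ordinary data (retain updates) and flagged data (forget updates).
for _ in range(200):
    x = rng.normal(size=n_in)
    W = grad_step(W, x, y=x[:n_out], is_forget_data=False)
    W = grad_step(W, x, y=-x[:n_out], is_forget_data=True)

# "Removal" before release: zero the forget subset; retain weights are untouched.
W_released = np.where(forget_mask, 0.0, W)
```

The key property the sketch illustrates is that because retain-data gradients never flow into the forget subset (and vice versa), deleting the forget parameters is a clean excision rather than a destructive edit to shared weights.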

Source
2025-12-09
19:47
Anthropic Finds SGTM Underperforms Data Filtering on 'Forget' Subset — Key AI Unlearning Insight for Traders

According to @AnthropicAI, when controlling for general capabilities, models trained with SGTM perform worse on the undesired forget subset than models trained with data filtering, highlighting a reported performance gap between these unlearning approaches on targeted knowledge removal tasks, source: https://twitter.com/AnthropicAI/status/1998479611945202053. For trading context, the verified takeaway is the relative underperformance of SGTM versus data filtering on the forget subset under equal capability control, with no specific assets or tickers mentioned in the source, source: https://twitter.com/AnthropicAI/status/1998479611945202053.

Source
2025-12-09
19:47
Anthropic Announces Selective GradienT Masking (SGTM): Isolating High-Risk Knowledge With Removable Parameters - Key Facts for Traders

According to @AnthropicAI, the Anthropic Fellows Program introduced Selective GradienT Masking (SGTM), a training method that isolates high-risk knowledge into a small, separate set of parameters that can be removed without broadly affecting the model. Source: Anthropic (@AnthropicAI), Dec 9, 2025. The post frames SGTM as research and provides no details on deployment, commercialization timelines, or policy commitments. Source: Anthropic (@AnthropicAI), Dec 9, 2025. No information is disclosed about partnerships, revenue impact, token integrations, or compute procurement that would directly influence crypto markets or AI-linked equities. Source: Anthropic (@AnthropicAI), Dec 9, 2025. For traders, confirmed data points are the method name (SGTM), purpose (containing high-risk capabilities), and the claim that removal minimally impacts overall model behavior, while the announcement remains informational without market-moving disclosures. Source: Anthropic (@AnthropicAI), Dec 9, 2025.

Source
2025-12-09
19:47
Anthropic Announces Igor Shilov-Led AI Research Under Fellows Program on X

According to @AnthropicAI, the organization announced via its official X account on December 9, 2025 that the research was led by Igor Shilov as part of the Anthropic Fellows Program (source: AnthropicAI). The post provided leadership attribution and program context but did not disclose technical findings, benchmarks, product releases, or timelines, limiting immediate quantifiable trading signals from this announcement alone (source: AnthropicAI).

Source
2025-12-09
19:47
Anthropic Tests SGTM to Remove Biology Knowledge in Wikipedia-Trained Models: Data Filtering Leak Risks Highlighted

According to @AnthropicAI, its study tested whether SGTM can remove biology knowledge from models trained on Wikipedia (source: Anthropic @AnthropicAI, Dec 9, 2025). According to @AnthropicAI, the team cautions that data filtering may leak relevant information because non-biology Wikipedia pages can still contain biology content (source: Anthropic @AnthropicAI, Dec 9, 2025). According to @AnthropicAI, the post does not provide quantitative results, timelines, or any mention of cryptocurrencies, tokens, or market impact (source: Anthropic @AnthropicAI, Dec 9, 2025).

Source
2025-12-09
17:01
Anthropic Donates Model Context Protocol to Linux Foundation’s Agentic AI Foundation: Open-Source Governance Move With No Direct Crypto Token Impact

According to @AnthropicAI, Anthropic is donating the Model Context Protocol (MCP) to the Agentic AI Foundation, a directed fund under the Linux Foundation, to keep MCP open and community-driven (source: Anthropic on X, 2025-12-09, https://twitter.com/AnthropicAI/status/1998437922849350141). According to @AnthropicAI, the company stated MCP has become a foundational protocol for agentic AI in its first year (source: Anthropic on X, 2025-12-09, https://twitter.com/AnthropicAI/status/1998437922849350141). According to @AnthropicAI, the announcement does not mention tokens, blockchain integrations, or financial terms, indicating no direct linkage to crypto assets in this disclosure (source: Anthropic on X, 2025-12-09, https://twitter.com/AnthropicAI/status/1998437922849350141).

Source
2025-12-09
12:00
Accenture and Anthropic Announce Multi-Year Partnership to Scale Enterprise AI from Pilots to Production

According to @AnthropicAI, Accenture and Anthropic launched a multi-year partnership to move enterprises from AI pilots to production; source: Anthropic. The announcement describes Anthropic as an AI safety and research company focused on building reliable, interpretable, and steerable AI systems; source: Anthropic. The provided announcement text includes no reference to cryptocurrencies or blockchain; source: Anthropic.

Source
2025-12-09
12:00
Anthropic Donates Model Context Protocol and Establishes Agentic AI Foundation: No Direct Crypto Catalyst

According to @AnthropicAI, Anthropic is donating the Model Context Protocol (MCP) and establishing the Agentic AI Foundation, as stated in its announcement titled Donating the Model Context Protocol and establishing the Agentic AI Foundation (source: @AnthropicAI). The announcement describes Anthropic as an AI safety and research company working to build reliable, interpretable, and steerable AI systems (source: @AnthropicAI). The post does not reference cryptocurrencies, tokens, or blockchain, and provides no direct trading catalyst for digital assets based on the source text (source: @AnthropicAI).

Source
2025-12-08
16:31
Anthropic Identifies LLM Persona Vectors to Control Sycophancy and Hallucination, Enabling Safer Fine-Tuning Workflows

According to DeepLearning.AI, researchers at Anthropic and partner research and safety institutions identified persona vectors, patterns in LLM layer outputs that encode traits such as sycophancy and hallucination, by averaging representations of a trait and subtracting its opposite to isolate and control these behaviors, source: DeepLearning.AI — X, Dec 8, 2025; The Batch summary hubs.la/Q03Xh6MW0. Finding these vectors allows engineers to pre-screen fine-tuning datasets to predict personality shifts before training, making workflows safer and more predictable, source: DeepLearning.AI — X, Dec 8, 2025; The Batch summary hubs.la/Q03Xh6MW0. The results indicate high-level LLM behaviors are structured and editable, enabling more proactive control over model personalities during deployment, source: DeepLearning.AI — X, Dec 8, 2025; The Batch summary hubs.la/Q03Xh6MW0. The source does not announce products, datasets, or affected market assets and does not mention cryptocurrencies or tokens, so no immediate crypto market impact is indicated, source: DeepLearning.AI — X, Dec 8, 2025; The Batch summary hubs.la/Q03Xh6MW0.
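The averaging-and-subtracting construction described above can be sketched in a few lines. This is a toy difference-of-means illustration with synthetic stand-in activations (the dimensions, data, and helper functions are invented, not the researchers' code): a "persona vector" is the mean hidden state over trait-exhibiting examples minus the mean over opposite examples, and projecting onto it scores or removes the trait component.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # toy hidden-state dimension

# Synthetic stand-ins for LLM layer outputs on trait vs. opposite-trait prompts.
true_direction = rng.normal(size=d)
trait_acts = rng.normal(size=(50, d)) + 2.0 * true_direction
opposite_acts = rng.normal(size=(50, d)) - 2.0 * true_direction

# Persona vector: mean trait representation minus mean opposite representation.
persona_vec = trait_acts.mean(axis=0) - opposite_acts.mean(axis=0)
persona_unit = persona_vec / np.linalg.norm(persona_vec)

def trait_score(h):
    """Projection onto the persona direction: how strongly a state expresses the trait."""
    return float(h @ persona_unit)

def steer(h, alpha):
    """Shift a hidden state along (or against) the persona direction."""
    return h + alpha * persona_unit

# Subtracting a state's own projection zeroes its trait component.
h = trait_acts[0]
suppressed = steer(h, -trait_score(h))
```

A screening workflow, as described, would score candidate fine-tuning examples with something like `trait_score` and flag data whose activations push strongly along an undesired persona direction before training begins.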

Source