List of Flash News about SGTM
| Time | Details |
|---|---|
| 2025-12-09 19:47 | Anthropic SGTM (Selective GradienT Masking): Removable 'Forget' Weights Enable Safer High-Risk AI Deployments. According to @AnthropicAI's alignment post, SGTM splits model weights into retain and forget subsets during pretraining and directs specified knowledge into the forget subset, which can then be removed before release to limit hazardous capabilities in high-risk settings. The announcement does not reference cryptocurrencies or tokenized AI projects and states no market or pricing impact. |
| 2025-12-09 19:47 | Anthropic Finds SGTM Underperforms Data Filtering on 'Forget' Subset: Key AI Unlearning Insight for Traders. According to @AnthropicAI (source: https://twitter.com/AnthropicAI/status/1998479611945202053), when controlling for general capabilities, models trained with SGTM perform worse on the undesired forget subset than models trained with data filtering, a reported performance gap between these unlearning approaches on targeted knowledge removal. For trading context, the verified takeaway is SGTM's relative underperformance versus data filtering on the forget subset under equal capability control; the source mentions no specific assets or tickers. |
| 2025-12-09 19:47 | Anthropic Announces Selective GradienT Masking (SGTM): Isolating High-Risk Knowledge With Removable Parameters - Key Facts for Traders. According to Anthropic (@AnthropicAI, Dec 9, 2025), the Anthropic Fellows Program introduced SGTM, a training method that isolates high-risk knowledge into a small, separate set of parameters that can be removed without broadly affecting the model. The post frames SGTM as research and gives no details on deployment, commercialization timelines, or policy commitments, and discloses nothing about partnerships, revenue impact, token integrations, or compute procurement that would directly influence crypto markets or AI-linked equities. For traders, the confirmed data points are the method name (SGTM), its purpose (containing high-risk capabilities), and the claim that removal minimally impacts overall model behavior; the announcement remains informational, with no market-moving disclosures. |
| 2025-12-09 19:47 | Anthropic Tests SGTM to Remove Biology Knowledge in Wikipedia-Trained Models: Data Filtering Leak Risks Highlighted. According to @AnthropicAI (Dec 9, 2025), the study tested whether SGTM can remove biology knowledge from models trained on Wikipedia. The team cautions that data filtering may leak relevant information because non-biology Wikipedia pages can still contain biology content. The post provides no quantitative results, timelines, or any mention of cryptocurrencies, tokens, or market impact. |
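The retain/forget mechanism described in these items can be sketched in a few lines. This is an illustrative toy under stated assumptions, not Anthropic's implementation: the parameter split, the batch-routing rule, and the plain SGD update are all simplifications invented for the example.

```python
# Toy sketch of selective gradient masking: parameters are partitioned
# into a "retain" subset and a "forget" subset, and gradient updates
# from batches flagged as high-risk are applied only to the forget
# subset, so that subset can later be removed (zeroed) before release.
# Hypothetical routing rule and update; not Anthropic's actual method.

def sgd_step(params, grads, masks, lr=0.1):
    """Masked SGD update: a parameter moves only where its mask is 1."""
    return [p - lr * g * m for p, g, m in zip(params, grads, masks)]

def masks_for_batch(is_forget_data, n_retain, n_forget):
    """Route gradients: flagged (high-risk) batches update only the
    forget subset; ordinary batches update only the retain subset."""
    if is_forget_data:
        return [0] * n_retain + [1] * n_forget
    return [1] * n_retain + [0] * n_forget

# Toy model: 3 retain parameters followed by 2 forget parameters.
params = [1.0, 1.0, 1.0, 1.0, 1.0]
grads = [0.5] * 5

# A high-risk batch touches only the forget subset...
params = sgd_step(params, grads, masks_for_batch(True, 3, 2))

# ...and "removal before release" zeroes the forget subset only,
# leaving the retain subset untouched.
released = params[:3] + [0.0] * 2
```

The routing rule is the crux: because high-risk gradients never reach the retain subset, zeroing the forget parameters at release time should, on this toy's logic, leave general capabilities intact.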