List of AI News About Content Filtering
| Time | Details |
|---|---|
| 2025-11-22 02:11 | **Quantitative Definition of 'Slop' in LLM Outputs: AI Industry Seeks Measurable Metrics.** According to Andrej Karpathy (@karpathy), there is an ongoing discussion in the AI community about defining 'slop', the qualitative sense of low-quality or imprecise language model output, in a quantitative and measurable way. Karpathy suggests that while experts might intuitively estimate a 'slop index,' a standardized metric is lacking. He mentions potential approaches involving LLM miniseries and token budgets, reflecting a need for practical measurement tools. This highlights a significant business opportunity for AI companies to develop robust 'slop' quantification frameworks, which could enhance model evaluation, improve content filtering, and drive adoption in enterprise settings where output reliability is critical (Source: @karpathy, Twitter, Nov 22, 2025). A toy scoring heuristic is sketched after this table. |
| 2025-06-15 13:00 | **Columbia University Study Reveals LLM-Based AI Agents Vulnerable to Malicious Links on Trusted Platforms.** According to DeepLearning.AI, Columbia University researchers have demonstrated that large language model (LLM)-based AI agents can be manipulated by embedding malicious links within posts on trusted websites such as Reddit. The study shows that attackers can craft posts with harmful instructions disguised as thematically relevant content, luring AI agents into visiting compromised sites. This vulnerability highlights significant security risks for businesses using LLM-powered automation and underscores the need for robust content filtering and monitoring solutions in enterprise AI deployments (source: DeepLearning.AI, June 15, 2025). A minimal link-screening sketch appears after this table. |