Search Results for "ai security"
AI Developer Tools Pose New Security Challenges as Attack Surfaces Expand
Explore how AI-enabled developer tools are creating new security risks. Learn how these tools can be exploited and how to mitigate the risks.
NVIDIA AI Red Team Offers Critical Security Insights for LLM Applications
NVIDIA's AI Red Team has identified key vulnerabilities in AI systems and offers practical advice for securing LLM applications, with a focus on code execution, access control, and data exfiltration.
Anthropic Enhances AI Security Through Collaboration with US and UK Institutes
Anthropic partners with US CAISI and UK AISI to strengthen AI safeguards. The collaboration focuses on testing and improving AI security measures, including the development of robust defense mechanisms.
Meta Introduces Agents Rule of Two for Enhanced AI Security
Meta AI unveils the 'Agents Rule of Two' to mitigate security risks in AI agents, focusing on reducing vulnerabilities such as prompt injection.
Prompt Injection: A Growing Security Concern in AI Systems
Prompt injections are emerging as a significant security challenge for AI systems. Explore how these attacks function and the measures being taken to mitigate their impact.
GitHub's AI Security Protocols: Ensuring Safe and Reliable Agentic Operations
GitHub introduces robust security principles to safeguard AI agents like Copilot, focusing on minimizing risks such as data exfiltration and prompt injection.
NVIDIA Research Exposes Critical VLM Security Flaws in AI Vision Systems
NVIDIA researchers demonstrate how adversarial image attacks can manipulate vision language models, turning traffic light recognition from 'stop' to 'go' with imperceptible changes.
NVIDIA Red Team Releases AI Agent Security Framework Amid Rising Sandbox Threats
NVIDIA's AI Red Team publishes mandatory security controls for AI coding agents, addressing prompt injection attacks and sandbox escape vulnerabilities.
Anthropic Launches Claude Code Security to Hunt Zero-Day Vulnerabilities
Anthropic's new Claude Code Security tool found 500+ vulnerabilities in open-source projects. Enterprises and open-source maintainers can apply for early access.
Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs
Anthropic reveals DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks using 24,000 fake accounts to steal Claude AI capabilities.
OpenAI Deploys Web Index Defense Against AI Agent Data Theft
OpenAI reveals new security architecture using independent web indexing to prevent URL-based data exfiltration from ChatGPT and agentic AI systems.
OpenAI and Paradigm Launch EVMbench to Test AI Smart Contract Hacking
New benchmark evaluates AI agents' ability to detect, patch, and exploit smart contract vulnerabilities. GPT-5.3-Codex scores 72.2% on exploit tasks.