AI Security News | Blockchain.News

AI SECURITY

NVIDIA Red Team Exposes AI Coding Agent Vulnerability in OpenAI Codex

NVIDIA researchers demonstrate how malicious dependencies can hijack AI coding assistants through AGENTS.md injection, hiding backdoors in pull requests.

NVIDIA Unveils Zero-Trust Architecture for Secure AI Model Deployment

NVIDIA releases open reference architecture for confidential AI factories, enabling secure deployment of proprietary models on shared infrastructure using hardware-backed encryption.

OpenAI Acquires Promptfoo to Bolster Enterprise AI Security Testing

OpenAI announces acquisition of Promptfoo, an AI security platform used by 25% of Fortune 500 companies, to integrate into its Frontier enterprise platform.

OpenAI Codex Security Ditches SAST for AI-Driven Vulnerability Detection

OpenAI explains why Codex Security uses AI constraint reasoning instead of traditional static analysis, aiming to cut false positives in code security scanning.

OpenAI Reveals How ChatGPT Now Fights Prompt Injection Attacks

OpenAI details its new 'Safe URL' defense, which treats prompt injection like social engineering; the attacks it targets succeeded 50% of the time before the fixes.

Anthropic AI Discovers 22 Firefox Vulnerabilities in Two Weeks

Claude Opus 4.6 found 14 high-severity Firefox bugs, nearly a fifth of all critical vulnerabilities fixed in 2025. Mozilla shipped fixes to hundreds of millions of users.

OpenAI and Paradigm Launch EVMbench to Test AI Smart Contract Hacking

New benchmark evaluates AI agents' ability to detect, patch, and exploit smart contract vulnerabilities. GPT-5.3-Codex scores 72.2% on exploit tasks.

OpenAI Deploys Web Index Defense Against AI Agent Data Theft

OpenAI reveals new security architecture using independent web indexing to prevent URL-based data exfiltration from ChatGPT and agentic AI systems.

Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs

Anthropic reveals DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks using 24,000 fake accounts to steal Claude AI capabilities.

Anthropic Launches Claude Code Security to Hunt Zero-Day Vulnerabilities

Anthropic's new Claude Code Security tool found 500+ vulnerabilities in open-source projects. Enterprise and open-source maintainers can apply for early access.

NVIDIA Red Team Releases AI Agent Security Framework Amid Rising Sandbox Threats

NVIDIA's AI Red Team publishes mandatory security controls for AI coding agents, addressing prompt injection attacks and sandbox escape vulnerabilities.

NVIDIA Research Exposes Critical VLM Security Flaws in AI Vision Systems

NVIDIA researchers demonstrate how adversarial image attacks can manipulate vision language models, turning traffic light recognition from 'stop' to 'go' with imperceptible changes.

GitHub's AI Security Protocols: Ensuring Safe and Reliable Agentic Operations

GitHub introduces security principles to safeguard AI agents such as Copilot, focusing on minimizing risks like data exfiltration and prompt injection.

Prompt Injection: A Growing Security Concern in AI Systems

Prompt injection is emerging as a significant security challenge for AI systems. Explore how these attacks work and the measures being taken to mitigate them.

Meta Introduces Agents Rule of Two for Enhanced AI Security

Meta AI unveils the 'Agents Rule of Two' to mitigate security risks in AI agents, focusing on reducing vulnerabilities such as prompt injection.