OpenAI Acquires Promptfoo to Strengthen AI Agent Security and Red-Teaming
OpenAI has agreed to acquire Promptfoo, the open-source AI security and red-teaming platform used by over 25% of the Fortune 500, in a deal that will integrate the tool directly into OpenAI's enterprise agent platform. The acquisition signals OpenAI's growing focus on safety infrastructure as it pushes deeper into autonomous AI agent deployment.
OpenAI has agreed to acquire Promptfoo, the open-source AI security and red-teaming platform that has become the de facto standard for testing AI model behavior in enterprise environments. The acquisition, announced on March 9, will integrate Promptfoo's testing and evaluation capabilities directly into OpenAI's enterprise platform, particularly its Frontier product for building AI agent workflows.
Why Promptfoo Matters
Promptfoo started in 2024 as an open-source framework for evaluating AI prompts and model behavior — essentially unit testing for AI systems. Developers use it to define test cases, run them against models, and catch regressions in model behavior before they reach production. The tool can detect jailbreaks, prompt injection vulnerabilities, hallucinations, and other failure modes that are unique to AI applications.
The platform has grown rapidly, with over 350,000 developers having used it and 130,000 active monthly users. More than 25% of the Fortune 500 use Promptfoo in some capacity, making it arguably the most widely deployed AI security testing tool in the enterprise.
Open Source Commitment
OpenAI has committed to keeping Promptfoo open source — a notable pledge given the company's own complicated history with open-source commitments. The Promptfoo team will continue to develop the community edition while also building deeper integrations with OpenAI's commercial products.
The acquisition is part of a broader pattern of AI companies investing in safety and security infrastructure. As AI agents become more autonomous — executing code, making API calls, managing data — the attack surface grows exponentially. A compromised AI agent with access to production systems is potentially more dangerous than a compromised employee, because agents operate faster, don't question unusual requests, and can be manipulated through carefully crafted inputs.
Strategic Context
The deal also positions OpenAI to compete more effectively with Anthropic, which has made AI safety a central part of its brand identity and recently launched Claude Code Security. By acquiring the leading open-source AI security testing tool, OpenAI can credibly claim that its enterprise platform includes best-in-class security evaluation — a message that resonates with the CISOs and compliance teams who increasingly have veto power over AI deployments.
Financial terms of the acquisition were not disclosed.
Related Articles
NVIDIA GTC 2026 Keynote: Jensen Huang Unveils Vera Rubin Platform and Six New Chips
NVIDIA CEO Jensen Huang opened GTC 2026 in San Jose with the formal unveiling of the complete Vera Rubin GPU platform — six new chips featuring 288 GB of HBM4 memory, 336 billion transistors, and 50 PetaFLOPS of FP4 performance. Over 30,000 attendees from 190 countries gathered for the AI industry's most anticipated annual event.
NVIDIA Releases Nemotron 3 Super: Open 120B-Parameter Model Targets Enterprise Agentic AI
NVIDIA has released Nemotron 3 Super, a 120-billion-parameter open-weights model built on a hybrid Mamba-Transformer architecture with a one-million-token context window. The model delivers 5x throughput improvements over its predecessor and is designed specifically for enterprise agentic AI workflows.
NVIDIA GTC 2026 Preview: Jensen Huang Teases Chips That Will "Surprise the World"
With GTC 2026 just days away, NVIDIA CEO Jensen Huang promises to unveil "a few new chips the world has never seen before" — analysts expect the Vera Rubin GPU platform and first architectural details on Feynman, next-generation silicon designed for AI agent reasoning workloads.