Advancing Artificial Intelligence Security: Our Partnership with OpenAI and Red Team Operations

Red Team Operations and offensive security assessments have always been a critical part to a mature security program, whether as a validation exercise or to identify new attack paths in a technology implementation. AI and LLMs are advancing at such a rapid pace that it is natural for both users and organizations to question the security implications of these technologies. That is why we are incredibly proud to announce our partnership with OpenAI to strengthen their security posture, to conduct joint research, and to develop open-source techniques and tools. OpenAI’s announcement provides additional details on how we’ve partnered together.

Partnership

Our partnership is founded on a shared vision to secure AI systems and user’s data ensuring trustworthy AI is accessible to all. SpecterOps has always believed in transparency, both with our customers and the community — and in the advent of machine learning, artificial intelligence, and society’s increased usage of large language models; security and privacy are more critical to organizations than ever before. Leveraging OpenAI’s expertise in developing models and SpecterOps industry leadership in understanding attack paths within technologies, we are collaborating on:

  • Security Research to jointly discover and share novel approaches to defend against threat actors and detect malicious activity
  • Continuous Security Assessments that evaluate emerging technologies attack paths unique to artificial intelligence and sharing the outcomes where possible
  • Red Team Exercises to validate and improve the detection and response program at scale by looking at the unique complexities of the scale of OpenAI’s mission

AI and Security

A 2024 report from McKinsey captured that 72% of worldwide respondents have adopted AI and a post from National University states “83% of companies claim that AI is a top priority in their business plans”. With rapid developments by foundation model providers and businesses increasingly adopting AI comes new and additional risks. Some of the more distinct threats to AI systems can include:

  • Sensitive information disclosure including PII, trade secrets, confidential information, or access credentials to internal or cloud computing resources
  • Model and data poisoning by introducing malicious information into model training, fine-tuning, or databases for retrieval augmented generation processes
  • Prompt injection to bypass or maliciously control the system in unintended ways

Because of the growing AI use and the potential risks to creators and consumers, getting ahead of security issues today will better serve how AI impacts humanity tomorrow.

SpecterOps AI Red Team Services

SpecterOps is an industry leader at thinking like an adversary and leveraging red team operations to challenge assumptions to improve the security of assessed technologies. We do this by leveraging years of experience working with clients across all industries to identify and execute novel attack paths and through research efforts to create and publish tools and techniques accessible to all.

We deliver AI red team services by leveraging our adversarial mindset and security expertise to evaluate AI technologies through their design, development, deployment, and operations and maintenance stages. AI systems are decomposed into their individual components and holistically evaluated for attack vectors and vulnerabilities both unique to artificial intelligence and traditional technologies.

Our AI red team services are composed of:

  • Threat modeling to understand a model’s acceptable use and failures modes and then mapping out unique attack vectors that can negatively impact model development
  • Direct model inference assessments for security, safety and trustworthiness, alignment, and privacy
  • Penetration tests to identify and exploit weaknesses in AI systems’ full applications stack, identity and access management services, data storage, cloud and compute resources, agentic workflows, pipelines, and all other supporting infrastructure
  • Red Team Operations to exploit attack paths providing stimuli for monitoring, detection, and incident response

In partnership with OpenAI we are deepening the quality of our assessments by having direct insights into state-of-the-art model technologies. As we work together, we’re able to iterate faster and incorporate lessons learned, communicate risks that have significant value, and generate actionable remediation guidance to ensure systems and data are both secured and resilient.

Conclusion

SpecterOps is excited to partner with OpenAI to continue advancing the safety and security of AI. This new partnership marks the start of continuous assessments, research, and innovative improvements to defending systems from risks distinctive to artificial intelligence. We are even more excited to be able share outcomes with a larger audience. In collaboration with OpenAI, our world class red team services will be at the forefront of security ensuring a more secure world for our clients and the community.

References

Advancing Artificial Intelligence Security: Our Partnership with OpenAI and Red Team Operations was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

Introduction to Malware Binary Triage (IMBT) Course

Looking to level up your skills? Get 10% off using coupon code: MWNEWS10 for any flavor.

Enroll Now and Save 10%: Coupon Code MWNEWS10

Note: Affiliate link – your enrollment helps support this platform at no extra cost to you.

Article Link: Advancing Artificial Intelligence Security: Our Partnership with OpenAI and Red Team Operations | by Robby Winchester | Mar, 2025 | Posts By SpecterOps Team Members