
**Image Credits:** FABRICE COFFRINI/AFP / Getty Images
OpenAI’s ex-policy lead criticizes the company for ‘rewriting’ its AI safety history
====================================================================================
[Kyle Wiggers](https://techcrunch.com/author/kyle-wiggers/)
9:09 AM PST · March 6, 2025
A high-profile ex-OpenAI policy researcher, [Miles Brundage](https://techcrunch.com/2024/10/23/longtime-policy-researcher-miles-brundage-leaves-openai/), [took to social media](https://x.com/Miles_Brundage/status/1897426207131705739) on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.
Earlier this week, OpenAI published a [document](https://openai.com/safety/how-we-think-about-safety-alignment/) outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.
“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”
But Brundage claims that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.
“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage [wrote in a post on X](https://x.com/Miles_Brundage/status/1897426208658502046). “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”
Brundage, who joined OpenAI as a research scientist in 2018, was the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI’s AI chatbot platform ChatGPT.
[GPT-2](https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/), which OpenAI announced in 2019, was a progenitor of the AI systems powering [ChatGPT](https://techcrunch.com/2025/02/12/chatgpt-everything-to-know-about-the-ai-chatbot/). GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans.
While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially refused to release GPT-2’s source code, opting instead to give selected news outlets limited access to a demo.
The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 [had been exaggerated](https://en.wikipedia.org/wiki/GPT-2#cite_note-ethics-21), and that there wasn’t any evidence the model could be abused in the ways OpenAI described. AI-focused publication The Gradient went so far as to publish an [open letter](https://thegradient.pub/openai-please-open-source-your-language-model/) requesting that OpenAI release the model, arguing it was too technologically important to hold back.
OpenAI eventually did release a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.
“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it,” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob. would have been OK, but that doesn’t mean it was responsible to YOLO it [sic] given info at the time.”
Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.
“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way,” Brundage added.
OpenAI has historically [been accused](https://techcrunch.com/2024/05/18/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says/) of prioritizing “shiny products” at the expense of safety, and of [rushing product releases](https://www.washingtonpost.com/technology/2024/07/12/openai-ai-safety-regulation-gpt4/) to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.
Competitive pressures have only ramped up. [Chinese AI lab DeepSeek](https://techcrunch.com/2025/01/28/deepseek-everything-you-need-to-know-about-the-ai-chatbot-app/) captured the world’s attention with its openly available [R1](https://techcrunch.com/2025/01/27/deepseek-claims-its-reasoning-model-beats-openais-o1-on-certain-benchmarks/) model, which matched OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has [admitted](https://techcrunch.com/2025/01/31/sam-altman-believes-openai-has-been-on-the-wrong-side-of-history-concerning-open-source/) that DeepSeek has lessened OpenAI’s technological lead, and [said](https://www.businessinsider.com/sam-altman-openai-release-better-models-in-response-to-deepseek-2025-1) that OpenAI would “pull up some releases” to better compete.
There’s a lot of money on the line. OpenAI loses billions annually, and the company has [reportedly](https://www.theinformation.com/articles/openai-projections-imply-losses-tripling-to-14-billion-in-2026?rc=d8pcat) projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line near-term, but possibly at the expense of safety long-term. Experts like Brundage question whether the trade-off is worth it.
Topics
[AI](https://techcrunch.com/category/artificial-intelligence/), [Miles Brundage](https://techcrunch.com/tag/miles-brundage/), [OpenAI](https://techcrunch.com/tag/openai/), [policy](https://techcrunch.com/tag/policy-2/)

Kyle Wiggers
AI Editor
Kyle Wiggers is TechCrunch’s AI Editor. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Manhattan with his partner, a music therapist.
[View Bio](https://techcrunch.com/author/kyle-wiggers/)
