Where did they say this? No linked sources. (Edit: just posted on their site - Disrupting malicious uses of AI https://openai.com/global-affairs/disrupting-malicious-uses-...)
Also, is this some kind of passive 'blame Meta', 'vilify open source' campaign by OpenAI?
Guess after DeepSeek used OpenAI's APIs to distill OpenAI models while training R1, they started looking more thoroughly into what their users are doing?
An incredible article on so many levels.
1. That they have built such a system based on Llama
2. That OpenAI has a "principal investigator" (with a hilarious picture)
3. That OpenAI can monitor what type of software people are building when they use Copilot/ChatGPT, etc.