OpenAI Launches Safety Fellowship for Outside Researchers
OpenAI is recruiting external talent to independently study AI safety and alignment through a new pilot fellowship program.
OpenAI is opening its doors — slightly — to outside experts. The company just unveiled a Safety Fellowship program designed to bring in external researchers, engineers, and practitioners to tackle AI safety and alignment head-on.
The pilot program has two goals: support independent research into keeping advanced AI systems safe, and build up the next wave of talent in the field. It's a deliberate move to get fresh eyes on problems that OpenAI can't — or shouldn't — solve entirely in-house.
The fellowship targets people already working in safety-adjacent spaces but gives them dedicated support to dig deeper. Think of it as OpenAI admitting that alignment research needs a bigger tent.
Whether this translates into meaningful safety improvements or just good PR remains to be seen. But the structure suggests OpenAI is at least investing in independent, outside-the-company safety research capacity.