OpenAI is tightening its security to better protect its innovations from corporate espionage. After Chinese startup DeepSeek released a competitive model in January, allegedly built using ‘distillation’ techniques to mimic OpenAI’s work, the company overhauled its protocols. Access to critical algorithms and upcoming products is now tightly controlled through a practice called ‘information tenting’: during development of its o1 model, for example, details were discussed only within a small, vetted circle, and only in secured office areas.
Additional measures include isolating key technology on offline, air-gapped systems, requiring biometric fingerprint scans for entry to sensitive areas, and enforcing a deny-by-default internet policy under which only pre-approved external connections are allowed (illustrated in the sketch below). These steps address external threats as well as internal leaks, such as the earlier disclosure of CEO Sam Altman’s internal comments, and they are backed by reinforced physical security at data centres and an expanded cybersecurity team. It’s a clear sign that OpenAI is serious about keeping its cutting-edge work out of the wrong hands.
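To make the deny-by-default idea concrete: under such a policy, every outbound connection is refused unless its destination appears on an explicit allowlist. OpenAI has not published its actual configuration, so the following is a minimal Python sketch of the principle only; the host names, allowlist entries, and function are hypothetical illustrations.

```python
# Sketch of a deny-by-default egress check. Illustrative only:
# the allowlist entries below are hypothetical, not OpenAI's.
from urllib.parse import urlparse

# Only destinations that have been explicitly pre-approved are permitted.
APPROVED_HOSTS = {
    "pypi.org",             # hypothetical example: a package index
    "api.partner.example",  # hypothetical example: a vetted partner API
}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is on the allowlist.

    Any destination not explicitly approved is denied by default.
    """
    host = urlparse(url).hostname
    return host in APPROVED_HOSTS

if __name__ == "__main__":
    for url in ("https://pypi.org/simple/", "https://untrusted.example/upload"):
        verdict = "ALLOW" if egress_allowed(url) else "DENY (default)"
        print(f"{verdict}: {url}")
```

The design choice worth noting is the direction of the default: a conventional blocklist fails open when a new, unvetted destination appears, whereas a deny-by-default allowlist fails closed, which is why it suits environments trying to prevent exfiltration of sensitive work.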