Key Takeaway
OpenAI’s ChatGPT agent can assist in reviewing team documents and conducting competitive research, but it operates under strict limitations: it cannot execute code, download files, or access local systems. When it reaches sensitive sites, it pauses so the user can oversee its actions. OpenAI acknowledges the security risks, noting that agents can be vulnerable to hidden malicious instructions that may compromise data. Despite extensive security testing, the company admits that not every attack can be prevented as AI agents become more prevalent. Users retain control over browser memories, which can be reviewed or deleted at any time in the settings.
At work, OpenAI states, “you can ask ChatGPT to open and review past team documents, conduct new competitive research – and compile insights into a team brief.”
OpenAI is maintaining strict control over the agent’s capabilities.
The agent cannot execute code in the browser, download files, or install extensions, nor can it access other applications or the local file system.
When it encounters sensitive sites, such as financial platforms, the agent pauses to ensure that the user can monitor its actions.
The company recognizes the security risks, noting that “agents are vulnerable to hidden malicious instructions, which may be concealed in places like a webpage or email, with the intention of overriding the ChatGPT agent’s intended behavior.”
Such exploits could lead to data exposure or unintended actions.
Despite conducting thousands of hours of security testing, OpenAI admits that “our safeguards will not prevent every attack that arises as AI agents become more prevalent.”
How privacy controls give users the final say
Browser memories are an optional feature.
Users can review or archive them at any time within the settings, and OpenAI confirms that “deleting browsing history removes any associated browser memories.”