It might already be too late.
Leaky Faucet
After catching snippets of text generated by OpenAI’s powerful ChatGPT tool that looked a lot like company secrets, Amazon is now trying to stop its employees from leaking anything else to the algorithm.
According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.
The issue seems to have come to a head recently because Amazon staffers, like other tech workers across the industry, have begun using ChatGPT as a “coding assistant” of sorts to help them write or improve lines of code, the report notes.
While using the AI to write fresh code isn’t necessarily a problem from a proprietary data perspective, it’s a different story when employees feed it existing internal code to improve, which is already happening, according to the lawyer.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”
Copycat Killer
The lawyer also revealed, per Insider, that Amazon is developing “similar technology” to ChatGPT — a revelation that appeared to pique the interest of employees who said that using the AI to assist their code-writing had resulted in a tenfold productivity boost.
“If there is a current initiative to build a similar service,” one employee said in the Slack exchanges, “I would be interested in committing time to helping build it if needed.”
While other industries flail and fuss at the concept of being replaced by AI, tech workers are seemingly more inclined to welcome it as a helpful coding tool — to the dismay of their employers’ lawyers.