In May 2017, a phishing attack now known as “the Google Docs worm” spread across the internet. It used special web applications to impersonate Google Docs and request deep access to the emails and contact lists in Gmail accounts. The scam was so effective because the requests appeared to come from people the target knew. If they granted access, the app would automatically distribute the same scam email to the victim’s contacts, thus perpetuating the worm. The incident ultimately affected more than a million accounts before Google successfully contained it. New research indicates, though, that the company’s fixes don’t go far enough. Another viral Google Docs scam could happen anytime.
Google Workspace phishing and scams derive much of their power from manipulating legitimate features and services to abusive ends, says independent security researcher Matthew Bryant. Targets are more likely to fall for the attacks because they trust Google’s offerings. The tactic also largely puts the activity outside the purview of antivirus tools or other security scanners, since it’s web-based and manipulates legitimate infrastructure.
In research presented at the Defcon security conference this month, Bryant found workarounds that attackers could potentially use to get past Google’s enhanced Workspace protections. And the risk of Google Workspace hijinks isn’t just theoretical. A number of recent scams use the same general approach of manipulating real Google Workspace notifications and features to make phishing links or pages look more legitimate and appealing to targets.
Bryant says all of those issues stem from Workspace’s conceptual design. The same features that make the platform flexible, adaptable, and geared toward sharing also offer opportunities for abuse. With more than 2.6 billion Google Workspace users, the stakes are high.
“The design has issues in the first place, and that leads to all of these security problems, which can’t just be fixed—most of them are not magical one-off fixes,” Bryant says. “Google has made an effort, but these risks come from specific design decisions. Fundamental improvement would involve the painful process of potentially re-architecting this stuff.”
After the 2017 incident, Google added more restrictions on apps that can interface with Google Workspace, especially those that request any type of sensitive access, like emails or contacts. Individuals can employ these “Apps Script” apps, but Google primarily supports them so enterprise users can customize and expand Workspace’s functionality. With the strengthened protections in place, if an app has more than 100 users, the developer needs to submit it to Google for a notoriously rigorous review process before it can be distributed. Meanwhile, if you try to run an app that has fewer than 100 users and hasn’t been reviewed, Workspace shows you a detailed warning screen that strongly discourages you from going ahead.
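To make that concrete, here is a hedged sketch, not drawn from Bryant’s research, of what a small Apps Script app with sensitive access might look like; it is written in TypeScript as supported by Google’s clasp tooling. GmailApp is the real Apps Script Gmail service, while the function name and logic are invented for illustration. Code like this is exactly the kind of thing that now has to pass Google’s review once it reaches more than 100 users.

```typescript
// Illustrative only: a container-bound Apps Script function that requests
// sensitive Gmail access. "collectRecentSenders" is a made-up name; GmailApp
// is the real Apps Script service (typed via @types/google-apps-script).
function collectRecentSenders(): string[] {
  // Reading inbox threads requires a sensitive Gmail OAuth scope, the kind
  // of permission that triggers Google's app review once the script has
  // more than 100 users.
  const threads = GmailApp.getInboxThreads(0, 10); // ten most recent threads
  const senders: string[] = [];
  for (const thread of threads) {
    for (const message of thread.getMessages()) {
      senders.push(message.getFrom()); // e.g. "Alice <alice@example.com>"
    }
  }
  return senders;
}
```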
Even with those protections in place, Bryant found a loophole. Small, unreviewed apps can run with no alerts if you receive one attached to a document from someone in your own Google Workspace organization. The idea is that you trust your colleagues enough not to need the hassle of stringent warnings and alerts. Design choices like that, though, leave potential openings for attack.
For example, Bryant found that an attacker could share the link to a Google Doc that has one of these apps attached, changing the word “edit” at the end of the URL to the word “copy.” A user who opens the link will see a prominent “Copy document” prompt. They could simply close the tab, but if they think the document is legitimate and click through to make a copy, they become the creator and owner of that copy. They also get listed as the “developer” of the app that’s still embedded in the document. So when the app asks for permission to run and to access their Google account data, with no warnings attached, the victim sees their own email address in the prompt.
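To show how small the link manipulation is, here is a minimal TypeScript sketch of the URL rewrite Bryant describes; the document ID and variable names are invented, and this is not code from his research.

```typescript
// Illustrative only: rewriting a Docs share link so it presents the
// "Copy document" prompt. The document ID below is made up.
const shareLink = "https://docs.google.com/document/d/FAKE_DOC_ID/edit";

// Swapping the trailing "edit" for "copy" is the whole trick: the recipient
// sees a copy prompt, and accepting it makes them the owner (and listed
// developer) of the copied document and its embedded script.
const copyLink = shareLink.replace(/\/edit$/, "/copy");

console.log(copyLink); // https://docs.google.com/document/d/FAKE_DOC_ID/copy
```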
Not all of an app’s components copy over with the document, but Bryant found a way around this, too. An attacker could embed the missing pieces in Google Workspace’s version of a task automation “macro,” which is closely analogous to the macros so often abused in Microsoft Office. Ultimately, an attacker could get someone in an organization to take ownership of, and grant access to, a malicious app that can in turn request access to other people’s Google accounts within the same organization without triggering any warnings.
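For readers unfamiliar with Workspace “macros,” here is a hedged sketch of what document-bound Apps Script automation looks like, again in TypeScript. The menu label and handler name are invented, and nothing here reproduces Bryant’s actual technique; the point is simply that, like an Office macro, this code travels with the document and runs on behalf of whoever owns it.

```typescript
// Illustrative only: an Apps Script simple trigger bound to a Google Doc.
// onOpen runs automatically when the document is opened, roughly the way
// an auto-executing Office macro would. The menu label and handler name
// are hypothetical.
function onOpen(): void {
  DocumentApp.getUi()
    .createMenu("Custom tools")
    .addItem("Run report", "runReport") // wires the menu item to the handler below
    .addToUi();
}

// Hypothetical handler invoked from the custom menu above.
function runReport(): void {
  DocumentApp.getActiveDocument()
    .getBody()
    .appendParagraph("Report generated by the embedded script.");
}
```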