Tube4vids logo

Your daily adult tube feed all in one place!

Fears workplace affairs could be exposed as Slack flaw gives hackers access to private channels

PUBLISHED
UPDATED
VIEWS

Hackers have developed a 'difficult to trace' new method to exploit AI tools inside workplace messaging app Slack — tricking its chatbot into sending malware.

The popular collaboration platform has gained prominence for facilitating quick communications between coworkers, with some linking it to a new age of 'micro-cheating' and office affairs

The cybersecurity team within Slack's research program said Tuesday that they had patched the issue on the same day outside experts first reported the flaw to them. 

But the vulnerability, which lets hackers disguise malicious code inside uploaded documents and Google Drive files, highlights the growing risks posed by 'artificial intelligence' that lacks the 'street smarts' to deal with unscrupulous user requests.

While the independent security researcher who first discovered the new flaw praised Slack for its diligent response, they went public with news of the AI's vulnerability 'so that users could turn off the necessary settings to decrease their exposure.'

Slack's AI vulnerability allowed hackers to disguise their malware inside uploaded documents and Google Drive files held within the workplace collaboration app (pictured above)

Slack's AI vulnerability allowed hackers to disguise their malware inside uploaded documents and Google Drive files held within the workplace collaboration app (pictured above)

Cybersecurity researchers from the firm PromptArmor — which specializes in finding vulnerabilities in 'large language model' (LLM) AI software — first identified the issue after a recent update to Slack AI made it more vulnerable to hide malware.

'After August 14th, Slack also ingests uploaded documents, Google Drive files, etc,' PromptArmor explained in their report, 'which increases the risk surface area.'

Slack AI's integration within the app's workforce communications tools made it uniquely easy for bad actors to disguise their attempts to steal private data as 'error messages' asking users to 'reauthenticate' their own access to this same data.

Specifically, the new update made it possible for a hacker to create a private or public chat channel in Slack as a cut-out that helped them disguise the origin of their malicious code. 

'Even though Slack AI clearly ingested the attacker's message, it does not cite the attacker's message as a source for the output,' PromptArmor explained. 

The result, as they put it: 'If Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.'

A senior manager for cybersecurity strategy with Synopsys Software Integrity Group, Akhil Mittal, said current AI tools have proven much too susceptible to these kinds of attacks since Silicon Valley has rushed its many competing AI offerings to consumers.

The cybersecurity team within Slack's research program said Tuesday that they had patched the issue on the same day outside experts first reported the flaw to them (pictured)

The cybersecurity team within Slack's research program said Tuesday that they had patched the issue on the same day outside experts first reported the flaw to them (pictured)

'This really makes me question how safe our AI tools are,' Mittal told tech news site Dark Reading

'It's not just about fixing problems,' the cybersecurity expert said, 'but making sure these tools manage our data properly.'

And the widespread use of Slack across multiple companies and organizations increases the risk from future flaws of this kind, according to PromptArmor.

'Public channel sprawl is a huge problem,' They noted. 

'There are many Slack channels, and team members can't keep track of the ones they are a part of, let alone keeping track of a [malicious] public channel that was created with only one member.'

Comments