It might have been as recently as yesterday that a scam email riddled with obvious typos and grammatical errors weaseled into your inbox. Someday soon, those giveaway mistakes may be gone.
Mistake-laden text has long been one of the first tipoffs that a suspicious email should be flagged or ignored. But cybersecurity experts say cybercriminals are moving well beyond spellcheck-level slip-ups.
Steven W. Teppler, who leads the cybersecurity and privacy practice at the law firm Mandelbaum Barrett P.C., said natural language processing tools such as ChatGPT are eliminating some of the “tells” in hackers’ age-old attempts to impersonate and scam the unwary.
“If, all of a sudden they started misspelling or using non-American-style English words, like colour instead of color, that would give it away,” he said. “But, for threat actors who have been engaged in phishing campaigns, (ChatGPT) permits them to have much better use of English than they would otherwise.”
Emerging artificial intelligence-driven platforms also give hackers the potential of replicating audio and even video recordings. Teppler said one can find examples of this in the “deepfakes” that have recently circulated of figures such as former President Barack Obama being imitated with tech trickery.
Those realistic audio and video recordings could be used, and in some cases, Teppler suspects, already have been, to trick employees, clients and vendors into unwittingly participating in fraud schemes.
“Between the audio, video and chat options, you have this toxic mix of potential criminal access and ways to defraud people,” he said, adding that it was “manipulation of perceived reality — and that’s incredibly difficult to detect and prevent.”
It wouldn’t be a stretch. Attacks in which a cybercriminal convincingly impersonates a finance or accounting executive to dupe lower-level employees already happen regularly. Advanced AI tools simply allow more of the same calamity to ensue.
And, after all that, Teppler dryly adds: now for the bad news.
A tool such as ChatGPT, which was developed by research outfit OpenAI and released only late last year, is new enough that its full impact on the cybersecurity landscape likely hasn’t been felt yet — or even detected.
“This is really only a few months old,” Teppler said. “We won’t know whether the attacks from this have been reported yet. I expect we’ll be finding out about a slew of these in the coming months.”
As for exactly how these platforms might be utilized, it’s a wait-and-see approach for those who handle clients’ cybersecurity concerns.
Teppler is advising vigilance. More than the normal amount.
“Because, right now, there’s very little in terms of gatekeeping for this technology,” he said. “The technology is racing away way too fast for anyone to manage, despite what might be said about filters for this activity or whatever. It’s the wild, wild West.”