AI can steal passwords with 95% accuracy: Study 

Monitor News Desk

A disconcerting study released earlier this month suggests that hackers could exploit artificial intelligence (AI) to recover user passwords with near-perfect accuracy by “listening” to the sound of keystrokes, all without the user’s knowledge.

Conducted by a group of computer scientists from the United Kingdom, the study focused on building an AI model that recognizes keyboard sounds on the 2021 MacBook Pro, which the researchers describe as a “popular off-the-shelf laptop.”

Results from the study, published on Cornell University’s arXiv preprint server, indicate that when the AI model was applied to keystrokes recorded by a nearby smartphone, it reproduced the typed password with an accuracy rate of 95%.

The AI tool also performed well during a Zoom video call, where it “listened” to keystrokes through the laptop’s microphone. In this scenario, it reproduced the typing with 93% accuracy, a record for that medium.

Researchers emphasized the threat posed by malicious actors who could exploit this vulnerability through what is known as an “acoustic side-channel attack.” They noted that users tend to underestimate the risk posed by keyboard sounds and rarely take steps to mask their audible keystrokes.

The report notes, “The ubiquity of keyboard acoustic emanations not only makes them a readily available attack vector but also prompts victims to underestimate (and thus not try to hide) their output.” As an example, it points out that while individuals often take steps to shield their screens when typing passwords, they commonly overlook masking the sound of keystrokes.

To evaluate the accuracy of the AI model, the researchers ran a series of tests on the laptop. They pressed 36 keys 25 times each, varying the pressure and finger placement for each press. The AI program learned to distinguish the unique features of each key press, such as its sound waveform.
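
The article does not describe the model itself, and the study reportedly relied on deep learning applied to real MacBook recordings. As a rough illustration of how such a pipeline fits together, the sketch below classifies synthetic keystroke sounds from their log-mel spectrograms using an off-the-shelf support vector machine; the key set, sample rate, and signal shapes are invented stand-ins, not the researchers’ setup.

```python
# Toy sketch of an acoustic keystroke classifier. NOT the researchers' code:
# it uses synthetic audio and a simple SVM purely to show the pipeline shape.

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 44_100                  # assumed sample rate (Hz)
KEYS = list("abcdefghij")    # 10 stand-in keys instead of the study's 36
PRESSES_PER_KEY = 25         # the study pressed each key 25 times

def synth_keystroke(key_idx: int, rng: np.random.Generator) -> np.ndarray:
    """Fake a 100 ms keystroke: a damped tone whose pitch depends on the key,
    buried in noise. A real attack would segment presses out of a recording."""
    t = np.linspace(0, 0.1, int(SR * 0.1), endpoint=False)
    freq = 1000 + 150 * key_idx + rng.normal(0, 10)
    click = np.sin(2 * np.pi * freq * t) * np.exp(-t * 60)
    return click + rng.normal(0, 0.05, t.size)

def features(signal: np.ndarray) -> np.ndarray:
    """Summarise one press as a flattened log-mel spectrogram."""
    mel = librosa.feature.melspectrogram(y=signal, sr=SR, n_mels=32)
    return librosa.power_to_db(mel).flatten()

rng = np.random.default_rng(0)
X, y = [], []
for idx, key in enumerate(KEYS):
    for _ in range(PRESSES_PER_KEY):
        X.append(features(synth_keystroke(idx, rng)))
        y.append(key)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"toy keystroke-recognition accuracy: {clf.score(X_test, y_test):.0%}")
```

On such clean synthetic clicks the toy classifier scores close to 100%; the point of the sketch is only the mechanics (segment a press, extract a spectrogram, classify it), not the study’s reported figures, which were achieved on far noisier real-world recordings.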

Placed approximately 17 centimeters away from the keyboard, an iPhone 13 mini acted as the listening device. The study was conducted by Joshua Harrison of Durham University, Ehsan Toreini of the University of Surrey, and Maryam Mehrnezhad of Royal Holloway, University of London.

The study not only exposes the potential risks posed by AI-powered hacking techniques but also underscores the need for stringent safeguards and awareness among users. Concerns have been raised by prominent figures in academia and the tech industry, including OpenAI founder Sam Altman and entrepreneur Elon Musk, who have emphasized the importance of mitigating AI’s potential adverse impacts.
