GFI – Top Cyber Stories for August 2019
Digital assistant makers admit staff listen in on users
Digital assistants – mainly home speaker/microphone combos such as Amazon’s Alexa and Google’s Assistant range, plus similar technologies deployed in smartphones and other devices – are fast becoming an everyday thing, present in growing numbers of homes and pockets. But with the devices themselves short on computing power, the go-to method has always been to upload users’ speech to the cloud, crunch it there and return a response fully formed for the device to simply read out.
This approach has led to increasing worries about the privacy and security of such devices, often focusing not only on what happens to the interpreted data stored by the big tech firms providing these services, but also on who can access the raw data, the everyday speech picked up from homes, cars and pockets around the world.
For the devices to operate, they have to listen constantly for their trigger word or phrase – which means misfires have, in the past, sent what should have been private speech back to base for interpretation.
The latest raft of privacy concerns about this voluntary surveillance cropped up throughout August, and seemed to show similar problems with most if not all providers – the “AI” powering these systems is far from perfect, and requires considerable human intervention and guidance. That means: human workers listening in on what the devices have picked up to help categorize, interpret and even respond to what users are saying in earshot of their assistant.
The month kicked off with Apple stopping the practice of granting contractors access to Siri recordings, and Google “pausing” human reviews of recordings from its Google Assistant range. Both practices were apparently intended to help “fine-tune” the products. By the end of the month, Apple had issued a major revamp of its privacy rules for Siri.
Motherboard then brought Microsoft into the firing line, with revelations about contractors listening in to Skype calls and Xbox Live conversations. Amazon’s Alexa, perhaps the best-known of the standalone device-based assistants, has long been known to rely on large teams of humans reviewing the recordings; by the end of the month, Amazon had unveiled options to disable human review.
As with most things described as “AI”, these systems are basically machine learning, reliant on previously categorized datasets; the technology is still heavily dependent on human input, both to start it up and to correct any errors or gaps in its digital guesswork. The technology may well be of enormous benefit to its users, but it looks like it will always require the sacrifice of some privacy.
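To make the "reliant on previously categorized datasets" point concrete, here is a deliberately toy sketch of supervised intent matching: every label the system can output exists only because a human attached it to an example utterance first, and anything that matches nothing a human has labelled gets flagged for review. The labels, phrases and threshold are invented for illustration and bear no relation to any real assistant's pipeline.

```python
from collections import Counter

# Human-labelled utterances are the entire "knowledge" of this toy classifier.
# Phrases and labels are illustrative only.
LABELLED = [
    ("play some jazz", "music"),
    ("turn on the lights", "home"),
    ("what's the weather", "weather"),
    ("dim the lights", "home"),
]

def classify(utterance: str) -> str:
    """Score each label by word overlap with its human-labelled examples."""
    words = set(utterance.lower().split())
    scores = Counter()
    for text, label in LABELLED:
        scores[label] += len(words & set(text.lower().split()))
    best, score = scores.most_common(1)[0]
    # Zero overlap with anything a human has labelled: the system has no
    # basis for a guess, so a person has to step in and label it.
    return best if score > 0 else "needs-human-review"
```

Real assistants use far richer models, but the shape of the dependency is the same: no labelled data, no answer.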
Vast swathe of biometric data left unprotected online
Two security researchers reported finding a huge stash of data related to a leading biometrics platform, all left largely unprotected on the internet.
The data, from clients of South Korean biometrics firm Suprema, included usernames and passwords, staff details including security clearance levels, building access logs, and most worryingly raw biometric data, including unencrypted fingerprint info and photos used for facial recognition. Plain-text passwords for administrator accounts, which could allow hackers to easily add or amend records in the access control systems of facilities around the world, were also readily accessible.
Suprema’s “BioStar 2” biometric platform is apparently a leader in the biometric building access control world, deployed in “1.5 million installations worldwide” according to the Register. Facilities secured by the platform include “multinational businesses, many small local businesses, governments, banks, and even the UK Metropolitan Police”.
The data haul was uncovered by a pair of Israeli researchers, who posted their findings at VPN comparison site VPNMentor. They report having difficulties getting Suprema to acknowledge their findings, although once the message eventually got through, the company sealed up the leak pretty quickly.
The open database uncovered by the researchers reportedly contained over 27 million records, including a million fingerprints. The scope of the data available would have made it ideal pickings for phishers as well as more sophisticated felons targeting any of the facilities relying on the system for their security.
Suprema noted, in response to the Register article, that there was no indication the data had been downloaded, or indeed viewed by anyone other than the researchers. Nevertheless, the fact that such data was accessible at all – much of it unencrypted, and lacking the hashing and other safeguards expected when handling highly sensitive biometric data – is another sign that biometrics and “smart locks” have a long way to go before they can match the reliability and trustworthiness of old-fashioned hardware-based security such as the simple lock and key.
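For readers wondering what “hashing” would have looked like for the plain-text administrator passwords in the exposed database, here is a minimal sketch using salted PBKDF2-HMAC-SHA256 from Python’s standard library. The function names and iteration count are illustrative choices, not anything drawn from Suprema’s systems; the point is simply that only the salt and derived digest are stored, so a database leak does not hand attackers usable passwords. (Raw biometric templates are harder to protect this way, since unlike a password they cannot be revoked and reissued.)

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    """Derive a salted digest; store (salt, digest), never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because each record gets a random salt, two users with the same password produce different digests, which also defeats precomputed lookup tables.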
Microsoft warns of yet another “wormable” Windows vulnerability
With all these shiny new technologies revealing their flaws and wobbles, it’s almost comforting to hear about a rather more old-school computer security problem: the classic “wormable” vulnerability in Microsoft Windows.
Microsoft issued a blog post in mid-August, alongside its regular patch release, describing the latest flaw to be spotted in Windows, which could allow an attacker to execute code remotely, opening up the possibility of a worm jumping from machine to machine with no human interaction. Outbreaks like Melissa, the LoveBug and Anna Kournikova will be familiar to more senior security watchers, though those mass-mailers still needed a user to open an attachment; truly wormable flaws allow malware to spread with no interaction at all, and their ilk has never really gone away.
This latest flaw affects most actively supported Windows versions and requires urgent patching or workarounds – a drill admins are by now amply practised in, and one that seems likely to keep us busy for some time to come.