A U.S. district judge in California has ruled that Google can be sued for collecting data on users even when they use "private browsing mode" in their chosen browsers. The lawsuit in question is a class action brought by three Google users (Chasom Brown, Maria Nguyen, and William Byatt) who used private browsing mode in Chrome and in Safari, Apple's web browser, in recent years. It claims that Google tracks and collects consumer browsing history and other web activity data "no matter what safeguards" users implement. In this case, Brown v. Google, the specific safeguard referenced is private browsing mode, a feature offered by many browsers. On Google's Chrome browser, this is referred to as "Incognito mode." Nonetheless, the complaint alleges that Google still tracks users in private browsing mode using Google Analytics, Google Ad Manager, the Google app on mobile devices, and the Google sign-in button for websites.
Today, machine learning and artificial intelligence systems, trained by data, have become so effective that many of the largest and most well-respected companies in the world use them almost exclusively to make mission-critical business decisions. The outcome of a loan, insurance, or job application, or the detection of fraudulent activity, is now determined by fully automated processes with no human in the loop. In a past life, I worked on machine learning infrastructure at Uber. From estimating ETAs to dynamic pricing and even matching riders with drivers, Uber relies on machine learning and artificial intelligence to enhance customer happiness and increase driver satisfaction. Frankly, without machine learning, I question whether Uber would exist as we know it today. For data-driven businesses, there is no doubt that machine learning and artificial intelligence are enduring technologies that are now table stakes in business operations, not differentiating factors.
Imagine if some not-too-distant future version of Tinder was able to crawl inside your brain and extract the features you find most attractive in a potential mate, then scan the romance-seeking search space for whichever partner possessed the highest number of these physical attributes. We're not just talking qualities like height and hair color, either, but a far more complex equation based on a dataset of everyone you've ever found attractive before. In the same way that the Spotify recommendation system learns the songs you enjoy and then suggests others that conform to a similar profile — based on features like danceability, energy, tempo, loudness, and speechiness — this hypothetical algorithm would do the same for matters of the heart. Or, at least, the loins. Call it physical attractiveness matchmaking by way of A.I. To be clear, Tinder isn't — as far as I'm aware — working on anything remotely like this. But researchers from the University of Helsinki and Copenhagen University are. And while that description might smack somewhat of a dystopian shallowness pitched midway between Black Mirror and Love Island, in reality their brain-reading research is pretty darn fascinating.
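The Spotify-style profile matching described above can be sketched in a few lines. This is a minimal illustration, not Spotify's actual algorithm: it assumes each track is reduced to a vector of feature values (the feature names mirror Spotify's public audio-features vocabulary, pre-normalized here to a 0–1 scale), and it ranks candidates by cosine similarity to a track the user liked. All track names and values are invented for the example.

```python
import math

# Feature names borrowed from Spotify's audio-features vocabulary;
# values are assumed to be pre-normalized to a 0-1 scale.
FEATURES = ["danceability", "energy", "tempo", "loudness", "speechiness"]

def as_vector(track):
    """Extract a track's feature values in a fixed order."""
    return [track[f] for f in FEATURES]

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical profile)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def recommend(liked, candidates):
    """Rank candidate tracks by similarity to a track the user enjoyed."""
    ref = as_vector(liked)
    return sorted(candidates,
                  key=lambda t: cosine_similarity(ref, as_vector(t)),
                  reverse=True)

# Invented example data.
liked = {"name": "liked song", "danceability": 0.8, "energy": 0.7,
         "tempo": 0.5, "loudness": 0.6, "speechiness": 0.1}
close = {"name": "similar song", "danceability": 0.8, "energy": 0.7,
         "tempo": 0.5, "loudness": 0.6, "speechiness": 0.1}
far = {"name": "different song", "danceability": 0.1, "energy": 0.2,
       "tempo": 0.9, "loudness": 0.3, "speechiness": 0.8}

ranked = recommend(liked, [far, close])
print([t["name"] for t in ranked])  # the similar song ranks first
```

The hypothetical matchmaking algorithm would work analogously, swapping audio features for whatever physical attributes the brain-reading model extracts.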
Cyberattacks targeting healthcare are putting patients at unnecessary risk, and more must be done to hold the cyber criminals involved to account, warns the CyberPeace Institute, an international body dedicated to protecting the vulnerable in cyberspace. The healthcare industry has been under increased strain over the past year due to the impact of the COVID-19 pandemic, which has prompted some cyber criminals to conduct ransomware campaigns and other cyberattacks. Faced with a ransomware attack, a hospital might pay the cyber criminals the ransom they demand in return for the decryption key because it's perceived to be the quickest and easiest way to restore the network – and, therefore, the most direct route to restoring patient care. That doesn't stop the incident from being traumatic for staff, who might suddenly find themselves unable to carry out procedures, while patients may be sent to other hospitals for treatment – something that could prove risky if time is a factor. But even months on from a cyberattack, patient care can remain affected.
I didn't expect it to be that quick. While I was on a Google Hangouts call with a colleague, the hacker sent me screenshots of my Bumble and Postmates accounts, which he had broken into. Then he showed he had intercepted texts that were meant for me. Later he took over my WhatsApp account, too, and texted a friend pretending to be me. Looking down at my phone, there was no sign it had been hacked. I still had reception; the phone said I was still connected to the T-Mobile network. Nothing was unusual there. But the hacker had swiftly, stealthily, and largely effortlessly redirected my text messages to himself. And all for just $16. I hadn't been SIM swapped, where hackers trick or bribe telecom employees to port a target's phone number to their own SIM card. Instead, the hacker used a service by a company called Sakari, which helps businesses do SMS marketing and mass messaging, to reroute my messages to him. This overlooked attack vector shows not only how unregulated commercial SMS tools are but also how there are gaping holes in our telecommunications infrastructure, with a hacker sometimes just having to pinky swear they have the consent of the target.