The word “deepfake” is a combination of “deep learning” and “fake.” Deepfakes are falsified pictures, videos, or audio recordings. Sometimes the people in them are computer-generated fake identities that look and sound like they could be real people. Sometimes the people are real, but their images and voices are manipulated into doing and saying things they didn’t do or say. For example, a deepfake video could be used to recreate a celebrity or politician saying something they never said. Using these very lifelike fakes, attackers can spin up an alternate reality where you can’t always trust your eyes and ears. Some deepfakes have legitimate purposes, like movies bringing deceased actors back to life to recreate a famous character. But cyber attackers are starting to leverage the potential of deepfakes. They deploy them to fool your senses so they can steal your money, harass people, manipulate voters or political views, or create fake news. In some cases, they have even created sham companies staffed by deepfake employees. In light of these attacks, you must be even more careful about what you believe when reading news or social media.
Homomorphic encryption is considered a next-generation data security technology, but researchers have identified a vulnerability that allows them to steal data even as it is being encrypted. “We weren’t able to crack homomorphic encryption using mathematical tools,” says Aydin Aysu, senior author of a paper on the work and an assistant professor of computer engineering at North Carolina State University. “Instead, we used side-channel attacks. Basically, by monitoring power consumption in a device that is encoding data for homomorphic encryption, we are able to read the data as it is being encrypted. This demonstrates that even next generation encryption technologies need protection against side-channel attacks.”
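The attack the researchers describe works by correlating a device’s power draw with the data it is processing, rather than by breaking the math. The Python sketch below is a purely illustrative toy, not the researchers’ method: the secret byte, the Hamming-weight leakage model, and the noise level are all assumptions chosen to demonstrate the general idea of a correlation-based power side-channel. A simulated “device” leaks power proportional to the Hamming weight of an intermediate value, and an “attacker” who only sees inputs and power samples recovers the secret by testing which key guess best correlates with the measurements:

```python
import random

def hamming_weight(x):
    """Number of 1-bits in x; a common model for data-dependent power draw."""
    return bin(x).count("1")

SECRET = 0xA7  # hypothetical secret byte inside the device (an assumption)

def measure_power(plaintext, noise=0.8):
    """Simulated power sample: leakage is proportional to the Hamming
    weight of the intermediate value, plus Gaussian measurement noise."""
    intermediate = plaintext ^ SECRET
    return hamming_weight(intermediate) + random.gauss(0, noise)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

random.seed(1)
# Attacker observes many (input, power sample) pairs.
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [measure_power(p) for p in plaintexts]

# For each of the 256 key guesses, predict the leakage and keep the guess
# whose prediction correlates best with the measured power samples.
best_guess = max(
    range(256),
    key=lambda k: pearson([hamming_weight(p ^ k) for p in plaintexts], traces),
)
print(hex(best_guess))  # → 0xa7: the secret is recovered from power alone
```

The point of the toy is that the secret is never computed from the ciphertext: it falls out of a statistical correlation between observable power and guessed internal state, which is why countermeasures focus on masking or randomizing that internal state rather than on stronger math.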
While the world watches Ukraine, the British government has quietly dropped a requirement for mass surveillance of UK internet users by their service providers. A public consultation on the Electronic Communications (Security Measures) Regulations 2022, currently in draft, revealed that a controversial plan to bring back internet connection records monitoring has been deleted after pushback from ISPs. The latest version of the regulations, published this week, now says that the 13-month logging requirement applies only to monitoring “security critical functions” of telcos and ISP networks. Contained in a draft code of practice issued at the same time is a clear explanation that the legally required monitoring is intended to help “post-incident analysis and other such activity.”
The Russian government on Wednesday published a list of more than 17,500 IP addresses and 174 internet domains it says are involved in ongoing distributed denial-of-service attacks on Russian domestic targets. The list includes the FBI and CIA’s home pages, and other sites with top-level domain (TLD) extensions denoting they are registered through countries such as Belarus, Germany, Ukraine and Georgia, as well as the European Union. The Russian government did not publish any proof or evidence backing up its claims about the IP addresses or domains on its list. DDoS incidents can be tough to attribute to any specific actor, and otherwise benign internet domains can be hijacked by attackers to misdirect attention. Russia’s National Computer Incident Response & Coordination Center posted the data in a notice that includes 20 recommendations to ward off attacks, such as robust logging, using Russia-based DNS servers, conducting “an unscheduled change of passwords” and disabling external plugins for websites, according to a Google translation.
The secret police: Cops built a shadowy surveillance machine in Minnesota after George Floyd’s murder
Law enforcement agencies in Minnesota have been carrying out a secretive, long-running surveillance program targeting civil rights activists and journalists in the aftermath of the murder of George Floyd in May 2020. Run under a consortium known as Operation Safety Net, the program was set up a year ago, ostensibly to maintain public order as Minneapolis police officer Derek Chauvin went on trial for Floyd’s murder. But an investigation by MIT Technology Review reveals that the initiative expanded far beyond its publicly announced scope to include expansive use of tools to scour social media, track cell phones, and amass detailed images of people’s faces. Documents obtained via public records requests show that the operation persisted long after Chauvin’s trial concluded. What’s more, they show that police used the extensive investigative powers they’d been afforded under the operation to monitor individuals who weren’t suspected of any crime.
Twitter Inc. said today that some of its users will start seeing notes under their posts telling them that they’ve been flagged by members of the public for sharing dubious information. The program, called “Birdwatch,” started last year, although during the trial stage it was made available only to 10,000 users in the U.S. Rather than ask a third-party fact-checking company to review posts, Birdwatch asks other users to assess the reliability of a post’s information and, if deemed necessary, flag the post and explain why in the notes. At the time, Twitter said it was important to get regular people to fact-check posts, adding that in the pilot program people had been chosen from various parts of the political spectrum. Until now the details of what happened in the program have been available only on a dedicated Birdwatch website. Now it’s coming to the wider public, but still only to a selected group of people. When a person’s post is flagged by the fact-checkers, there will now be a rating system in which users can approve or disapprove of the added context in the notes.