In what has been referred to as an “unprecedented anomaly”, cyber criminals are increasingly targeting the financial services sector during the Covid-19 coronavirus pandemic, with attacks on banks and other financial institutions spiking by 38% between February and March to account for 52% of all attacks observed by VMware’s Carbon Black Cloud. The sudden shift observed by Carbon Black threat researchers Patrick Upatham and Jim Treinen was mirrored by equally sharp declines in other verticals. Retail, for example, accounted for 31% of observed threats in February, but this dropped to 1.6% in March, suggesting that the shutdown of vast swathes of the industry has caused cyber criminals to turn their attention elsewhere. Similarly, healthcare, which usually falls in the top three verticals targeted by malicious actors, ended March as only the seventh most frequently attacked industry.
Hackers are selling two critical vulnerabilities for the video conferencing software Zoom that would allow someone to hack users and spy on their calls, Motherboard has learned. The two flaws are so-called zero-days, and are currently present in Zoom’s Windows and MacOS clients, according to three sources who are knowledgeable about the market for these kinds of hacks. The sources have not seen the actual code for these vulnerabilities, but have been contacted by brokers offering them for sale. Zero-day exploits (also called zero-days or 0days) are previously unknown vulnerabilities in software or hardware that attackers can exploit to compromise targets. Depending on what software they’re in, they can sell for thousands or even millions of dollars.
Facebook will begin showing notifications to users who have interacted with posts that contain “harmful” coronavirus misinformation, the company announced on Thursday, in an aggressive new move to address the spread of false information about Covid-19. The new policy applies only to misinformation that Facebook considers likely to contribute to “imminent physical harm”, such as false claims about “cures” or statements that physical distancing is not effective. Facebook’s policy has been to remove those posts from the platform. Under the new policy, which will be rolled out in the coming weeks, users who liked, shared, commented or reacted with an emoji to such posts before they were deleted will see a message in their news feed directing them to a “myth busters” page maintained by the World Health Organization (WHO).
Bad news: So much of your personal data has been hacked that lesson manuals on how to use it are the latest hot property
With more people looking to get into the online crime racket and huge caches of personal information cheap and easy to come by, documents describing the process of committing (and getting away with) online fraud are becoming hot commodities. This according to a study [PDF] from security biz Terbium Labs, which analyzed three massive darknet markets, and found that fraud guides were by far the most popular item being sold. The study was based on observations of Empire Market, White House Market, and Canadian HeadQuarters, three underground souks the researchers likened to Amazon and eBay in their massive footprints and use of ratings to rank merchants.
Stanford Medicine scientists hope to use data from wearable devices to predict illness, including COVID-19
Stanford Medicine researchers and their collaborators, Fitbit and Scripps Research, are launching a new effort that aims to detect early signs of viral infection through data from smartwatches and other wearable devices. By using wearable devices to measure things such as heart rate and skin temperature, which are known to rise when the body is fighting off an infection, the team seeks to train algorithms that indicate when your immune system is acting up. If the algorithms succeed, the team hopes they could help curb the spread of viral infections, such as COVID-19.
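The Stanford team’s actual models have not been published, but the basic idea of flagging physiological readings that deviate from a personal baseline can be illustrated with a toy sketch. Everything below (the function name, the z-score threshold, the sample heart-rate values) is a hypothetical simplification, not the researchers’ method:

```python
from statistics import mean, stdev

def flag_elevated_readings(baseline, readings, z_threshold=3.0):
    """Flag readings that sit far above a personal baseline.

    baseline: resting heart rates (bpm) recorded on healthy days
    readings: new daily resting heart rates to screen
    Returns the indices of readings whose z-score against the
    baseline exceeds z_threshold.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, r in enumerate(readings)
            if (r - mu) / sigma > z_threshold]

# A baseline clustered around 63 bpm; a spike to 78 bpm stands out.
baseline = [62, 64, 63, 61, 65, 63, 62, 64]
readings = [63, 64, 78, 62]
print(flag_elevated_readings(baseline, readings))  # → [2]
```

A real system would need to account for exercise, sleep, circadian variation and sensor noise before a raised heart rate could be treated as a sign of infection; this sketch only shows the baseline-deviation idea.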
Apple’s no stranger to the audio space, but it would appear the company is now gearing up to go toe-to-toe with the likes of Bose and Sony. The next headphones from the company, according to a Bloomberg report, might be some fancy over-ear wireless headphones featuring swappable parts. Citing unnamed sources, Bloomberg’s report notes that the modular headphones have been in the works since at least 2018 but have faced multiple delays. The prototypes supposedly sport a retro look with swiveling “oval-shaped ear cups” and a headband with thin metal arms. As you might expect, tech-wise these headphones will have similar features to the AirPods Pro, including wireless pairing and noise cancellation. Also included are Siri compatibility and some type of touch control.
If you’ve been on Facebook lately you’ve probably seen an influx of “fun” games suggesting you tell everyone the names of all the streets you’ve lived on, or all the cars you’ve had, or the song that was popular the year you were born. Or how about the one suggesting you post your senior-year high school photo? On the surface, all those suggestions seem like fun things you might share with friends. In reality, all those posts can be used by hackers to get access to your private accounts. Of course, the friend who posted it is probably not a hacker. They probably saw another friend post it, were bored, and decided to participate as well. If you pay attention, however, you’ll notice that the answers to all those fun games are also the same things you might enter when you’re trying to verify your identity on a website in order to reset your password.
A security lapse at controversial facial recognition startup Clearview AI meant that its source code, some of its secret keys and cloud storage credentials, and even copies of its apps were publicly accessible. TechCrunch reports that an exposed server was discovered by Mossab Hussein, Chief Security Officer at cybersecurity firm SpiderSilk, who found that it was configured to allow anyone to register as a new user and log in. Clearview AI first made headlines back in January, when a New York Times exposé detailed its massive facial recognition database, which consists of billions of images scraped from websites and social media platforms. Users upload a picture of a person of interest, and Clearview AI’s software will attempt to match it with any similar images in its database, potentially revealing a person’s identity from a single image.