AboutDFIR.com – The Definitive Compendium Project
Digital Forensics & Incident Response

Blog Post

InfoSec News Nuggets 7/30/2024

Passwords disappear for millions of Windows users thanks to Google

To put it bluntly, it’s not been a great month for tech giants. Earlier this month, the CrowdStrike bug brought many businesses to a complete standstill and left millions facing the Blue Screen of Death, causing disruption many are still recovering from, with postponed flights and surgeries among the fallout. Well, not to be left out, Google caused its own chaos, according to this report from Forbes. Windows users clearly haven’t suffered enough, and an estimated 15 million of them were locked out of their own passwords for nearly 18 hours from July 24 to July 25 due to “a change in product behavior” with Google Chrome.

 

Intruders at HealthEquity rifled through storage, stole 4.3M people’s data

HealthEquity, a US fintech firm for the healthcare sector, admits that a “data security event” it discovered at the end of June affected the data of a substantial 4.3 million individuals. Stolen details include addresses, telephone numbers and payment data. The incident began in March but was only detected in June. The company said in a letter to those affected that it received an alert on March 25 about a “systems anomaly requiring extensive technical investigation and ultimately resulting in data forensics” and that work continued until June 26 – the point at which it became aware that criminals had stolen sensitive data.

 

European banks gain insight from first-ever cyber stress test

The European Central Bank (ECB) on Friday released the results from its first-ever cyber resilience stress test on over 100 European banks – declaring there was “room for improvement.” The “predominantly qualitative exercise,” which was carried out in January, was designed to assess how banks respond to and recover from a cyberattack, as opposed to simply looking at their ability to prevent one, the ECB said. The ECB had set ‘improving cyber resilience’ as one of its key focus areas for the next two years.

 

Proofpoint Email Routing Flaw Exploited to Send Millions of Spoofed Phishing Emails

An unknown threat actor has been linked to a massive scam campaign that exploited an email routing misconfiguration in email security vendor Proofpoint’s defenses to send millions of messages spoofing various popular companies like Best Buy, IBM, Nike, and Walt Disney, among others. “These emails echoed from official Proofpoint email relays with authenticated SPF and DKIM signatures, thus bypassing major security protections — all to deceive recipients and steal funds and credit card details,” Guardio Labs researcher Nati Tal said in a detailed report shared with The Hacker News.
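The attack succeeded because the spoofed messages were relayed through Proofpoint’s own infrastructure and so carried genuine SPF and DKIM “pass” verdicts. As a rough illustration of why that matters to a receiving mail server, the sketch below parses the spf/dkim/dmarc verdicts out of an Authentication-Results header; the header text and domains here are hypothetical, not taken from the actual campaign.

```python
import re

# Hypothetical Authentication-Results header; real values vary by provider.
auth_results = (
    "Authentication-Results: mx.example.com; "
    "spf=pass smtp.mailfrom=relay.example.net; "
    "dkim=pass header.d=brand.example; "
    "dmarc=pass header.from=brand.example"
)

def parse_auth_results(header: str) -> dict:
    """Extract the spf/dkim/dmarc verdicts from an Authentication-Results header."""
    results = {}
    for mech in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{mech}=(\w+)", header)
        if match:
            results[mech] = match.group(1)
    return results

print(parse_auth_results(auth_results))
# {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

A downstream filter trusting these verdicts alone would wave the message through, which is exactly the protection the misconfigured relays defeated.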

 

France: Telecom fiber optic networks sabotaged

France has been hit by a new round of sabotage acts, this time targeting telecommunications operators, police said. The fiber optic networks of several operators were “sabotaged” in six areas of France, according to the police. The capital Paris, which is currently hosting the Olympic Games, was not affected. Installations belonging to French telecom companies SFR and Bouygues Telecom were vandalized, the French newspaper Le Parisien and BFM TV reported. Cables were cut in southern France, and installations near Luxembourg and Paris were also sabotaged.

 

From sci-fi to state law: California’s plan to prevent AI catastrophe

California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall “safety” of large artificial intelligence models. But critics are concerned that the bill’s overblown focus on existential threats posed by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to “safety incidents.”
