
InfoSec News Nuggets 02/08/2021

How will ‘chipageddon’ affect you?

For the most part they go unseen, but computer chips are at the heart of all the digital products that surround us, and when supplies run short, manufacturing can grind to a halt. There was a hint of the problem last year, when gamers struggled to buy new graphics cards, Apple had to stagger the release of its iPhones, and the latest Xbox and PlayStation consoles came nowhere close to meeting demand. Then, just before Christmas, it emerged that the resurgent car industry was facing what one insider called "chipageddon". New cars often include more than 100 microprocessors, and manufacturers were quite simply unable to source them all. Since then, one technology company after another has warned that it too faces constraints. Samsung is struggling to fulfil orders for the memory chips it makes for its own and others' products.


Sloppy patches are a breeding ground for zero-day exploits, says Google

Security researchers at Google have claimed that a quarter of all zero-day software exploits could have been avoided if vendors had put more effort into the patches for vulnerabilities in their software. In a blog post, Maddie Stone of Google's Project Zero team says that 25% of the zero-day exploits detected in 2020 were closely related to previously publicly disclosed vulnerabilities, and "potentially could have been avoided if a more thorough investigation and patching effort" had been made. Stone argues that there can often be multiple ways to trigger a vulnerability, or paths to reach it, but vendors will often only block the method shown in proof-of-concept code or an exploit sample. For some zero-day exploits, it was "only necessary to change a line or two of code" to create a new working zero-day exploit.
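Stone's point about variant exploits is easiest to see with a toy example. The sketch below is hypothetical Python, not drawn from any of the 2020 cases she describes: the first function "patches" only the literal string used in a public proof of concept, so a trivially different input still works, while the second fixes the root cause.

```python
# Hypothetical sketch of an incomplete patch, in the spirit of the Project
# Zero findings above; it does not depict any specific 2020 vulnerability.
import os

BASE_DIR = os.path.realpath("/var/www/files")

def read_file_patched(user_path: str) -> bytes:
    # The "patch": block the exact pattern from the public proof of concept.
    if "../" in user_path:
        raise ValueError("path traversal attempt blocked")
    # Variants the patch never considered still get through, e.g. an
    # absolute path such as "/etc/passwd", which os.path.join lets
    # replace the base directory entirely.
    with open(os.path.join(BASE_DIR, user_path), "rb") as f:
        return f.read()

def read_file_root_cause(user_path: str) -> bytes:
    # A root-cause fix: canonicalize the final path, then verify it is
    # still contained in BASE_DIR, so every traversal variant is rejected,
    # not just the one string the proof of concept happened to use.
    full = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if os.path.commonpath([full, BASE_DIR]) != BASE_DIR:
        raise ValueError("path escapes base directory")
    with open(full, "rb") as f:
        return f.read()
```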


SitePoint discloses data breach after stolen info used in attacks

The SitePoint web professional community has disclosed a data breach after its user database was sold and eventually leaked for free on a hacker forum.

SitePoint is a website launched in 1999 that offers content and a community devoted to web professionals and developers. The site offers a premium membership that provides access to over 600 books, courses, and talks. At the end of December 2020, BleepingComputer learned of a data breach broker selling the user databases of 26 different companies. One of those databases was for SitePoint.com, which the broker stated contained one million user records.


9 scary revelations from 40 years of facial recognition research

In science fiction, facial recognition technology is a hallmark of a dystopian society. The truth of how it was created, and how it's used today, is just as freaky. In a new study, researchers conduct a historical survey of more than 100 facial-recognition training data sets compiled over the last 43 years. The broadest revelation is that, as the need for more data (i.e. photos) grew, researchers stopped bothering to ask for the consent of the people in the photos they used as data. Researchers Deborah Raji of Mozilla and Genevieve Fried of AI Now published the study on Cornell University's free distribution service, arXiv.org. The MIT Technology Review published its analysis of the paper on Friday, describing it as "the largest ever study of facial-recognition data" that "shows how much the rise of deep learning has fueled a loss of privacy."


Researchers develop approach that can recognize fake news

Social media is increasingly used to spread fake news. The same problem exists in capital markets, where criminals spread fake news about companies in order to manipulate share prices. Researchers at the Universities of Göttingen and Frankfurt and the Jožef Stefan Institute in Ljubljana have developed an approach that can recognize such fake news, even when the news content is repeatedly adapted. In order to detect false information, often fictitious data that presents a company in a positive light, the scientists used machine learning methods and created classification models that identify suspicious messages based on their content and certain linguistic characteristics. "Here we look at other aspects of the text that makes up the message, such as the comprehensibility of the language and the mood that the text conveys," says Professor Jan Muntermann from the University of Göttingen.
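As a rough illustration of what content-based classification with linguistic features can look like, here is a minimal Python sketch. The hand-picked word lists, the readability proxies, and the logistic regression model are assumptions for demonstration only; they are not the researchers' actual feature set or model.

```python
# Minimal sketch of classifying messages by linguistic features; the feature
# set, word lists, and model here are illustrative assumptions, not the
# actual method from the Göttingen/Frankfurt/Ljubljana study.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

PROMO_WORDS = {"guaranteed", "record", "soar", "breakthrough", "skyrocket"}
SOBER_WORDS = {"decline", "lawsuit", "warning", "unchanged", "modest"}

def features(text: str) -> list:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    return [
        # Comprehensibility proxies: longer words and sentences read harder.
        float(np.mean([len(w) for w in words])) if words else 0.0,
        len(words) / max(len(sentences), 1),
        # Mood proxies: share of hype-laden vs. sober vocabulary.
        sum(w in PROMO_WORDS for w in words) / max(len(words), 1),
        sum(w in SOBER_WORDS for w in words) / max(len(words), 1),
    ]

# Toy training data: 1 = suspicious promotional message, 0 = routine news.
texts = [
    "Guaranteed record growth: shares will soar after this breakthrough!",
    "Quarterly results show a modest decline in revenue amid a lawsuit.",
    "Breakthrough product, record profits guaranteed, stock to skyrocket!",
    "Company issues a warning on supply costs; outlook is unchanged.",
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit([features(t) for t in texts], labels)
print(model.predict([features("Record profits guaranteed, shares to soar!")]))
```

Because the classifier keys on how a message is written rather than on specific claims, it can still flag a manipulative message after its wording has been superficially adapted, which is the property the paragraph above highlights.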
