
InfoSec News Nuggets 07/06/2022

Rising threats spark US scramble for cyber workers

The federal government and private sector are facing increasing pressure to fill key cyber roles as high-profile attacks and international threats rattle various U.S. sectors. Workforce shortages have been a long-running issue in cyber, but they have taken on renewed importance amid rising Russian threats stemming from the war in Ukraine. “It’s an issue that the government faces as well as the private sector, state and local communities,” Iranga Kahangama, a cyber official at the Department of Homeland Security (DHS), said at a House hearing this week. Kahangama said the shortage has been a top priority for his agency, which ran a 60-day hiring sprint last summer to recruit cybersecurity professionals. Of the 500 job offers DHS sent out, the department was able to hire nearly 300 new cyber workers.


Germany unveils plan to tackle cyberattacks on satellites

The German Federal Office for Information Security (BSI) has put out an IT baseline protection profile for space infrastructure amid concerns that attackers could turn their gaze skywards. The document, published last week, is the result of a year of work by Airbus Defence and Space, the German Space Agency at the German Aerospace Center (DLR), and BSI, among others. It focuses on defining minimum cybersecurity requirements for satellites and, a cynic might say, is a little late to the party considering how rapidly companies such as SpaceX are slinging spacecraft into orbit. The guide categorizes the protection requirements of various satellite missions from “Normal” to “Very High,” with the goal of covering as many missions as possible. It is also intended to cover information security from manufacture through to operation of satellites.


European Union passes landmark laws to rein in big tech

Today, after months of negotiations and procedural hurdles, the European Union passed a pair of landmark bills designed to rein in Big Tech’s power. The Digital Markets Act and Digital Services Act are intended to promote fairer competition, improve privacy protection, and ban both some of the more egregious forms of targeted advertising and misleading practices. The Digital Services Act, for instance, focuses on online platforms like Facebook, Amazon and Google. They will be required to be more proactive about content moderation and to prevent the sale of illegal or unsafe goods on their platforms. Users will also be able to learn how and why an algorithm recommended a certain piece of content to them, and to challenge any moderation decision that was made algorithmically. Finally, companies will no longer be able to use sensitive personal data for ad targeting, sell ads to children, or use dark patterns (deceptive page design that can manipulate you into saying yes to something even when you’d much rather say no, such as pressuring you into joining a service or preventing you from leaving one you no longer wish to use).


Marriott confirms latest data breach, possibly exposing information on hotel guests, employees

Marriott International confirmed Tuesday that unknown criminal hackers broke into its computer networks and then attempted to extort the company, marking the latest in a string of successful cyberattacks against one of the world’s biggest hotel chains. The incident, first reported early Tuesday by databreaches.net, allegedly occurred roughly a month ago and was the work of a group claiming to be “an international group working for about five years,” according to the site. A Marriott spokesperson told CyberScoop that the company “is aware of a threat actor who used social engineering to trick one associate at a single Marriott hotel into providing access to the associate’s computer.” The access “only occurred for a short amount of time on one day. Marriott identified and was investigating the incident before the threat actor contacted the company in an extortion attempt, which Marriott did not pay.”


Bias in Artificial Intelligence: Can AI be Trusted?

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2 (PDF). Its stated purpose is to “define product development requirements for responsible AI”. Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (AI): algorithm developers need to be aware of the potential for users to over-rely on AI outputs (known as ‘automation bias’). In short, Microsoft seems more concerned with bias from users aimed at its products than with bias from within its products adversely affecting users. This is good commercial responsibility (don’t say anything negative about our products), but poor social responsibility (there are many examples of algorithmic bias having a negative effect on individuals or groups).
