President Joe Biden is expected to meet with a cohort of experts and researchers in the expanding field of artificial intelligence, part of the ongoing executive effort to integrate more private sector and academic expertise into federal technology policy. Announced on Monday by a White House official, the experts set to meet with Biden study the impact AI is slated to have on work and careers, bias and prejudice, and children’s issues. These focus areas represent some of the societal elements AI systems stand to affect as they are further integrated into technology networks and infrastructure.
Microsoft has addressed an Azure Active Directory (Azure AD) authentication flaw that could allow threat actors to escalate privileges and potentially take over the target’s account entirely. The misconfiguration (named nOAuth by the Descope security team, which discovered it) could be abused in account and privilege escalation attacks against Azure AD OAuth applications configured to use the email claim from access tokens for authorization. An attacker only had to change the email address on their Azure AD admin account to the victim’s and use the “Log in with Microsoft” feature for authorization on the vulnerable app or website.
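The core of nOAuth is that the `email` claim in an Azure AD token is attacker-editable and not guaranteed to be verified, while the `oid` (object ID) and `tid` (tenant ID) claims are immutable. The contrast can be sketched as a pair of lookup functions; the user store and claim values below are hypothetical, purely for illustration:

```python
# Illustrative sketch of the nOAuth misconfiguration. Claim values and
# the user-store shapes are hypothetical; the point is which claim the
# application keys its accounts on.

def lookup_user_vulnerable(users_by_email, claims):
    # VULNERABLE: "email" in an Azure AD token is user-editable and may
    # be unverified, so an attacker who sets their account's email to
    # the victim's address gets matched to the victim's account.
    return users_by_email.get(claims.get("email"))

def lookup_user_safe(users_by_oid, claims):
    # SAFER: key accounts on the immutable tenant ID + object ID pair.
    key = (claims.get("tid"), claims.get("oid"))
    return users_by_oid.get(key)

# Hypothetical demo: the attacker sets their email claim to the victim's.
victim = {"name": "victim"}
users_by_email = {"victim@example.com": victim}
users_by_oid = {("tenant-1", "oid-victim"): victim}

attacker_claims = {
    "email": "victim@example.com",  # attacker-controlled, unverified
    "oid": "oid-attacker",
    "tid": "tenant-2",
}

assert lookup_user_vulnerable(users_by_email, attacker_claims) is victim
assert lookup_user_safe(users_by_oid, attacker_claims) is None
```

The email-keyed lookup hands the attacker the victim’s account; the identifier-keyed lookup does not, which is why mutable claims should never serve as the sole authorization key.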
Two security flaws have been discovered in popular smart pet feeders that could lead to data theft and privacy invasion. According to cybersecurity experts at Kaspersky, the first of these vulnerabilities relates to certain smart pet feeders using hard-coded credentials for MQTT (Message Queuing Telemetry Transport), a messaging protocol designed for communication between devices over networks with limited bandwidth or unreliable connections. Exploiting this flaw, hackers could execute unauthorized code and gain control of one feeder to launch subsequent attacks on other network devices. They could also tamper with the feeding schedules, potentially endangering the pet’s health and adding an extra financial and emotional burden on the owner.
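Why hard-coded MQTT credentials are so dangerous here: every unit ships with the same username and password, so extracting them from a single device’s firmware authenticates an attacker to the broker as any device. A minimal sketch, with a hypothetical broker check and topic layout (nothing below is taken from the actual feeder firmware):

```python
# Illustrative sketch of the hard-coded-credential flaw. The shared
# credential pair and the "feeders/<device>/schedule" topic scheme are
# hypothetical assumptions for this example.

SHARED_CREDS = ("feeder", "s3cret")  # identical on every unit -- the flaw

def broker_allows(username, password, topic):
    # Hypothetical broker ACL: one shared credential grants publish
    # rights on every feeder's command topics, not just the device
    # the credential was extracted from.
    if (username, password) != SHARED_CREDS:
        return False
    return topic.startswith("feeders/")

# An attacker who dumped one feeder's firmware can publish feeding
# schedules to any other feeder on the broker.
assert broker_allows("feeder", "s3cret", "feeders/device-42/schedule")
assert not broker_allows("feeder", "wrong", "feeders/device-42/schedule")
```

Per-device credentials plus broker-side ACLs scoping each credential to its own device’s topics would contain the compromise to a single feeder.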
Google has warned its own employees not to disclose confidential information to, or use the code generated by, its AI chatbot, Bard. The policy isn’t surprising, given the chocolate factory also advised users not to include sensitive information in their conversations with Bard in an updated privacy notice. Other large firms have similarly cautioned their staff against leaking proprietary documents or code, and have banned them from using other AI chatbots. The internal warning at Google, however, raises concerns that AI tools built by private concerns cannot be trusted – especially if the creators themselves don’t use them due to privacy and security risks.
As part of VulnCheck’s Exploit Intelligence offering, we monitor and review large numbers of GitHub repositories. The review process exists to filter out useless, malicious, and/or scam repositories. In early May, during routine reviews, we came across an obviously malicious GitHub repository that claimed to be a Signal zero-day. We reported the repository to GitHub, and it was quickly taken down. The very next day, an almost identical repository was created under a different account, this time claiming to be a WhatsApp zero-day. Again, we worked with GitHub to get the repository taken down. This process repeated throughout May. More recently, however, the individual(s) creating these repositories have put more effort into making them look legitimate by building a network of accounts.
A cyberespionage and hacking campaign tracked as ‘RedClouds’ uses the custom ‘RDStealer’ malware to automatically steal data from drives shared through Remote Desktop connections. The malicious campaign was discovered by Bitdefender Labs, whose researchers have seen the hackers targeting systems in East Asia since 2022. While they have been unable to attribute the campaign to specific threat actors, they note that the threat actors’ interests align with China’s and that the group operates with the sophistication of a state-sponsored APT.
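When RDP drive redirection is enabled, the client’s local drives appear to the remote host under UNC paths of the form \\tsclient\C, which is the surface such malware abuses. As a defensive aid, one can enumerate the candidate paths a remote process would probe; a minimal sketch (the helper name and drive-letter list are my own, not from Bitdefender’s report):

```python
# Illustrative, defensive sketch: build the UNC paths that redirected
# client drives appear under on an RDP host. On a real host you would
# pass each path to os.path.exists() to see whether redirection is live.

def tsclient_paths(drive_letters=("C", "D", "E")):
    # \\tsclient\<letter> is the standard UNC path for a client drive
    # redirected into an RDP session.
    return [r"\\tsclient\{}".format(d) for d in drive_letters]

assert tsclient_paths(("C",)) == [r"\\tsclient\C"]
```

Monitoring for unexpected processes reading from these paths is one way defenders can spot abuse of drive redirection; disabling redirection where it is not needed removes the surface entirely.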