Category: Cyber News

  • Hackers Allegedly Destroy Aeroflot’s IT Infrastructure

    Two hacktivist groups, the pro-Ukraine “Silent Crow” and the Belarusian “Cyber Partisans BY,” have claimed to have completely dismantled the internal IT infrastructure of Russia’s national carrier, Aeroflot, following a covert, year-long operation.

    The attackers assert they achieved deep access to critical systems, from booking engines to executive email, by penetrating the network in mid-2024, reportedly using targeted phishing and zero-day exploits. This persistent access eventually escalated to “Tier-0 domain controllers,” giving them full administrative control over essential platforms like Sirax, SharePoint, Exchange, CRM, and ERP.

    The claimed culmination of the operation, which they termed a “strategic strike,” was the erasure or “bricking” of approximately 7,000 physical and virtual servers on July 27, 2025. This was coupled with the theft of over 20 TB of sensitive data, including flight logs, passenger records, and internal communications. Screenshots allegedly showing Active Directory folders were posted on Telegram as proof.

    The Consequences

    • On Monday morning, Aeroflot cited an “information-system failure” as it was forced to cancel 49 domestic and regional flights out of Moscow’s Sheremetyevo Airport, causing terminals to be overrun with stranded passengers.
    • The disruption has caused Aeroflot’s stock price on the Moscow Exchange to drop by over 4%.
    • Russia’s Prosecutor General has initiated a criminal investigation into “unauthorised access,” confirming the severity of the cyber-attack. Kremlin spokesperson Dmitry Peskov labeled the situation “quite alarming.”
    • Cybersecurity analysts estimate that rebuilding the airline’s digital infrastructure could take months and cost “tens of millions of dollars,” marking a significant operational and symbolic blow in the context of the Russo-Ukrainian conflict.

    The hackers have since threatened to release the stolen personal data of Aeroflot passengers. If confirmed, this leak would expose millions of customer records and escalate the geopolitical tensions surrounding the incident.

  • Critical RCE Vulnerability Found in Livewire Framework

    A critical Remote Code Execution (RCE) vulnerability (CVE-2025-54068, CVSS 9.2) has been discovered in Livewire v3 that potentially puts millions of Laravel applications at risk of compromise. The flaw allows unauthenticated attackers to remotely execute code on vulnerable web servers.

    The Problem

    The vulnerability resides in how Livewire v3 handles component state updates—specifically, its hydration mechanism. This flaw allows an attacker to manipulate server-side processes to achieve remote command execution without needing any user interaction or login credentials.

    While the attack complexity is rated as high (requiring a specific component configuration), the lack of authentication or user interaction makes this an extremely dangerous, network-based threat.

    Affected Systems

    • Product: Livewire v3 framework
    • Affected Versions: 3.0.0-beta.1 through 3.6.3
    • Impact: Complete system compromise (Confidentiality, Integrity, and Availability).

    The Fix

    • There is no viable workaround. Users must treat this as an emergency.
    • All users running affected versions of Livewire v3 must upgrade immediately to version 3.6.4.

    Key Takeaways

    • The Threat: RCE flaw in Livewire v3 allows attackers to take control of Laravel web apps.
    • The Danger: No authentication or user interaction is required for exploitation.
    • The Scope: Affects millions of applications running versions 3.0.0-beta.1 through 3.6.3.
    • The Action: Patch immediately to Livewire v3.6.4.
  • RenderShock: Critical 0-Click Flaw Delivers Payloads Silently

    A critical new attack methodology called RenderShock has emerged, enabling attackers to compromise systems with zero user interaction. The attack exploits file preview and indexing features built into modern operating systems like Windows and macOS, completely bypassing traditional security assumptions.

    Attack Mechanism

    Unlike phishing, which relies on a user clicking, RenderShock attacks start immediately when a malicious file is passively processed by the system.

    The flaw targets automatic file-handling services, including:

    • Windows Explorer Preview Pane
    • macOS Quick Look
    • Windows Search Indexer

    By embedding malicious code in files like PDFs, Office documents, and even basic LNK files, the attacker can silently trigger actions when the system attempts to generate a preview thumbnail or index the content.

    Attackers’ Primary Goal

    The primary goal of RenderShock is initial access and information theft. Key capabilities include:

    1. NTLM Credential Theft: By leveraging UNC paths in a file’s metadata, the attack forces the system to automatically send NTLMv2 password hashes to an attacker’s remote server when the file is simply previewed.
    2. Remote Code Execution: Advanced payloads can execute code by exploiting flaws in preview handlers, achieving full system compromise.
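    As a defensive complement to point 1, a triage script can sweep inbound files for embedded UNC references before they ever reach a preview pane. The sketch below is illustrative only: it scans raw bytes for ASCII UNC paths, while real document formats may also hide paths in UTF-16 strings or compressed streams.

```python
import re

# UNC references (\\host\share\...) embedded in document bytes can coerce
# Windows preview/indexing services into outbound SMB authentication.
UNC_PATTERN = re.compile(rb"\\\\[A-Za-z0-9._-]+(?:\\[^\\\s\x00]+)+")

def find_unc_references(data: bytes) -> list[str]:
    """Return UNC-looking strings found in raw file bytes (ASCII only)."""
    return [m.group().decode("ascii", "replace") for m in UNC_PATTERN.finditer(data)]

def scan_file(path: str) -> list[str]:
    # Flag a file for manual review if it carries any UNC reference at all.
    with open(path, "rb") as f:
        return find_unc_references(f.read())
```

    A mail gateway or upload handler could quarantine any attachment for which `scan_file` returns a non-empty list.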

    Action for Defenders

    Since this is a fundamental design weakness, security teams must implement immediate mitigations:

    • Disable Preview Features: Turn off the Preview Pane in Windows Explorer and Quick Look on macOS.
    • Block SMB Traffic: Restrict outbound Server Message Block (SMB) traffic (TCP 445) to untrusted networks to prevent NTLM hash leaks.
    • Behavioral Monitoring: Deploy EDR and behavioral tools to detect unusual network connections from typically “safe” processes like explorer.exe and searchindexer.exe.

    Key Takeaways

    The Threat: RenderShock is a 0-Click attack that requires no user action.

    The Vulnerability: Exploits systems that automatically preview and index files (e.g., Quick Look).

    The Result: Silent NTLM credential harvesting and remote code execution.

    The Fix: Disable system preview features and block outbound SMB.

  • RingReaper: New Linux EDR Evasion Tool

    The Evasion Technique

    RingReaper completely sidesteps the primary method EDRs use for detection: monitoring system calls (syscalls).

    Instead of using traditional syscalls like read, write, and connect, the tool performs all its malicious operations (such as network communication and file access) through io_uring’s asynchronous Input/Output (I/O) operations. This approach is designed for speed but, in this case, allows the attacker to:

    1. Generate minimal auditable events.
    2. Operate below the radar of EDR solutions that are only listening for standard syscalls.

    Why It Matters

    This is considered a paradigm shift in Linux malware. The technique effectively makes RingReaper “Fully Undetectable” (FUD) by current EDRs, allowing attackers to perform sophisticated actions like privilege escalation and data exfiltration without being seen.

    Key Takeaways

    The Threat: RingReaper is a new Linux tool capable of fully evading EDRs.

    The Method: It exploits the io_uring kernel feature to perform operations without using traditional syscalls.

    The Gap: Current EDRs only monitor traditional syscalls, leaving a blind spot for io_uring activity.

    The Defense: Security monitoring must be updated to track operations within the io_uring kernel feature.
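    Until EDR vendors close the gap, one stopgap hunting approach is to enumerate which processes even hold an io_uring instance, since these appear as anon_inode file descriptors under /proc on Linux. A minimal sketch (Linux-only, and it needs sufficient privileges to read other users' /proc entries):

```python
import os

def processes_using_io_uring() -> dict[int, str]:
    """Map PID -> process name for processes holding io_uring file
    descriptors, which show up as anon_inode:[io_uring] fd links."""
    found: dict[int, str] = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                if "io_uring" in os.readlink(f"{fd_dir}/{fd}"):
                    with open(f"/proc/{pid}/comm") as f:
                        found[int(pid)] = f.read().strip()
                    break
        except OSError:  # process exited mid-scan, or insufficient privileges
            continue
    return found
```

    Legitimate software (databases, runtimes) also uses io_uring, so hits are leads for investigation rather than verdicts.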

  • Bluetooth Vulnerabilities Let Hackers Spy on Your Headphones and Earbuds

    Researchers have uncovered severe security flaws in Bluetooth headphones, earbuds, and other audio devices (from major brands) that allow attackers to hijack them without pairing or authentication. What’s worse: these flaws let attackers eavesdrop, steal data, spread malware, and more — all from about 10 meters away.

    Main Takeaways

    • Critical vulnerabilities in Airoha chip-based devices permit full control over device memory (RAM/flash) via BLE GATT and RFCOMM without any pairing.
    • Affected brands include Sony, Bose, Marshall, Beyerdynamic, JBL, etc.
    • Fixes were supplied to manufacturers in June 2025, but no firmware updates have been made public yet.

    Researchers at ERNW found flaws affecting Bluetooth audio devices (headphones, earbuds, speakers) using Airoha SoCs (System on Chips).
    The vulnerabilities allow an attacker within ~10 meters to exploit Bluetooth Low Energy (BLE) GATT, Bluetooth Classic (RFCOMM), and a custom protocol to:

    • Read/write device memory (RAM/flash).
    • Extract Bluetooth link keys (used to authenticate/bond devices).
    • Impersonate trusted devices.
    • Establish unauthorized hands-free (HFP) connections to eavesdrop via microphone.

    Scope

    Some of the affected models include:

    • Sony: WH-1000XM4, WH-1000XM5, WF-1000XM5, WF-C500
    • Marshall: ACTON III, MAJOR V, MINOR IV, STANMORE III
    • Bose QuietComfort Earbuds; Beyerdynamic Amiron 300; Jabra Elite 8 Active; plus various JBL models.
    • Wireless speakers, dongles, and pro audio gear are also impacted; manufacturers often weren’t aware their devices used vulnerable Airoha chips.

    Vulnerability Details

    CVE | Name / Description | Impact | CVSS Score
    CVE-2025-20700 | Missing Authentication for GATT Services | Read/write device memory; access sensitive data | 8.8 (High)
    CVE-2025-20701 | Missing Authentication for Bluetooth BR/EDR | Full device takeover over Classic Bluetooth | 8.8 (High)
    CVE-2025-20702 | Critical Capabilities of a Custom Protocol | Full RAM & flash access, link key extraction, impersonation potential | 9.6 (Critical)

    These vulnerabilities let attackers operate without being paired to or recognized by the Bluetooth device. Proximity alone is sufficient.

    Impact

    • Eavesdrop via the mic (Hands-Free Profile)
    • Listen in on what the device is playing (media) or trick the device into playing, stopping, or sharing media
    • Extract stored link keys to impersonate the device or gain persistent access even after disconnects
    • Spread malware to other nearby vulnerable devices via GATT services (“wormable” behavior)

    High-value individuals (journalists, diplomats, business leaders) are especially at risk.

    Mitigation

    • Monitor the device maker’s website or support portal for firmware updates.
    • Remove Bluetooth pairing if you suspect your device may be targeted.
    • Limit device Bluetooth exposure; turn off Bluetooth when not needed.
    • Use devices in environments where nearby attackers are less likely.
    • Check for unusual behavior: unexpected voice transmission, unknown connections, etc.
  • Vulnerability Discovered in Meshtastic Wireless Messaging Tool

    A security weakness has been identified in Meshtastic, a popular open-source off-grid messaging platform, which could allow attackers to intercept messages or manipulate network traffic. The flaw affects how certain packets are processed and could be exploited to disrupt device communication or perform message spoofing.

    Main Takeaways

    • There’s a flaw in Meshtastic’s packet-handling logic that could let an attacker intercept or manipulate messages across the mesh network.
    • Malicious actors might exploit this to inject false messages, tamper with routing, or downgrade message integrity.
    • Users should update to patched versions, validate firmware integrity, and monitor for unexpected network behavior.

    Meshtastic allows devices to form peer-to-peer mesh networks for messaging without relying on cellular or Wi-Fi infrastructure. The vulnerability lies in how nodes process and forward certain packet types. Under specific circumstances, crafted packets can confuse nodes or bypass verification steps, letting an adversary inject or alter traffic.

    Because mesh networks rely on trust and propagation across nodes, a malicious node or attacker in proximity could interfere widely—even if only a single device is compromised.

    Risks & Attack Scenarios

    • Message interception / eavesdropping: Attackers could insert themselves in the routing path and view messages not originally intended for them.
    • Spoofing & fake messages: Malicious actors might inject false messages that appear valid, misleading users.
    • Network disruption: By tampering with routing or packet flow, attackers could degrade or partition parts of the mesh.
    • Downgrade or integrity attack: Under certain conditions, integrity checks or authentication routines may be bypassed or weakened.

    Mitigation

    • Install updates: Use the patched version of Meshtastic as soon as it’s available.
    • Check firmware integrity: Use signed firmware and verify checksums before flashing devices.
    • Restrict physical access: Prevent attackers from gaining close proximity, since many exploits require local radio access.
    • Monitor routing anomalies: Look for unexpected node behavior, routing detours, or traffic patterns that deviate from the norm.
    • Enable stronger cryptography: Where possible, enforce end-to-end encryption and validate node identities.
    • Segment mesh networks: If feasible, limit mesh reach to trusted nodes and avoid open participation.
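    The firmware-integrity step above can be sketched in a few lines. This is illustrative: the expected digest would come from the project's published release checksums, and the file path is whatever image you are about to flash.

```python
import hashlib

def verify_firmware(path: str, expected_sha256: str) -> bool:
    """Hash a firmware image in chunks and compare it against the
    published SHA-256 digest; flash only when this returns True."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(1 << 16):  # stream in 64 KiB chunks
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_sha256.lower()
```

    Combined with signed firmware, this blocks the simplest supply-path attack: a tampered image silently swapped in before flashing.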
  • Google’s Massive Cloud Outage Traced to API Management Glitch

    On June 12, 2025, Google Cloud and several Google services were down for up to seven hours. The root cause: a malfunction in Google’s Service Control system, which handles API authorization and quota policies across Google’s infrastructure.

    Takeaways

    • A bug in Service Control triggered by a policy update with blank fields caused the system to crash globally.
    • The failure led to a cascading outage across multiple Google Cloud and Workspace products.
    • Google disabled the problematic feature, scaled back changes, and is rearchitecting Service Control to “fail open” in future incidents.

    What Happened

    • Google had added a feature for more granular quota validation. However, the new code lacked proper error handling and wasn’t behind a feature flag.
    • A policy change with unintended blank metadata fields was inserted into regional databases and replicated globally.
    • When Service Control tried to process that policy, it encountered a null pointer exception, causing the binary to crash across all regions.
    • The binary crash loops triggered a vast disruption in API services.
    • In the most affected region (us-central1), restarting Service Control caused overload on the underlying Spanner database due to a “herd effect” — many tasks restarted at once without backoff.
    • Recovery took longer in that region; Google throttled restarts and rerouted traffic to multi-regional databases to reduce load.
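    The failure mode described above (crashing on blank metadata in a new, unflagged code path) can be contrasted with a defensive sketch. The types and field names below are hypothetical, purely to illustrate the two lessons Google drew: keep new checks behind a flag, and fail open on malformed input instead of raising.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuotaPolicy:
    service: Optional[str]  # blank fields like these triggered the crash
    limit: Optional[int]

def check_quota(policy: QuotaPolicy, flag_enabled: bool = False) -> bool:
    """Hypothetical quota check that (a) stays behind a feature flag and
    (b) fails open on malformed metadata rather than crashing."""
    if not flag_enabled:
        return True   # new code path disabled: allow traffic
    if policy.service is None or policy.limit is None:
        return True   # malformed metadata: fail open (and log for review)
    return policy.limit > 0
```

    Failing open trades a temporarily unenforced quota for availability, which is exactly the rearchitecture Google says it is pursuing.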

    Impact

    • Disruption spanned Google Cloud Platform, Workspace, and numerous dependent services (Compute Engine, BigQuery, Cloud Storage, and more).
    • Third-party platforms relying on Google infrastructure were also hit (Spotify, Discord, Snapchat, etc.).
    • The outage led to widespread 503 errors and degraded access across many regions.
    • Regions outside us-central1 were largely restored within a couple of hours; us-central1 took nearly 2 hours 40 minutes to fully recover.

    Mitigations

    • Google immediately froze changes to the Service Control stack and halted manual policy pushes.
    • They disabled the offending quota checks with a “red-button” kill switch.
    • They’re redesigning Service Control so that if an internal check fails, the system “fails open” rather than blocking all API traffic.
    • Planned improvements include better error handling, stricter feature flags, modular architecture, and avoiding global replication of unvalidated metadata.
    • They also intend to audit systems consuming globally replicated data and implement randomized backoff to avoid database overloads during recovery.
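    The randomized backoff in the last bullet is commonly implemented as exponential backoff with full jitter; a minimal sketch of the idea:

```python
import random

def jittered_delays(retries: int, base: float = 0.5, cap: float = 60.0):
    """Yield one randomized delay per retry attempt: each wait is drawn
    uniformly from [0, min(cap, base * 2**attempt)], so restarting tasks
    spread out instead of hitting the backing store at the same instant
    (the "herd effect" that overloaded Spanner in us-central1)."""
    for attempt in range(retries):
        yield random.uniform(0.0, min(cap, base * 2 ** attempt))
```

    A restarting task would sleep for each yielded delay between attempts; because every task draws independent random waits, the retry load smears out over time instead of arriving in synchronized waves.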
  • OpenAI Bans ChatGPT Accounts Used by Russian, Chinese & Iranian Hacker Groups

    OpenAI has taken down a network of ChatGPT accounts tied to state-sponsored threat actors from Russia, China, and Iran. These accounts were reportedly using the AI platform for cyber operations, influence campaigns, malware development, and other malicious activities.

    Main Takeaways

    • OpenAI disabled hundreds of accounts linked to malicious actors in various countries.
    • The accounts were involved in operations such as social engineering, espionage, influence efforts, and scam infrastructure.
    • The action highlights both the misuse of AI tools by adversaries and the role AI providers play in policing abuse.

    Details

    The threat actors used ChatGPT to assist with tasks like writing code (including for malware or infrastructure), automating social media posting, or preparing influence content.
    One operation, dubbed “Operation Sneer Review,” focused on content around Taiwan and included campaigns in English and Chinese. Some accounts also appear tied to North Korean IT worker schemes, in which ChatGPT was used to draft resumes, enable fraudulent job applications, or automate parts of operations.

    OpenAI’s investigative teams used their AI capabilities to detect abusive patterns and associations, then acted to disable the accounts. The banned operations targeted more than a single country, with focus areas including the U.S., Europe, and regions of geopolitical interest.

    Risks

    • AI tools like ChatGPT are increasingly used by threat actors as force multipliers — improving speed, scale, and sophistication of attacks.
    • Because these actors use legitimate infrastructure and plausible tasks (coding, translation, social media), detection is challenging.
    • The bans show that AI platform providers have to be vigilant about misuse and increasingly act as gatekeepers.
    • There’s ongoing risk of such actors finding new accounts, shifting tactics, or exploring other AI models.

    Mitigation

    • Monitor AI usage logs — track unusual or high-volume queries, especially those involving code, translation, or political content.
    • Apply identity vetting & risk scoring — more stringent checks on accounts or usage patterns that match threat actor profiles.
    • Share threat intelligence — collaborate across AI providers and cybersecurity communities to flag abusive actors.
    • Limit privileged use cases — confine usage of critical features (e.g. code generation, system advisories) to vetted users.
    • Audit content & output — analyze AI-generated outputs for patterns, reused prompts, or batch behaviors that suggest automation.
    • Respond quickly to abuse — have processes to disable accounts, revoke API keys, and investigate suspicious activity.
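    The usage-log monitoring bullet can start as simply as weighted volume thresholding per account. The event schema below (an "account" and a "category" field) and the category names are hypothetical, chosen only to mirror the sensitive activities the article names:

```python
from collections import Counter

SENSITIVE = {"code", "translation", "political"}  # assumed category labels

def flag_accounts(events: list[dict], threshold: int = 500) -> set[str]:
    """Flag accounts whose weighted event volume crosses a threshold,
    counting sensitive categories double."""
    weights: Counter[str] = Counter()
    for e in events:
        weights[e["account"]] += 2 if e.get("category") in SENSITIVE else 1
    return {acct for acct, score in weights.items() if score >= threshold}
```

    Flagged accounts would then feed the identity-vetting and abuse-response steps above rather than being banned automatically.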
  • Deepfake Attacks: What’s Growing & How to Fight Back

    Deepfake attacks—AI-driven fake audio, video, images, and documents—have surged dramatically. What used to be rare fraud attempts are now a regular danger. This article lays out what’s changing, what detection tools are being developed, and what individuals and organizations should do to protect themselves.

    Takeaways

    • Deepfake fraud has exploded, rising from a negligible share of fraud attempts to over 6% of cases.
    • Losses in early 2025 alone topped $200 million; everyone is a potential target—not just high-profile figures.
    • To defend yourself, use strong identity checks, invest in detection tech (multimodal, physiological signals, etc.), limit what you share publicly, and train people to spot deepfake tricks.

    What’s Changing

    • Fraud caused by deepfakes rose by over 2,000% in just a few years.
    • The frequency is alarming: in 2024, deepfake attacks were occurring roughly every five minutes.
    • The consequences go beyond financial loss: reputation damage, extortion, and even emotional or social harm, especially among women, children, and institutions such as schools.
    • Many incidents cross borders, making law enforcement and legal recourse more complex.

    Attack Types & Methods

    • Presentation attacks, e.g., someone using a deepfake live video (during a video call) to impersonate another for scams or identity theft.
    • Injection attacks, meaning prerecorded or edited deepfake content used later—for example during identity verification, onboarding, or document checks.
    • Formats used vary: video is almost half of all deepfake incidents; images and audio make up the rest. Also, document forgeries are spiking: fake IDs and falsified official documents are now more common than old-style paper counterfeits.

    Detection Tools & Techniques

    • Machine learning trained on large datasets is identifying subtle signs: odd blinking, strange face or expression dynamics, unnatural light or shadow behavior, mismatched audio & lips, etc.
    • Other methods: analyzing physiological cues (heartbeats, micro-movements) that current deepfake tools have trouble mimicking convincingly.
    • Multi-modal detection (comparing audio + image + behavior) is emerging as the strongest approach; in controlled lab tests these methods already achieve over 90% accuracy.

    Prevention

    For organizations & individuals:

    • Use identity verification processes that force “live presence”—don’t just accept uploaded photos; ask for actions in real time.
    • Use biometric systems that check for signs of life (e.g., gestures, voice) to make sure the person is real.
    • Be careful about how much and what kind of content you share online: high-quality photos/videos in public can become raw material for deepfake creation.
    • Use multi-step verification for sensitive operations—things like financial transfers, identity checks, or onboarding should have confirmations, maybe even verbal or internal checks.
    • Educate staff, especially executives, to recognize deepfake risks: unusual requests, unethical/urgent pressure, unsolicited video calls, etc.

    What to Expect Going Forward

    • Deepfake tools are getting cheaper, more powerful, and more broadly available—even to less technical actors.
    • Regions with rapid digital adoption, like Asia-Pacific, are expected to see especially large growth in both generation and exploitation of deepfakes.
    • The “deepfake economy” (tech, tools, services) is projected to grow rapidly in value over the next few years.
    • To stay ahead, security strategies need to be both technical (detection, verification) and human (awareness, policy, training).
  • Tenable Network Monitor Flaws Could Let Attackers Manipulate Alerts, Execute Code

    A trio of security vulnerabilities has been disclosed in Tenable Network Monitor (TNM), including one remote code execution flaw and two other high-severity bugs. These issues allow attackers to tamper with alerts, execute arbitrary code, or abuse misconfigurations.

    Key Takeaways

    • Three flaws affect Tenable Network Monitor: CVE-2025-4647, CVE-2025-4648, and CVE-2025-4649.
    • The most serious is an RCE in the alerting mechanism, letting attackers run code remotely.
    • Patches are available; administrators should upgrade and validate integrity of rule sets and permissions.

    Vulnerabilities Overview

    • CVE-2025-4647 (Remote Code Execution in Email Alerting)
      A specially crafted email can trigger code execution in the TNM alerting subsystem. If attackers can send emails that the system processes, they may execute commands in the context of the monitoring application.
    • CVE-2025-4648 (Alert Rule Manipulation)
      This flaw permits local authenticated users to manipulate alert rules—adding, deleting, or modifying rules to hide malicious activity or suppress detection.
    • CVE-2025-4649 (Data Leakage / Unauthorized Access)
      In certain scenarios, attackers may gain access to sensitive internal data due to improper permission handling across modules, causing unauthorized disclosure.

    Impact

    • Attack scope: The RCE via email alerts presents the most direct external risk, especially in environments where TNM is exposed to mail or untrusted sources.
    • Insider threat risks: Manipulating alert rules or suppressing detection gives malicious insiders or compromised accounts an opportunity to hide malicious actions.
    • Operational risk: Tampering with the monitoring system undermines trust in alerts, potentially causing teams to miss real incidents.
    • Prerequisites: Some vulnerabilities require local or authenticated access; others hinge on email channels being improperly protected.

    Recommended Actions

    • Apply patches/updates immediately: Upgrade to the fixed versions provided by Tenable.
    • Harden mail ingestion paths: Restrict which email addresses or domains TNM will process alerts from, ideally using allowlists and authentication.
    • Restrict TNM config permissions: Limit which users/processes can modify alert rules and rule sets.
    • Validate rule integrity: Periodically compare active alert rules against baselines or approved templates.
    • Monitor for unauthorized changes: Use file integrity monitoring or change detection on config directories and rule files.
    • Isolate the monitoring system: Ensure network segmentation so that TNM isn’t exposed to untrusted networks or email paths.
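    The rule-integrity check can be automated by fingerprinting rule sets and comparing against an approved baseline. This sketch does not assume TNM's actual rule export format, only that rules can be represented as JSON-serializable records:

```python
import hashlib
import json

def rule_fingerprint(rules: list[dict]) -> str:
    """Order-insensitive fingerprint of an alert-rule set: serialize each
    rule with sorted keys, sort the serializations, hash the result."""
    canonical = sorted(json.dumps(r, sort_keys=True) for r in rules)
    return hashlib.sha256("\n".join(canonical).encode()).hexdigest()

def has_drifted(baseline: list[dict], active: list[dict]) -> bool:
    """True when the active rules deviate from the approved baseline,
    whether by addition, deletion, or modification."""
    return rule_fingerprint(baseline) != rule_fingerprint(active)
```

    Run on a schedule, a drift alert like this catches exactly the CVE-2025-4648 scenario: an insider quietly editing rules to suppress detection.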