
The Ghost in the Machine: Why Your IDE’s Recommendations Are the Next Supply Chain Attack Surface

1. A Quiet Near-Miss in the Software Supply Chain

In January 2026, the security community caught a glimpse of the future of software supply chain attacks. It was not a zero-day exploit, a malicious dependency, or a leaked signing key. It was something far more mundane, and far more dangerous: a name.

Koi Security disclosed a subtle but serious weakness affecting several popular AI-powered IDEs, including Cursor, Windsurf, and Google Antigravity. These tools inherit Visual Studio Code’s extension recommendation logic but cannot use Microsoft’s official extension marketplace. Instead, they rely on OpenVSX, a community-run registry.

The problem was simple. Several extensions that were hard-coded as “recommended” by VS Code did not actually exist in OpenVSX. Their namespaces were unclaimed.

That meant anyone could register those exact identifiers and publish arbitrary code. Because the IDEs proactively surfaced these extensions as trusted recommendations, users could have been nudged into installing malicious software without phishing, social engineering, or warnings.
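The gap described above can be checked mechanically. The following is a minimal sketch of that audit, using an in-memory snapshot of published extensions; all extension IDs here are hypothetical, and in practice the published set would come from a registry API such as OpenVSX's REST endpoints.

```python
# Sketch: audit a list of "recommended" extension IDs against a registry
# snapshot and flag recommendations whose namespace is entirely unclaimed.
# All names are hypothetical sample data.

def audit_recommendations(recommended, published):
    """Return (extension_id, finding) pairs for recommendations that
    do not resolve to a published extension."""
    published_namespaces = {p.split(".", 1)[0] for p in published}
    findings = []
    for ext_id in recommended:
        namespace = ext_id.split(".", 1)[0]
        if ext_id in published:
            continue  # recommendation resolves; nothing to flag
        if namespace not in published_namespaces:
            # worst case: anyone can register this identifier
            findings.append((ext_id, "namespace unclaimed: hijackable"))
        else:
            findings.append((ext_id, "extension missing from registry"))
    return findings

# Hypothetical data: one healthy recommendation, one hijackable one.
recommended = ["goodvendor.linter", "ghostvendor.formatter"]
published = {"goodvendor.linter", "goodvendor.debugger"}
print(audit_recommendations(recommended, published))
```

The key distinction in the sketch is between a missing extension inside a claimed namespace (bad) and a wholly unclaimed namespace (worse), since the latter lets an attacker publish under the exact recommended identifier.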


Koi’s team pre-emptively claimed the affected namespaces and published inert placeholder extensions, preventing attackers from exploiting the gap. This was not a malware outbreak. It was a trust-model failure narrowly avoided.


2. Why Traditional AppSec Misses Trust-Based Attacks

From a traditional application security perspective, nothing here appears broken.

There was no vulnerable dependency. There was no CVE. There was no exploit chain. There was no malicious binary to analyse.


Most AppSec and Software Composition Analysis (SCA) tools are designed to answer a single question:


“Is this code known to be vulnerable or malicious?”


In this incident, the risk existed before any code was installed.

The failure emerged from:

  • unclaimed namespaces,

  • implicit trust conveyed through recommendations,

  • and registries that do not enforce publisher continuity.


This is the blind spot of CVE-centric security models. They are reactive by design, focused on known vulnerabilities in known artefacts. They do not reason about how trust is constructed, inherited, or abused across modern software ecosystems.

This class of issue falls squarely into the category of trust-based attacks, where the attack surface is not code, but an assumption.




3. The Hidden Trust Graph Problem

Modern software is no longer assembled solely through explicit engineering decisions. It is shaped by recommendation engines, registries, package managers, AI copilots, and automated “best practice” prompts.


Each of these systems embeds assumptions about what is official, safe, or expected.

In the Koi incident, the trust graph looked like this:

  1. VS Code defines a list of recommended extensions

  2. AI-powered IDE forks inherit that list

  3. Users assume recommendations imply legitimacy

  4. Open registries allow unclaimed or recently claimed namespaces


Trust flowed through the system without being explicitly verified at any step.

This is the core issue. We increasingly consume software through inherited trust graphs, not direct evaluation. When those graphs break, attackers do not need exploits. They only need convincing names.
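The four-step chain above can be modelled explicitly, which makes the failure easy to see: no hop in the chain is ever verified. This is an illustrative sketch, with names and flags chosen to mirror the inherited-trust flow described, not an actual IDE data model.

```python
# Sketch: the inherited trust chain as edges that record whether each
# hop was explicitly verified. All names are illustrative.
from dataclasses import dataclass

@dataclass
class TrustEdge:
    source: str
    target: str
    verified: bool  # was this trust relationship explicitly checked?

chain = [
    TrustEdge("vscode-recommendation-list", "ide-fork", verified=False),  # inherited wholesale
    TrustEdge("ide-fork", "user", verified=False),                        # UI implies legitimacy
    TrustEdge("user", "openvsx-namespace", verified=False),               # ownership never checked
]

def unverified_hops(chain):
    """Every hop where trust flowed without verification."""
    return [(e.source, e.target) for e in chain if not e.verified]

print(unverified_hops(chain))  # in this model, every single hop is unverified
```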


Namespace hijacking, dependency confusion, and similar attacks all exploit this same structural weakness. The industry has treated these as edge cases. They are not.



4. Beyond Static SBOMs: Why Metadata-Aware Security Matters

When teams talk about SBOMs (Software Bill of Materials), they often think of inventories created for compliance or audit purposes. That framing is insufficient for modern supply chain security.


A static SBOM answers:

“What components are present?”


It does not answer:

“Why do we trust this component?”


This is where metadata-aware SBOMs become essential. They incorporate signals such as:

  • publisher identity and continuity,

  • namespace ownership history,

  • ecosystem alignment and expected provenance,

  • package age relative to implied authority,

  • and whether trust is explicit or inherited through tooling.
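To make the contrast with a static inventory concrete, here is a sketch of what an SBOM entry enriched with the signals above might look like. The field names are illustrative only; they are not part of SPDX, CycloneDX, or any vendor's actual schema.

```python
# Sketch: a metadata-aware SBOM entry carrying trust signals beyond
# name and version, plus a simple rule-based flagging pass.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrustAwareComponent:
    name: str
    publisher: str
    namespace_claimed_on: date   # namespace ownership history
    publisher_changed: bool      # publisher continuity signal
    trust_source: str            # "explicit" or "inherited"

def risk_flags(c: TrustAwareComponent, today: date) -> list[str]:
    flags = []
    if (today - c.namespace_claimed_on).days < 30:  # illustrative threshold
        flags.append("namespace claimed recently relative to implied authority")
    if c.publisher_changed:
        flags.append("publisher continuity broken")
    if c.trust_source == "inherited":
        flags.append("trust inherited through tooling, never explicitly granted")
    return flags
```

A plain SBOM would record only the first two fields; the remaining three are what let a tool ask "why do we trust this?" rather than "what is installed?".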


This is precisely where Zerberus’s Trace-Ai operates.


Trace-Ai does not simply catalogue components. It builds a dynamic map of trust across the software supply chain. Instead of asking only “is this code bad?”, it asks a more fundamental question:


“Why do we trust this publisher, this namespace, or this recommendation?”

By monitoring provenance, publisher behaviour, and ecosystem signals, Trace-Ai identifies high-risk conditions, such as unclaimed namespaces or broken trust continuity, before a single line of malicious code is written.

This shifts supply chain security from reactive detection to proactive risk identification.



5. What Teams Should Do Today

This issue is not limited to IDEs or extension marketplaces.

If your product or platform:

  • recommends plugins or extensions,

  • auto-selects integrations or models,

  • curates templates or dependencies,

  • or surfaces “official” tooling,

then you are part of your users’ software supply chain.


Practical steps teams can take today:

  • Audit recommendations as security assets

    • Treat recommendations with the same scrutiny as bundled dependencies.

  • Verify ownership, not just availability

    • Ensure recommended artefacts are owned by the publishers users expect.

  • Map inherited trust paths

    • Understand which upstream assumptions your systems silently rely on.

  • Flag namespace and publisher anomalies

    • Especially unclaimed, recently claimed, or authority-implying names.

  • Design for the pre-exploit phase

    • Focus on identifying conditions that make exploitation possible, not just exploits themselves.
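The second step, verifying ownership rather than availability, can be sketched as a simple comparison between the publisher users would expect and the publisher recorded in registry metadata. The expected map and metadata dicts below are hypothetical; in practice the metadata would come from a registry API response.

```python
# Sketch of "verify ownership, not just availability": flag every
# artifact whose registered publisher differs from the expected one,
# including artifacts missing from the registry entirely (actual=None).
# All names are hypothetical sample data.

def ownership_mismatches(expected_publishers, registry_metadata):
    """Return (artifact, expected, actual) for every ownership surprise."""
    mismatches = []
    for name, expected in expected_publishers.items():
        actual = registry_metadata.get(name, {}).get("publisher")
        if actual != expected:
            mismatches.append((name, expected, actual))
    return mismatches

expected = {"acme.toolkit": "acme-corp", "ghost.helper": "ghost-inc"}
metadata = {
    "acme.toolkit": {"publisher": "acme-corp"},      # matches expectation
    "ghost.helper": {"publisher": "fresh-account"},  # claimed by someone else
}
print(ownership_mismatches(expected, metadata))
```

Note that a check like this only works if the expected-publisher map is maintained as a security asset in its own right, which is the point of the first step above.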

The lesson from this incident is not that malware was narrowly avoided.

It is that trust itself has become an attack surface.


Security teams that continue to focus exclusively on code will keep missing the most dangerous failures: the ones that happen upstream, quietly, and by design.



If you want to understand how your organisation’s trust graph is constructed, and where it can fail, Trace-Ai is designed to make those invisible assumptions visible.

 
 
 
