
Public Artifact Leaks: How Attackers Turn JavaScript, Error Pages, and Exposed Files Into Real Access

Fusionstek Research

External Exposure Flow

How a public artifact becomes a real attack path

Attackers do not stop at the homepage. They enumerate what the browser loads, what the application reveals when it breaks, and what public files expose about internal routes, secrets, and control weaknesses.

  1. Public App: Start with the domain, load the app, and capture every JavaScript bundle, source map, public asset, and third-party request the browser reveals.
  2. Artifact Leak: Extract tokens, route maps, cloud object URLs, debug traces, config values, or internal API hints from those public artifacts.
  3. Validation: Test whether the exposed route, token, signed URL, or admin path is actually reachable and whether it creates a meaningful control gap.
  4. Exploit Path: Tie the evidence back to the affected asset so teams can see the real attacker path, not just a noisy static secret match.

Public artifact leaks are one of the most consistently underestimated external attack surface problems. The issue is not just a token in a JavaScript bundle. It is the broader class of browser-accessible files, error pages, build outputs, and debug traces that tell an attacker how your application works before they ever touch a login page.

Attackers do not need internal access to benefit from this. They start with what your own application serves to the public internet. That includes JavaScript bundles, source maps, exposed configuration files, framework error pages, public storage objects, and signed URLs accidentally embedded in front-end code. OWASP's client-side risk guidance explicitly calls out sensitive data stored client-side and proprietary information shipped to the browser as repeatable sources of exposure, not edge cases (OWASP Top 10 Client-Side Security Risks).

See the sanitized bundle excerpt below for a concrete example of how public client-side code can reveal routes, signed object references, and operational context.

What attackers actually look for

There are five categories that matter most in practice:

  • Secrets and tokens: API keys, cloud tokens, signed URLs, and service credentials mistakenly shipped to the browser.
  • Internal route maps: API paths, admin endpoints, staging routes, GraphQL endpoints, and hidden workflow URLs referenced in bundles.
  • Debug and framework leakage: stack traces, filesystem paths, framework versions, and error handlers that expose internal structure.
  • Public build artifacts: source maps, static config files, environment fragments, and downloadable front-end bundles that reveal how the app is wired.
  • Cloud object exposure: storage URLs, signed links, object paths, and public artifacts that point to broader cloud misconfiguration.
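A first extraction pass over these categories can be sketched in a few lines. Everything here is illustrative: the regexes are simplified, `scanBundle` is a hypothetical helper, and a production scanner would combine many provider-specific patterns with entropy and context checks.

```javascript
// Minimal first-pass extractor for browser-served artifacts.
// Patterns are deliberately simplified; real scanners use many
// provider-specific formats plus entropy scoring.
const PATTERNS = {
  // credential-shaped prefixed tokens (pk_…, sk_…, api_…)
  apiKey: /\b(?:pk|sk|api)_[A-Za-z0-9_]{8,}\b/g,
  // URLs carrying a signature query parameter
  signedUrl: /https?:\/\/[^\s"']+\?[^\s"']*(?:sig|signature|X-Amz-Signature)=[^\s"'&]+/g,
  // quoted internal-looking route strings
  internalRoute: /["'](\/(?:ops|admin|internal|debug)\/[A-Za-z0-9\/_-]+)["']/g,
  // GraphQL endpoint references
  graphqlEndpoint: /https?:\/\/[^\s"']+\/graphql\b/g,
};

// Scan raw bundle text and group matches by exposure category.
function scanBundle(text) {
  const findings = {};
  for (const [kind, re] of Object.entries(PATTERNS)) {
    findings[kind] = [...text.matchAll(re)].map((m) => m[1] ?? m[0]);
  }
  return findings;
}
```

Run against a bundle like the sanitized excerpt later in this post, a pass like this would surface the `/ops/` routes, the GraphQL endpoint, and the credential-shaped public key in one sweep.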

Why most teams miss it

Traditional web scanning often stops too early. It checks the homepage, follows a few links, and looks for obvious server-side issues. It does not always inspect everything the browser loads, correlate exposed values to the asset that served them, or explain which leaks are real attack paths versus harmless strings. OWASP guidance has been consistent for years on this point: client-side code paths, loaded resources, and browser-accessible objects deserve their own testing discipline because they change what an attacker can learn or control from the outside (OWASP WSTG Client-Side Resource Testing).

That creates two opposite failures at once: teams miss real exposure chains, and they drown in low-context secret matches that are technically interesting but operationally useless.
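One way to avoid both failures is to score candidates before reporting them. The sketch below combines Shannon entropy with a keyword context check; the 3.5-bit threshold and the `looksLikeSecret` helper are our own illustrative assumptions, not a vetted detector.

```javascript
// Shannon entropy in bits per character: random tokens score high,
// ordinary identifiers and English words score low.
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  let bits = 0;
  for (const n of Object.values(counts)) {
    const p = n / s.length;
    bits -= p * Math.log2(p);
  }
  return bits;
}

// Illustrative triage rule: flag a string only when it is long enough,
// high-entropy, and sitting near a credential-shaped keyword.
function looksLikeSecret(candidate, surroundingCode, threshold = 3.5) {
  const contextual = /(?:key|token|secret|auth|sig)/i.test(surroundingCode);
  return candidate.length >= 16 &&
         shannonEntropy(candidate) >= threshold &&
         contextual;
}
```

The context check is what separates an operationally useful finding from a noisy static match: a high-entropy hash in an asset filename is uninteresting, while the same entropy next to `apiKey:` is worth validating.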

What the attacker sees that the defender often does not

A single public artifact can collapse the attacker's discovery time. A source map can reveal route structure. A leaked config file can expose service names. A ThinkPHP debug page can reveal the framework version and filesystem paths. A signed storage URL can expose how long a token remains valid and whether its access scope is too broad. Even government advisories still document information exposure through error messages as a real exploitability signal, because technical leakage helps attackers chain more precise follow-on attacks (CISA: Information Exposure Through an Error Message; CISA: Error Messages Containing Sensitive Information).

The point is not that every exposed artifact is immediately exploitable. The point is that public artifacts shorten the path from internet-visible surface to verified access opportunity.
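Validation can start passively. Before probing anything, classify what a leaked signed URL would actually grant: which host, which object, which signing scheme, and how long the grant lasts. The parameter names below (`X-Amz-Expires`, `X-Amz-Signature`, `sig`, `exp`) follow common signing conventions, and the URL in the example is fabricated.

```javascript
// Passive classification of a leaked signed URL. Covers a few common
// conventions (AWS SigV4 style, generic `sig`/`exp` style); any given
// provider may differ, so treat the field names as assumptions.
function classifySignedUrl(raw) {
  const url = new URL(raw);
  const q = url.searchParams;
  const expiresSeconds =
    Number(q.get("X-Amz-Expires")) ||   // AWS SigV4-style TTL
    Number(q.get("exp")) ||             // generic TTL parameter
    null;
  return {
    host: url.hostname,
    objectPath: url.pathname,
    scheme: q.has("X-Amz-Signature") ? "aws-sigv4"
          : q.has("sig") ? "generic-sig"
          : "unknown",
    expiresSeconds,
    // a week-long grant embedded in a public bundle is a finding
    // even before anyone fetches the object
    longLived: expiresSeconds !== null && expiresSeconds > 3600,
  };
}
```

Only after this passive step does it make sense to confirm reachability with a single request against the object, which keeps validation deliberate rather than noisy.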

Evidence Walkthrough

Redacted bundle excerpt

This is a sanitized example styled to resemble a real client-side bundle excerpt. The point is not the exact string. The point is what an attacker can infer from operational context, route names, signed object references, and exposed configuration values.

Client-side JavaScript
JS main.e37f5e586c33bc4536a7.js
(async()=>{const e={env:"production",build:"2026.02.24-rc7",region:"eu-west-1",prefix:"",phoneNumber:"",features:["self_serve_refunds","admin_override"],capabilities:["phone_request","ops_export","audit_lookup"]},t=(n,o)=>n?.length&&0!==n.length||!!(n&&o);window.__APP_BOOT__={country:"",capabilities:e.capabilities,release:"2026.02.24"};
const r="lin_api",u="https://api.portal-example.io/graphql",a="https://cdn-demo-assets.example.net/reports/q1-summary.pdf?sig=demo-redacted",i="media-prod-eu-assets",c="/ops/reconciliation/export",s="/debug/stack/render";try{const d="\n query{ teams{ id nodes { id } name } }\n",l=await fetch(u,{method:"POST",headers:{"Content-Type":"application/json","Authorization":tkn},body:JSON.stringify({query:d})});
const n=(await l.json()).data?.teams?.nodes.find(e=>"Customer Success"===e.name);if(!n)throw new Error("Customer Success team not found");const p="\n mutation issueCreate(input:{title:\"Contact value\",description:\"## Phone Number Request Details\\n- **Prefix:** "+e.prefix+"\\n- **Type of Number:** "+typeOfNumber+"\\n- **Capabilities:** "+JSON.stringify(e.features)+"\"}){issue{id}}\n";
await fetch(u,{method:"POST",headers:{"Content-Type":"application/json","x-public-key":"pk_live_demo_4T8J9M2R1Q","x-client-build":window.__APP_BOOT__.release},body:JSON.stringify({query:p})});const m={artifact:a,route:c,debug:s,bucket:i,release:window.__APP_BOOT__.release};return Object.assign({ok:!0},m)}catch(err){console.error("bundle error",err)}
function renderSummary(n){return n?.map(e=>e.title).join(", ")||""}function getOpsRoutes(){return ["/ops/reconciliation/export","/ops/ledger/review","/ops/claims/reopen"]}const h={supportEmail:"ops-team@example.net",signedAsset:a,routeHints:getOpsRoutes()};
const g="https://api.portal-example.io/graphql",b={method:"POST",headers:{"Content-Type":"application/json","x-trace":"client_bundle_surface"}};export const fetchIssue=(query)=>fetch(g,{...b,body:JSON.stringify({query})}).then(r=>r.json()).catch(()=>({error:!0}))
window.__PUBLIC_RUNTIME__={env:"prod",region:"eu-west-1",debugPath:s,artifactHost:"cdn-demo-assets.example.net",assetBucket:i,api:g,trace:"client_bundle_surface"};
/* sanitized example: values redacted and domains replaced to illustrate browser-visible operational context only */

Why this matters

Operational context

Build and environment metadata can reveal deployment structure, release cadence, and clues about how the public app is assembled.

Cloud exposure hint

Signed URLs and bucket references can reveal object paths, token patterns, and cloud storage dependencies worth validating externally.

Route discovery aid

Internal route hints and debug paths can shorten attacker discovery time by exposing likely workflow names and sensitive application areas.

Illustrative example only. Sanitized values are shown to demonstrate how public bundles can expose operational context, internal routes, storage references, and security-sensitive clues.

How Fusionstek approaches the problem

We treat public artifact leakage as an external assurance problem, not just a string-matching exercise. The workflow is simple but strict. It is also aligned with well-established defensive guidance: do not place secrets in client-side code, do not rely on the browser for security-sensitive logic, and validate exposure from the same external perspective an attacker uses (OWASP AJAX Security Cheat Sheet; OWASP DevSecOps Secrets Management).

  • Discover: enumerate what the public app actually loads and references.
  • Extract: identify secrets, routes, storage objects, debug traces, and config clues.
  • Validate: determine whether the exposed value or path is live, reachable, and security-relevant.
  • Correlate: tie the evidence back to the specific asset, technology, and attack path it affects.
  • Redact: keep evidence actionable without irresponsibly displaying sensitive material.
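The Correlate and Redact steps can be made concrete with a small evidence-record sketch. The record shape, field names, and masking rule here are our own illustrative choices rather than a fixed schema:

```javascript
// Mask a sensitive value before it reaches a report: keep just enough
// of the prefix to identify the credential class.
function redactValue(value, visible = 4) {
  if (value.length <= visible) return "*".repeat(value.length);
  return value.slice(0, visible) + "*".repeat(value.length - visible);
}

// Build a finding that ties evidence back to the asset that served it.
function buildFinding({ asset, artifact, kind, value, validated }) {
  return {
    asset,                       // hostname that served the artifact
    artifact,                    // e.g. the bundle or map file path
    kind,                        // exposure class (token, route, signed URL)
    evidence: redactValue(value),// never store the raw value in the report
    validated,                   // was the exposure confirmed live?
    reportedAt: new Date().toISOString(),
  };
}
```

Keeping redaction inside the record constructor, rather than at display time, is the design choice that prevents sensitive material from ever landing in report storage in the first place.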

Examples of exposure classes worth treating seriously

  • JavaScript token exposure: front-end bundles containing high-entropy values tied to API or cloud access patterns.
  • Source map disclosure: route names, internal modules, or environment-specific clues disclosed through map files.
  • Framework debug leakage: stack traces, path disclosure, or framework identity from unhandled production errors.
  • Public artifact URL exposure: downloadable files or build outputs that give away implementation details and sensitive references.
  • Cloud link overexposure: signed links, public object URLs, or static-site artifacts that reveal larger cloud surface weaknesses.
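For the source map case, the external check is cheap to sketch. Most bundlers publish maps next to the bundle by default, so candidate map URLs can be derived from the script URLs a page loads (an explicit sourceMappingURL comment may point elsewhere, so this convention-based helper is an assumption, not a guarantee):

```javascript
// Derive candidate source-map URLs from the scripts a page loads.
// Convention: bundler output typically publishes `<bundle>.map` next
// to the bundle; probing these candidates from outside shows whether
// maps leaked into production.
function sourceMapCandidates(scriptUrls) {
  return scriptUrls
    .filter((u) => u.endsWith(".js"))
    .map((u) => `${u}.map`);
}
```

An external assurance pass would request each candidate and treat any 200 response on a production host as a disclosure finding to review.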

What this means to the client

The business issue is not “a scanner found a string.” The issue is that a public application revealed enough information to shorten attacker effort, increase exploitability, or expose sensitive access paths that should never have been available to unauthenticated users.

That is why evidence matters. Security teams need to know which asset served the artifact, what was exposed, whether it was verified, and what should be fixed first. Auditors and insurers need to see that the issue was identified, reviewed, and remediated with defensible external evidence.

What teams should do next

  • Remove secrets and operational tokens from client-side code and public artifacts.
  • Disable production debug output and framework error pages.
  • Restrict source map and build artifact exposure where they are not needed publicly.
  • Rotate any exposed keys or signed URLs and reduce their lifetime/scope.
  • Retest externally so remediation is verified from the same attacker-visible surface.
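Several of these steps can be enforced before deploy rather than detected after. The sketch below is a hypothetical pre-release gate over built bundle text; in CI it would read every file in the publish directory, and both patterns shown are simplified assumptions:

```javascript
// Hypothetical pre-release gate: scan built bundle text for source-map
// references and credential-shaped strings before deployment. Takes
// text directly so the check stays self-contained; a CI wrapper would
// feed it each file from the dist/ directory.
function auditBundleText(name, text) {
  const problems = [];
  if (/\/\/[#@]\s*sourceMappingURL=/.test(text)) {
    problems.push(`${name}: ships a sourceMappingURL reference`);
  }
  if (/\b(?:pk|sk)_(?:live|prod)_[A-Za-z0-9]{8,}\b/.test(text)) {
    problems.push(`${name}: contains a credential-shaped string`);
  }
  return problems;
}
```

A CI job would fail the build whenever `auditBundleText` returns a non-empty list for any published file, which turns the remediation items above into a standing control instead of a one-off cleanup.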

Why this belongs in an assurance program

Public artifact leaks are not just a developer hygiene issue. They are a repeatable source of external exposure, a real part of attacker reconnaissance, and a strong example of why external assurance has to be evidence-backed and continuous. If you only look at what the app is supposed to expose, you will miss what the app is actually serving.

For related industry reading on the narrower problem of secrets in JavaScript specifically, see Intruder's research on secrets detection in JavaScript. Fusionstek's framing is broader: public artifact leakage should be treated as an attack-surface and assurance problem, not just a pattern-matching problem.