Nine Hosts, One Wildcard, Zero Access Control
The engagement covered the full external attack surface of a media infrastructure company. The company operated a platform used by broadcasters and content distributors to manage live streaming workflows — channel scheduling, playout automation, distribution routing, and the monitoring interfaces that tied all of it together. Most of the interesting infrastructure lived in subdomains: dozens of API hosts, administrative interfaces, monitoring dashboards, and internal tooling that had been exposed to the internet, intentionally or otherwise, over years of growth.
Subdomain enumeration returned a long list. I worked through it methodically — resolving each hostname, checking what was listening, identifying which hosts accepted unauthenticated connections, and categorizing the remainder by what authentication they required.
By the time I reached the API hosts, I had already found several issues worth documenting. What I had not expected was to find the same misconfiguration, identically configured, across nine separate API subdomains.
The First Host
The first indication came from a routine check I run against any API endpoint: sending a request with a fabricated Origin header and examining what comes back.
```http
GET /api/v2/channels HTTP/1.1
Host: api-prod.example.com
Authorization: Bearer <test_token>
Origin: https://attacker.example
```
The response:
```http
HTTP/1.1 200 OK
Content-Type: application/json
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Authorization, Content-Type, X-Requested-With

{"channels": [...], "total": 47, "account_id": "..."}
```
A wildcard CORS policy on an endpoint that returned channel configuration, account identifiers, and operational data. Not a public informational endpoint. An authenticated endpoint that the application treated as sensitive.
The wildcard header told browsers — all browsers, for all origins — that the response content could be read by cross-origin JavaScript. The authorization check on the backend was intact; the token was still required. But the CORS policy negated any protection that same-origin enforcement would otherwise provide to clients holding valid tokens.
Checking the Adjacent Hosts
Finding one misconfigured host always raises the question of whether adjacent infrastructure shares the same configuration. In a large deployment, consistent misconfiguration across many hosts typically means the configuration came from a shared template — a Terraform module, a Kubernetes ConfigMap, an Nginx configuration block applied uniformly. Individual host-by-host misconfiguration of this kind is unusual.
I had enumerated seventeen API-pattern subdomains during the initial discovery phase. I ran the same CORS probe against each of them.
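Probing a list of hosts like this is easy to script. Below is a minimal sketch in Node.js (18+, built-in fetch) of the kind of sweep involved; the hostnames and path are placeholders, not the engagement's real targets, and the classification helper is split out so the decision logic is testable without network access.

```javascript
// Classify a host's CORS posture from its Access-Control-Allow-Origin value.
function classifyCors(acaoHeader) {
  if (acaoHeader === null || acaoHeader === undefined) return 'no-cors-headers';
  if (acaoHeader === '*') return 'wildcard';
  return 'restricted';
}

// Probe each API host with a fabricated Origin and record its policy.
// Hostnames and endpoint path here are illustrative placeholders.
async function probeHosts(hosts) {
  const results = {};
  for (const host of hosts) {
    const res = await fetch(`https://${host}/api/v2/channels`, {
      headers: { Origin: 'https://attacker.example' },
    });
    results[host] = classifyCors(res.headers.get('access-control-allow-origin'));
  }
  return results;
}
```

Sorting the output by classification immediately separates wildcard hosts from restrictive or header-less ones.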
Nine returned the wildcard header.
The nine hosts were not a random selection. They corresponded to the company's primary production API clusters — the hosts that actual clients used to interact with the platform. The remaining eight hosts either did not serve CORS headers at all, returned more restrictive policies, or were infrastructure hosts that did not expose API endpoints.
The pattern made the root cause clear: the wildcard policy came from a shared ingress configuration applied to the production API tier. All nine hosts inherited it. Every endpoint on every host was affected.
What the Endpoints Returned
The scope of the misconfiguration mattered as much as its presence. A wildcard CORS policy on endpoints that return only publicly available information is low severity — the same data is accessible without CORS. The severity scales with what the affected endpoints return to authenticated requests.
I catalogued the endpoint categories across the nine hosts by examining API documentation referenced in JavaScript source, response structures from authenticated requests, and the naming patterns of API paths.
The affected endpoints included:
Account and user data. Endpoints returning account profile information, associated email addresses, subscription tier, and billing contact details. The response structure was consistent with data the application used to populate user-facing dashboard views.
Channel and broadcast configuration. Endpoints returning full channel definitions — playout schedules, source configurations, output stream specifications, and associated distribution targets. For a broadcast infrastructure platform, channel configuration is operationally sensitive: it describes exactly how content is being delivered and to where.
Access credentials and token metadata. Endpoints returning information about active API tokens, including creation timestamps, scope assignments, and last-use data. Not the raw token values, but the metadata structure that maps tokens to their permissions and associated accounts.
Internal service routing. Endpoints returning internal endpoint addresses and service identifiers used by the platform's own components for inter-service communication. The values were not credentials, but they described the internal topology in sufficient detail to be useful for understanding where subsequent requests should be directed.
Each of these categories represented data that an authenticated client holding a valid token had legitimate access to. The issue was not that the data was accessible to authenticated requests — that was intended. The issue was that the CORS policy permitted that data to be read by JavaScript running on any external page, provided that page could include the client's token in the request.
The Exploitable Scenario
Demonstrating exploitability required constructing a scenario where a victim who held a valid API token visited an attacker-controlled page, and that page used cross-origin requests to extract data from the affected endpoints.
The realistic trigger was straightforward for this class of API customer. The platform issued API tokens to broadcasters and content distribution partners — organizations, not just end users. Partner developers who held tokens would have them available in their browser environments, authentication flows, or local development setups. A targeted attack could reach these individuals through any number of vectors: phishing, malicious redirects from a compromised resource, or social engineering through apparent customer communication.
A proof-of-concept page on a domain I controlled made a cross-origin request to one of the affected hosts using a token I held for my own test account:
```javascript
fetch('https://api-prod.example.com/api/v2/channels', {
  headers: {
    'Authorization': 'Bearer ' + storedToken,
    'Content-Type': 'application/json'
  }
})
  .then(r => r.json())
  .then(data => {
    // data contains the authenticated API response
    document.getElementById('output').textContent = JSON.stringify(data);
  });
```
The request succeeded. The browser received a 200 with the full response body and exposed it to the page's JavaScript, because the Access-Control-Allow-Origin: * header explicitly authorized it. The same-origin policy, which would have blocked the read in the absence of a permissive CORS header, had nothing to enforce.
The distinction from cookie-authenticated systems is worth being precise about. When an API uses cookies for authentication and a client sets withCredentials: true in the cross-origin request, browsers refuse to expose the response if the CORS policy is a wildcard — the combination of wildcard and credentials is rejected by the browser. This protection did not apply here. The API used Bearer tokens in the Authorization header. No withCredentials flag was needed. The token traveled in the header, the wildcard CORS policy permitted the response to be read, and the browser complied.
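That asymmetry can be condensed into a small decision rule. The sketch below models, in deliberately simplified form, whether a browser exposes a cross-origin response body given the Access-Control-Allow-Origin value and whether credentials were attached; it is an illustration of the rule at issue here, not the full CORS algorithm (preflights, Access-Control-Allow-Credentials, and response-header filtering are omitted).

```javascript
// Simplified model of the browser's response-exposure decision for a
// cross-origin request. Not the complete CORS algorithm.
function responseExposed(allowOrigin, requestOrigin, withCredentials) {
  if (allowOrigin === '*') {
    // A wildcard satisfies credential-less requests, including requests
    // that carry a Bearer token in a page-set Authorization header, but
    // browsers reject the wildcard when cookies/credentials are attached.
    return !withCredentials;
  }
  // Otherwise the header must echo the requesting origin exactly
  // (and, for credentialed requests, Allow-Credentials must also be set,
  // which this sketch omits).
  return allowOrigin === requestOrigin;
}
```

In the Bearer-token case, withCredentials is false, so the wildcard alone is sufficient and the read succeeds.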
Scope and Configuration Root Cause
Working backwards from the nine affected hosts to their configuration source required examining what infrastructure managed the API tier's ingress.
The company used a Kubernetes-based deployment. The API hosts were backed by ingress resources pointing to API service pods. The CORS header appeared uniformly in responses, consistently formatted, across all nine hosts — which indicated the header was being added at the ingress layer rather than by the application code.
A configuration block added to the shared ingress definition for the API tier, applying CORS headers globally, explained both the consistency and the scope. Every endpoint on every host routed through that ingress definition inherited the wildcard policy.
The root cause also defined the remediation path. Rather than hunting down nine separate configurations, fixing one ingress definition would correct all nine hosts simultaneously. The wildcard needed to be replaced with an explicit allowlist: the actual client origins that the API was intended to serve — the domains of the company's own web applications and the registered origins of known partner integrations.
Remediation
The ingress CORS configuration was replaced with an explicit allowlist of permitted origins. The policy moved from:
```http
Access-Control-Allow-Origin: *
```
to a dynamic origin check: inbound Origin headers are matched against a stored list of registered client domains. When the origin matches, it is reflected in the response header. When it does not match, no CORS header is returned, and the browser's same-origin policy applies.
For the partner API use case — where client applications operate from many domains — the registration model is appropriate: partners register their application origins during API credential issuance, and those origins are added to the allowlist. Unregistered origins do not receive permissive CORS headers.
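The dynamic origin check amounts to a few lines of logic. A sketch of the shape of such a check, using a hypothetical hardcoded allowlist rather than the company's actual registration store or ingress code:

```javascript
// Hypothetical registered client origins; in practice these would be
// loaded from the partner-registration store, not hardcoded.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://partner-one.example.net',
]);

// Return the CORS headers to attach for an inbound Origin, or null when
// no CORS headers should be set (the same-origin policy then applies).
function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return null;
  return {
    // Reflect the matched origin; never emit a wildcard.
    'Access-Control-Allow-Origin': origin,
    // Vary on Origin so shared caches never serve one origin's
    // reflected header to a different origin.
    'Vary': 'Origin',
  };
}
```

Returning null for unregistered origins, rather than an error or a restrictive header, is the point: with no CORS header present, the browser enforces same-origin by default.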
Secondary controls added as part of the remediation included explicit scope restrictions in the API token issuance flow — tokens with broad scope, previously usable by any client holding them, were given narrower default scopes — and review of the ingress configuration management process to add CORS policy to the items requiring explicit sign-off during infrastructure changes.
One Configuration, Nine Findings
The uniform nature of this misconfiguration made it representative of a pattern that appears frequently in large deployments: a single infrastructure decision that applies broadly, never revisited after the initial deployment.
In this case, the wildcard policy was likely added during early development to avoid dealing with CORS errors before the list of production client origins was finalized. Development convenience that became production configuration. By the time the API tier was serving live broadcast workflows for paying customers, the CORS policy had been in place long enough that no one actively thought about it.
Finding nine instances of the same issue is not nine separate failures. It is one failure applied nine times by design. The scope is not evidence that the team made nine separate mistakes — it is evidence that one decision propagated further than it should have.
For a deeper look at how CORS misconfiguration works and the full range of exploitable configurations, see the CORS misconfiguration knowledge article.