The Service That Trusted Every Certificate
The election management platform covered a broad attack surface: authentication systems, administrative interfaces, audit logging, cryptographic protections on exported data, and several external integrations that were less visible in the application tier but carried significant risk.
One of those integrations was a voter roll synchronization service — a background process that ran periodically to fetch updated voter registration data from an external government database. The data it retrieved was used to validate voter eligibility and update the platform's internal records.
From the application's perspective, the service was straightforward: make an HTTPS request to an external API, parse the response, reconcile the returned records against the local database. HTTPS was in use. The data was sensitive — voter names, addresses, registration status, and eligibility determinations. The assumption was that the connection was secure.
The assumption was wrong.
The Finding
Source code review of the integration service revealed a custom HTTP client configured with TLS certificate validation disabled. The service was written in Python, using the requests library:
```python
# Voter roll sync client — identifying details changed
import requests

class VoterRollClient:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key
        self.session = requests.Session()
        self.session.verify = False  # Disable SSL verification
        self.session.headers.update({'X-API-Key': api_key})

    def fetch_updated_records(self, since: str) -> list:
        response = self.session.get(
            f"{self.base_url}/records",
            params={'updated_since': since}
        )
        response.raise_for_status()
        return response.json()
```

The line self.session.verify = False disabled certificate verification for every request made through this session. The HTTP client would accept any certificate presented by any server — regardless of whether it was signed by a trusted certificate authority, whether it had expired, or whether the hostname on the certificate matched the server being contacted.
This was not buried in obscure configuration. It was a single, clearly commented line in the primary client class used by the integration service.
What Certificate Validation Actually Does
TLS certificate validation serves a purpose that is easy to underestimate: it authenticates the server. When a client connects to api.example.gov over HTTPS, it expects to receive a certificate that:
- Was issued by a certificate authority the client trusts
- Contains the hostname api.example.gov (or a matching wildcard)
- Has not expired
- Has not been revoked
If all of these conditions are met, the client can be confident it is talking to the legitimate server. The private key used to establish the TLS session belongs to the server that was issued the certificate, and the certificate authority vouches for the server's identity.
When validation is disabled, all four conditions are bypassed. The client establishes a TLS session with whatever server responds, requiring only that the handshake complete. An attacker who can intercept the connection presents their own certificate, completes the handshake, and then acts as a man-in-the-middle between the client and the legitimate server.
[Integration Service] → TLS (attacker cert) → [Attacker] → TLS (real cert) → [Government API]
The integration service believes it is talking to the government API. The attacker is terminating both connections. Every request the service makes and every response it receives passes through the attacker's system, where it can be read, logged, or modified.
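The difference is visible directly in Python's standard ssl module, which sits beneath the requests library. A short sketch comparing a default (verifying) TLS context with an unverified one, which is what verify = False amounts to:

```python
import ssl

# A default context enforces the conditions listed above:
# chain validation against trusted CAs plus hostname matching.
default_ctx = ssl.create_default_context()
print(default_ctx.check_hostname)                    # True
print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# An unverified context checks nothing: any certificate from
# any server completes the handshake.
unverified_ctx = ssl._create_unverified_context()
print(unverified_ctx.check_hostname)                 # False
print(unverified_ctx.verify_mode == ssl.CERT_NONE)   # True
```

With CERT_NONE and hostname checking off, the attacker's self-signed certificate in the diagram above is accepted exactly as readily as the government API's real one.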
The Attack Scenario
Exploiting disabled certificate validation requires a position on the network path — the ability to intercept traffic between the integration service and the external government API. This requirement does not eliminate the risk. Several realistic scenarios provide this position.
Internal network access. The integration service ran within the platform's internal network. Any system on the same network segment — including any compromised application server, database host, or monitoring system — could perform ARP spoofing or route manipulation to intercept outbound traffic.
Cloud infrastructure. Cloud provider infrastructure — hypervisors, virtual network switches, load balancers — has visibility into internal traffic by design. A sophisticated attacker with access to cloud management plane credentials could intercept traffic without touching the application servers.
Network device compromise. The path from the integration service to the external government API traversed routers, switches, and potentially managed security appliances. Any compromised device on that path could intercept and forward TLS traffic. With certificate validation disabled, presenting a fraudulent certificate succeeds without detection.
DNS manipulation. If DNS resolution could be influenced — through cache poisoning, rogue DNS responses, or access to the internal DNS server — the integration service could be directed to an attacker-controlled IP address while still using the legitimate hostname in the request. The attacker's server would respond, and the client would accept it.
Given any of these positions, the proof of concept is simple: run a TLS-terminating proxy configured with a self-signed certificate, intercept the integration service's outbound connections, and forward them transparently to the real API. The proxy sees every request and response in plaintext.
What Was at Risk
The voter roll synchronization service fetched records containing full names, dates of birth, residential addresses, voter registration status, and eligibility determinations. This data was used to update the platform's authoritative voter records database.
An attacker with an established man-in-the-middle position could do two things with this data: read it in transit and modify it in transit.
Reading the data is straightforward. Every record the integration service fetched would pass through the attacker's proxy in plaintext, even though the connection used HTTPS. The API key used to authenticate to the government API would also appear in request headers — an attacker who captures it can make direct requests to the government API independently of the integration service.
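Session-level headers, including the API key, are merged into every outgoing request. A minimal sketch (with a hypothetical key and URL) using requests' prepare_request, which builds the request without sending anything over the network:

```python
import requests

session = requests.Session()
# Hypothetical key for illustration only
session.headers.update({'X-API-Key': 'example-key-not-real'})

# prepare_request merges session headers and query params into the
# outgoing request object without performing any network I/O
prepared = session.prepare_request(
    requests.Request('GET', 'https://api.example.gov/records',
                     params={'updated_since': '2024-01-01'})
)
print(prepared.headers['X-API-Key'])  # example-key-not-real
print(prepared.url)
```

Anything present on the prepared request, headers and URL alike, crosses the wire on every call and is readable by whoever terminates the TLS session.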
Modifying the data is the more significant concern for an election platform. The integration service returned records that updated voter eligibility status in the local database. An attacker who could modify responses from the government API could inject false records, alter eligibility determinations, or suppress records entirely — and these modifications would propagate into the authoritative voter database without any cryptographic indication that the data had been altered.
The platform had no mechanism to verify the integrity of data received through this integration. It trusted the HTTPS connection to authenticate both the channel and the data source. With certificate validation disabled, neither guarantee held.
Why the Bypass Existed
The commented line — # Disable SSL verification — suggested a deliberate decision. Review of the commit history confirmed it: the bypass had been added approximately fourteen months earlier in a commit with the message fix: ssl error on staging.
At some point during development, the external government API had presented a certificate that the integration service's environment did not trust — likely a certificate issued by an internal certificate authority used for staging, or a hostname mismatch between the configured URL and the certificate's subject. Rather than adding the correct CA to the application's trust store, the developer disabled verification to eliminate the error.
The fix worked. The staging environment passed integration tests. The configuration shipped to production. Fourteen months passed without visible incident, because the bypass is invisible during normal operation. There is no error, no log message, and no functional difference from a correctly configured TLS connection when no attacker is present.
By the time of the assessment, the original mismatch no longer existed. The government API used a valid, publicly trusted certificate that the integration service would have accepted without any special configuration. The bypass was solving a problem that had not existed for over a year.
Remediation
Disabled TLS verification should never be the response to a certificate error. The correct remediation depends on the actual cause of the error.
If the target service uses a self-signed certificate or an internal CA: Add the specific certificate or CA to the application's trust store. In Python's requests library:
```python
# Trust a specific CA bundle
self.session.verify = "/etc/ssl/certs/internal-ca.pem"

# Or reference a specific certificate
self.session.verify = "/etc/ssl/certs/api-server.crt"
```

This allows the application to trust the specific CA used by the external service without trusting all certificates globally.
If the hostname on the certificate does not match the connection URL: Fix the URL to use the hostname that appears on the certificate, or coordinate with the external service to issue a certificate covering the correct hostname. Hostname verification exists to prevent a valid certificate for one server from being used to impersonate another.
If the certificate has expired: Report it to the external service operator. An expired certificate indicates a certificate management failure on their side. Working around it by disabling verification hides the signal and leaves the connection vulnerable.
In this case the fix was a one-line change: removing self.session.verify = False. The requests library's default behavior trusts the system CA bundle, which included the CA that issued the government API's certificate. All connections succeeded after the change with no modification to the external service.
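A sketch of the corrected client after that one-line removal (class and header names as in the excerpt above, key and URL hypothetical):

```python
import requests

class VoterRollClient:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key
        self.session = requests.Session()
        # No verify override: a requests Session defaults to verifying
        # server certificates against its trusted CA bundle.
        self.session.headers.update({'X-API-Key': api_key})

client = VoterRollClient('https://api.example.gov', 'example-key-not-real')
print(client.session.verify)  # True
```

Because verification is the library default, the secure configuration is also the simplest one: the vulnerable version required extra code, not less.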
A Pattern That Repeats
Disabled certificate validation appears in assessed codebases with enough regularity to make it a standard item in code review checklists. The pattern is consistent across organizations and languages: a transient development error prompts a bypass, the bypass is committed, the error resolves on its own, and the bypass persists because it produces no observable failure in testing or normal operation.
The checks are straightforward to include in static analysis. A search for verify=False in Python, InsecureSkipVerify: true in Go, rejectUnauthorized: false in Node.js, or permissive TrustManager implementations in Java covers the most common patterns. These searches produce few false positives — there is no legitimate production use case for accepting any certificate without validation.
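Those searches can be sketched as a small scanner, here in Python with illustrative regexes. The patterns are deliberately loose and not exhaustive; dedicated tools such as semgrep and bandit cover far more variants.

```python
import re

# Illustrative patterns for disabled certificate validation
PATTERNS = {
    'python': re.compile(r'verify\s*=\s*False'),
    'go':     re.compile(r'InsecureSkipVerify:\s*true'),
    'node':   re.compile(r'rejectUnauthorized:\s*false'),
}

def scan(source: str):
    """Return (line_number, language, line) for each suspicious line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for lang, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, lang, line.strip()))
    return hits

sample = "self.session.verify = False  # Disable SSL verification"
print(scan(sample))
```

Run against the vulnerable client from this assessment, the scanner flags the exact line the code review found.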
Every HTTPS connection that exists to protect sensitive data in transit depends on two things: encryption and authentication. Removing certificate validation removes the authentication. The connection looks secure. The protocol is HTTPS. The encryption is real. The protection against an active man-in-the-middle is not.
For a broader view of how cryptographic implementation choices — including certificate handling — affect real-world security posture, see the insecure deserialization knowledge article on a different class of implementation error with similarly invisible consequences in normal operation.