Code Signing and Sigstore: How Software Supply Chain Integrity Works

The SolarWinds attack reached roughly 18,000 organizations through malicious code inserted into a software update that was then cryptographically signed by SolarWinds' own build system. The signature was valid. The software was malicious. This is the supply chain problem: code signing proves the software came from a particular key, but it doesn't prove the software is what users think it is. Sigstore is an attempt to fix the architecture, not just the key management.

Code signing has been a feature of software distribution for decades. Apple requires signed apps for distribution through the App Store and enforces notarization for macOS software outside it. Windows displays SmartScreen warnings for unsigned executables. Linux distributions cryptographically sign packages and verify signatures at install time. The mechanism is well-understood: a developer signs a hash of the software artifact with their private key; users verify the signature with the corresponding public key; a valid signature proves the artifact hasn't been modified since it was signed.
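
To make the mechanism concrete, here is a minimal sketch using Python's `cryptography` library with Ed25519. Production code signing wraps this in certificate chains, timestamps, and platform-specific formats, but the core operation, signing a digest and verifying it with the published public key, is the same; the artifact bytes below are placeholders.

```python
# Minimal sketch of the classical sign/verify flow (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Developer side: a long-lived private key signs a digest of the artifact.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"release tarball bytes"            # placeholder for the built binary/archive
digest = hashlib.sha256(artifact).digest()
signature = private_key.sign(digest)           # distributed alongside the artifact

# User side: verify with the publisher's public key.
try:
    public_key.verify(signature, digest)
    print("valid: artifact unmodified since signing")
except InvalidSignature:
    print("invalid: artifact or signature has been tampered with")
```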

What code signing doesn't prove is when the signing happened, under what conditions the signing key was used, or whether the signed artifact is the one that was built from the source code users can read. These gaps are where supply chain attacks live.

The Traditional Code Signing Model and Its Limits

Classical code signing relies on a private key held by the developer or organization. Its security properties are only as good as the management of that key:

  • Key compromise or forgery. An attacker who gains access to the signing key, or can forge a trusted certificate, can sign arbitrary malicious software that will appear authentic. This has happened in high-profile cases: the Flame malware forged a certificate chaining to a Microsoft root by exploiting an MD5 chosen-prefix collision.
  • Build system compromise. SolarWinds is the canonical example. The signing key was legitimate; the attacker compromised the build process upstream of signing, so valid signatures were produced for tampered artifacts.
  • No transparency. Traditional signing leaves no public audit trail. An attacker who silently abuses a compromised signing key leaves no record that the abuse occurred — defenders have no way to detect unexpected signing events.
  • Key rotation complexity. When a signing key must be rotated (due to expiration, compromise, or algorithm change), establishing trust in the new key requires distributing it through some trusted channel — which becomes a new attack surface.

A valid cryptographic signature means the software was signed with the claimed key. It says nothing about whether that key was used legitimately, whether the signed code matches the published source, or when the signing occurred. Attackers who compromise a build pipeline inherit valid signing authority.

What Sigstore Is

Sigstore is an open-source project (now under the OpenSSF, the Open Source Security Foundation) that rethinks the signing infrastructure rather than just the algorithms. It has three core components:

  • Cosign — a tool for signing and verifying container images and other software artifacts. Integrates with CI/CD pipelines.
  • Fulcio — a certificate authority that issues short-lived signing certificates tied to OpenID Connect (OIDC) identities. Instead of managing long-lived signing keys, developers authenticate via their GitHub, Google, or Microsoft identity; Fulcio issues a certificate valid for minutes, not years.
  • Rekor — an immutable, append-only transparency log (similar in concept to Certificate Transparency for TLS certificates) that records all signing events. Every signature is logged publicly; unexpected signatures on a project's artifacts are detectable by anyone watching the log.

When a developer signs with Sigstore:

  1. They authenticate to Fulcio using their OIDC identity (e.g., their GitHub account).
  2. Fulcio issues a short-lived certificate embedding their identity and expiring in minutes.
  3. Cosign signs the artifact with the short-lived key and submits the signature and certificate to Rekor.
  4. Rekor returns a signed inclusion proof — a cryptographic record that this entry is in the log.

The result: there are no long-lived private keys to steal or compromise. Each signing event is tied to a specific human identity (the OIDC identity) and recorded in a tamper-evident public log. If an attacker compromises a CI system and signs malicious artifacts, those signatures appear in Rekor — and a project's monitoring can detect unexpected signing events.
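
The toy below simulates that flow in runnable Python; it is not the real cosign, Fulcio, or Rekor API. An ephemeral key stands in for the short-lived signing key, a plain dictionary stands in for the Fulcio certificate, and a list stands in for the Rekor log; the identity string is a made-up example.

```python
# Toy simulation of the Sigstore flow: ephemeral key + identity "certificate"
# + append-only log entry per signing event. Not the real Sigstore APIs.
import hashlib
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

transparency_log = []  # stand-in for Rekor: an append-only list of signing events

def sign_artifact(artifact: bytes, oidc_identity: str) -> dict:
    ephemeral_key = Ed25519PrivateKey.generate()        # no long-lived key to steal
    cert = {                                            # stand-in for a Fulcio certificate
        "identity": oidc_identity,
        "public_key": ephemeral_key.public_key(),
        "not_after": time.time() + 600,                 # valid for minutes, not years
    }
    digest = hashlib.sha256(artifact).digest()
    entry = {"digest": digest, "signature": ephemeral_key.sign(digest), "cert": cert}
    transparency_log.append(entry)                      # every signing event is logged
    return entry

def verify_artifact(artifact: bytes, entry: dict, expected_identity: str) -> bool:
    if entry["cert"]["identity"] != expected_identity:  # who signed it?
        return False
    digest = hashlib.sha256(artifact).digest()
    try:
        entry["cert"]["public_key"].verify(entry["signature"], digest)
    except InvalidSignature:
        return False
    # In real Sigstore, Rekor's signed timestamp also proves the signing
    # happened inside the certificate's short validity window.
    return entry in transparency_log

release = b"built artifact bytes"
entry = sign_artifact(release, "ci@example-project")            # example identity
print(verify_artifact(release, entry, "ci@example-project"))    # True
```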

Transparency Logs: Learning from Certificate Transparency

The conceptual model for Rekor comes from Certificate Transparency (CT), a system major browsers began requiring for publicly trusted TLS certificates in 2018. Before CT, certificate authorities could issue certificates for any domain without leaving a public record. This enabled attacks: a rogue CA could issue a certificate for google.com and use it for man-in-the-middle attacks, with no systematic way for Google to detect it.

CT requires that all publicly-trusted TLS certificates be logged in append-only, publicly auditable logs before browsers will accept them. Google, Cloudflare, and others operate CT logs. The result: certificate misissuance is now detectable. Domain owners can monitor CT logs for certificates issued to their domains.

Rekor applies the same architecture to software artifact signing. The log is a Merkle tree: each entry contains a hash of the artifact, the signature, and the signing certificate. The tree's root hash commits to every entry, so modifying or removing any historical entry changes the root that verifiers and monitors have already observed. The transparency property is structural, not policy-based.
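
A minimal Python sketch of that tamper-evidence property: the root hash commits to every entry, so rewriting any historical entry changes the root that monitors have already recorded. Rekor's real log structure is more elaborate, but the core property is the same.

```python
# Toy Merkle-tree log: changing any historical entry changes the root hash.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    """Compute the Merkle root over a list of log entries."""
    level = [_h(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2 == 1:            # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

entries = [b"signing event 1", b"signing event 2", b"signing event 3", b"signing event 4"]
root_seen_by_monitors = merkle_root(entries)

entries[1] = b"forged signing event"                   # attacker rewrites history...
print(merkle_root(entries) != root_seen_by_monitors)   # True: ...and the root no longer matches
```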

GPG Signing for Git Commits: Different Problem, Complementary Tool

Sigstore addresses artifact signing in CI/CD and package distribution. Git commit signing addresses a different problem: proving that a commit attributed to a person was actually made by them.

When you GPG-sign a Git commit, the signature covers the commit content and metadata. Anyone who has your public key can verify you made the commit. GitHub, GitLab, and Gitea display verified badges on signed commits. This matters for projects where the commit history is itself a security property — if attackers can forge commits attributed to maintainers, they can socially engineer merges of malicious code.

The SSH signing support added to Git 2.34 simplified this: rather than managing a GPG keyring, you can sign commits with an SSH key. GitHub supports SSH signatures directly without requiring GPG.
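
As a rough sketch of the setup, the snippet below drives the relevant Git configuration from Python. The config keys (`gpg.format`, `user.signingkey`, `commit.gpgsign`, `gpg.ssh.allowedSignersFile`) are standard Git settings, but the file paths are examples, and running this would modify your global Git configuration.

```python
# Sketch: switch Git commit signing to SSH keys (Git >= 2.34) and inspect a signature.
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

# Sign with an SSH key instead of GPG, and sign every commit by default.
git("config", "--global", "gpg.format", "ssh")
git("config", "--global", "user.signingkey", "/home/me/.ssh/id_ed25519.pub")  # example path
git("config", "--global", "commit.gpgsign", "true")

# Verification needs an allowed-signers file mapping emails to public keys.
git("config", "--global", "gpg.ssh.allowedSignersFile",
    "/home/me/.ssh/allowed_signers")                                          # example path

# After committing, anyone with that file can check who made the commit.
print(git("log", "--show-signature", "-1"))
```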

| Approach | Protects | Key management | Transparency |
| --- | --- | --- | --- |
| Classical code signing | Artifact integrity from known key | Long-lived private key | None |
| Sigstore / Cosign | Artifact integrity + signing identity + audit trail | Short-lived, OIDC-backed; no persistent private key | Public log (Rekor) |
| GPG commit signing | Commit authorship attribution | Long-lived GPG key | Via WoT / keyservers |
| Reproducible builds | Binary matches published source | N/A (multiple independent verifiers) | Independent reproduction |

Reproducible Builds: Signing Isn't Enough

Even perfect code signing doesn't answer one question: does this signed binary actually correspond to the published source code? A developer could sign a binary built from source code they haven't published. A malicious insider could modify the build process without changing the source repository.

Reproducible builds address this orthogonal problem. When a build is reproducible, independently following the documented build process from the same source produces a bit-for-bit identical binary. Multiple parties can verify the build; agreement among independent verifiers provides evidence that the binary matches the source. Debian, Bitcoin Core, and Tor Browser have extensive reproducible build infrastructure.
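
The verification step this enables is easy to sketch: independent builders hash what they produced and compare digests. The filenames below are placeholders for an official release artifact and one rebuilt locally from source.

```python
# Compare an official release against an independent rebuild, bit for bit.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256_file("release-official.bin")            # placeholder filename
rebuilt = sha256_file("release-rebuilt-from-source.bin")  # placeholder filename

if official == rebuilt:
    print("bit-for-bit identical: the binary corresponds to the published source")
else:
    print("mismatch: the build is not reproducible, or something was tampered with")
```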

The combination of Sigstore (who signed it, when, logged publicly) plus reproducible builds (the binary matches the source) provides defense in depth that no single mechanism offers alone. This is the architecture serious security-sensitive open source projects are moving toward.

What This Means for Users

For most software users, these mechanisms operate invisibly — package managers, app stores, and update systems handle verification automatically. But understanding the architecture matters for two reasons.

First, it clarifies what to look for when evaluating security-sensitive software. Projects that publish signed releases with detached signatures, maintain entries in Rekor, and have reproducible build infrastructure have made concrete investments in supply chain integrity. Projects that distribute unsigned binaries or rely on HTTPS-only distribution have not.

Second, it frames the correct threat model. The compromise vectors that have worked against major organizations, including SolarWinds and XZ Utils (the 2024 backdoor that shipped in upstream releases and nearly reached the stable releases of major Linux distributions), target the build and distribution pipeline, not the cryptographic algorithms. The question isn't "can I verify the SHA256?" but "can I verify what was actually built and signed, by whom, and when?"

For users of privacy-sensitive applications specifically, this matters: an application that encrypts your communications provides no protection if a malicious update can be delivered silently through a compromised build pipeline. Signed releases, logged in a transparency ledger, built reproducibly from audited source — that's the architecture that closes the gap.

Originally published at havenmessenger.com
