Today most API communication between machines is secured through API secrets — static keys, tokens, or PKI certificates that act like system passwords to authenticate machines and broker communication between them. These machines could be cloud workloads, pods, containers, virtual machines, microservices, or physical machines like servers or Internet of Things devices.
The challenge with current mechanisms for authenticating machines is that they all prescribe a bearer model of authentication. As long as an API key, token, or certificate is valid, it can be held and used from anywhere, even by a nefarious machine. The model guarantees neither trusted access nor trust in the API client. Further adding to the risk, these API secrets are often long-lived and cumbersome to maintain hygiene around.
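The bearer model can be illustrated with a minimal sketch (the key value and function names here are hypothetical): the server validates only the token string itself, so a legitimate workload and an attacker holding a stolen copy are indistinguishable.

```python
# Minimal sketch of the bearer model. The server checks only that the
# presented token matches a known value -- nothing ties the token to
# the machine presenting it, so any caller holding the string is trusted.

VALID_TOKENS = {"sk_live_51Hxyz"}  # hypothetical static API key

def authorize(request_headers: dict) -> bool:
    """Return True if the request carries a valid bearer token."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    return token in VALID_TOKENS

# A legitimate workload and a stolen-token attacker look identical:
legit_request = {"Authorization": "Bearer sk_live_51Hxyz"}
stolen_request = {"Authorization": "Bearer sk_live_51Hxyz"}  # same token, different machine
```

Both requests pass the check above, which is precisely the problem: the credential authenticates itself, not the client.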
Perfect security hygiene would mean each API secret is uniquely assigned to only one machine, never shared, routinely rotated, and securely distributed through development and deployment systems to the machine that needs it without the risk of being leaked along the way. The reality is that API secrets are often shared across dozens or hundreds of machines and workloads. They are rarely, if ever, rotated, and provisioning and managing secrets across different applications and environments is an arduous task.
The Cost of Secrets
According to a 2021 report by 1Password, IT and DevOps workers spend an average of more than 25 minutes each day managing secrets, at an estimated payroll expense of $8.5 billion annually across companies in the US. In a world where development and deployment systems are fully automated, provisioning and rotating secrets remains a manual, laborious process.
Leaked infrastructure secrets come at a measurable cost. Exposed code, credentials, and keys — whether they are exposed accidentally or intentionally — cost companies an average of $1.2 million in revenue per year, according to the 1Password report.
More recently, the static nature of API secrets has made them ripe targets for adversaries. Much like passwords, secrets tend to become more vulnerable as they age — a problem that is only compounded when those secrets are being shared across dozens, sometimes hundreds of different workloads. Today, secrets are getting leaked at an alarming rate in code repositories, continuous integration (CI) systems like Jenkins or Travis, orchestration tools like Kubernetes, and cloud hosting environments like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Ditto for logging tools like Splunk and Elastic and even collaboration environments like Slack. Organizations leaked more than 6 million passwords, API keys, and other sensitive data in 2021, doubling the number from the previous year, according to a recent GitGuardian review.
Tools to Improve Things
Enterprises can take extra steps to keep their secrets more secure. Secrets management solutions like vaults or secrets managers help organize and better secure these system passwords. But if your organization happens to be operating workloads with all three cloud providers, your team will have to leverage three proprietary secrets management systems in order to safeguard those secrets — Azure Key Vault, AWS Secrets Manager, and Secret Manager for GCP.
Tools do exist that can scan your environments to find hard-coded secrets in source code, code repositories, CI environments, and logging systems. Some tools can also scan public personal and organization repositories for secrets that may have already been exposed.
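At their simplest, such scanners match text against patterns of known credential formats. The sketch below is illustrative only — real tools combine far richer pattern sets with entropy analysis and validity checks — and the sample strings are deliberately fake.

```python
import re

# A minimal sketch of the pattern-matching approach secret scanners use:
# flag strings that resemble known credential formats. The patterns and
# sample input below are illustrative, not production-grade.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs for likely secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

# Fake credentials hard-coded in "source":
source = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcdefghij1234567890abcd"'
```

Running `scan(source)` flags both the AWS-style access key and the generic hard-coded API key, which is the core signal these tools surface across repositories, CI logs, and configuration files.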
An Unbearable Burden
One significant blind spot for many engineering and security teams is visibility into the identities of the machines, applications, services, or workloads that are using API secrets. This is where the bearer model starts to break down: a valid secret says nothing about the entity presenting it.
The manual nature of secrets management, the vulnerability of these static values, and the explosion of API usage have together significantly increased the number of compromised keys, tokens, and certificates. Combined with this lack of visibility into the entities using API secrets, it has made the bearer model untenable. CISA's zero-trust framework and NIST SP 800-207 provide guidelines for how organizations should think about machines and workloads as nonperson entities — where the user isn't a human, but rather another application or service account.
While CISA and NIST guidelines assist organizations in handling identity and access, the solution to this problem has already been established for human-to-machine interactions: multifactor authentication (MFA). The genesis of MFA, as it relates to the applications and services human users access, was the need to validate user identity beyond a single set of credentials. Many companies require employees to enable MFA to ensure that it is indeed Jane who is trying to access the CRM application with her username and password, just in case her user credentials were compromised. The static nature of passwords, coupled with how long they tend to remain in use well past most security guidelines, makes them ripe targets for adversaries, which is why many organizations mandate MFA to access applications that contain sensitive data.
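For contrast with static secrets, it helps to see why an MFA second factor resists replay. Many MFA deployments use a time-based one-time password (TOTP, RFC 6238): a short code derived from a shared secret and the current time, so a stolen code expires within seconds. A minimal sketch, using the RFC's published SHA-1 test secret:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    secret_b32: base32-encoded shared secret
    at: Unix timestamp at which to compute the code
    """
    key = base64.b32decode(secret_b32)
    # Counter = number of time steps since the Unix epoch
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Base32 encoding of the RFC 6238 test secret "12345678901234567890"
RFC_TEST_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Unlike a static API key, the code changes every time step, so possession of one observed value is useless moments later — the property the bearer model lacks.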
The bearer model is no different — keys, tokens, and certificates are static values that act as system passwords. The challenge many organizations face is having visibility into the identities of the machines, applications, workloads, or services that are ultimately leveraging those credentials.