As cyber threats and data breaches continue to pose a serious risk, DevOps/DevSecOps teams are focusing on implementing more effective security measures and strategies to protect their systems and data at every stage of the software development process.
The zero trust security model is one such strategy that has attracted a significant amount of attention.
What is a zero trust security model?
Zero trust is a security model that assumes resources, configurations, user data, management tools, devices and more can be compromised, whether the infrastructure is in the cloud, on-premises, or spans both the inside and the outside of an organisation's network. To address this, security leaders and product teams should eliminate assumed trust in all third-party tools, team members and services, and instead validate every step of every interaction.
Hence, this model requires continuous authentication and authorisation of users, devices and applications, regardless of their location or network.
In this article, we'll discuss three ways to achieve zero trust security in your infrastructure.
Implementing identity and access management (IAM) policies
Having a solid identity and access management (IAM) strategy in place is the first step in putting a zero trust security paradigm into practice.
In order to do this, user and workload identities and their access rights must be identified and verified prior to every interaction. To guarantee that only authorised people and machines have access to critical data, the IAM policy should also contain strong password policies, multi-factor authentication (MFA) and regular access reviews.
IAM automates this process while also providing administrators with auditing features and more precise control over access across the entire organisation. This is timely, given the proliferation of IoT devices and the adoption of zero trust models, both of which have raised the bar for cyber security.
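As an illustration of what such a policy can look like in code, here is a minimal sketch, assuming AWS and the boto3 SDK, that enforces a strong account-wide password policy; the specific thresholds are illustrative, and MFA enforcement and regular access reviews would be layered on top of this.
```python
# Sketch: enforce a strong account-wide password policy with boto3 (AWS IAM).
# The thresholds below are illustrative values, not recommendations.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,          # long passwords
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,                 # force regular rotation
    PasswordReusePrevention=24,        # block password reuse
    AllowUsersToChangePassword=True,
)
```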
For instance, in cloud infrastructure, whether it's on GCP, Azure or AWS, using the provider's built-in IAM policies and roles allows us to restrict access to resources so that one service is not aware of, or open to, resources it does not need. If the production environment is left exposed, a single system that falls to an attack or malicious activity can be used to harm other resources as well.
More specifically, consider a web app hosted on an EC2 instance in AWS, and say the app has a feature that regularly uploads data to an S3 bucket. Instead of handing the application long-lived AWS access keys, we can attach a role directly to that EC2 instance, scoped to that specific bucket and limited to the GET, PUT and LIST actions, so that the access really is specific.
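A minimal sketch of that setup, assuming AWS and the boto3 SDK, is shown below; the role, instance profile and bucket names are hypothetical, and in practice the instance profile would be attached when the EC2 instance is launched.
```python
# Sketch: create a role for an EC2 instance that may only GET, PUT and LIST
# objects in one specific S3 bucket. All names below are hypothetical.
import json
import boto3

iam = boto3.client("iam")

ASSUME_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-upload-bucket",
            "arn:aws:s3:::example-upload-bucket/*",
        ],
    }],
}

iam.create_role(
    RoleName="webapp-s3-upload-role",
    AssumeRolePolicyDocument=json.dumps(ASSUME_ROLE_POLICY),
)
iam.put_role_policy(
    RoleName="webapp-s3-upload-role",
    PolicyName="scoped-bucket-access",
    PolicyDocument=json.dumps(BUCKET_POLICY),
)

# The role is exposed to the instance through an instance profile.
iam.create_instance_profile(InstanceProfileName="webapp-s3-upload-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="webapp-s3-upload-profile",
    RoleName="webapp-s3-upload-role",
)
```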
In Kubernetes clusters, the same fine-grained control can be achieved with RBAC and network policies. A tool called Akeyless can integrate with your cloud provider and Kubernetes cluster to make authentication and authorisation more secure.
For Kubernetes, the Akeyless Kubernetes Auth Method uses the application's JWT to verify the Kubernetes workload. This JWT is only ever shared with the Gateway, which runs in and is managed by the user's environment, and never with Akeyless or any other third party during the process. As a result, authentication happens in a truly zero trust-compliant manner. While there are many services that help with IAM, Akeyless's centralised SaaS structure is optimised for multicloud development environments.
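To illustrate least privilege inside the cluster itself, here is a hedged sketch (assuming PyYAML and a hypothetical namespace, service account and role name) that generates an RBAC Role and RoleBinding granting one service account read-only access to Secrets in a single namespace; the output would be applied with kubectl.
```python
# Sketch: least-privilege RBAC for one service account in one namespace.
# Names are hypothetical; apply the output with `kubectl apply -f -`.
import yaml

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "secret-reader", "namespace": "webapp"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["secrets"],
        "verbs": ["get", "list"],   # read-only, no create/delete
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "secret-reader-binding", "namespace": "webapp"},
    "subjects": [{
        "kind": "ServiceAccount",
        "name": "webapp-sa",
        "namespace": "webapp",
    }],
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Role",
        "name": "secret-reader",
    },
}

print(yaml.safe_dump_all([role, binding]))
```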
Implementing network segmentation
The second step to implementing a zero trust security model is to segment your network. This involves creating smaller sub-networks within the larger enterprise cluster, with strict controls on the communication between them, ensuring that if one sub-network is compromised, the others remain secure.
In addition, there are external tools that provide observability and traceability for the network, such as HashiCorp's Consul, Cilium and Istio's service mesh.
These tools help implement network policies between the different services deployed on the cluster, and help control and monitor the traffic flow between them.
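As a sketch of what such segmentation looks like at the Kubernetes level, the following generates a NetworkPolicy that allows only web-tier pods to reach database-tier pods on their service port; the labels, namespace and port are hypothetical assumptions, not taken from any particular setup.
```python
# Sketch: a NetworkPolicy that lets only web-tier pods reach database pods
# on port 5432. Labels, namespace and port are hypothetical; apply the
# output with `kubectl apply -f -`.
import yaml

network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-web-only", "namespace": "webapp"},
    "spec": {
        "podSelector": {"matchLabels": {"tier": "database"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"tier": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}

print(yaml.safe_dump(network_policy))
```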
SaaS extensions based on stateless gateways, which are transparent to internal operations, allow for service continuity and recovery; you don't need to change any network infrastructure for them to work with your internal resources.
Implementing data encryption
The third methodology to implement zero trust is to encrypt all sensitive data, both in transit and at rest. To achieve this, it is recommended to use industry-standard encryption algorithms such as AES-256 and RSA.
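As a hedged sketch of what AES-256 looks like in application code, the example below uses the Python cryptography package and AES-GCM (an authenticated mode); it is a generic illustration, not any vendor's implementation, and in production the key would come from a key management service rather than being generated inline.
```python
# Sketch: AES-256-GCM encryption and decryption with the `cryptography` package.
# In production the key would come from a KMS or secrets manager, not be
# generated inline like this.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key => AES-256
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
plaintext = b"customer record: account=1234"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption fails loudly if the ciphertext or nonce has been tampered with.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```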
Encrypting data at rest involves using encryption techniques to protect data stored in databases, servers and other storage devices. To achieve this, Akeyless uses proprietary encryption algorithms to protect secrets stored in its vault at rest, while also providing key management services. As an added layer of security, only parts of your keys are held, encrypted, in the Akeyless vault's storage, while the other parts remain in your own infrastructure.
Many cloud services offer encryption at rest as a built-in feature. In S3 buckets, RDS and other AWS services you can enable it with a simple setting, and in Kubernetes clusters you can enable encryption at rest for etcd so that sensitive objects such as Secrets are not stored on disk in plain text.
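For example, here is a minimal sketch, assuming AWS and boto3, of turning on default server-side encryption for an S3 bucket (the bucket name is hypothetical; SSE-KMS could be used instead of SSE-S3):
```python
# Sketch: enable default server-side encryption (SSE-S3, AES-256) on a bucket.
# The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-upload-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)
```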
To achieve encryption of data in transit, you can use transport layer security (TLS), the successor to the older secure sockets layer (SSL), to encrypt your data as it travels between different devices, networks and systems.
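A minimal sketch of enforcing this on the client side with Python's standard ssl module, verifying the server certificate and refusing anything older than TLS 1.2 (the hostname is a placeholder):
```python
# Sketch: open a TLS connection that verifies the server certificate and
# refuses anything older than TLS 1.2. "example.com" is a placeholder.
import socket
import ssl

context = ssl.create_default_context()          # verifies certs and hostname
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher())
```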
Pillars of pipeline scepticism
Using the three above-mentioned pillars, you can implement the zero trust model in your infrastructure. Making sure the model is applied in your production environment strengthens your security posture and is essential for compliance.
But most importantly, it helps developers and DevOps teams to build more secure products, ensuring that once a project has been deployed, it will be less exposed to external threats.