How to Securely Access Private Resources in a Kubernetes Cluster
Traditionally, exposing Kubernetes services involves using either LoadBalancer Services
(offered by cloud providers for distributing traffic) or Ingress Controllers
(a more advanced approach for routing traffic based on Ingress resources).
However, both methods expose a new public IP address to the internet, increasing the potential attack surface for malicious actors to exploit vulnerabilities and compromise your Kubernetes network or workloads.
Connecting your Kubernetes cluster directly to your n2x.io network topology allows you to safely expose cluster access without the security risks of load balancers or ingress controllers. There are two ways to connect the applications running on Kubernetes to your n2x.io network topology: at Pod-level or at Service-level.
Once you've added n2x.io to your cluster, you can make your Kubernetes resources available in your n2x.io network topology. This lets you access them securely from any of your other n2x-nodes.
In this tutorial, you will see how to connect Kubernetes cluster resources to your n2x.io network topology in two different modes.
Before you begin
To complete this tutorial, you must meet the following requirements:
- Access to at least two Kubernetes clusters, version v1.27.x or greater.
- An n2x.io account and one subnet with the 10.254.1.0/24 prefix.
- The n2xctl command-line tool installed, version v0.0.3 or greater.
- The kubectl command-line tool installed, version v1.27.x or greater.
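Before moving on, you can quickly confirm that the tooling is in place. This is only a rough sketch, assuming kubectl is already configured with a context for your cluster and n2xctl is on your PATH:
# Confirm kubectl works and can reach the cluster
kubectl version --client
kubectl cluster-info
# Confirm the n2xctl binary is installed and available on your PATH
command -v n2xctl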
Note
Please note that this tutorial uses an Ubuntu 22.04 (Jammy Jellyfish) Linux system on the amd64 architecture.
Creating a Sample Application
In the first step, we are going to create a sample application to run in your Kubernetes cluster. This application will help us demonstrate the different options for connecting to the n2x.io network topology.
Copy the following YAML manifest and save it to web-server.yaml
in your working directory:
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-server-deploy
spec:
replicas: 1
selector:
matchLabels:
app: web-server
template:
metadata:
labels:
app: web-server
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: web-server-svc
spec:
selector:
app: web-server
ports:
- name: http
port: 80
targetPort: 80
type: ClusterIP
The manifest defines a Deployment that manages a replicated set of pods running the nginx web server container. It also includes a ClusterIP Service that exposes the Deployment internally within the cluster on port 80.
To deploy both the web server and service, run the following command:
kubectl apply -f web-server.yaml
Check if the pod is running and the service was deployed successfully:
kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/web-server-deploy-95d5d7f57-zznkc 1/1 Running 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 64s
service/web-server-svc ClusterIP 10.96.26.13 <none> 80/TCP 13s
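Optionally, you can confirm that nginx is serving traffic inside the cluster before exposing it through n2x.io. A minimal sketch using kubectl port-forward (the local port 8080 is an arbitrary choice):
# Wait until the pod behind the Service is ready
kubectl wait --for=condition=Ready pod -l app=web-server --timeout=60s
# Forward a local port to the ClusterIP Service and test it with curl
kubectl port-forward svc/web-server-svc 8080:80 &
sleep 2
curl http://localhost:8080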
Understanding n2x.io Kubernetes Connection Types
There are two ways of connecting the applications you have running on Kubernetes to your n2x.io topology:
Kubernetes Services
This method exposes Kubernetes ClusterIP Services as endpoints within the n2x.io subnet, making them accessible to other n2x-nodes.
Info
Learn more about Kubernetes Services here.
To connect a new Kubernetes service to the n2x.io subnet, you can execute the following command:
n2xctl k8s svc connect
Connecting services requires a Kubernetes Gateway running in the cluster. The first time you try to connect a service on a Kubernetes cluster, n2x.io checks whether such a gateway exists; if it doesn't, it offers to create one. Simply type Y at the Want to create one? prompt.
Info
Learn how to remove a Kubernetes Gateway deployed in your cluster here.
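If you want to double-check that the gateway was deployed, you can look for its pods in the cluster. This is only a sketch; the namespace and pod names used by the gateway may differ in your installation:
# List all pods and filter for anything related to n2x
kubectl get pods --all-namespaces | grep -i n2x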
Once you have a running Kubernetes Gateway available in the subnet, you can connect your service to the n2x.io subnet. There are two ways to perform this connection:
The first way is to run the n2xctl command shown above:
n2xctl k8s svc connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the service you want to connect by selecting it with the space key and pressing Enter. In this case, we will select default: web-server-svc.
The second way is to add an n2x.io annotation block to the Service manifest. The YAML manifest below is a variant of the web-server-svc Service created earlier; a new n2x.io annotation block has been added under the Service's metadata.annotations field, which automatically exposes the service in your n2x.io topology:
apiVersion: v1
kind: Service
metadata:
name: web-server-svc
annotations:
n2x.io/account: <n2x_account_id>
n2x.io/dnsName: web-server-svc
n2x.io/ipv4: auto
n2x.io/network: <n2x_net_id>
n2x.io/subnet: <n2x_subnet_id>
n2x.io/tenant: <n2x_tenant_id>
spec:
selector:
app: web-server
ports:
- name: http
port: 80
targetPort: 80
type: ClusterIP
Here’s what the above annotations mean:
- n2x.io/account: The account identifier of the n2x.io Kubernetes Gateway node to which you will connect the service (replace <n2x_account_id> with your actual value).
- n2x.io/dnsName: The prefix used to build the FQDN of the service.
- n2x.io/ipv4: The method used to assign the IP address from the n2x.io subnet (auto or a static IP).
- n2x.io/network: The network identifier of the n2x.io Kubernetes Gateway node to which you will connect the service (replace <n2x_net_id> with your actual value).
- n2x.io/subnet: The subnet identifier of the n2x.io Kubernetes Gateway node to which you will connect the service (replace <n2x_subnet_id> with your actual value).
- n2x.io/tenant: The tenant identifier of the n2x.io Kubernetes Gateway node to which you will connect the service (replace <n2x_tenant_id> with your actual value).
Info
Easily retrieve this information (<n2x_account_id>, <n2x_net_id>, <n2x_subnet_id>, and <n2x_tenant_id>) using the n2x.io CLI.
Save the YAML to web-server-svc.yaml and run the following command to apply it to your cluster:
kubectl apply -f web-server-svc.yaml
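Optionally, you can check that the annotations were applied by reading them back from the Service:
# Print the annotations of the Service as JSON
kubectl get svc web-server-svc -o jsonpath='{.metadata.annotations}'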
The Kubernetes Gateway will detect the annotations and automatically set up the corresponding endpoint. To find the IP address assigned to the service:
1. Access the n2x.io WebUI and log in.
2. In the left menu, click on the Network Topology section and choose the subnet associated with your service (e.g., subnet-10-254-0).
3. Click on the IPAM section. Here, you'll see both the IPv4 and IPv6 addresses assigned to the web-server-svc.default.n2x.local endpoint. Identify the IP address you need for your specific use case.
Kubernetes Workloads
This method connects the Kubernetes Pods of your workload (such as Deployments, StatefulSets, etc.) to an n2x.io subnet. It achieves this by injecting a sidecar container alongside the application container in each pod. This sidecar container acts as an n2x-node, enabling two-way communication between the pod and the n2x.io subnet.
Info
Learn more about Kubernetes Workloads here.
To connect a new Kubernetes workload to the n2x.io subnet, you can execute the following command:
n2xctl k8s workload connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the workload you want to connect by selecting it with the space key and pressing Enter. In this case, we will select web-server-deploy.
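One way to confirm that the sidecar was injected is to list the containers of the workload's pod. A minimal sketch, assuming the pod template keeps the app: web-server label; the exact name of the injected sidecar container may vary:
# List the container names in the web-server pod; after connecting, you should
# see the n2x sidecar container listed next to nginx
kubectl get pod -l app=web-server -o jsonpath='{.items[0].spec.containers[*].name}'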
After connecting the Kubernetes workload to the n2x.io topology, you can see that a new node has been added in the n2x.io WebUI.
Checking Access to Kubernetes Resources
Once any Kubernetes workload or service is added to the n2x.io topology, you can establish end-to-end connections from any other n2x-node connected to the same topology.
Tip
To control which n2x-nodes can communicate with each other, you can configure security policies in your subnet.
Now, to communicate with the resources in the Kubernetes cluster, you can add a new n2x-node connected to the same subnet. You can install the n2x-node agent on your desired device (such as the laptop in this example) using the provided one-line command.
The node web-server-deploy has an endpoint with the IP address 10.254.0.237. To verify access, you can now check the connection from your laptop to this IP address using a tool like curl or wget.
$ curl http://10.254.0.237
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
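The same kind of check works for the service connected at Service-level: use the IPv4 address assigned to web-server-svc in the IPAM section or, assuming DNS resolution for the n2x.local zone is available on your n2x-node, its FQDN. For example (the IP placeholder below is yours to fill in):
# Replace with the address assigned to web-server-svc in your subnet
curl http://<service-ipv4-address>
# Or, if n2x.local name resolution is configured on this n2x-node:
curl http://web-server-svc.default.n2x.local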
Conclusion
Traditional methods like LoadBalancer Services and Ingress Controllers for exposing Kubernetes services come with inherent security risks. n2x.io offers a secure alternative by connecting your Kubernetes cluster to its network topology, allowing controlled access to resources from any n2x-node. This guide showed you how to connect your Kubernetes resources using n2x.io in two different modes: Pod-level and Service-level. By following these steps, you can securely expose your applications within a trusted n2x.io environment.