Gathering Metrics with Prometheus across Multiple Sites
In this tutorial, we will create a network topology using n2x.io to enable communication across a public Kubernetes cluster, a private Kubernetes cluster, and an Ubuntu server in a private data center. We will deploy the Prometheus server in the private Kubernetes cluster, and Node Exporters to scrape metrics from the Ubuntu server and the worker nodes of the public Kubernetes cluster. We will then access the Prometheus server's web UI to browse targets, query, and graph the collected metrics.
Here is the high-level overview of our setup architecture:

In our setup, we will be using the following components:
- Prometheus scrapes metrics from exporters and stores them in a TSDB (time series database). For more info, please visit the Prometheus Documentation.
- Node Exporter is the Prometheus agent that collects server-level metrics such as CPU and memory and exposes them on the /metrics endpoint for Prometheus to scrape. For more info, please visit the Node Exporter Documentation.
- n2x-node is an open-source agent that runs on the machines you want to connect to your n2x.io network topology. For more info, please visit the n2x.io Documentation.
Before you begin
In order to complete this tutorial, you must meet the following requirements:
- Access to at least two Kubernetes clusters, version v1.27.x or greater.
- An n2x.io account with one subnet using the 10.254.1.0/24 prefix.
- The n2xctl command-line tool installed, version v0.0.3 or greater.
- The kubectl command-line tool installed, version v1.27.x or greater.
- The helm command-line tool installed, version v3.10.1 or greater.
Note
Please note that this tutorial uses Ubuntu 22.04 (Jammy Jellyfish) on the amd64 architecture.
Step-by-step Guide
Step 1: Install Prometheus in Kubernetes private cluster
Set your context to the private Kubernetes cluster:
kubectl config use-context k8s-private
We are going to deploy Prometheus on the k8s-private cluster using the official Helm chart:
- First, let's add the following Helm repo:

  helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

- Update all the repositories to ensure Helm is aware of the latest versions:

  helm repo update

- Then we can install version 25.26.0 of Prometheus in the selected namespace. For this tutorial, it is the prometheus namespace:

  helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace --version 25.26.0

- Once it is done, you can verify that all of the pods in the prometheus namespace are up and running:

  kubectl -n prometheus get pod

  NAME                                                READY   STATUS    RESTARTS   AGE
  prometheus-alertmanager-0                           1/1     Running   0          45s
  prometheus-kube-state-metrics-85596bfdb6-5566m      1/1     Running   0          46s
  prometheus-prometheus-node-exporter-hcpgk           1/1     Running   0          46s
  prometheus-prometheus-node-exporter-jck5h           1/1     Running   0          46s
  prometheus-prometheus-pushgateway-79745d4495-9hrxc  1/1     Running   0          46s
  prometheus-server-cc55794b-qw5g6                    2/2     Running   0          46s
Accessing the Prometheus UI
To access the Prometheus UI, you can forward the Prometheus dashboard to your local machine. By default, the Prometheus pod exposes port 9090, while the service exposes port 80.
Use the following command to establish the port forwarding against the service:
kubectl -n prometheus port-forward svc/prometheus-server 9090:80
Once the port forwarding is active, open a web browser and navigate to http://localhost:9090 to access the Prometheus dashboard.
In the Status > Targets section, filtering by kubernetes-nodes, we can see that the private cluster nodes are up and connected without any additional manual configuration:
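Before moving on, you can also run a quick sanity check from the Graph tab. Prometheus's built-in up metric reports 1 for every target it can scrape; the job label below matches the kubernetes-nodes job shown above:

# Returns 1 for each healthy target in the kubernetes-nodes job
up{job="kubernetes-nodes"}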
Step 2: Connect Prometheus to our n2x.io network topology
Prometheus needs to have connectivity to remote node exporters to scrape metrics.
To connect a new Kubernetes workload to the n2x.io subnet, you can execute the following command:
n2xctl k8s workload connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the workloads you want to connect by selecting them with the space key and pressing Enter. In this case, we will select prometheus: prometheus-server.
Then, we can access the n2x.io WebUI to verify that the node is correctly connected to the selected subnet.
Step 3: Install Node Exporter in server01
Since the operating system of our server01 is Ubuntu 22.04 and the package is available in the official Ubuntu repositories, we can install it with the following command:
apt-get update && apt-get -y install prometheus-node-exporter
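The Ubuntu package registers a systemd unit and starts it automatically. Assuming the default unit name prometheus-node-exporter, you can check that the service came up:

systemctl status prometheus-node-exporter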
We can verify that prometheus-node-exporter is running properly and exposing metrics with the following command:
curl http://localhost:9100/metrics
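A healthy exporter responds with a long list of plain-text metrics. Truncated, the output looks roughly like this:

# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 1276.44
node_cpu_seconds_total{cpu="0",mode="user"} 32.41
...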
Step 4: Connect server01 to our n2x.io network topology
Now we need to connect server01 to our n2x.io network topology so that the prometheus-server can scrape the metrics.
Adding a new node to a subnet with n2x.io is very easy. Here's how:

- Head over to the n2x WebUI and navigate to the Network Topology section in the left panel.
- Click the Add Node button and ensure the new node is placed in the same subnet as the prometheus-server.
- Assign a name and description for the new node.
- Click Add New Connected Node to Subnet.
Here, we can select the environment where we are going to install the n2x-node agent. In this case, we are going to use Linux:

Run the script in the server01 terminal and check that the service is running with the command:
systemctl status n2x-node
You can use the ip addr show dev n2x0 command on server01 to display the IP address assigned to this node:
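The output should look something like the following (illustrative; the interface flags and MTU may differ on your system, but the address matches the example IP used later in this tutorial):

4: n2x0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 10.254.1.116/24 scope global n2x0
       valid_lft forever preferred_lft forever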
Note
Make a note of this IP address, as it will be used as a target in the Prometheus configuration.
Step 5: Install Node Exporter in Kubernetes public cluster worker node
Set your context to the public Kubernetes cluster:
kubectl config use-context k8s-public
Create a namespace named prometheus by running the following command:
kubectl create namespace prometheus
Although in our example we will only have one worker node, the Node Exporter needs to run on every node in the Kubernetes cluster, so we will use a DaemonSet to achieve this.
Create a file called node-exporter.yaml and add the following:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: prometheus
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
      labels:
        app: node-exporter
    spec:
      containers:
        - args:
            - --web.listen-address=0.0.0.0:9100
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          image: quay.io/prometheus/node-exporter:v1.8.2
          imagePullPolicy: IfNotPresent
          name: node-exporter
          ports:
            - containerPort: 9100
              hostPort: 9100
              name: metrics
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 50Mi
            requests:
              cpu: 100m
              memory: 30Mi
          volumeMounts:
            - mountPath: /host/proc
              name: proc
              readOnly: true
            - mountPath: /host/sys
              name: sys
              readOnly: true
      hostNetwork: true
      hostPID: true
      restartPolicy: Always
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
      volumes:
        - hostPath:
            path: /proc
            type: ""
          name: proc
        - hostPath:
            path: /sys
            type: ""
          name: sys
Go ahead and deploy Node Exporter into your Kubernetes cluster by executing the following command:
kubectl apply -f node-exporter.yaml
Run the following command to check that the node-exporter Pod is in the Running state:
kubectl -n prometheus get pods
NAME READY STATUS RESTARTS AGE
node-exporter-lcsk6 1/1 Running 0 25s
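Before connecting it to the network topology, you can optionally confirm that the exporter responds by port-forwarding directly to the Pod (substitute the Pod name from the output above):

kubectl -n prometheus port-forward pod/node-exporter-lcsk6 9100:9100

# In another terminal:
curl http://localhost:9100/metrics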
Step 6: Connect Node Exporter pods to our n2x.io network topology
Just like we did previously with server01, we need to connect the Node Exporter to our n2x.io network topology so that Prometheus can scrape metrics.
To connect a new Kubernetes workload to the n2x.io subnet, you can execute the following command:
n2xctl k8s workload connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the workloads you want to connect by selecting them with the space key and pressing Enter. In this case, we will select prometheus: node-exporter.
Now we can access the n2x.io WebUI to verify that the Node Exporters are correctly connected to the subnet.
Note
Make a note of this IP address as well, as it will also be used as a target in the Prometheus configuration.
Step 7: Configure Prometheus to scrape metrics from server01 and the worker node
Now that we have Node Exporter running and exposing metrics on server01 and on the worker node of the public Kubernetes cluster, we need to configure Prometheus to scrape these metrics.
Set your context back to the private Kubernetes cluster:
kubectl config use-context k8s-private
To configure Prometheus to scrape metrics from your new targets, we'll need to modify its configuration. This involves editing the prometheus-server ConfigMap to add a new job definition to the scrape_configs section.
kubectl -n prometheus edit cm prometheus-server
scrape_configs:
  - job_name: 'nodes'
    scrape_interval: 5s
    static_configs:
      - targets:
          - <SERVER01_IP_ADDRESS>:9100
          - <WORKER_IP_ADDRESS>:9100
Replace <SERVER01_IP_ADDRESS> with the IP address obtained earlier from the n2x0 interface of server01. In this example, the IP is 10.254.1.116.

Replace <WORKER_IP_ADDRESS> with the IP address assigned to the node-exporter workload, noted earlier in the n2x.io WebUI. In this example, the IP is 10.254.1.133.
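With the example addresses from this tutorial filled in, the finished job definition looks like this:

scrape_configs:
  - job_name: 'nodes'
    scrape_interval: 5s
    static_configs:
      - targets:
          - 10.254.1.116:9100
          - 10.254.1.133:9100

There is usually no need to restart the pod: the chart's prometheus-server pod runs a config-reloader sidecar (the second container behind the 2/2 READY count we saw earlier), which should pick up the ConfigMap change automatically after a short delay.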
After that, we can return to the Prometheus dashboard in the browser at http://localhost:9090.
In the Status > Targets section, we now have a new group called nodes. If we filter the targets by nodes, we can see the two new targets we just added, successfully connected to Prometheus:
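From here, the Graph tab can chart metrics from all three sites side by side. For example, the following standard Node Exporter queries, filtered to the nodes job we just defined, show per-instance CPU usage and available memory:

# CPU usage (%) per instance, averaged over the last 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{job="nodes", mode="idle"}[5m])) * 100)

# Available memory (bytes) per instance
node_memory_MemAvailable_bytes{job="nodes"}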
Conclusion
In this guide, we've learned how n2x.io can help us create a unified monitoring solution with Prometheus. A unified solution like this helps your organization's operations teams debug and troubleshoot faster in multi-cloud or multi-site environments. In summary, a unified monitoring solution avoids the problems caused by data silos.
If you found this topic interesting, we recommend reading the article Observability Tools as Data Silos of Troubleshooting.