Gathering Metrics with Grafana Alloy and Mimir across Multiple Sites
In this guide, we will learn how to deploy a centralized observability architecture using n2x.io, Grafana Mimir, and Grafana Alloy. This architecture collects metrics from a physical server and two Kubernetes clusters (one private and one cloud-based) using node_exporter, sending all data to a single Mimir backend.
Here is the high-level overview of our setup architecture:
In our setup, we will be using the following components:
- Grafana is an analytics and interactive visualization platform. For more info, please visit the Grafana Documentation.
- Grafana Mimir is an open-source, horizontally scalable, highly available, multi-tenant time series database designed for long-term storage of Prometheus and OpenTelemetry metrics. It enables users to ingest metrics, run queries, set up recording and alerting rules, and manage data across multiple tenants. For more information, please visit the Grafana Mimir Documentation.
- Grafana Alloy is an open-source, vendor-neutral distribution of the OpenTelemetry Collector. It enables the collection, processing, and export of telemetry data—including metrics, logs, traces, and profiles—through programmable pipelines. Alloy supports both push and pull data collection methods and integrates seamlessly with backends like Grafana Mimir, Loki, and Tempo. For more information, please visit the Grafana Alloy Documentation.
- Node Exporter is the Prometheus agent that collects server-level metrics such as CPU, memory, and disk, and exposes them on the /metrics endpoint for Prometheus-compatible scrapers to fetch. For more info, please visit the Node Exporter Documentation.
- n2x-node is an open-source agent that runs on the machines you want to connect to your n2x.io network topology. For more info, please visit the n2x.io Documentation.
Before you begin
In order to complete this tutorial, you must meet the following requirements:
- Access to at least two Kubernetes clusters, version v1.32.x or greater.
- An n2x.io account created and one subnet with the 10.250.1.0/24 prefix.
- Installed n2xctl command-line tool, version v0.0.4 or greater.
- Installed kubectl command-line tool, version v1.32.x or greater.
- Installed helm command-line tool, version v3.17.3 or greater.
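Optionally, you can sanity-check the installed tool versions before starting. A quick sketch; note that the n2xctl version subcommand is an assumption based on the usual CLI convention:

kubectl version --client
helm version --short
# assumption: n2xctl reports its version this way
n2xctl version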
Note
Please note that this tutorial uses Ubuntu 24.04 (Noble Numbat) on the amd64 architecture.
Step-by-step Guide
Step 1: Install Grafana Mimir on a private cluster
Setting your context to the Kubernetes private cluster:
kubectl config use-context k8s-private
We are going to deploy Grafana Mimir on the k8s-private cluster using the official Helm chart:
- First, let's add the following Helm repo:

helm repo add grafana https://grafana.github.io/helm-charts

- Update all the repositories to ensure helm is aware of the latest versions:

helm repo update

- Create the configuration file custom.yaml with this information:

metaMonitoring:
  serviceMonitor:
    enabled: true
  grafanaAgent:
    enabled: true
    installOperator: true
    metrics:
      additionalRemoteWriteConfigs:
        - url: "http://mimir-nginx.mimir.svc:80/api/v1/push"
ingester:
  replicas: 1
  zoneAwareReplication:
    migration:
      enabled: false
      replicas: null

- Go ahead and deploy Grafana Mimir into your Kubernetes cluster by executing the following command:

helm install mimir grafana/mimir-distributed -f custom.yaml -n mimir --create-namespace --version 5.7.0

- Run the following command to check that the Grafana Mimir Pods are in the Running state:

kubectl -n mimir get pods

NAME                                            READY   STATUS      RESTARTS   AGE
mimir-alertmanager-0                            1/1     Running     0          5m41s
mimir-compactor-0                               1/1     Running     0          5m41s
mimir-distributor-556bc88958-8rzn7              1/1     Running     0          5m41s
mimir-grafana-agent-operator-5ffc87b4f7-jgmxv   1/1     Running     0          5m41s
mimir-ingester-zone-a-0                         1/1     Running     0          5m41s
mimir-ingester-zone-b-0                         1/1     Running     0          5m41s
mimir-ingester-zone-c-0                         1/1     Running     0          5m41s
mimir-make-minio-buckets-5.4.0-l4dz8            0/1     Completed   0          5m41s
mimir-meta-monitoring-0                         2/2     Running     0          3m59s
mimir-meta-monitoring-logs-srk6l                2/2     Running     0          3m58s
mimir-minio-5477c4c7b4-x2whc                    1/1     Running     0          5m41s
mimir-nginx-7b49958f6b-4k2zm                    1/1     Running     0          5m41s
mimir-overrides-exporter-67b4c999f9-g4fk6       1/1     Running     0          5m41s
mimir-querier-cdc56ffc6-7vrqq                   1/1     Running     0          5m41s
mimir-querier-cdc56ffc6-zgwfv                   1/1     Running     0          5m41s
mimir-query-frontend-cbb44597f-j5gg9            1/1     Running     0          5m41s
mimir-query-scheduler-57686f5bd-gzpts           1/1     Running     0          5m41s
mimir-query-scheduler-57686f5bd-xngwc           1/1     Running     0          5m41s
mimir-rollout-operator-5d576bc569-mqjt4         1/1     Running     0          5m41s
mimir-ruler-6b4858c795-5wkf2                    1/1     Running     0          5m41s
mimir-store-gateway-zone-a-0                    1/1     Running     0          5m41s
mimir-store-gateway-zone-b-0                    1/1     Running     0          5m41s
mimir-store-gateway-zone-c-0                    1/1     Running     0          5m41s
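Optionally, you can smoke-test the Mimir gateway before pointing any agents at it. A minimal check, assuming the chart's default nginx gateway, which answers 200 OK at the root path:

# forward the gateway service to localhost
kubectl -n mimir port-forward svc/mimir-nginx 8080:80 &
sleep 2   # give the tunnel a moment to establish
# expect "OK" back from the gateway root
curl -s http://localhost:8080/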
Step 2: Install Grafana Alloy on a private cluster
Setting your context to the Kubernetes private cluster:
kubectl config use-context k8s-private
We are going to deploy Grafana Alloy on the k8s-private cluster using the official Helm chart:
- First, let's add the following Helm repo:

helm repo add grafana https://grafana.github.io/helm-charts

- Update all the repositories to ensure helm is aware of the latest versions:

helm repo update

- Create the configuration file values.yaml with this information:

alloy:
  configMap:
    content: |
      prometheus.remote_write "default" {
        endpoint {
          url = "http://mimir-nginx.mimir.svc:80/api/v1/push"
        }
      }

serviceAccount:
  create: true
  name: alloy-sa

rbac:
  create: true
  name: alloy-discovery
  rules:
    - apiGroups: [""]
      resources:
        - pods
        - namespaces
        - endpoints
      verbs: ["get", "list", "watch"]

- Go ahead and deploy Grafana Alloy into your Kubernetes cluster by executing the following command:

helm install alloy grafana/alloy -n alloy --create-namespace -f values.yaml --version 1.0.3

- Run the following command to check that the Alloy Pod is in the Running state:

kubectl -n alloy get pods

NAME          READY   STATUS    RESTARTS   AGE
alloy-d2xr4   2/2     Running   0          41s
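Alloy also serves a built-in web UI that lists each pipeline component and its health. Assuming the chart's default listen port of 12345, you can reach it with a port-forward and then open http://localhost:12345 in a browser:

# forward the Alloy service's HTTP port to localhost
kubectl -n alloy port-forward svc/alloy 12345:12345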
Step 3: Connect Grafana Alloy to our n2x.io network topology
Grafana Alloy needs to have connectivity to remote node exporters to scrape metrics.
To connect new Kubernetes workloads to the n2x.io subnet, you can execute the following command:
n2xctl k8s workload connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the workloads you want to connect by selecting them with the space key and pressing Enter. In this case, we will select alloy: alloy.

Now we can access the n2x.io WebUI to verify that Grafana Alloy is correctly connected to the subnet.

Step 4: Install Node Exporter on the Private and Public Clusters
Setting your context to the Kubernetes private cluster:
kubectl config use-context k8s-private
We are going to deploy Node Exporter on the k8s-private cluster using the official Helm chart:
- First, let's add the following Helm repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

- Update all the repositories to ensure helm is aware of the latest versions:

helm repo update

- Go ahead and deploy Node Exporter into your Kubernetes cluster by executing the following command:

helm install node-exporter prometheus-community/prometheus-node-exporter --namespace monitoring --create-namespace --version 4.46.1

- Run the following command to check that the node-exporter Pods are in the Running state:

kubectl -n monitoring get pods

NAME                                           READY   STATUS    RESTARTS   AGE
node-exporter-prometheus-node-exporter-9x5zz   1/1     Running   0          3m39s
node-exporter-prometheus-node-exporter-m4cxn   1/1     Running   0          3m39s
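Before wiring anything across sites, you can confirm the exporter is serving metrics inside the cluster. A quick check, assuming kubectl resolves the DaemonSet shown above to one of its pods:

# forward one exporter pod to localhost and peek at its metrics
kubectl -n monitoring port-forward ds/node-exporter-prometheus-node-exporter 9100:9100 &
sleep 2   # give the tunnel a moment to establish
curl -s http://localhost:9100/metrics | head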
Also Required
This procedure must also be completed in the k8s-public context.
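For convenience, the equivalent commands against the public cluster are:

kubectl config use-context k8s-public
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install node-exporter prometheus-community/prometheus-node-exporter --namespace monitoring --create-namespace --version 4.46.1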
Step 5: Connect the Public Cluster's Node Exporter to the n2x.io network topology
Setting your context to the Kubernetes public cluster:
kubectl config use-context k8s-public
To connect new Kubernetes workloads to the n2x.io subnet, you can execute the following command:
n2xctl k8s workload connect
The command will typically prompt you to select the Tenant, Network, and Subnet from your available n2x.io topology options. Then, you can choose the workloads you want to connect by selecting them with the space key and pressing Enter. In this case, we will select monitoring: node-exporter-prometheus-node-exporter.

Now we can access the n2x.io WebUI to verify that the Node Exporters are correctly connected to the subnet.

Note
Make a note of this IP address, as it will be used as a target in the Grafana Alloy configuration.
Step 6: Install Node Exporter in server01
Since the operating system of our server01 is Ubuntu 24.04 and the package is available in the official Ubuntu repositories, we can install it with the following command:
apt-get update && apt-get -y install prometheus-node-exporter
We can verify that prometheus-node-exporter is running properly and exposing metrics with the following command:
curl http://localhost:9100/metrics
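To confirm a specific collector is reporting, you can filter the output; for example, the per-CPU counters:

# show a few of the CPU time series exposed by the exporter
curl -s http://localhost:9100/metrics | grep '^node_cpu_seconds_total' | head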
Step 7: Connect server01 to our n2x.io network topology
Now we need to connect server01 to our n2x.io network topology so that the Grafana Alloy can scrape the metrics.
Adding a new node in a subnet with n2x.io is very easy. Here's how:

- Head over to the n2x.io WebUI and navigate to the Network Topology section in the left panel.
- Click the Add Node button and ensure the new node is placed in the same subnet as the Grafana Alloy.
- Assign a name and description for the new node.
- Click Add New Connected Node to Subnet.
Here, we can select the environment where we are going to install the n2x-node agent. In this case, we are going to use Linux:

Run the script in the server01 terminal and check that the service is running with the following command:
systemctl status n2x-node
You can use the ip addr show dev n2x0 command on server01 to display the IP address assigned to this node:

Note
Make a note of this IP address, as it will be used as a target in the Grafana Alloy configuration.
Step 8: Configure Grafana Alloy to scrape metrics from targets
Now that we have node exporters running and exposing metrics on server01 and on the worker node of the public Kubernetes cluster, we need to configure Grafana Alloy to scrape these metrics.
Setting your context to the Kubernetes private cluster:
kubectl config use-context k8s-private
To configure Grafana Alloy to scrape metrics from your new targets, we'll need to modify the values.yaml file with the following information:
alloy:
configMap:
content: |
discovery.kubernetes "node_exporters" {
role = "pod"
namespaces {
names = ["monitoring"]
}
selectors {
role = "pod"
label = "app.kubernetes.io/name=prometheus-node-exporter"
}
}
// Scrape K8S private
prometheus.scrape "k8s_private_node_exporter" {
targets = discovery.kubernetes.node_exporters.targets
scrape_interval = "15s"
forward_to = [prometheus.relabel.k8s_private_node_exporter.receiver]
}
prometheus.relabel "k8s_private_node_exporter" {
rule {
target_label = "cluster"
replacement = "k8s-private"
}
rule {
target_label = "job"
replacement = "node_exporter"
}
forward_to = [prometheus.remote_write.default.receiver]
}
// Scrape K8S public
prometheus.scrape "k8s_public_node_exporter" {
targets = [
{
__address__ = "<WORKER_IP_ADDRESS>:9100",
},
]
forward_to = [prometheus.relabel.k8s_public_node_exporter.receiver]
}
prometheus.relabel "k8s_public_node_exporter" {
rule {
target_label = "cluster"
replacement = "k8s-public"
}
rule {
target_label = "job"
replacement = "node_exporter"
}
forward_to = [prometheus.remote_write.default.receiver]
}
// Scrape node_exporter server01
prometheus.scrape "server01_node_exporter" {
targets = [
{
__address__ = "<SERVER01_IP_ADDRESS>:9100",
},
]
forward_to = [prometheus.relabel.server01_node_exporter.receiver]
}
prometheus.relabel "server01_node_exporter" {
rule {
target_label = "job"
replacement = "node_exporter"
}
forward_to = [prometheus.remote_write.default.receiver]
}
prometheus.remote_write "default" {
endpoint {
url = "http://mimir-nginx.mimir.svc:80/api/v1/push"
}
}
serviceAccount:
create: true
name: alloy-sa
rbac:
create: true
name: alloy-discovery
rules:
- apiGroups: [""]
resources:
- pods
- namespaces
- endpoints
verbs: ["get", "list", "watch"]
Replace the placeholders with the appropriate IP addresses:

- Replace <SERVER01_IP_ADDRESS> with the IP address obtained earlier from the n2x0 interface of server01. In this example, the IP is 10.250.1.48.
- Replace <WORKER_IP_ADDRESS> with the IP address assigned to the node-exporter workload, obtained earlier from the n2x.io WebUI. In this example, the IP is 10.250.1.216.
After that, we need to upgrade the Grafana Alloy configuration using the modified values.yaml:
helm upgrade alloy grafana/alloy -n alloy -f values.yaml
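To confirm the new configuration rolled out and the scrape components started cleanly, you can watch the workload and tail its logs; this assumes the chart's default DaemonSet named after the alloy release from Step 2:

# wait for the rollout to finish, then inspect recent log lines
kubectl -n alloy rollout status daemonset/alloy
kubectl -n alloy logs ds/alloy | tail -n 20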
Step 9: Install Grafana in the Kubernetes private cluster
Once our metrics are stored in Grafana Mimir, we'll need a way to analyze them. To achieve this, we'll deploy Grafana, a powerful visualization tool, on our private Kubernetes cluster using Helm:
Setting your context to the Kubernetes private cluster:
kubectl config use-context k8s-private
We are going to deploy Grafana on the k8s-private cluster using the official Helm chart:
- First, let's add the following Helm repo:

helm repo add grafana https://grafana.github.io/helm-charts

- Update all the repositories to ensure helm is aware of the latest versions:

helm repo update

- Create the configuration file values.yaml with this information:

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: mimir
        type: prometheus
        access: proxy
        url: http://mimir-nginx.mimir.svc:80/prometheus
        isDefault: true
        editable: true

- Install Grafana chart version 9.2.0 in the grafana namespace using this values.yaml:

helm install grafana grafana/grafana -n grafana --create-namespace --version 9.2.0 -f values.yaml

- The Grafana pod should be up and running:

kubectl -n grafana get pod

NAME                       READY   STATUS    RESTARTS   AGE
grafana-7f5786dbfc-9srz9   1/1     Running   0          2m46s

- We can get the Grafana admin user password by running:

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
All the deployments are now complete. It is time to set up our Grafana. We can port-forward the Grafana service and access the Grafana dashboard directly at http://localhost:8080/:
kubectl -n grafana port-forward svc/grafana 8080:80
Info
You can log in with the admin user and the password obtained earlier.
After this, we can check that the data is successfully stored in Grafana Mimir. From the Grafana dashboard, click Explore, enter the node_exporter_build_info query, and select Run query:

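Because the prometheus.relabel components in Step 8 stamp a cluster label on each series, you can also isolate a single site in Explore. Each of the following queries should return build info from exactly one environment; server01's series carries no cluster label, so an empty matcher selects it:

node_exporter_build_info{cluster="k8s-private"}
node_exporter_build_info{cluster="k8s-public"}
node_exporter_build_info{cluster=""}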
Conclusion
In this guide, we demonstrated how n2x.io, integrated with Grafana Mimir and Grafana Alloy, provides a scalable, centralized architecture for collecting and analyzing metrics. By bridging physical servers and Kubernetes clusters across both private and public environments, this unified telemetry and networking setup empowers operations teams with a consistent and reliable observability layer.
This approach reduces the overhead of managing fragmented systems, simplifies troubleshooting, and ensures cohesive visibility across distributed infrastructure. For teams operating in complex, multi-site environments, it offers a solid foundation for building efficient, resilient monitoring pipelines.
To dive deeper into the challenges of fragmented tooling and the benefits of a unified observability strategy, we recommend reading Observability Tools as Data Silos of Troubleshooting.