
Commit 4e6c691

Add prerequisites for some ootb integrations
1 parent 6f550cf commit 4e6c691

9 files changed: +225, -4 lines changed

resources/ceph/INSTALL.md

Lines changed: 10 additions & 1 deletion
# Prerequisites

### Enable Prometheus Module

Ceph exposes Prometheus metrics and annotates the manager pod with Prometheus annotations.

Make sure that the Prometheus module is enabled in the Ceph cluster by running the following command:

```
ceph mgr module enable prometheus
```
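
If you want to confirm the module is active, a quick check with the ceph CLI (assuming it is available on a monitor or manager node) is:

```
ceph mgr module ls | grep prometheus   # the module should appear in the enabled list
ceph mgr services                      # prints the URL of the exposed metrics endpoint
```
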
# Installation

The application is ready to be scraped.

resources/consul/INSTALL.md

Lines changed: 14 additions & 1 deletion
# Prerequisites

### Enable Prometheus Metrics and Disable Hostname in Metrics

As seen in the Consul documentation pages [Helm Global Metrics](https://www.consul.io/docs/k8s/helm#v-global-metrics) and [Prometheus Retention Time](https://www.consul.io/docs/agent/options#telemetry-prometheus_retention_time), to make Consul expose an endpoint for scraping metrics you need to enable a few global.metrics settings.

You also need to set telemetry.disable_hostname through the "extra configurations" of both the Consul server and client, so that the metrics don't contain the instance names.

If you install Consul with Helm, you need to use the following flags:

```
--set 'global.metrics.enabled=true'
--set 'global.metrics.enableAgentMetrics=true'
--set 'server.extraConfig="{"telemetry": {"disable_hostname": true}}"'
--set 'client.extraConfig="{"telemetry": {"disable_hostname": true}}"'
```
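
Once the server pods are running, one way to confirm that metrics are exposed is to query the agent metrics API in Prometheus format. This is only a sketch; the namespace and pod name below are illustrative and depend on your release name:

```
kubectl -n consul port-forward consul-server-0 8500:8500 &
curl -s 'http://localhost:8500/v1/agent/metrics?format=prometheus' | head
```
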
# Installation

The application is ready to be scraped.

resources/fluentd/INSTALL.md

Lines changed: 36 additions & 0 deletions
# Prerequisites

### OpenShift

If you have installed Fluentd using the OpenShift Logging Operator, no further action is required to enable monitoring.

### Kubernetes

#### Enable Prometheus Metrics

For Fluentd to expose Prometheus metrics, enable the following plugins:
- 'prometheus' input plugin
- 'prometheus_monitor' input plugin
- 'prometheus_output_monitor' input plugin

As seen in the [official plugin documentation](https://github.com/fluent/fluent-plugin-prometheus/blob/master/README.md), you can enable them with the following configuration:

```
<source>
  @type prometheus
  @id in_prometheus
  bind "0.0.0.0"
  port 24231
  metrics_path "/metrics"
</source>

<source>
  @type prometheus_monitor
  @id in_prometheus_monitor
</source>

<source>
  @type prometheus_output_monitor
  @id in_prometheus_output_monitor
</source>
```

If you are deploying Fluentd using the [official Helm chart](https://github.com/fluent/helm-charts/tree/main/charts/fluentd), it already has these plugins enabled by default in its configuration, so no additional actions are needed.
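
As a quick sanity check after deployment, you can port-forward the metrics port defined above and query it. The pod name and namespace below are illustrative; adjust them to your installation:

```
kubectl -n logging port-forward fluentd-0 24231:24231 &
curl -s http://localhost:24231/metrics | head
```
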
# Installation

The application is ready to be scraped.
Lines changed: 29 additions & 1 deletion
# Prerequisites

### Enable Prometheus Metrics

For HAProxy to expose Prometheus metrics, the following options must be enabled:
- controller.metrics.enabled=true
- controller.stats.enabled=true

You can check all the properties in the [official web page](https://github.com/haproxy-ingress/charts/blob/release-0.13/haproxy-ingress/README.md#configuration).

If you are deploying HAProxy using the [official Helm chart](https://github.com/haproxytech/helm-charts/tree/main/kubernetes-ingress), they can be enabled with the following flags:

```
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
  --set-string "controller.stats.enabled=true" \
  --set-string "controller.metrics.enabled=true"
```

This configuration creates the following section in the haproxy.cfg file:

```
frontend prometheus
    mode http
    bind :9101
    http-request use-service prometheus-exporter if { path /metrics }
    http-request use-service lua.send-prometheus-root if { path / }
    http-request use-service lua.send-404
    no log
```
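
After the controller restarts with these options, you can verify that the Prometheus frontend is serving metrics on port 9101. The deployment name and namespace below are illustrative; adjust them to your installation:

```
kubectl -n ingress-controller port-forward deploy/haproxy-ingress 9101:9101 &
curl -s http://localhost:9101/metrics | head
```
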
# Installation

The application is ready to be scraped.

resources/harbor/INSTALL.md

Lines changed: 10 additions & 1 deletion
# Prerequisites

### Enable Prometheus Metrics

As seen in the Harbor documentation page [Configure the Harbor YML File](https://goharbor.io/docs/main/install-config/configure-yml-file/), to make Harbor expose an endpoint for scraping metrics, you need to set the 'metric.enabled' configuration to 'true'.

If you install Harbor with Helm, you need to use the following flag:

```
--set 'metrics.enabled=true'
```
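
For example, when installing with the official Harbor chart, the flag could be passed like this (release and namespace names are illustrative):

```
helm upgrade --install harbor harbor/harbor \
  --namespace harbor \
  --set 'metrics.enabled=true'
```
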
# Installation

The application is ready to be scraped.

resources/keda/INSTALL.md

Lines changed: 9 additions & 0 deletions
# Prerequisites

### Enable Prometheus Metrics

KEDA exposes Prometheus metrics and annotates the metrics API server pod with Prometheus annotations.

Make sure that Prometheus metrics are enabled. If you install KEDA with Helm, you need to use the following flag:

```
--set prometheus.metricServer.enabled=true
```
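
For example, a Helm install with the flag applied could look like this (the repository alias, release, and namespace names are illustrative):

```
helm upgrade --install keda kedacore/keda \
  --namespace keda \
  --set prometheus.metricServer.enabled=true
```
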
# Installation

The application is ready to be scraped.

resources/nginx-ingress/INSTALL.md

Lines changed: 22 additions & 0 deletions
# Prerequisites

### Enable NGINX Ingress metrics

To enable metrics scraping, add the following setting to the NGINX Ingress configuration:

```
controller.metrics.enabled=true
```

This parameter should be added to the NGINX Ingress section of the values.yaml file if you're using Helm to deploy the NGINX Ingress, or to the nginx-ingress-controller configuration file if you're using a native Kubernetes installation.

Once you've enabled metrics scraping with this parameter, the NGINX Ingress will automatically begin exposing its metrics on port 10254.

Another option is to add the following line to the NGINX Ingress configuration:

```
controller.metrics.podAnnotations.prometheus.io/scrape=true
```
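
A quick way to confirm the controller is serving metrics on port 10254 (the deployment name and namespace below are illustrative; adjust them to your installation):

```
kubectl -n ingress-nginx port-forward deploy/ingress-nginx-controller 10254:10254 &
curl -s http://localhost:10254/metrics | head
```
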
# Installation

The application is ready to be scraped.
Lines changed: 85 additions & 0 deletions
# Prerequisites

### OpenShift 3.11

Once the Sysdig agent is deployed, check whether it is running on all nodes (compute, master, and infra):

```
oc get nodes
oc get pods -n sysdig-agent -o wide
```

If the agent is not running on the infra/master nodes, apply this patch:

```
oc patch namespace sysdig-agent --patch-file='sysdig-agent-namespace-patch.yaml'
```

sysdig-agent-namespace-patch.yaml file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
```
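
If you prefer not to keep a patch file around, annotating the namespace directly has the same effect. This is a possible alternative, not part of the original instructions:

```
oc annotate namespace sysdig-agent openshift.io/node-selector="" --overwrite
```
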
OpenShift enforces security by default. Therefore, if you want the Sysdig agent to scrape HAProxy router metrics, you need to grant it the necessary permissions:

```
oc apply -f router-clusterrolebinding-sysdig-agent-oc3.yaml
```

router-clusterrolebinding-sysdig-agent-oc3.yaml file:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: haproxy-route-monitoring
rules:
- apiGroups:
  - route.openshift.io
  resources:
  - routers/metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: sysdig-agent
  name: sysdig-router-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: haproxy-route-monitoring
subjects:
- kind: ServiceAccount
  name: sysdig-agent
  namespace: sysdig-agent # Remember to change to the namespace where you have the Sysdig agents deployed
```

### OpenShift 4.x

OpenShift enforces security by default. Therefore, if you want the Sysdig agent to scrape HAProxy router metrics, you need to grant it the necessary permissions:

```
oc apply -f router-clusterrolebinding-sysdig-agent-oc4.yaml
```

router-clusterrolebinding-sysdig-agent-oc4.yaml file:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: router-monitoring-sysdig-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: router-monitoring
subjects:
- kind: ServiceAccount
  name: sysdig-agent
  namespace: sysdig-agent # Remember to change to the namespace where you have the Sysdig agents deployed
```
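
To confirm the binding is in place, one possible check is to list it and test the permission as the agent's service account. The binding name and namespace below match the OpenShift 4.x example above; adjust them for your deployment:

```
oc get clusterrolebinding router-monitoring-sysdig-agent
oc auth can-i get routers/metrics --as=system:serviceaccount:sysdig-agent:sysdig-agent
```
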
# Installation

The application is ready to be scraped.

resources/rabbitmq/INSTALL.md

Lines changed: 10 additions & 0 deletions
# Prerequisites

### Enable Prometheus Metrics

RabbitMQ exposes Prometheus metrics, and its pods are annotated with Prometheus annotations.

Make sure that Prometheus metrics are enabled. If they are not, enable the plugin by running this command inside the RabbitMQ container:

```sh
rabbitmq-plugins enable rabbitmq_prometheus
```
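
To confirm the plugin is active and that metrics are served on the default port 15692, a quick check could be the following (pod name and namespace are illustrative):

```sh
kubectl -n rabbitmq exec rabbitmq-0 -- rabbitmq-plugins list -e rabbitmq_prometheus
kubectl -n rabbitmq port-forward rabbitmq-0 15692:15692 &
curl -s http://localhost:15692/metrics | head
```
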
# Installation

The application is ready to be scraped.
