# Setting up Prometheus and Grafana to monitor Longhorn
## Overview

Longhorn natively exposes metrics in Prometheus text format on a REST endpoint `http://LONGHORN_MANAGER_IP:PORT/metrics`. See Longhorn's metrics for a description of all available metrics. You can use any collecting tool, such as Prometheus, Graphite, or Telegraf, to scrape these metrics, then visualize the collected data with a tool such as Grafana.
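If you want to scrape this endpoint without the Prometheus Operator setup described below, a plain Prometheus scrape configuration could look like the following sketch. The job name is arbitrary, and `LONGHORN_MANAGER_IP:PORT` is a placeholder you must replace with a real manager address:

```yaml
# Sketch only: replace LONGHORN_MANAGER_IP:PORT with the address of a
# Longhorn manager pod; the job name is an arbitrary choice.
scrape_configs:
- job_name: longhorn
  metrics_path: /metrics
  static_configs:
  - targets:
    - LONGHORN_MANAGER_IP:PORT
```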
This document presents an example setup for monitoring Longhorn. The monitoring system uses Prometheus for collecting data and alerting, and Grafana for visualizing/dashboarding the collected data. At a high level, the monitoring system contains:
* A Prometheus server which scrapes and stores time series data from the Longhorn metrics endpoint. Prometheus is also responsible for generating alerts based on configured rules and the collected data, and then sends the alerts to Alertmanager.
* An Alertmanager which manages those alerts, including silencing, inhibition, aggregation, and sending out notifications via methods such as email, on-call notification systems, and chat platforms.
* Grafana, which queries the Prometheus server for data and draws dashboards for visualization.

The diagram below describes the detailed architecture of the monitoring system.
There are 2 components in the above diagram that have not been mentioned yet:
* The Longhorn backend service is the service pointing to the set of Longhorn manager pods. Longhorn's metrics are exposed in the Longhorn manager pods at the endpoint `http://LONGHORN_MANAGER_IP:PORT/metrics`.
* The Prometheus Operator makes running Prometheus on top of Kubernetes very easy. The operator watches 3 custom resources: ServiceMonitor, Prometheus, and AlertManager. When users create those custom resources, the Prometheus Operator deploys and manages the Prometheus server and AlertManager with the user-specified configuration.

## Installation
Follow these instructions to install all components into the `monitoring` namespace. To install them into a different namespace, change the field `namespace: OTHER_NAMESPACE`.
### Create the `monitoring` namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
```
### Install the Prometheus Operator
Deploy the Prometheus Operator together with its required ClusterRole, ClusterRoleBinding, and Service Account.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
    app.kubernetes.io/version: v0.38.3
  name: prometheus-operator
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
    app.kubernetes.io/version: v0.38.3
  name: prometheus-operator
  namespace: monitoring
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - create
- apiGroups:
  - apiextensions.k8s.io
  resourceNames:
  - alertmanagers.monitoring.coreos.com
  - podmonitors.monitoring.coreos.com
  - prometheuses.monitoring.coreos.com
  - prometheusrules.monitoring.coreos.com
  - servicemonitors.monitoring.coreos.com
  - thanosrulers.monitoring.coreos.com
  resources:
  - customresourcedefinitions
  verbs:
  - get
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - alertmanagers/finalizers
  - prometheuses
  - prometheuses/finalizers
  - thanosrulers
  - thanosrulers/finalizers
  - servicemonitors
  - podmonitors
  - prometheusrules
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - delete
- apiGroups:
  - ""
  resources:
  - services
  - services/finalizers
  - endpoints
  verbs:
  - get
  - create
  - update
  - delete
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
    app.kubernetes.io/version: v0.38.3
  name: prometheus-operator
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: prometheus-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: prometheus-operator
        app.kubernetes.io/version: v0.38.3
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --logtostderr=true
        - --config-reloader-image=jimmidyson/configmap-reload:v0.3.0
        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.38.3
        image: quay.io/prometheus-operator/prometheus-operator:v0.38.3
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: prometheus-operator
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
    app.kubernetes.io/version: v0.38.3
  name: prometheus-operator
  namespace: monitoring
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
    app.kubernetes.io/version: v0.38.3
  name: prometheus-operator
  namespace: monitoring
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8080
    targetPort: http
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: prometheus-operator
```
### Install the Longhorn ServiceMonitor
The Longhorn ServiceMonitor has a label selector `app: longhorn-manager` to select the Longhorn backend service. Later on, the Prometheus CRD can include the Longhorn ServiceMonitor so that the Prometheus server can discover all Longhorn manager pods and their endpoints.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: monitoring
  labels:
    name: longhorn-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  namespaceSelector:
    matchNames:
    - longhorn-system
  endpoints:
  - port: manager
```
### Install and configure Prometheus AlertManager
Create a highly available Alertmanager deployment with 3 instances:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: longhorn
  namespace: monitoring
spec:
  replicas: 3
```
The Alertmanager instances will not start unless a valid configuration is given. See here for more explanation about Alertmanager configuration. The code below gives an example configuration:
```yaml
global:
  resolve_timeout: 5m
route:
  group_by: [alertname]
  receiver: email_and_slack
receivers:
- name: email_and_slack
  email_configs:
  - to: <the email address to send notifications to>
    from: <the sender address>
    smarthost: <the SMTP host through which emails are sent>
    # SMTP authentication information.
    auth_username: <the username>
    auth_identity: <the identity>
    auth_password: <the password>
    headers:
      subject: 'Longhorn-Alert'
    text: |-
      {{ range .Alerts }}
      *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
      *Description:* {{ .Annotations.description }}
      *Details:*
      {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
      {{ end }}
      {{ end }}
  slack_configs:
  - api_url: <the Slack webhook URL>
    channel: <the channel or user to send notifications to>
    text: |-
      {{ range .Alerts }}
      *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
      *Description:* {{ .Annotations.description }}
      *Details:*
      {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
      {{ end }}
      {{ end }}
```
Save the above Alertmanager configuration in a file called `alertmanager.yaml` and create a secret from it using kubectl.
Alertmanager instances require the secret resource naming to follow the format `alertmanager-{ALERTMANAGER_NAME}`. In the previous step, the name of the Alertmanager is `longhorn`, so the secret name must be `alertmanager-longhorn`:
```
$ kubectl create secret generic alertmanager-longhorn --from-file=alertmanager.yaml -n monitoring
```
To be able to view the web UI of the Alertmanager, expose it through a Service. A simple way to do this is to use a Service of type `NodePort`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-longhorn
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30903
    port: 9093
    protocol: TCP
    targetPort: web
  selector:
    alertmanager: longhorn
```
After creating the above service, you can access the web UI of Alertmanager via a node's IP and the port 30903.
Use the above NodePort service for quick verification only, because it doesn't communicate over a TLS connection. You may want to change the service type to `ClusterIP` and set up an Ingress controller to expose the web UI of Alertmanager over a TLS connection.
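As one possible sketch of the TLS-fronted alternative, the Ingress below routes to the `alertmanager-longhorn` service after its type is changed to `ClusterIP`. The hostname, ingress class, and TLS secret name are placeholders, not part of the official setup, and assume an NGINX ingress controller is installed:

```yaml
# Sketch only: hostname, ingress class, and TLS secret name are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alertmanager-longhorn
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes an NGINX ingress controller
spec:
  tls:
  - hosts:
    - alertmanager.example.com           # placeholder hostname
    secretName: alertmanager-tls         # placeholder TLS secret
  rules:
  - host: alertmanager.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-longhorn
            port:
              name: web
```

The same pattern applies to the Prometheus and Grafana services described later in this document.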
### Install and configure the Prometheus server
Create a PrometheusRule custom resource which defines the alert conditions.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: longhorn
    role: alert-rules
  name: prometheus-longhorn-rules
  namespace: monitoring
spec:
  groups:
  - name: longhorn.rules
    rules:
    - alert: LonghornVolumeUsageCritical
      annotations:
        description: Longhorn volume {{$labels.volume}} on {{$labels.node}} is at {{$value}}% used for more than 5 minutes.
        summary: Longhorn volume capacity is over 90% used.
      expr: 100 * (longhorn_volume_usage_bytes / longhorn_volume_capacity_bytes) > 90
      for: 5m
      labels:
        issue: Longhorn volume {{$labels.volume}} usage on {{$labels.node}} is critical.
        severity: critical
```
See https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#alerting-rules for more information about how to define alert rules.
If RBAC authorization is activated, create a ClusterRole and ClusterRoleBinding for the Prometheus pods:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  namespace: monitoring
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
```
Create a Prometheus custom resource. Notice that we select the Longhorn service monitor and Longhorn rules in the spec.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus
  alerting:
    alertmanagers:
    - namespace: monitoring
      name: alertmanager-longhorn
      port: web
  serviceMonitorSelector:
    matchLabels:
      name: longhorn-prometheus-servicemonitor
  ruleSelector:
    matchLabels:
      prometheus: longhorn
      role: alert-rules
```
To be able to view the web UI of the Prometheus server, expose it through a Service. A simple way to do this is to use a Service of type `NodePort`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30904
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: prometheus
```
After creating the above service, you can access the web UI of the Prometheus server via a node's IP and the port 30904.
At this point, you should be able to see all Longhorn manager targets as well as Longhorn rules in the targets and rules sections of the Prometheus server UI.
Use the above NodePort service for quick verification only, because it doesn't communicate over a TLS connection. You may want to change the service type to `ClusterIP` and set up an Ingress controller to expose the web UI of the Prometheus server over a TLS connection.
### Install Grafana
Create the Grafana datasource config:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |-
    {
        "apiVersion": 1,
        "datasources": [
            {
               "access": "proxy",
               "editable": true,
               "name": "prometheus",
               "orgId": 1,
               "type": "prometheus",
               "url": "http://prometheus:9090",
               "version": 1
            }
        ]
    }
```
Create the Grafana deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:7.1.5
        ports:
        - name: grafana
          containerPort: 3000
        resources:
          limits:
            memory: "500Mi"
            cpu: "300m"
          requests:
            memory: "500Mi"
            cpu: "200m"
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
      volumes:
      - name: grafana-storage
        emptyDir: {}
      - name: grafana-datasources
        configMap:
          defaultMode: 420
          name: grafana-datasources
```
Expose Grafana on NodePort 32000:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 32000
```
Use the above NodePort service for quick verification only, because it doesn't communicate over a TLS connection. You may want to change the service type to `ClusterIP` and set up an Ingress controller to expose Grafana over a TLS connection.
Access the Grafana dashboard using any node IP on port 32000. The default credentials are:
```
User: admin
Pass: admin
```
### Install the Longhorn dashboard
Once inside Grafana, import the prebuilt Longhorn dashboard: https://grafana.com/grafana/dashboards/13032
See https://grafana.com/docs/grafana/latest/reference/export_import/ for instructions on how to import a Grafana dashboard.
Once done, you should see the following dashboard:
## Integrating Longhorn metrics into the Rancher monitoring system
### About the Rancher monitoring system
Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution.
See https://rancher.com/docs/rancher/v2.x/en/monitoring-alerting/ for how to deploy and enable the Rancher monitoring system.
### Adding Longhorn metrics to the Rancher monitoring system
If you are using Rancher to manage your Kubernetes cluster and have already enabled Rancher monitoring, you can add Longhorn metrics to Rancher monitoring by simply deploying the following ServiceMonitor:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: longhorn-system
  labels:
    name: longhorn-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  namespaceSelector:
    matchNames:
    - longhorn-system
  endpoints:
  - port: manager
```
Once the ServiceMonitor is created, Rancher will automatically discover all Longhorn metrics.
You can then set up a Grafana dashboard for visualization.
## Longhorn monitoring metrics
### Volume
| Metric name | Description | Example |
| --- | --- | --- |
| `longhorn_volume_actual_size_bytes` | The actual space used by each replica of the volume on the corresponding node | `longhorn_volume_actual_size_bytes{node="worker-2",volume="testvol"} 1.1917312e+08` |
| `longhorn_volume_capacity_bytes` | The configured size of this volume in bytes | `longhorn_volume_capacity_bytes{node="worker-2",volume="testvol"} 6.442450944e+09` |
| `longhorn_volume_state` | The state of this volume: 1=creating, 2=attached, 3=detached, 4=attaching, 5=detaching, 6=deleting | `longhorn_volume_state{node="worker-2",volume="testvol"} 2` |
| `longhorn_volume_robustness` | The robustness of this volume: 0=unknown, 1=healthy, 2=degraded, 3=faulted | `longhorn_volume_robustness{node="worker-2",volume="testvol"} 1` |

### Node
| Metric name | Description | Example |
| --- | --- | --- |
| `longhorn_node_status` | The status of this node: 1=true, 0=false | `longhorn_node_status{condition="ready",condition_reason="",node="worker-2"} 1` |
| `longhorn_node_count_total` | Total number of nodes in the Longhorn system | `longhorn_node_count_total 4` |
| `longhorn_node_cpu_capacity_millicpu` | The maximum allocatable CPU on this node | `longhorn_node_cpu_capacity_millicpu{node="worker-2"} 2000` |
| `longhorn_node_cpu_usage_millicpu` | The CPU usage on this node | `longhorn_node_cpu_usage_millicpu{node="pworker-2"} 186` |
| `longhorn_node_memory_capacity_bytes` | The maximum allocatable memory on this node | `longhorn_node_memory_capacity_bytes{node="worker-2"} 4.031229952e+09` |
| `longhorn_node_memory_usage_bytes` | The memory usage on this node | `longhorn_node_memory_usage_bytes{node="worker-2"} 1.833582592e+09` |
| `longhorn_node_storage_capacity_bytes` | The storage capacity of this node | `longhorn_node_storage_capacity_bytes{node="worker-3"} 8.3987283968e+10` |
| `longhorn_node_storage_usage_bytes` | The used storage of this node | `longhorn_node_storage_usage_bytes{node="worker-3"} 9.060941824e+09` |
| `longhorn_node_storage_reservation_bytes` | The storage reserved for other applications and the system on this node | `longhorn_node_storage_reservation_bytes{node="worker-3"} 2.519618519e+10` |

### Disk
| Metric name | Description | Example |
| --- | --- | --- |
| `longhorn_disk_capacity_bytes` | The storage capacity of this disk | `longhorn_disk_capacity_bytes{disk="default-disk-8b28ee3134628183",node="worker-3"} 8.3987283968e+10` |
| `longhorn_disk_usage_bytes` | The used storage of this disk | `longhorn_disk_usage_bytes{disk="default-disk-8b28ee3134628183",node="worker-3"} 9.060941824e+09` |
| `longhorn_disk_reservation_bytes` | The storage reserved for other applications and the system on this disk | `longhorn_disk_reservation_bytes{disk="default-disk-8b28ee3134628183",node="worker-3"} 2.519618519e+10` |

### Instance Manager
| Metric name | Description | Example |
| --- | --- | --- |
| `longhorn_instance_manager_cpu_usage_millicpu` | The CPU usage of this Longhorn instance manager | `longhorn_instance_manager_cpu_usage_millicpu{instance_manager="instance-manager-e-2189ed13",instance_manager_type="engine",node="worker-2"} 80` |
| `longhorn_instance_manager_cpu_requests_millicpu` | The requested CPU resources in Kubernetes of this Longhorn instance manager | `longhorn_instance_manager_cpu_requests_millicpu{instance_manager="instance-manager-e-2189ed13",instance_manager_type="engine",node="worker-2"} 250` |
| `longhorn_instance_manager_memory_usage_bytes` | The memory usage of this Longhorn instance manager | `longhorn_instance_manager_memory_usage_bytes{instance_manager="instance-manager-e-2189ed13",instance_manager_type="engine",node="worker-2"} 2.4072192e+07` |
| `longhorn_instance_manager_memory_requests_bytes` | The requested memory in Kubernetes of this Longhorn instance manager | `longhorn_instance_manager_memory_requests_bytes{instance_manager="instance-manager-e-2189ed13",instance_manager_type="engine",node="worker-2"} 0` |

### Manager
| Metric name | Description | Example |
| --- | --- | --- |
| `longhorn_manager_cpu_usage_millicpu` | The CPU usage of this Longhorn manager | `longhorn_manager_cpu_usage_millicpu{manager="longhorn-manager-5rx2n",node="worker-2"} 27` |
| `longhorn_manager_memory_usage_bytes` | The memory usage of this Longhorn manager | `longhorn_manager_memory_usage_bytes{manager="longhorn-manager-5rx2n",node="worker-2"} 2.6144768e+07` |

## Support Kubelet Volume Metrics

### About Kubelet Volume Metrics
The kubelet exposes the following metrics:
- `kubelet_volume_stats_capacity_bytes`
- `kubelet_volume_stats_available_bytes`
- `kubelet_volume_stats_used_bytes`
- `kubelet_volume_stats_inodes`
- `kubelet_volume_stats_inodes_free`
- `kubelet_volume_stats_inodes_used`

Those metrics measure information related to the PVC's filesystem inside a Longhorn block device.
They are different from the `longhorn_volume_*` metrics, which measure information specific to the Longhorn block device.
You can set up a monitoring system that scrapes the kubelet metrics endpoint to obtain a PVC's status and set up alerts for abnormal events, such as a PVC that is about to run out of storage space.
A popular monitoring setup is prometheus-operator/kube-prometheus-stack, which scrapes `kubelet_volume_stats_*` metrics and provides dashboards and alert rules for them.
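As an illustrative sketch of such an alert, the PrometheusRule below fires when a PVC's free space drops under 10% of its capacity. The rule name, the 10% threshold, and the `for` duration are assumptions for illustration, not values from the official setup:

```yaml
# Sketch only: rule name, threshold, and duration are illustrative assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pvc-space-rules
  namespace: monitoring
spec:
  groups:
  - name: pvc.rules
    rules:
    - alert: PVCAlmostFull
      annotations:
        description: PVC {{$labels.persistentvolumeclaim}} in namespace {{$labels.namespace}} has less than 10% free space.
        summary: PVC is almost out of storage space.
      # Free space as a percentage of capacity, from the kubelet volume stats.
      expr: 100 * (kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes) < 10
      for: 5m
      labels:
        severity: warning
```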
### Longhorn CSI Plugin Support

Since v1.1.0, the Longhorn CSI plugin supports the NodeGetVolumeStats RPC according to the CSI spec.
This allows the kubelet to query the Longhorn CSI plugin for a PVC's status.
The kubelet then exposes that information in the `kubelet_volume_stats_*` metrics.
## Longhorn Alert Rule Examples

We provide a couple of example Longhorn alert rules below for your reference. See here for a list of all available Longhorn metrics, from which you can build your own alert rules.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: longhorn
    role: alert-rules
  name: prometheus-longhorn-rules
  namespace: monitoring
spec:
  groups:
  - name: longhorn.rules
    rules:
    - alert: LonghornVolumeActualSpaceUsedWarning
      annotations:
        description: The actual space used by Longhorn volume {{$labels.volume}} on {{$labels.node}} is at {{$value}}% capacity for more than 5 minutes.
        summary: The actual used space of Longhorn volume is over 90% of the capacity.
      expr: (longhorn_volume_actual_size_bytes / longhorn_volume_capacity_bytes) * 100 > 90
      for: 5m
      labels:
        issue: The actual used space of Longhorn volume {{$labels.volume}} on {{$labels.node}} is high.
        severity: warning
    - alert: LonghornVolumeStatusCritical
      annotations:
        description: Longhorn volume {{$labels.volume}} on {{$labels.node}} is Fault for more than 2 minutes.
        summary: Longhorn volume {{$labels.volume}} is Fault
      expr: longhorn_volume_robustness == 3
      for: 5m
      labels:
        issue: Longhorn volume {{$labels.volume}} is Fault.
        severity: critical
    - alert: LonghornVolumeStatusWarning
      annotations:
        description: Longhorn volume {{$labels.volume}} on {{$labels.node}} is Degraded for more than 5 minutes.
        summary: Longhorn volume {{$labels.volume}} is Degraded
      expr: longhorn_volume_robustness == 2
      for: 5m
      labels:
        issue: Longhorn volume {{$labels.volume}} is Degraded.
        severity: warning
    - alert: LonghornNodeStorageWarning
      annotations:
        description: The used storage of node {{$labels.node}} is at {{$value}}% capacity for more than 5 minutes.
        summary: The used storage of node is over 70% of the capacity.
      expr: (longhorn_node_storage_usage_bytes / longhorn_node_storage_capacity_bytes) * 100 > 70
      for: 5m
      labels:
        issue: The used storage of node {{$labels.node}} is high.
        severity: warning
    - alert: LonghornDiskStorageWarning
      annotations:
        description: The used storage of disk {{$labels.disk}} on node {{$labels.node}} is at {{$value}}% capacity for more than 5 minutes.
        summary: The used storage of disk is over 70% of the capacity.
      expr: (longhorn_disk_usage_bytes / longhorn_disk_capacity_bytes) * 100 > 70
      for: 5m
      labels:
        issue: The used storage of disk {{$labels.disk}} on node {{$labels.node}} is high.
        severity: warning
    - alert: LonghornNodeDown
      annotations:
        description: There are {{$value}} Longhorn nodes which have been offline for more than 5 minutes.
        summary: Longhorn nodes is offline
      expr: longhorn_node_total - (count(longhorn_node_status{condition="ready"}==1) OR on() vector(0))
      for: 5m
      labels:
        issue: There are {{$value}} Longhorn nodes are offline
        severity: critical
    - alert: LonghornInstanceManagerCPUUsageWarning
      annotations:
        description: Longhorn instance manager {{$labels.instance_manager}} on {{$labels.node}} has CPU Usage / CPU request is {{$value}}% for more than 5 minutes.
        summary: Longhorn instance manager {{$labels.instance_manager}} on {{$labels.node}} has CPU Usage / CPU request is over 300%.
      expr: (longhorn_instance_manager_cpu_usage_millicpu/longhorn_instance_manager_cpu_requests_millicpu) * 100 > 300
      for: 5m
      labels:
        issue: Longhorn instance manager {{$labels.instance_manager}} on {{$labels.node}} consumes 3 times the CPU request.
        severity: warning
    - alert: LonghornNodeCPUUsageWarning
      annotations:
        description: Longhorn node {{$labels.node}} has CPU Usage / CPU capacity is {{$value}}% for more than 5 minutes.
        summary: Longhorn node {{$labels.node}} experiences high CPU pressure for more than 5m.
      expr: (longhorn_node_cpu_usage_millicpu / longhorn_node_cpu_capacity_millicpu) * 100 > 90
      for: 5m
      labels:
        issue: Longhorn node {{$labels.node}} experiences high CPU pressure.
        severity: warning
```
See https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#alerting-rules for more information about how to define alert rules.