Lightweight Logging System: Grafana Loki vs ELK



Grafana Loki is a log aggregation tool that serves as the core of a fully functional logging stack.


Let’s take a look at what makes it so lightweight:

Loki is a data store optimized for efficiently holding log data. What sets Loki apart from other logging systems is how it indexes that data: the index is built from labels, and the raw log messages themselves are not indexed.


A client (also known as an agent) collects logs, converts them into streams, and then pushes the streams to Loki via an HTTP API. The Promtail agent is designed specifically for Loki, but many other agents also integrate seamlessly with it.
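
For illustration, a single log line can be pushed to Loki’s HTTP push endpoint with curl (a minimal sketch, assuming a Loki instance is reachable at http://loki:3100; the labels and the message are placeholders, and the timestamp is the current time in nanoseconds):

curl -s -X POST http://loki:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  --data-raw '{
    "streams": [
      {
        "stream": { "job": "curl-test", "host": "demo" },
        "values": [ [ "'$(date +%s%N)'", "hello from curl" ] ]
      }
    ]
  }'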


Loki features are as follows:

Efficient memory usage for indexing logs: By indexing only on a set of labels, the index can be significantly smaller than in other log aggregation products. Less memory means lower operational costs.

Multi-tenancy: Loki allows multiple tenants to use a single Loki instance. Data from different tenants is completely isolated from each other. Multi-tenancy is configured by assigning a tenant ID in the agent.

LogQL, Loki’s query language: Users familiar with Prometheus’s query language, PromQL, will find LogQL familiar and flexible for generating queries against logs. The language can also generate metrics from log data, a powerful feature that goes well beyond log aggregation (example queries are sketched after this list).

Scalability: Loki performs well on a small scale. In single-process mode, all required microservices run in one process.

Single-process mode is ideal for testing Loki, running locally, or small-scale deployments. Loki is also designed to scale horizontally for large installations.

Each microservice component of Loki can be broken down into individual processes, and the configuration allows for independent scaling of components.

Flexibility: Many agents (clients) have plugin support. This allows an existing observability setup to add Loki as its log aggregation tool without having to replace other parts of the stack.

Grafana integration: Loki integrates seamlessly with Grafana, providing a complete observability stack.
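
As an example of LogQL (a minimal sketch, using the job and host labels that the Promtail configuration later in this article assigns), the first two queries select and filter log lines, and the third derives a metric from them:

# All log lines from the stream with these labels
{job="varlog", host="busybox"}

# Only the lines that contain the given text
{job="varlog"} |= "promtail log test"

# Per-second rate of matching lines over the last 5 minutes (a metric query)
rate({job="varlog"} |= "promtail log test" [5m])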

Loki/Promtail/Grafana vs EFK

The EFK (Elasticsearch, Fluentd, Kibana) stack is used to ingest, visualize, and query logs from various sources.

Data in Elasticsearch is stored on disk as unstructured JSON objects. The keys of each object and the content of each key are indexed.

Queries can then be written as JSON objects (Elasticsearch’s query DSL) or expressed in the Lucene query syntax.
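
For illustration (a sketch; the index pattern app-logs-* and the message field are placeholders), the same search could be expressed either way:

# Query DSL, sent to the _search endpoint (shown in Kibana Dev Tools syntax)
GET /app-logs-*/_search
{
  "query": {
    "match": { "message": "error" }
  }
}

# The equivalent Lucene query, e.g. in the Kibana search bar
message:error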

In contrast, Loki can store data on local disk when running as a single binary, while in horizontally scalable mode the data is stored in systems such as S3, GCS, or Cassandra.

Logs are stored as plain text and tagged with a set of label names and values, where only the label pairs are indexed. This trade-off makes Loki cheaper to operate than full indexing and allows developers to log aggressively from their applications. Logs in Loki are queried with LogQL.

However, due to this design trade-off, LogQL queries that filter based on content (i.e., text within the log line) must load all chunks that match the labels defined in the query within the search window.

Fluentd is often used to collect logs and forward them to Elasticsearch. Fluentd is known as a data collector that can ingest logs from many sources, process them, and forward them to one or more destinations.

In contrast, Promtail’s use case is tailored specifically to Loki. Its primary mode of operation is to discover log files stored on disk and forward them to Loki together with a set of labels.

Promtail can discover Kubernetes pods running on the same node, act as a container sidecar or a Docker logging driver, read logs from specified folders, and tail the systemd journal.

Loki represents logs with a set of labels in a manner similar to how Prometheus represents metrics.

When deployed in an environment with Prometheus, logs from Promtail typically have the same labels as application metrics due to using the same service discovery mechanism.

Having logs and metrics with the same labels allows users to seamlessly switch context between metrics and logs, aiding in root cause analysis.

Kibana is used for visualizing and searching Elasticsearch data and is very powerful in analyzing this data.

Kibana provides many visualization tools for data analysis, such as geo maps, machine learning for anomaly detection, and graphs for discovering data relationships. Alerts can be configured to notify users when unexpected situations occur.

In contrast, Grafana is tailored specifically for time-series data from sources like Prometheus and Loki.

Dashboards can be set up to visualize metrics (with log support coming soon), and you can perform ad-hoc queries on your data using the explore view. Like Kibana, Grafana supports alerting based on your metrics.

Architecture Diagram:


Architecture Diagram for Collecting Logs:

A lightweight log collection solution:

  β€’ Promtail: log collection tool
  β€’ Loki: log aggregation system
  β€’ Grafana: visualization tool

Deploying Loki

Official Website:

https://github.com/grafana/loki

β‘  Loki

Create the Loki configuration file loki-config.yaml (wrapped in a ConfigMap); for reference:

https://grafana.com/docs/loki/latest/configuration/examples/
https://grafana.com/docs/loki/latest/installation/docker/
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-config
  labels:
    name: loki
data:
  loki-config.yaml: |-
    auth_enabled: false

    server:
      http_listen_port: 3100
      grpc_listen_port: 9096

    ingester:
      lifecycler:
        address: 127.0.0.1
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
        final_sleep: 0s
      chunk_idle_period: 5m
      chunk_retain_period: 30s

      chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1MB (1048576 bytes), flushing first if chunk_idle_period or max_chunk_age is reached
      max_transfer_retries: 0     # Chunk transfers disabled

    schema_config:
      configs:
      - from: 2021-08-18
        store: boltdb
        object_store: filesystem
        schema: v11
        index:
          prefix: index_
          period: 168h

    storage_config:
      boltdb:
        directory: /tmp/loki/index

      filesystem:
        directory: /tmp/loki/chunks

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h

      ingestion_rate_mb: 15

    chunk_store_config:
      max_look_back_period: 0s

    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s

Run the command to create:

kubectl apply -f loki-config.yaml

Create the Service and StatefulSet (loki.yaml):

---
apiVersion: v1
kind: Service
metadata:
  name: loki
  annotations:
    k8s.kuboard.cn/displayName: loki
    k8s.kuboard.cn/workload: loki
  labels:
    name: loki
spec:
  ports:
    - name: http
      port: 3100
      protocol: TCP
      targetPort: 3100
  selector:
    name: loki

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
spec:
  serviceName: loki
  selector:
    matchLabels:
      name: loki
  template:
    metadata:
      labels:
        name: loki
    spec:
      volumes:
      - name: loki-config
        configMap:
          #defaultMode: 0640
          name: loki-config
      containers:
      - name: loki
        #image: grafana/loki:2.3.0
        image: grafana/loki:master
        args:
        - -config.file=/etc/loki/loki-config.yaml
        ports:
        - containerPort: 3100
          name: loki
          protocol: TCP
        volumeMounts:
        - name: loki-config
          mountPath: /etc/loki/
          readOnly: true

Run the command to create:

kubectl apply -f loki.yaml
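
Optionally, check that Loki is up before moving on (a sketch based on the manifests above; Loki exposes a /ready endpoint on its HTTP port):

kubectl rollout status statefulset/loki
kubectl port-forward svc/loki 3100:3100 &
curl http://127.0.0.1:3100/ready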

β‘‘ Grafana

You can adjust the storage section to your own requirements; as written it uses emptyDir, so Grafana data is lost when the pod is recreated (a sample PersistentVolumeClaim is sketched after the Deployment below). The default username and password are admin/admin123; change them as needed.

apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    k8s-app: grafana
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    k8s-app: grafana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    k8s-app: grafana
spec:
  selector:
    matchLabels:
      k8s-app: grafana
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
     # initContainers:             ## Initialization container for modifying the ownership of the mounted storage folder
     # - name: init-file
     #   image: busybox:1.28
     #   imagePullPolicy: IfNotPresent
     #   securityContext:
     #     runAsUser: 0
     #   command: ['chown', '-R', "472:0", "/var/lib/grafana"]
     #   volumeMounts:
     #   - name: data
     #     mountPath: /var/lib/grafana
     #     subPath: grafana
      containers:                
      - name: grafana             ## Grafana container
        #image: grafana/grafana
        image: grafana/grafana:7.4.3
        #securityContext:          ## Container security policy, setting the group and user used to run the container
        #  fsGroup: 0
        #  runAsUser: 472
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        env:                      ## Configure environment variables, setting Grafana's default admin username/password
        - name: GF_SECURITY_ADMIN_USER
          value: "admin"
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin123"
        readinessProbe:           ## Readiness probe
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        livenessProbe:            ## Liveness probe
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:            ## Container mount configuration
        - name: data
          mountPath: /var/lib/grafana
          subPath: grafana
      volumes:                   ## Shared storage mount configuration
      - name: data
        emptyDir: {}
        #persistentVolumeClaim:
        #  claimName: grafana     ## Specify the PVC to use
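
Run the command to create (assuming the manifest above is saved as grafana.yaml):

kubectl apply -f grafana.yaml

If Grafana data should survive pod restarts, a PersistentVolumeClaim along these lines (a minimal sketch; the size and storage class are placeholders) can back the commented persistentVolumeClaim block above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana            # matches the claimName referenced in the Deployment
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # placeholder size
  #storageClassName: <your-storage-class>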

β‘’ Promtail

Promtail is paired with the application for log collection. Here we use the sidecar pattern: two containers run in one pod, a business container and a Promtail container, both mounting the same log directory so that Promtail can collect the logs.

Edit promtail-config.yaml; labels can be set per application. Reference:

https://grafana.com/docs/loki/latest/clients/promtail/installation/
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  labels:
    k8s-app: promtail
data:
  promtail.yaml: |-
    server:
      http_listen_port: 9080
      grpc_listen_port: 0

    positions:
      filename: ./positions.yaml # This location needs to be writable by Promtail.
      #filename: /tmp/positions.yaml # This location needs to be writable by Promtail.

    client:
      url: http://loki:3100/loki/api/v1/push

    scrape_configs:
    - job_name: system
    #- job_name: busybox
      static_configs:
      - targets:
          - localhost
        labels:
          job: varlog    # Custom
          host: busybox  # Custom
          __path__: /tmp/*log   # Directory to collect logs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: promtail-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: promtail
  template:
    metadata:
      labels:
        name: promtail
    spec:
      volumes:
      - name: log
        emptyDir: {}
      - name: promtail-config
        configMap:
          name: promtail-config

      containers:
      - name: promtail
        image: grafana/promtail:master
        imagePullPolicy: IfNotPresent
        args:
        - -config.file=/etc/promtail/promtail.yaml
        volumeMounts:
        - name: log
          mountPath: /tmp/
        - name: promtail-config
          mountPath: /etc/promtail/

      - name: busybox
        image: centos:7
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - "while : ; do echo '--- promtail log test ---' `date` && echo '--- promtail log test ---' `date` >> /tmp/healthy.log && sleep 3 ; done "
        volumeMounts:
        - name: log
          mountPath: /tmp/
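
Run the command to create (assuming the manifest above is saved as promtail.yaml):

kubectl apply -f promtail.yaml

Optionally, confirm that Loki is receiving streams by asking it which label names it has indexed (the job and host labels should appear). This is a sketch that queries Loki’s labels API; run the curl from a pod that can resolve the loki Service, or through a port-forward:

curl -s "http://loki:3100/loki/api/v1/labels"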

Configure Grafana and view logs

View the NodePort of the Grafana Service:
kubectl get svc
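
Alternatively, print just the NodePort (a sketch assuming the Service name grafana from the manifest above):

kubectl get svc grafana -o jsonpath='{.spec.ports[0].nodePort}'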

Open the Grafana page in a browser at http://<Node IP>:<NodePort>, using the port seen above.


Log in with the username and password: admin / admin123.

Configure the data source:

Find Loki, and in the URL field enter the Loki service name and port: http://loki:3100.

Then click the “Save && Test” button at the bottom of the page:


View logs:


Select the host or job label to view logs from different applications.


You can now see the log content.

Thus, the Loki + Promtail + Grafana logging solution is complete.

Author: Sunzz

Editor: Tao Jialong

Source: cnblogs.com/Sunzz/p/15190702.html
