
Spring Cloud Sleuth With Zipkin And Kibana#

In the last two posts, Distributed Tracing and Spring Cloud Sleuth, we covered the definitions, descriptions, and concepts of Distributed Tracing and Spring Cloud Sleuth. In this post we will dive into an example of using Spring Cloud Sleuth, Zipkin, and Kibana in a microservice system.

What Is Kibana?#

  • First of all, we need to understand a little bit about Kibana, so what is Kibana?

    • Kibana is an open-source data visualization dashboard software for Elasticsearch. It allows users to create visualizations, reports, and dashboards from data indexed in Elasticsearch. Kibana can produce various types of charts, maps, and features for application monitoring, operational intelligence, and data exploration. (View more: Logit.io, Elastic)

    • Kibana is the official interface of Elasticsearch. Users of Elasticsearch will find Kibana to be the most effective interface for discovering data insights and performing active management of the health of their Elastic Stack. Elastic has invested heavily in the innovation of the visualization interface. (View more: Kibana - Wikipedia)

  • In this example we will use Kibana for visualizing the log data in a microservice system.

Sample Microservice System#

  • Now, let's check out the sample microservice system that we are going to build, shown in the diagram below.

 [Diagram: the sample microservice system to be built]

  • So, as we can see in the diagram above, our system will include 4 main parts:
    • Microservice system: this part includes 4 Spring Boot services: one Eureka server and 3 Spring Boot application services. All the log data and tracing data will be sent to Kafka.
    • Kafka: we will create 2 topics, one for receiving log data and one for receiving tracing data (see the note right after this list).
    • Tracing system: in the tracing system, we will have a Zipkin server which listens to the zipkin topic of Kafka, reads the tracing data from there, processes it, and shows it on the web UI.
    • Log system: in the log system, we will have 3 services: Logstash, Elasticsearch, and Kibana. Logstash listens to the logstash topic of Kafka, processes the log data it reads there, and pushes the processed data to Elasticsearch; Kibana then reads the data from Elasticsearch and visualizes it on its UI.
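  • Note: the Confluent Kafka image used below normally auto-creates topics on first use, so creating them by hand is optional. If you prefer to create the logstash and zipkin topics explicitly, a minimal sketch using Kafka's AdminClient could look like the following; the class name and the single-partition, replication-factor-1 settings are only illustrative for this local setup.
CreateTopics.java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopics {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // "localhost:9092" is the OUTSIDE listener exposed by the broker in docker-compose.yml
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // One topic for log data (read by Logstash) and one for tracing data (read by Zipkin)
            admin.createTopics(List.of(
                    new NewTopic("logstash", 1, (short) 1),
                    new NewTopic("zipkin", 1, (short) 1)
            )).all().get();
        }
    }
}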

Building Kafka-Zipkin-Kibana Systems#

  • So, to build the Kafka, Zipkin, and Kibana systems, we will need the docker-compose.yml below.
docker-compose.yml
version: '3.7'

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - elk   

  broker:
    image: confluentinc/cp-kafka:7.3.2
    container_name: broker
    ports:
    # To learn about configuring Kafka for access across networks see
    # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_HOST_NAME: broker
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENERS: INSIDE://:19092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker:19092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    networks:
      - elk  

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    depends_on:
    - broker  
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:7.16.2
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.2
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  storage:
    image: openzipkin/zipkin-mysql
    container_name: storage
    command: --default-authentication-plugin=mysql_native_password
    environment:
      - MYSQL_ROOT_PASSWORD=zipkin
      - MYSQL_DATABASE=zipkin
      - MYSQL_USER=zipkin
      - MYSQL_PASSWORD=zipkin
    # Uncomment to expose the storage port for testing
    # ports:
    #   - 3306:3306
    volumes:
    - ./mysql-data:/var/lib/mysql:rw
    networks:
      - elk

  # The zipkin process services the UI, and also exposes a POST endpoint that
  # instrumentation can send trace data to. Scribe is disabled by default.
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    # Environment settings are defined here https://github.com/openzipkin/zipkin/blob/master/zipkin-server/README.md#environment-variables
    environment:
      - STORAGE_TYPE=mysql
      # Point the zipkin at the storage backend
      - MYSQL_HOST=storage
      - MYSQL_USER=zipkin
      - MYSQL_PASS=zipkin
      - MYSQL_DB=zipkin
      - KAFKA_BOOTSTRAP_SERVERS=broker:19092
      # Uncomment to enable scribe
      # - SCRIBE_ENABLED=true
      # Uncomment to enable self-tracing
      # - SELF_TRACING_ENABLED=true
      # Uncomment to enable debug logging
      # - JAVA_OPTS=-Dlogging.level.zipkin2=DEBUG
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
      # Uncomment if you set SCRIBE_ENABLED=true
      # - 9410:9410
    depends_on:
      - storage
    networks:
      - elk  

  # Adds a cron to process spans since midnight every hour, and all spans each day
  # This data is served by http://192.168.99.100:8080/dependency
  #
  # For more details, see https://github.com/openzipkin/docker-zipkin-dependencies
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    environment:
      - STORAGE_TYPE=mysql
      - MYSQL_HOST=storage
      # Add the baked-in username and password for the zipkin-mysql image
      - MYSQL_USER=zipkin
      - MYSQL_PASS=zipkin
      # Uncomment to see dependency processing logs
      # - ZIPKIN_LOG_LEVEL=DEBUG
      # Uncomment to adjust memory used by the dependencies job
      # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G
    depends_on:
      - storage
    networks:
      - elk        

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
  mysql-data:
  • Then we also need to create some folders for the configuration files, as in the image below.

 [Image: folder layout for the configuration files (elasticsearch/config/elasticsearch.yml, kibana/config/kibana.yml, logstash/config/logstash.yml, logstash/pipeline/logstash.conf)]

  • The contents of those files are as below.
elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
# xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
kibana.yml
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
logstash.yml
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
logstash.conf
input {
    tcp {
        port => 5000
    }
    kafka {
        bootstrap_servers => "broker:19092"
        topics => ["logstash"]
    }
}

## Add your filters / logstash plugins configuration here
filter {
  json {
    source => "message"
  }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
        index => "logstash-%{+YYYY.MM.dd}"
    }
}
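  • With these files in place, the whole stack can be started with docker-compose. As a quick sanity check that Elasticsearch is up and secured with the credentials from the compose file (user elastic, password changeme), you could run a small Java sketch like the one below; the class name is just illustrative and this step is entirely optional.
ElasticsearchHealthCheck.java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ElasticsearchHealthCheck {

    public static void main(String[] args) throws Exception {
        // Basic auth for the built-in "elastic" user; the password comes from
        // ELASTIC_PASSWORD=changeme in docker-compose.yml
        String credentials = Base64.getEncoder().encodeToString("elastic:changeme".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cluster/health"))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expect HTTP 200 and a JSON body with a "status" of green or yellow
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}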
  • Okay, now let's deep dive into the Docker Compose definition of every service.

Kafka#

  • Now, let's take a look at the Kafka part of the docker-compose.yml, as below.
docker-compose.yml
version: '3.7'

services:

  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - elk   

  broker:
    image: confluentinc/cp-kafka:7.3.2
    container_name: broker
    ports:
    # To learn about configuring Kafka for access across networks see
    # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_HOST_NAME: broker
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENERS: INSIDE://:19092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker:19092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    networks:
      - elk  

....

ZooKeeper#

Field Value Description
Image confluentinc/cp-zookeeper:7.3.2 The Docker image used for the ZooKeeper service. It is pulled from the Confluent repository with version 7.3.2.
Container name zookeeper The name assigned to the ZooKeeper container.
Environment Variables
ZOOKEEPER_CLIENT_PORT 2181 Specifies the client port used to connect to ZooKeeper.
ZOOKEEPER_TICK_TIME 2000 Specifies the tick time in milliseconds used by ZooKeeper for session timeouts.
Networks elk Specifies the network(s) that the ZooKeeper container is connected to.

Broker#

Field Value Description
Image confluentinc/cp-kafka:7.3.2 The Docker image used for the Kafka broker service. It is pulled from the Confluent repository with version 7.3.2.
Container name broker The name assigned to the Kafka broker container.
Ports "9092:9092" Maps the host's port 9092 to the container's port 9092. This allows external access to Kafka on port 9092.
Depends On zookeeper Specifies that this service depends on the ZooKeeper service.
Environment Variables
KAFKA_BROKER_ID 1 Specifies the ID of the Kafka broker.
KAFKA_ZOOKEEPER_CONNECT 'zookeeper:2181' Specifies the connection string to ZooKeeper.
KAFKA_ADVERTISED_HOST_NAME broker Specifies the advertised host name for the Kafka broker.
KAFKA_INTER_BROKER_LISTENER_NAME INSIDE Specifies the name of the inter-broker listener.
KAFKA_LISTENERS INSIDE://:19092,OUTSIDE://:9092 Specifies the listeners for Kafka.
KAFKA_ADVERTISED_LISTENERS INSIDE://broker:19092,OUTSIDE://localhost:9092 Specifies the advertised listeners for Kafka.
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT Specifies the listener security protocol mapping.
Networks elk Specifies the network(s) that the Kafka broker container is connected to.
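  • One practical consequence of the listener setup above: clients running on the host connect through the OUTSIDE listener advertised as localhost:9092, while containers on the elk network (Logstash, Zipkin) use the INSIDE listener broker:19092. The sketch below sends a test JSON message to the logstash topic from the host; the class name and payload are illustrative only and not part of the actual services.
ListenerSmokeTest.java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ListenerSmokeTest {

    public static void main(String[] args) {
        Properties props = new Properties();
        // From the host we must use the OUTSIDE listener (localhost:9092);
        // containers on the elk network use the INSIDE listener broker:19092 instead
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A JSON payload, because the Logstash pipeline parses the "message" field with its json filter
            String json = "{\"level\":\"INFO\",\"service\":\"smoke-test\",\"message\":\"hello from the host\"}";
            producer.send(new ProducerRecord<>("logstash", json));
            producer.flush();
        }
    }
}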

Logstash - Elastic - Kibana#

  • Now, let's take a look at the log-system part of the docker-compose.yml, which contains the Logstash, Elasticsearch, and Kibana services, as below.
docker-compose.yml
version: '3.7'

services:

....

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    depends_on:
    - broker  
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:7.16.2
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.2
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

....

Elasticsearch#

Field Value Description
Image docker.elastic.co/elasticsearch/elasticsearch:7.16.2 The Docker image used for the Elasticsearch service. It pulls the Elasticsearch image from the official Elastic repository with version 7.16.2.
Volumes
- type bind Specifies the volume type as bind, indicating a host directory will be mounted to the container.
- source ./elasticsearch/config/elasticsearch.yml Specifies the path of the Elasticsearch configuration file on the host machine.
- target /usr/share/elasticsearch/config/elasticsearch.yml Specifies the path where the Elasticsearch configuration file will be mounted inside the container.
- read_only true Specifies that the mounted volume should be read-only.
- type volume Specifies the volume type as volume, indicating a Docker volume will be used.
- source elasticsearch Specifies the name of the Docker volume.
- target /usr/share/elasticsearch/data Specifies the path where Elasticsearch will store its data inside the container.
Ports "9200:9200", "9300:9300" Maps the host's ports 9200 and 9300 to the container's ports 9200 and 9300 respectively. This allows external access to Elasticsearch HTTP REST API and transport protocol ports.
Environment Variables
ES_JAVA_OPTS "-Xmx256m -Xms256m" Specifies the Java options for Elasticsearch, including memory settings.
ELASTIC_PASSWORD changeme Specifies the password for the built-in Elasticsearch user 'elastic'.
discovery.type single-node Specifies the discovery type as single-node. This disables production mode and avoids bootstrap checks during Elasticsearch startup.
Depends On broker Specifies that this service depends on the 'broker' service.
Networks elk Specifies the network(s) that the Elasticsearch container is connected to.
  • Then inside the elasticsearch.yml we have the content as below.
elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
# xpack.license.self_generated.type: trial 
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
Configuration Value Explanation
cluster.name "docker-cluster" Defines the name of the Elasticsearch cluster. This name is used to identify the cluster and allows nodes with the same cluster name to join together as a cluster and share data and workload.
network.host 0.0.0.0 Specifies the network interface on which Elasticsearch listens for incoming connections. By setting it to 0.0.0.0, Elasticsearch listens on all network interfaces, allowing connections from any IP address. This is useful when running Elasticsearch in a containerized environment.
xpack.security.enabled true Enables X-Pack security features for Elasticsearch. X-Pack provides authentication, role-based access control, and other security features to secure the Elasticsearch cluster and prevent unauthorized access. By setting it to true, the security features are enabled.
xpack.monitoring.collection.enabled true Enables the collection of monitoring data by X-Pack. X-Pack provides monitoring and metrics capabilities for Elasticsearch clusters. By enabling this setting, Elasticsearch collects and stores various metrics and monitoring data that can be used for analyzing cluster health, performance, and resource usage.

Logstash#

Field Value Description
Image docker.elastic.co/logstash/logstash:7.16.2 The Docker image used for the Logstash service. It pulls the Logstash image from the official Elastic repository with version 7.16.2.
Volumes
- type bind Specifies the volume type as bind, indicating a host directory will be mounted to the container.
- source ./logstash/config/logstash.yml Specifies the path of the Logstash configuration file on the host machine.
- target /usr/share/logstash/config/logstash.yml Specifies the path where the Logstash configuration file will be mounted inside the container.
- read_only true Specifies that the mounted volume should be read-only.
- type bind Specifies the volume type as bind, indicating a host directory will be mounted to the container.
  • Then inside the logstash.yml we have the content as below.
logstash.yml
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
Configuration Value Explanation
http.host "0.0.0.0" Specifies the network interface on which Logstash listens for incoming HTTP connections. By setting it to 0.0.0.0, Logstash listens on all network interfaces, allowing connections from any IP address. This is useful when running Logstash in a containerized environment and receiving HTTP requests.
xpack.monitoring.elasticsearch.hosts [ "http://elasticsearch:9200" ] Specifies the Elasticsearch hosts to which Logstash sends monitoring data. Logstash collects various monitoring data and sends it to Elasticsearch for storage and analysis. By setting it to "http://elasticsearch:9200", Logstash sends the monitoring data to Elasticsearch running on the specified host and port.
xpack.monitoring.enabled true Enables the collection and sending of monitoring data by Logstash. When enabled, Logstash collects various monitoring data related to its own performance and activity and sends it to Elasticsearch for storage and analysis.
xpack.monitoring.elasticsearch.username elastic Specifies the username for authentication when sending monitoring data to Elasticsearch. This username is used to authenticate Logstash with Elasticsearch.
xpack.monitoring.elasticsearch.password changeme Specifies the password for authentication when sending monitoring data to Elasticsearch. This password is used in combination with the username to authenticate Logstash with Elasticsearch.
  • Next inside the logstash.conf we have the content as below.
logstash.conf
input {
    tcp {
        port => 5000
    }
    kafka {
        bootstrap_servers => "broker:19092"
        topics => ["logstash"]
    }
}

## Add your filters / logstash plugins configuration here
filter {
  json {
    source => "message"
  }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
        index => "logstash-%{+YYYY.MM.dd}"
    }
}
Configuration Value Explanation
input Defines the input plugins that Logstash will use to receive data. In this configuration, two input plugins are specified: TCP and Kafka.
input.tcp.port 5000 Specifies the TCP port on which Logstash listens for incoming data. This allows Logstash to receive data over a TCP connection on port 5000.
input.kafka.bootstrap_servers "broker:19092" Specifies the bootstrap servers for the Kafka input plugin. These servers are used by Logstash to connect to the Kafka cluster and consume messages from the specified topics. In this case, the Kafka cluster is accessible at "broker" on port 19092.
input.kafka.topics ["logstash"] Specifies the Kafka topics from which Logstash will consume messages. In this configuration, Logstash consumes messages from the "logstash" topic.
filter Defines the filter plugins that Logstash will use to transform or manipulate the incoming data. In this configuration, a single filter plugin is specified: JSON.
filter.json.source "message" Specifies the source field from which the JSON filter plugin will extract the JSON data. In this case, Logstash expects the incoming data to be in the "message" field, and the JSON filter will extract and parse the JSON data from it.
output Defines the output plugins that Logstash will use to send the processed data to an external system. In this configuration, a single output plugin is specified: Elasticsearch.
output.elasticsearch.hosts "elasticsearch:9200" Specifies the Elasticsearch hosts to which Logstash will send the processed data. In this case, Logstash sends the data to Elasticsearch running on the specified host and port.
output.elasticsearch.user "elastic" Specifies the username for authentication when sending data to Elasticsearch. This username is used to authenticate Logstash with Elasticsearch.
output.elasticsearch.password "changeme" Specifies the password for authentication when sending data to Elasticsearch. This password is used in combination with the username to authenticate Logstash with Elasticsearch.
output.elasticsearch.index "logstash-%{+YYYY.MM.dd}" Specifies the index pattern that Logstash will use when storing the data in Elasticsearch. In this configuration, the index pattern includes the current date in the format "logstash-YYYY.MM.dd" to create daily indices for the data.

Kibana#

Field Value Description
Image docker.elastic.co/kibana/kibana:7.16.2 The Docker image used for the Kibana service. It pulls the Kibana image from the official Elastic repository with version 7.16.2.
Volumes
- type bind Specifies the volume type as bind, indicating a host directory will be mounted to the container.
- source ./kibana/config/kibana.yml Specifies the path of the Kibana configuration file on the host machine.
- target /usr/share/kibana/config/kibana.yml Specifies the path where the Kibana configuration file will be mounted inside the container.
- read_only true Specifies that the mounted volume should be read-only.
Ports "5601:5601" Maps the host's port 5601 to the container's port 5601. This allows external access to the Kibana web interface.
Networks elk Specifies the network(s) that the Kibana container is connected to.
Depends On elasticsearch Specifies that this service depends on the 'elasticsearch' service.
  • Next, inside the kibana.yml we have the configuration below.
kibana.yml
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
  • The kibana.yml file is a configuration file used by Kibana, the data visualization and exploration tool for Elasticsearch. It allows you to customize various settings related to Kibana's behavior and connection to Elasticsearch.
Field Value Description
server.name kibana Sets the name of the Kibana server.
server.host "0" Defines the host to which Kibana binds. Using "0" means it will bind to all available network interfaces.
elasticsearch.hosts [ "http://elasticsearch:9200" ] Specifies the Elasticsearch cluster's URLs that Kibana should connect to.
xpack.monitoring.ui.container.elasticsearch.enabled true Enables the Monitoring UI for containerized Elasticsearch clusters.
elasticsearch.username elastic Sets the username to authenticate with Elasticsearch.
elasticsearch.password changeme Sets the password to authenticate with Elasticsearch.

Zipkin#

  • Now, let's take a look at the Zipkin part of the docker-compose.yml, as below.
docker-compose.yml
  storage:
    image: openzipkin/zipkin-mysql
    container_name: storage
    command: --default-authentication-plugin=mysql_native_password
    environment:
      - MYSQL_ROOT_PASSWORD=zipkin
      - MYSQL_DATABASE=zipkin
      - MYSQL_USER=zipkin
      - MYSQL_PASSWORD=zipkin
    # Uncomment to expose the storage port for testing
    # ports:
    #   - 3306:3306
    volumes:
    - ./mysql-data:/var/lib/mysql:rw
    networks:
      - elk

  # The zipkin process services the UI, and also exposes a POST endpoint that
  # instrumentation can send trace data to. Scribe is disabled by default.
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    # Environment settings are defined here https://github.com/openzipkin/zipkin/blob/master/zipkin-server/README.md#environment-variables
    environment:
      - STORAGE_TYPE=mysql
      # Point the zipkin at the storage backend
      - MYSQL_HOST=storage
      - MYSQL_USER=zipkin
      - MYSQL_PASS=zipkin
      - MYSQL_DB=zipkin
      - KAFKA_BOOTSTRAP_SERVERS=broker:19092
      # Uncomment to enable scribe
      # - SCRIBE_ENABLED=true
      # Uncomment to enable self-tracing
      # - SELF_TRACING_ENABLED=true
      # Uncomment to enable debug logging
      # - JAVA_OPTS=-Dlogging.level.zipkin2=DEBUG
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
      # Uncomment if you set SCRIBE_ENABLED=true
      # - 9410:9410
    depends_on:
      - storage
    networks:
      - elk  

  # Adds a cron to process spans since midnight every hour, and all spans each day
  # This data is served by http://192.168.99.100:8080/dependency
  #
  # For more details, see https://github.com/openzipkin/docker-zipkin-dependencies
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    environment:
      - STORAGE_TYPE=mysql
      - MYSQL_HOST=storage
      # Add the baked-in username and password for the zipkin-mysql image
      - MYSQL_USER=zipkin
      - MYSQL_PASS=zipkin
      # Uncomment to see dependency processing logs
      # - ZIPKIN_LOG_LEVEL=DEBUG
      # Uncomment to adjust memory used by the dependencies job
      # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G
    depends_on:
      - storage
    networks:
      - elk        

Storage#

  • This Docker Compose part defines a service named "storage" using the image "openzipkin/zipkin-mysql" to run a MySQL database specifically for the Zipkin service. It provides the storage backend for Zipkin's tracing data.
Field Value Description
image openzipkin/zipkin-mysql Specifies the Docker image to use for the "storage" service. It uses the official "openzipkin/zipkin-mysql" image.
container_name storage Sets the name of the container that will be created when running the "storage" service. The container will be named "storage".
command --default-authentication-plugin=mysql_native_password Provides additional configuration options to the MySQL server when the container starts. It sets the default authentication plugin for MySQL to "mysql_native_password".
environment MYSQL_ROOT_PASSWORD=zipkin
MYSQL_DATABASE=zipkin
MYSQL_USER=zipkin
MYSQL_PASSWORD=zipkin
Defines environment variables passed to the MySQL server running inside the container. It sets the MySQL root password, creates a database named "zipkin," and creates a user named "zipkin" with the password "zipkin."
volumes ./mysql-data:/var/lib/mysql:rw Maps the local directory "./mysql-data" to the "/var/lib/mysql" directory inside the container to persist MySQL data.
networks elk Connects the "storage" service to the "elk" network, presumably defined elsewhere in the Docker Compose file for communication between services.
ports (Commented out, not active) (Commented out) It exposes the MySQL port (3306) from the container to the host, allowing direct access to the MySQL database from the host machine (not necessary for normal operation).

Zipkin#

Field Value Description
image openzipkin/zipkin Specifies the Docker image to use for the "zipkin" service. It uses the official "openzipkin/zipkin" image from Docker Hub, which is the Zipkin distributed tracing system.
container_name zipkin Sets the name of the container that will be created when running the "zipkin" service. The container will be named "zipkin".
environment STORAGE_TYPE=mysql
MYSQL_HOST=storage
MYSQL_USER=zipkin
MYSQL_PASS=zipkin
MYSQL_DB=zipkin
KAFKA_BOOTSTRAP_SERVERS=broker:19092
Defines environment variables passed to the Zipkin server running inside the container. It sets various configuration options like the storage backend type (MySQL), MySQL host, user, password, database name, Kafka bootstrap servers, and other optional settings like enabling Scribe, self-tracing, and debug logging.
ports 9411:9411 Exposes the port 9411 from the container to the host. Port 9411 is used for the Zipkin UI and HTTP API, allowing access to the Zipkin UI and services via a web browser or HTTP requests.
depends_on storage Defines that the "zipkin" service depends on the "storage" service. This ensures that the "storage" service starts before the "zipkin" service starts, allowing the Zipkin server to connect to the MySQL storage backend when it starts up.
networks elk Connects the "zipkin" service to the "elk" network, presumably defined elsewhere in the Docker Compose file. This enables communication between the "zipkin" service and other services connected to the "elk" network.

Dependencies#

  • The "dependencies" service uses the "openzipkin/zipkin-dependencies" image to run a cron job that processes spans since midnight every hour and all spans each day. This data is then served at http://192.168.99.100:8080/dependency. The service uses MySQL as the storage backend, and it depends on the "storage" service to ensure the database is available. You can also see additional configuration options that can be uncommented, such as enabling dependency processing logs or adjusting memory usage for the job.
Field Value Description
image openzipkin/zipkin-dependencies This field specifies the Docker image to use for the "dependencies" service. It uses the official "openzipkin/zipkin-dependencies" image from Docker Hub. The "zipkin-dependencies" service processes spans since midnight every hour and all spans each day, serving the data at http://192.168.99.100:8080/dependency.
container_name dependencies The container_name field sets the name of the container that will be created when running the "dependencies" service. Containers are instances of Docker images that are isolated and run independently. In this case, the container will be named "dependencies".
entrypoint crond -f The entrypoint field sets the command that will be executed when the container starts. In this case, it runs the crond command with the -f option, which starts the cron daemon in the foreground. The cron job inside the container will process spans since midnight every hour and all spans each day.
environment See below The environment field defines environment variables passed to the Zipkin Dependencies service running inside the container. Environment variables are used to configure the behavior of the application.
depends_on storage The depends_on field specifies that the "dependencies" service depends on the "storage" service. Dependencies ensure that services start in the correct order. In this case, the "dependencies" service depends on the "storage" service to ensure the MySQL storage backend is available before it starts.
networks elk The networks field connects the "dependencies" service to the "elk" network. Networks are used to enable communication between containers. By connecting to the "elk" network, the "dependencies" service can communicate with other services connected to the same network, potentially enabling integration with an ELK (Elasticsearch, Logstash, Kibana) stack.

Building Micro-Service System#

  • Now, let's continue by building the micro-service system, which contains the 4 services shown in the image below.

 [Diagram: the micro-service system with eureka-server, bff-service, customer-service, and product-service]

  • So, as we can see, the 3 services bff-service, customer-service, and product-service will register with the eureka-server. After registering, those services can call one another through their exposed REST APIs with the support of OpenFeign. You can view more information in these sections: Spring Cloud OpenFeign Basic, Spring Cloud OpenFeign With Eureka.
  • Then, every time an HTTP call is made by any service, the log data and tracing data will be sent to the corresponding Kafka topics, and the Tracing System and Log System will consume those messages, store them, and display them on their UIs.

Eureka Server#

Pom.xml#

  • To create the Eureka Server, let's add some dependencies as below.
  • Note: the Netflix Eureka Server dependencies require you to add the Spring Cloud dependency first, as mentioned in Spring Cloud Introduction. So you may have to add the dependency below first.
pom.xml
<properties>
    <spring.cloud-version>2021.0.0</spring.cloud-version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
  • Then, to use Netflix Eureka Server in your Spring Boot application, you will need to add some dependencies as below.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>  
   <groupId>org.springframework.cloud</groupId>  
   <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>  
    </dependency>  

    <dependency>  
         <groupId>com.google.code.gson</groupId>  
         <artifactId>gson</artifactId>  
         <version>2.8.9</version>  
    </dependency>

    <!-- ...other dependencies -->

</dependencies>

Configuration#

  • Now, you will need to add the annotation @EnableEurekaServer to your main class as below.
ServerApplication.java
package com.eureka.server;  

import org.springframework.boot.SpringApplication;  
import org.springframework.boot.autoconfigure.SpringBootApplication;  
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;  

@SpringBootApplication  
@EnableEurekaServer  
public class ServerApplication {  

   public static void main(String[] args) {  
      SpringApplication.run(ServerApplication.class, args);  
   }  

}
  • Then, in your application.yml, let's add some configuration. The details of the configuration are in the comments.
application.yml
#server run at port 8761
server:
  port: 8761

spring:
  application:
    #application name
    name: eureka-server

eureka:
  client:
    #self register is false
    register-with-eureka: false
    #self fetch registry is false
    fetch-registry: false

Product Service#

  • Now, let's continue with building the first service (product-service). This service simply receives REST API calls from other services; it doesn't make any calls to others.

 [Diagram: product-service receiving calls from the other services]

Pom.xml#

  • Like Eureka Server, we also need to add a dependency for Spring Cloud.
  • Note: the Netflix Eureka Client dependencies require you to add the Spring Cloud dependency first, as mentioned in Spring Cloud Introduction. So you may have to add the dependency below first.
pom.xml
<properties>
    <spring.cloud-version>2021.0.0</spring.cloud-version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
  • Then, to use Netflix Eureka Client in your Spring Boot application, you will need to add some dependencies as below.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.9</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • Next, we will add dependencies for using spring-cloud-sleuth and sending tracing data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
        <version>2.6.3</version>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>

    <dependency>
        <groupId>io.opentracing.brave</groupId>
        <artifactId>brave-opentracing</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.9.8</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
Dependency Description
spring-boot-starter-actuator Provides production-ready features for monitoring and managing the Spring Boot application. Includes health checks, metrics, tracing, and more.
spring-cloud-sleuth-zipkin Integrates with Zipkin, a distributed tracing system, to trace and visualize request flows in a distributed system.
spring-cloud-starter-sleuth Works with spring-cloud-sleuth-zipkin to provide distributed tracing capabilities and trace requests across microservices.
brave-opentracing Bridges between Brave and OpenTracing APIs, enabling OpenTracing-compatible instrumentation and trace propagation with Brave's implementation.
spring-kafka Integrates the Spring Boot application with Apache Kafka, enabling messaging and event-driven communication between microservices.
  • Next, we will also add dependencies for using log4j2 and sending log data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
        <version>2.7.11</version>
    </dependency>

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-log4j-appender</artifactId>
        <version>2.8.2</version>
    </dependency>

    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-spring-cloud-config-client</artifactId>
        <version>2.17.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-bus</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
Dependency Description
org.springframework.boot:spring-boot-starter-log4j2:2.7.11 This dependency provides the Log4j2 logging framework as the default logging implementation for Spring Boot applications. Log4j2 offers advanced logging capabilities and allows you to configure logging using various configurations. It's commonly used for application logging in Spring Boot projects.
org.apache.kafka:kafka-log4j-appender:2.8.2 This dependency provides the Log4j appender for Kafka. It enables you to send log messages from your application to a Kafka topic, which can then be consumed by various consumers for centralized logging and analysis. This is useful for distributing and managing logs in a distributed system.
org.apache.logging.log4j:log4j-spring-cloud-config-client:2.17.2 This dependency integrates Log4j with Spring Cloud Config, allowing you to manage your logging configuration centrally using Spring Cloud Config Server. This is particularly useful in microservices architectures where you have multiple instances of the same application and want to manage their logging configuration externally. The exclusion of spring-cloud-bus prevents unnecessary dependencies that are not needed for this purpose.

Controller#

  • Next, let's create a simple controller with 2 APIs as below.
ProductController.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.product.service.controller;

import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class ProductController {

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message")
    public ResponseEntity<String> getMessage() {
        log.info("ProductController: getMessage");
        return ResponseEntity.ok("Hello From product Service");
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message/error")
    public ResponseEntity<String> getErrorMessage() {
        log.info("ProductController: getErrorMessage");
        throw new RuntimeException("ProductController: getErrorMessage");
    }

}
  • The first API will respond with some text and the second API will return an error response.

Configuration#

  • Firstly, we will use the annotation @EnableDiscoveryClient in the main class to enable the discovery client.
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.product.service;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class SleuthLog4j2KafkaZipkinProductService {

    public static void main(String[] args) {
        SpringApplication.run(SleuthLog4j2KafkaZipkinProductService.class, args);
    }

}
  • Next, let's create a configuration class SleuthConfig with the content as below.
SleuthConfig.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.product.service.config;

import brave.baggage.BaggageFields;
import brave.baggage.CorrelationScopeConfig;
import brave.baggage.CorrelationScopeCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SleuthConfig {

    @Bean
    CorrelationScopeCustomizer addSampled() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.SAMPLED));
    }

    @Bean
    CorrelationScopeCustomizer addParentId() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.PARENT_ID));
    }

}
  • Okay, currently we are using spring-cloud-dependencies version 2021.0.0, and behind the scenes it will automatically set the spring-cloud-sleuth version to 3.1.0 for us. You can view the release note here
  • Then, in spring-cloud-sleuth version 3.1.0 there were some changes: the parentId and spanExportable fields are no longer added to the log by default. So if we want to log this information, we have to create a config class as above. You can view more information here.

  • Next, let's add the configuration below into the application.yml for spring-cloud-sleuth and for sending tracing data to kafka.

application.yml
server:
  port: 8080

spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration
  application:
    name: sleuth-zipkin-customer-service
  cloud:
    discovery:
      enabled: true
  sleuth:
    sampler:
      probability: 1.0
    # propagation:
    #   type: B3,W3C
    supports-join: true
    trace-id128: true
  opentracing:
    enabled: true
  zipkin:
    kafka:
      topic: zipkin
    sender:
      type: kafka
#    base-url: http://localhost:9411


logging:
  level:
    root: INFO
#    org:
#      springframework:
#        cloud:
#          sleuth: DEBUG

product:
  service:
    name: sleuth-zipkin-product-service
Configuration Key Value Description
server.port 8080 Specifies the port on which the Spring Boot application will run. In this case, the application will run on port 8080.
spring.autoconfigure.exclude org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration Excludes the KafkaAutoConfiguration class from Spring Boot's auto-configuration. This is useful when you want to customize Kafka configuration yourself.
spring.application.name sleuth-zipkin-customer-service Sets the name of the Spring Boot application. In this case, it is set to "sleuth-zipkin-customer-service".
spring.cloud.discovery.enabled true Enables service discovery using Spring Cloud Discovery.
spring.sleuth.sampler.probability 1.0 Configures the probability of sampling a trace. A value of 1.0 means all requests will be traced.
spring.sleuth.supports-join true Enables support for joining spans in distributed tracing.
spring.sleuth.trace-id128 true Configures trace IDs to be 128-bit values.
spring.opentracing.enabled true Enables support for OpenTracing.
spring.zipkin.kafka.topic zipkin Sets the Kafka topic to which trace data will be sent for Zipkin.
spring.zipkin.sender.type kafka Specifies the sender type for Zipkin. In this case, it is set to "kafka".
logging.level.root INFO Sets the root logging level to INFO, which controls the level of log messages printed by the application.
product.service.name sleuth-zipkin-product-service Defines the name of the product service.
  • Next, to configure sending log data to kafka, we will continue to create the bootstrap.yml and log4j2.xml as below.
bootstrap.yml
spring:
  application:
    name: sleuth-zipkin-customer-service
  • This value will be used for the log pattern in log4j2.xml.
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="spring-boot-kafka-log" packages="com.reloadly">
        <!-- Define the Appenders -->
    <Appenders>
            <!-- Kafka Appender -->
        <Kafka name="Kafka" topic="logstash">
            <PatternLayout>
                    <!-- Define the log pattern -->
                <alwaysWriteExceptions>false</alwaysWriteExceptions>
                <pattern>
                    {"timestamp":"%d{yyyy-MM-ddHH:mm:ss.SSSZ}","level":"%level","service":"$${spring:spring.application.name}","package":"%logger{36}","class":"%c","method":"%M","traceId":"%X{traceId}","spanId":"%X{spanId}","parentSpanId":"%X{parentId}","sampled":"%X{sampled}","pid":"%pid","thread":"%thread","logger":"%logger{40}","message":"%replace{%replace{%msg{separator()}}{(\")}{\\\"}}{\t}{}","exception":"%replace{%replace{%ex{full}{separator()}}{(\")}{\\\"}}{\t}{}"}
                </pattern>
            </PatternLayout>
            <!-- Define Kafka broker connection details -->
            <Property name="bootstrap.servers">localhost:9092</Property>
        </Kafka>
        <!-- Async Appender -->
        <Async name="Async">
            <AppenderRef ref="Kafka"/>
        </Async>
                <!-- Console Appender (for standard output) -->
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}}{cyan} %highlight{%-5level} %pid %style{[${spring:spring.application.name},%t,%X{traceId},%X{spanId},%X{parentId},%X{sampled}]}{bright,blue} %style{%C{1.}}{bright,yellow}: %msg%n%throwable"/>
        </Console>

    </Appenders>
    <!-- Define the Loggers -->
    <Loggers>
            <!-- Root Logger -->
        <Root level="INFO">
                <!-- Attach appenders to the root logger -->
            <AppenderRef ref="Kafka"/>
            <AppenderRef ref="stdout"/>
        </Root>
        <!-- Logger for org.apache.kafka package -->
        <Logger name="org.apache.kafka" level="WARN" /><!-- avoid recursive logging -->
    </Loggers>
</Configuration>
  • This log4j2.xml configuration is used to configure the logging behavior of the Spring Boot application:

    • Appenders: Defines various log appenders, including Kafka and Async.
    • Kafka Appender: Sends log messages to a Kafka topic named "logstash". The log pattern is defined using JSON format, including various placeholders for log data.
    • Async Appender: Wraps the Kafka appender to make it asynchronous for better performance.
    • Console Appender: Sends log messages to the standard output (console) with a specific log pattern.
    • Loggers: Defines loggers and their levels.
    • Root Logger: Specifies that log messages at the INFO level and higher should be captured by the Kafka and stdout appenders attached to the root logger.
    • Logger for org.apache.kafka package: Sets the log level for the org.apache.kafka package to WARN, avoiding recursive logging.
  • You can view the full source code of this service at this link. A quick way to verify that the log events actually reach the Kafka topic is sketched below.
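  • The following is a minimal, illustrative consumer that tails the logstash topic from the host (the class name and group id are arbitrary); each record it prints should be one JSON log event containing the traceId, spanId, parentSpanId, and sampled fields defined in the pattern above.
LogTopicTail.java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class LogTopicTail {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Connect through the OUTSIDE listener exposed on the host
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-topic-tail");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("logstash"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record should be one JSON log event produced by the Kafka appender
                    System.out.println(record.value());
                }
            }
        }
    }
}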

Customer Service#

  • Now, let's continue with building the second service (customer-service). This service receives REST API calls from bff-service, and it can also call product-service, following the Micro-Service System below.

 [Diagram: customer-service receiving calls from bff-service and calling product-service]

Pom.xml#

  • Like product-service, we also need to add a dependency for Spring Cloud.
pom.xml
<properties>
    <spring.cloud-version>2021.0.0</spring.cloud-version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
  • Then, to use Netflix Eureka Client in your Spring Boot application, you will need to add some dependencies as below.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.9</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • Next, we will add the dependency below for OpenFeign, which is used to make calls to product-service.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
            <version>2.2.6.RELEASE</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
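  • With this starter on the classpath (plus @EnableFeignClients on the main class), customer-service can declare a Feign client for product-service. The interface below is only a sketch of what such a client might look like, based on the product-service endpoint shown earlier and the sleuth-zipkin-product-service name referenced in its configuration; the package and interface names are illustrative, and the actual client is in the full source code.
ProductServiceClient.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.client;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;

// Illustrative client: the "name" must match the product-service registration in Eureka
@FeignClient(name = "sleuth-zipkin-product-service")
public interface ProductServiceClient {

    // Mirrors ProductController#getMessage exposed by product-service
    @GetMapping("/v1/application/product/message")
    ResponseEntity<String> getMessage();
}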
  • Next, we will add dependencies for using spring-cloud-sleuth and sending tracing data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
        <version>2.6.3</version>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>

    <dependency>
        <groupId>io.opentracing.brave</groupId>
        <artifactId>brave-opentracing</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.9.8</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • spring-boot-starter-actuator: Provides production-ready features for monitoring and managing the Spring Boot application. Includes health checks, metrics, tracing, and more.
  • spring-cloud-sleuth-zipkin: Integrates with Zipkin, a distributed tracing system, to trace and visualize request flows in a distributed system.
  • spring-cloud-starter-sleuth: Works with spring-cloud-sleuth-zipkin to provide distributed tracing capabilities and trace requests across microservices.
  • brave-opentracing: Bridges between the Brave and OpenTracing APIs, enabling OpenTracing-compatible instrumentation and trace propagation with Brave's implementation.
  • spring-kafka: Integrates the Spring Boot application with Apache Kafka, enabling messaging and event-driven communication between microservices.
  • Next, we will also add dependencies for using log4j2 and sending log data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
        <version>2.7.11</version>
    </dependency>

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-log4j-appender</artifactId>
        <version>2.8.2</version>
    </dependency>

    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-spring-cloud-config-client</artifactId>
        <version>2.17.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-bus</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • org.springframework.boot:spring-boot-starter-log4j2:2.7.11: Provides the Log4j2 logging framework as the default logging implementation for Spring Boot applications. Log4j2 offers advanced logging capabilities and allows you to configure logging using various configurations. It's commonly used for application logging in Spring Boot projects.
  • org.apache.kafka:kafka-log4j-appender:2.8.2: Provides the Log4j appender for Kafka. It enables you to send log messages from your application to a Kafka topic, which can then be consumed by various consumers for centralized logging and analysis. This is useful for distributing and managing logs in a distributed system.
  • org.apache.logging.log4j:log4j-spring-cloud-config-client:2.17.2: Integrates Log4j with Spring Cloud Config, allowing you to manage your logging configuration centrally using Spring Cloud Config Server. This is particularly useful in microservices architectures where you have multiple instances of the same application and want to manage their logging configuration externally. The exclusion of spring-cloud-bus prevents unnecessary dependencies that are not needed for this purpose.

Controller#

  • Next, let's create a simple controller with 3 APIs as below.
CustomerController.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.controller;

import com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.api.ProductServiceApi;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class CustomerController {

    @Autowired
    private ProductServiceApi productServiceApi;

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message")
    public ResponseEntity<String> getMessage() {
        log.info("Start CustomerController: getMessage");
        log.info("Call Product Service method getProductMessage: 1");
        String productMessage = this.productServiceApi.getProductMessage();
        log.info("Content From Product Service: " + productMessage);
        log.info("Call Product Service method getProductMessage: 2");
        String productMessage2 = this.productServiceApi.getProductMessage();
        log.info("Content From Product Service: " + productMessage2);
        return ResponseEntity.ok("Hello From application adapter Service");
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message/error")
    public ResponseEntity<String> getErrorMessage() {
        log.info("CustomerController: getErrorMessage");
        throw new RuntimeException("CustomerController: getErrorMessage");
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message/error2")
    public ResponseEntity<String> getErrorMessage2() {
        log.info("CustomerController: getErrorMessage2");
        log.info("Call Product Service method getProductMessage");
        String productMessage = this.productServiceApi.getProductMessage();
        log.info("Content From Product Service: " + productMessage);
        log.info("Call Product Service method getErrorMessage");
        this.productServiceApi.getErrorMessage();
        return ResponseEntity.ok("CustomerController: getErrorMessage");
    }

}
  • The first API simply calls product-service two times and receives some text responses. The second API throws an error, and the last API calls product-service but gets an error response.

FeignClient#

  • Next, let's create an interface ProductServiceApi for configuring the FeignClient, which will help us call product-service easily.
ProductServiceApi.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.api;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "${product.service.name}")
public interface ProductServiceApi {

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message")
    public String getProductMessage();

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message/error")
    public String getErrorMessage();

}
  • In this interface, we simply define the 2 APIs that we will call on product-service.

Configuration#

  • Firstly, we will use the annotations @EnableDiscoveryClient and @EnableFeignClients in the main class to enable the discovery client and the Feign client.
SleuthLog4j2KafkaZipkinCustomerApplication.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class SleuthLog4j2KafkaZipkinCustomerApplication {
    public static void main(String[] args) {
        SpringApplication.run(SleuthLog4j2KafkaZipkinCustomerApplication.class, args);
    }
}
  • Next, let's create a configuration class SleuthConfig with the content as below.
SleuthConfig.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.config;

import brave.baggage.BaggageFields;
import brave.baggage.CorrelationScopeConfig;
import brave.baggage.CorrelationScopeCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SleuthConfig {

    @Bean
    CorrelationScopeCustomizer addSampled() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.SAMPLED));
    }

    @Bean
    CorrelationScopeCustomizer addParentId() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.PARENT_ID));
    }

}
  • Okay, currently we are using spring-cloud-dependencies version 2021.0.0 and, behind the scenes, it will automatically set spring-cloud-sleuth to version 3.1.0 for us. You can view the release notes here
  • Then, in spring-cloud-sleuth version 3.1.0 there are some changes: the parentId and spanExportable fields will not be written to the log by default. So if we want to log this information, we have to create a config class as above. You can view more information here.
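  • As a quick sanity check (this helper is not part of the original source; the class name and package are assumptions), the sketch below reads the correlation fields from the SLF4J MDC. Brave copies them there, and that is where the %X{traceId}, %X{spanId}, %X{parentId} and %X{sampled} placeholders of the log4j2.xml pattern pick them up.
TraceContextLogger.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.customer.service.config; // assumed package

import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class TraceContextLogger {

    // Reads the correlation fields that Sleuth/Brave copies into the MDC.
    // With the two CorrelationScopeCustomizer beans above, parentId and
    // sampled should be present whenever a span is active.
    public void logCurrentTraceContext() {
        log.info("traceId={}, spanId={}, parentId={}, sampled={}",
                MDC.get("traceId"), MDC.get("spanId"),
                MDC.get("parentId"), MDC.get("sampled"));
    }
}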

  • Next, let's add the configuration below into the application.yml for the Feign client, spring-cloud-sleuth, and sending tracing data to Kafka.

application.yml
server:
  port: 8080

spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration
  application:
    name: sleuth-zipkin-customer-service
  cloud:
    discovery:
      enabled: true
  sleuth:
    sampler:
      probability: 1.0
    # propagation:
    #   type: B3,W3C
    supports-join: true
    trace-id128: true
  opentracing:
    enabled: true
  zipkin:
    kafka:
      topic: zipkin
    sender:
      type: kafka
#    base-url: http://localhost:9411


logging:
  level:
    root: INFO
#    org:
#      springframework:
#        cloud:
#          sleuth: DEBUG

product:
  service:
    name: sleuth-zipkin-product-service
  • server.port (8080): The port on which the server listens for incoming requests.
  • spring.autoconfigure.exclude (org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration): Excludes Kafka auto-configuration from Spring Boot's auto-configuration.
  • spring.application.name (sleuth-zipkin-customer-service): The name of the Spring Boot application.
  • spring.cloud.discovery.enabled (true): Enables service discovery using Spring Cloud Discovery.
  • spring.sleuth.sampler.probability (1.0): The probability of sampling traces (always enabled with probability 1.0).
  • spring.sleuth.supports-join (true): Enables support for joining distributed traces.
  • spring.sleuth.trace-id128 (true): Enables 128-bit trace IDs for distributed tracing.
  • spring.opentracing.enabled (true): Enables OpenTracing support.
  • spring.zipkin.kafka.topic (zipkin): The Kafka topic to which trace data will be sent.
  • spring.zipkin.sender.type (kafka): The sender type for trace data (Kafka).
  • logging.level.root (INFO): Sets the root logging level to INFO.
  • product.service.name (sleuth-zipkin-product-service): The name of the product service.
  • Next, to configure sending log data to kafka, we will continue to create the bootstrap.yml and log4j2.xml as below.
bootstrap.yml
spring:
  application:
    name: sleuth-zipkin-customer-service
  • The application name defined here will be used in the log pattern in log4j2.xml.
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="spring-boot-kafka-log" packages="com.reloadly">
    <Appenders>
        <Kafka name="Kafka" topic="logstash">
            <PatternLayout>
                <alwaysWriteExceptions>false</alwaysWriteExceptions>
                <pattern>
                    {"timestamp":"%d{yyyy-MM-ddHH:mm:ss.SSSZ}","level":"%level","service":"$${spring:spring.application.name}","package":"%logger{36}","class":"%c","method":"%M","traceId":"%X{traceId}","spanId":"%X{spanId}","parentSpanId":"%X{parentId}","sampled":"%X{sampled}","pid":"%pid","thread":"%thread","logger":"%logger{40}","message":"%replace{%replace{%msg{separator()}}{(\")}{\\\"}}{\t}{}","exception":"%replace{%replace{%ex{full}{separator()}}{(\")}{\\\"}}{\t}{}"}
                </pattern>
            </PatternLayout>
            <Property name="bootstrap.servers">localhost:9092</Property>
        </Kafka>
        <Async name="Async">
            <AppenderRef ref="Kafka"/>
        </Async>

        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}}{cyan} %highlight{%-5level} %pid %style{[${spring:spring.application.name},%t,%X{traceId},%X{spanId},%X{parentId},%X{sampled}]}{bright,blue} %style{%C{1.}}{bright,yellow}: %msg%n%throwable"/>
        </Console>

    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Kafka"/>
            <AppenderRef ref="stdout"/>
        </Root>
        <Logger name="org.apache.kafka" level="WARN" /><!-- avoid recursive logging -->
    </Loggers>
</Configuration>
  • This log4j2.xml configuration is used for configuring the logging behavior in a Spring Boot application:

    • Appenders: Defines various log appenders, including Kafka and Async.
    • Kafka Appender: Sends log messages to a Kafka topic named "logstash". The log pattern is defined using JSON format, including various placeholders for log data.
    • Async Appender: Wraps the Kafka appender to make it asynchronous for better performance.
    • Console Appender: Sends log messages to the standard output (console) with a specific log pattern.
    • Loggers: Defines loggers and their levels.
    • Root Logger: Specifies that log messages at the INFO level and higher should be captured by the Kafka and stdout appenders attached to the root logger.
    • Logger for org.apache.kafka package: Sets the log level for the org.apache.kafka package to WARN, avoiding recursive logging.
  • You can view the full source code of this service at this link

BFF Service#

  • Now, let's continue with building the final service (bff-service). This service simply receives REST API calls from Postman, and it will call customer-service or product-service, following the microservice system diagram below.

 #zoom

Pom.xml#

  • Like customer-service, we also need to add the Spring Cloud dependency management.
pom.xml
<properties>
    <spring.cloud-version>2021.0.0</spring.cloud-version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring.cloud-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
  • Then, to use the Netflix Eureka Client in your Spring Boot application, you will need to add some dependencies as below.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
        <version>2.13.1</version>
    </dependency>

    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.9</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • Next, we will add the OpenFeign dependency below to make calls to customer-service and product-service.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
            <version>2.2.6.RELEASE</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • Next, we will add dependencies for using spring-cloud-sleuth and sending tracing data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
        <version>2.6.3</version>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>

    <dependency>
        <groupId>io.opentracing.brave</groupId>
        <artifactId>brave-opentracing</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.9.8</version>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • spring-boot-starter-actuator: Provides production-ready features for monitoring and managing the Spring Boot application. Includes health checks, metrics, tracing, and more.
  • spring-cloud-sleuth-zipkin: Integrates with Zipkin, a distributed tracing system, to trace and visualize request flows in a distributed system.
  • spring-cloud-starter-sleuth: Works with spring-cloud-sleuth-zipkin to provide distributed tracing capabilities and trace requests across microservices.
  • brave-opentracing: Bridges between the Brave and OpenTracing APIs, enabling OpenTracing-compatible instrumentation and trace propagation with Brave's implementation.
  • spring-kafka: Integrates the Spring Boot application with Apache Kafka, enabling messaging and event-driven communication between microservices.
  • Next, we will also add dependencies for using log4j2 and sending log data to kafka.
pom.xml
<dependencies>

    <!-- ...other dependencies -->

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
        <version>2.7.11</version>
    </dependency>

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-log4j-appender</artifactId>
        <version>2.8.2</version>
    </dependency>

    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-spring-cloud-config-client</artifactId>
        <version>2.17.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-bus</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <!-- ...other dependencies -->

</dependencies>
  • org.springframework.boot:spring-boot-starter-log4j2:2.7.11: Provides the Log4j2 logging framework as the default logging implementation for Spring Boot applications. Log4j2 offers advanced logging capabilities and allows you to configure logging using various configurations. It's commonly used for application logging in Spring Boot projects.
  • org.apache.kafka:kafka-log4j-appender:2.8.2: Provides the Log4j appender for Kafka. It enables you to send log messages from your application to a Kafka topic, which can then be consumed by various consumers for centralized logging and analysis. This is useful for distributing and managing logs in a distributed system.
  • org.apache.logging.log4j:log4j-spring-cloud-config-client:2.17.2: Integrates Log4j with Spring Cloud Config, allowing you to manage your logging configuration centrally using Spring Cloud Config Server. This is particularly useful in microservices architectures where you have multiple instances of the same application and want to manage their logging configuration externally. The exclusion of spring-cloud-bus prevents unnecessary dependencies that are not needed for this purpose.

Controller#

  • Next, let's create a simple controller with 4 APIs as below.
BffApplicationController.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.controller;

import com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.api.CustomerServiceApi;
import com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.api.ProductServiceApi;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@Slf4j
public class BffApplicationController {

    @Autowired
    private CustomerServiceApi customerServiceApi;
    @Autowired
    private ProductServiceApi productServiceApi;

    @RequestMapping(method = RequestMethod.GET, path = "/v1/bff/customer/message")
    public ResponseEntity<String> getCustomerMessage() {
        log.info("BffApplicationController: getCustomerMessage");
        return ResponseEntity.ok(this.customerServiceApi.getCustomerMessage());
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/bff/customer/message/error")
    public ResponseEntity<String> getCustomerErrorMessage() {
        log.info("BffApplicationController: getCustomerErrorMessage");
        return ResponseEntity.ok(this.customerServiceApi.getErrorMessage());
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/bff/customer/message/error2")
    public ResponseEntity<String> getCustomerErrorMessage2() {
        log.info("BffApplicationController: getCustomerErrorMessage2");
        return ResponseEntity.ok(this.customerServiceApi.getErrorMessage2());
    }

    @RequestMapping(method = RequestMethod.GET, path = "/v1/bff/customer/product/message")
    public ResponseEntity<String> getCustomerProductMessage() {
        log.info("BffApplicationController: getCustomerProductMessage");
        log.info("BffApplicationController: getCustomerMessage");
        String customerMessage = this.customerServiceApi.getCustomerMessage();
        log.info("Content From Customer Service: " + customerMessage);
        log.info("BffApplicationController: getProductService");
        String productMessage = this.productServiceApi.getProductMessage();
        log.info("Content From Product Service: " + productMessage);
        return ResponseEntity.ok("Customer Message: " + customerMessage + " - Product Message: " + productMessage);
    }

}
  • In which:
    • The first API simply calls customer-service and returns some text.
    • The second API calls customer-service and gets an error response.
    • The third API calls customer-service, and customer-service calls product-service and gets an error response.
    • The final API calls both customer-service and product-service.

FeignClient#

  • Next, let's create 2 interfaces, CustomerServiceApi and ProductServiceApi, for configuring the FeignClients, which will help us call customer-service and product-service easily.
CustomerServiceApi.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.api;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "${service.customer.name}")
public interface CustomerServiceApi {

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message")
    public String getCustomerMessage();

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message/error")
    public String getErrorMessage();

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/customer/message/error2")
    public String getErrorMessage2();

}
ProductServiceApi.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.api;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "${service.product.name}")
public interface ProductServiceApi {

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message")
    public String getProductMessage();

    @RequestMapping(method = RequestMethod.GET, path = "/v1/application/product/message/error")
    public String getErrorMessage();

}
  • In these 2 interfaces, we simply define the APIs that customer-service and product-service expose.
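  • Optionally (this is not part of the original setup; the class name and package are assumptions), you can also ask Feign to log every outbound call so that these lines end up in the same Kafka/Kibana log pipeline. Spring Cloud OpenFeign picks up a feign.Logger.Level bean as the global logging level; note that Feign only emits these lines at DEBUG, so the package of the client interfaces would also need to be set to DEBUG in the logging configuration.
FeignClientConfig.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.config; // assumed package

import feign.Logger;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignClientConfig {

    // BASIC logs the request method and URL, plus the response status and
    // execution time, for every call made through CustomerServiceApi and
    // ProductServiceApi.
    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.BASIC;
    }
}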

Configuration#

  • Like customer-service, we will use the annotations @EnableDiscoveryClient and @EnableFeignClients in the main class to enable the discovery client and the Feign client.
SleuthLog4j2KafkaZipkinBffApplication.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class SleuthLog4j2KafkaZipkinBffApplication {
    public static void main(String[] args) {
        SpringApplication.run(SleuthLog4j2KafkaZipkinBffApplication.class, args);
    }
}
  • Next, let's create a configuration class SleuthConfig with the content as below.
SleuthConfig.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.config;

import brave.baggage.BaggageFields;
import brave.baggage.CorrelationScopeConfig;
import brave.baggage.CorrelationScopeCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SleuthConfig {

    @Bean
    CorrelationScopeCustomizer addSampled() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.SAMPLED));
    }

    @Bean
    CorrelationScopeCustomizer addParentId() {
        return b -> b.add(CorrelationScopeConfig.SingleCorrelationField.create(BaggageFields.PARENT_ID));
    }

}
  • Okay, currently we are using spring-cloud-dependencies version 2021.0.0 and, behind the scenes, it will automatically set spring-cloud-sleuth to version 3.1.0 for us. You can view the release notes here
  • Then, in spring-cloud-sleuth version 3.1.0 there are some changes: the parentId and spanExportable fields will not be written to the log by default. So if we want to log this information, we have to create a config class as above. You can view more information here.
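  • For completeness (this helper is not part of the original source; the class name and package are assumptions), the same identifiers can also be read directly from Brave's TraceContext, which is the tracer implementation that Spring Cloud Sleuth auto-configures in this setup.
CurrentSpanInspector.java
package com.springboot.cloud.sleuth.log4j2.kafka.zipkin.bff.application.service.config; // assumed package

import brave.Span;
import brave.Tracer;
import brave.propagation.TraceContext;
import org.springframework.stereotype.Component;

@Component
public class CurrentSpanInspector {

    private final Tracer tracer; // auto-configured by Spring Cloud Sleuth (Brave)

    public CurrentSpanInspector(Tracer tracer) {
        this.tracer = tracer;
    }

    // Returns the identifiers of the span active on the current thread,
    // or a short message when called outside of a traced request.
    public String describeCurrentSpan() {
        Span span = tracer.currentSpan();
        if (span == null) {
            return "no active span";
        }
        TraceContext ctx = span.context();
        return "traceId=" + ctx.traceIdString()
                + ", spanId=" + ctx.spanIdString()
                + ", parentId=" + ctx.parentIdString()
                + ", sampled=" + ctx.sampled();
    }
}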

  • Next, let's add the configuration below into the application.yml for the Feign clients, spring-cloud-sleuth, and sending tracing data to Kafka.

application.yml
#service run at port 9090
server:
  port: 9090

spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration
  application:
    #service name which will be showed on eureka server dashboard
    name: sleuth-zipkin-bff-service
  #Enable Spring Cloud Discovery
  cloud:
    discovery:
      enabled: true
  sleuth:
    sampler:
      probability: 1.0
    # propagation:
    #   type: B3,W3C
    supports-join: true
    trace-id128: true
  opentracing:
    enabled: true
  zipkin:
    kafka:
      topic: zipkin
    sender:
      type: kafka
#    base-url: http://localhost:9411


logging:
  level:
    root: INFO
#    org:
#      springframework:
#        cloud:
#          sleuth: DEBUG


#register this service to eureka server by url
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

service:
  customer:
    name: sleuth-zipkin-customer-service
  product:
    name: sleuth-zipkin-product-service
  • server.port (9090): The port on which the service will run.
  • spring.autoconfigure.exclude (org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration): Excludes Kafka auto-configuration from Spring Boot's auto-configuration.
  • spring.application.name (sleuth-zipkin-bff-service): The name of the Spring Boot service, shown on the Eureka server dashboard.
  • spring.cloud.discovery.enabled (true): Enables service discovery using Spring Cloud Discovery.
  • spring.sleuth.sampler.probability (1.0): The probability of sampling traces (always enabled with probability 1.0).
  • spring.sleuth.supports-join (true): Enables support for joining distributed traces.
  • spring.sleuth.trace-id128 (true): Enables 128-bit trace IDs for distributed tracing.
  • spring.opentracing.enabled (true): Enables OpenTracing support.
  • spring.zipkin.kafka.topic (zipkin): The Kafka topic to which trace data will be sent.
  • spring.zipkin.sender.type (kafka): The sender type for trace data (Kafka).
  • logging.level.root (INFO): Sets the root logging level to INFO.
  • eureka.client.serviceUrl.defaultZone (http://localhost:8761/eureka/): The URL for the Eureka server for service registration and discovery.
  • service.customer.name (sleuth-zipkin-customer-service): The name of the customer service.
  • service.product.name (sleuth-zipkin-product-service): The name of the product service.
  • Next, to configure sending log data to kafka, we will continue to create the bootstrap.yml and log4j2.xml as below.
bootstrap.yml
spring:
  application:
    name: sleuth-zipkin-bff-service
  • The application name defined here will be used in the log pattern in log4j2.xml.
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="spring-boot-kafka-log" packages="com.reloadly">
    <Appenders>
        <Kafka name="Kafka" topic="logstash">
            <PatternLayout>
                <alwaysWriteExceptions>false</alwaysWriteExceptions>
                <pattern>
                    {"timestamp":"%d{yyyy-MM-ddHH:mm:ss.SSSZ}","level":"%level","service":"$${spring:spring.application.name}","package":"%logger{36}","class":"%c","method":"%M","traceId":"%X{traceId}","spanId":"%X{spanId}","parentSpanId":"%X{parentId}","sampled":"%X{sampled}","pid":"%pid","thread":"%thread","logger":"%logger{40}","message":"%replace{%replace{%msg{separator()}}{(\")}{\\\"}}{\t}{}","exception":"%replace{%replace{%ex{full}{separator()}}{(\")}{\\\"}}{\t}{}"}
                </pattern>
            </PatternLayout>
            <Property name="bootstrap.servers">localhost:9092</Property>
        </Kafka>
        <Async name="Async">
            <AppenderRef ref="Kafka"/>
        </Async>

        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}}{cyan} %highlight{%-5level} %pid %style{[${spring:spring.application.name},%t,%X{traceId},%X{spanId},%X{parentId},%X{sampled}]}{bright,blue} %style{%C{1.}}{bright,yellow}: %msg%n%throwable"/>
        </Console>

    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Kafka"/>
            <AppenderRef ref="stdout"/>
        </Root>
        <Logger name="org.apache.kafka" level="WARN" /><!-- avoid recursive logging -->
    </Loggers>
</Configuration>
  • This log4j2.xml configuration is used for configuring the logging behavior in a Spring Boot application:

    • Appenders: Defines various log appenders, including Kafka and Async.
    • Kafka Appender: Sends log messages to a Kafka topic named "logstash". The log pattern is defined using JSON format, including various placeholders for log data.
    • Async Appender: Wraps the Kafka appender to make it asynchronous for better performance.
    • Console Appender: Sends log messages to the standard output (console) with a specific log pattern.
    • Loggers: Defines loggers and their levels.
    • Root Logger: Specifies that log messages at the INFO level and higher should be captured by the Kafka and stdout appenders attached to the root logger.
    • Logger for org.apache.kafka package: Sets the log level for the org.apache.kafka package to WARN, avoiding recursive logging.
  • You can view the full source code of this service at this link

Testing#

  • Before testing, let's take a look at the test cases that we are going to run, as shown in the images below.
  • Firstly, we will test the happy case, where the BFF Service calls the Customer Service and the Customer Service calls the Product Service 2 times. All calls are successful.

 #zoom

  • Next, we will test the first failed case, where the BFF Service calls the Customer Service and gets an error response.

 #zoom

  • Finally, we will test the second failed case, where the BFF Service calls the Customer Service, the Customer Service then calls the Product Service, and the error response happens at the second call.

 #zoom

Happy Case#

  • Okay, let's start the docker compose for kafka-kibana-zipkin.
duc@duc-MS-7E01:~/study/docker/kafka-kibana-zipkin$ docker compose up -d
[+] Running 83/17
  broker 2 layers [⣿⣿]      0B/0B      Pulled                                                                                                                                 270.2s 
  elasticsearch 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                    52.1s 
  logstash 11 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                      90.5s 
  storage 10 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                       273.3s 
  kibana 13 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                      77.0s 
  zipkin 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                          118.1s 
  dependencies 10 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                  256.7s 
  zookeeper 11 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                    245.8s                                 
[+] Running 10/10
  Network kafka-kibana-zipkin_elk                Created                                                                                                                        0.1s 
  Volume "kafka-kibana-zipkin_elasticsearch"     Created                                                                                                                        0.0s 
  Container zookeeper                            Started                                                                                                                        1.4s 
  Container storage                              Started                                                                                                                        1.4s 
  Container dependencies                         Started                                                                                                                        1.1s 
  Container zipkin                               Started                                                                                                                        1.0s 
  Container broker                               Started                                                                                                                        1.1s 
  Container kafka-kibana-zipkin-elasticsearch-1  Started                                                                                                                        1.4s 
  Container kafka-kibana-zipkin-kibana-1         Started                                                                                                                        1.8s 
  Container kafka-kibana-zipkin-logstash-1       Started                
  • Then, let's run the 2 commands below to create the 2 topics in Kafka.
docker exec broker kafka-topics --bootstrap-server broker:9092 --create --replication-factor 1 --partitions 1 --topic logstash

docker exec broker kafka-topics --bootstrap-server broker:9092 --create --replication-factor 1 --partitions 1 --topic zipkin
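  • If you prefer doing this from code instead of docker exec (a minimal sketch; the class name is hypothetical and kafka-clients is assumed to be on the classpath, which spring-kafka already provides), the same two topics can be created with the Kafka AdminClient:
CreateTopics.java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        // The broker is reachable from the host at localhost:9092, as exposed by docker-compose.
        Map<String, Object> config =
                Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(config)) {
            admin.createTopics(List.of(
                    new NewTopic("logstash", 1, (short) 1),   // log data topic
                    new NewTopic("zipkin", 1, (short) 1)      // tracing data topic
            )).all().get(); // block until both topics are created
        }
    }
}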
  • Then let's start all services:

    • Eureka Service
    • BFF Service
    • Customer Service
    • Product Service
  • Next, let's open Postman and call the GET http://localhost:9090/v1/bff/customer/message API; then you can see the result as below.

 #zoom
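  • If you prefer code over Postman, a minimal sketch of the same call with Java's built-in HttpClient (Java 11+; the class name is hypothetical, the URL comes from the BFF configuration above) looks like this:
HappyCaseCall.java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HappyCaseCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET the happy-case endpoint of the BFF service running on port 9090.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9090/v1/bff/customer/message"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}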

  • Now, let's access the Zipkin server at localhost:9411. Then, let's search for sleuth-zipkin-bff-service and we will have one trace as in the image below.

 #zoom

  • Next, if we view the details of this trace, we can see that we have one traceId and 4 spans, which correspond to the calls from Postman to the BFF Service, then from the BFF Service to the Customer Service, and 2 times from the Customer Service to the Product Service.

 #zoom

 #zoom

  • Because we configured supports-join: true in the application.yml files of the microservice system, we can see that the Client Send span has joined with the Server Received span. For example, if we look into the second span and view the details on the right side, we can see there are 4 events in this span: Client Send, Server Received, Server Sent and Client Received.
  • Next, when we access Kibana at localhost:5601 and use the traceId from Zipkin to search, we can see all the log information regarding the calls from Postman to the BFF Service, then to the Customer Service and finally to the Product Service, as in the image below.

 #zoom

  • So, by using the traceId in Kibana, we can easily search the log information in our microservice system. In the logs we can also see the parentSpanId and sampled fields that we configured earlier in the microservice system services.

Failed Case 1#

  • In this example, we simply use Postman to call the BFF Service, and the BFF Service will call the Customer Service and receive an error response, as in the image below.

 #zoom

  • Okay, let's use Postman to call the API http://localhost:9090/v1/bff/customer/message/error; then you can see the result as below.

 #zoom

  • Now, if we access the Zipkin server, we will get a trace with an error as in the image below.

 #zoom

  • Then, if we view the trace detail, we will see the error in the second span with some details inside the Tags, like the error code, the controller class and the API.

 #zoom

  • Then, let's take this failed traceId and search it in the Kibana system. We can see all the log information for the calls from Postman to the BFF Service and from the BFF Service to the Customer Service, and also the exception stacktraces from the Customer Service and from the BFF Service for the failed client call.

 #zoom

  • So, as developers, we can easily check and find the root cause of the error when we get issues in our microservice system.

Failed Case 2#

  • Okay, failed case 1 is really simple because we only have 2 services that interact with each other. In this example we will have 3 services, in which the Customer Service calls the Product Service 2 times, and we assume that we will get the error at the second call, as in the image below.

 #zoom

  • Okay, let's use Postman to call the API http://localhost:9090/v1/bff/customer/message/error2; then you can see the result as below.

 #zoom

  • Now, if we access the Zipkin server, we will get the second trace with an error as in the image below.

 #zoom

  • Then, if we view the trace detail, we can easily determine the span that the error comes from, with some details inside the Tags, like the error code, the controller class and the API.

 #zoom

  • Then, if we want to see more details, let's take this failed traceId and search it in the Kibana system. We can see a list of log information for the calls from Postman to the BFF Service, then from the BFF Service to the Customer Service, and then from the Customer Service to the Product Service. We also see the error stacktraces and can easily determine which service the root issue comes from.

 #zoom

Support Join Is False#

  • Okay, let's change the configuration to supports-join: false in the application.yml of all Spring Boot services and start them again. Then, when we call the APIs from Postman, we can see the results in Zipkin as in the images below.

  • For the happy case we will get the result as below.

 #zoom

 #zoom

  • As we can see in the details of the trace, we have 7 spans, which correspond to the calls from Postman to the BFF Service, then from the BFF Service to the Customer Service, and 2 times from the Customer Service to the Product Service.

 #zoom

  • We can see that the client span and the server span are separated. For example, if we look into the second span and view the details on the right side, we can see there are only 2 events in this span: Client Send and Client Received.

  • Okay, for failed case 1, we will have the result as below.

 #zoom

 #zoom

  • With supports-join: false we will have 3 spans, because the client span and the server span are separated, and we can easily see that the error came from the server span of the Customer Service.

  • Likewise, for the failed case 2.

 #zoom

 #zoom

  • We can see that we also have 7 spans, and we can easily determine that the error span comes from the server span of the Product Service.

Summary#

  • Okay, we have just built a small microservice system which uses Spring Cloud Sleuth, Kafka, Zipkin and Kibana, and we have also seen how these technologies help us manage and detect issues quickly. All sources are listed in the References part below.
  • However, I also want to mention a few things if you want to try this example.
    • Firstly, your computer should have a minimum of 16GB of RAM, because we run many Docker services and Spring Boot services.
    • Secondly, I got some issues with the docker image openzipkin/zipkin-mysql: when I stop and restart the docker compose, all the trace data is lost because a new docker volume is always created. I will check it if I have time in the future.

See Also#

References#