github.com/weedge/lib@v0.0.0-20230424045628-a36dcc1d90e4/client/mq/kafka/test-kafka-kraft.md

Development environment: macOS

Testing KRaft cluster mode

```shell
# List running Java processes: jps

# Generate a cluster ID
/usr/local/bin/kafka-storage random-uuid

# Format the storage directories, using the same cluster ID for every node
/usr/local/bin/kafka-storage format -t <uuid> -c /usr/local/etc/kafka/kraft/server.properties
# server1.properties differs in:
# node.id=1
# listeners=PLAINTEXT://:9093,CONTROLLER://:9083
# advertised.listeners=PLAINTEXT://localhost:9093
# log.dirs=/tmp/kraft-1-combined-logs
/usr/local/bin/kafka-storage format -t <uuid> -c /usr/local/etc/kafka/kraft/server1.properties
# server2.properties differs in:
# node.id=2
# listeners=PLAINTEXT://:9094,CONTROLLER://:9084
# advertised.listeners=PLAINTEXT://localhost:9094
# log.dirs=/tmp/kraft-2-combined-logs
/usr/local/bin/kafka-storage format -t <uuid> -c /usr/local/etc/kafka/kraft/server2.properties

# Start the Kafka servers (each in its own terminal)
/usr/local/bin/kafka-server-start /usr/local/etc/kafka/kraft/server.properties
/usr/local/bin/kafka-server-start /usr/local/etc/kafka/kraft/server1.properties
/usr/local/bin/kafka-server-start /usr/local/etc/kafka/kraft/server2.properties

# Create a topic
/usr/local/bin/kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 2 --topic foo

# Produce messages to the topic
/usr/local/bin/kafka-console-producer --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic foo

# Consume messages from the topic
/usr/local/bin/kafka-console-consumer --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic foo --from-beginning

# Dump the metadata log when you encounter an issue
/usr/local/bin/kafka-dump-log --cluster-metadata-decoder --skip-record-metadata --files /tmp/kraft-combined-logs/\@metadata-0/*.log

# Inspect the cluster metadata, similar to the ZooKeeper shell
/usr/local/bin/kafka-metadata-shell --snapshot /tmp/kraft-combined-logs/\@metadata-0/00000000000000000000.log

# Stop the servers
/usr/local/bin/kafka-server-stop
```
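All three `kafka-storage format` invocations above must run with the same cluster ID, or the nodes will refuse to form one cluster. A minimal dry-run sketch of that step — the `KAFKA_BIN`/`CONF_DIR` variables are assumptions matching the Homebrew paths above, and the default `CLUSTER_ID` is a placeholder to be replaced by the real output of `kafka-storage random-uuid`:

```shell
# Dry-run helper: print one `kafka-storage format` command per node,
# all sharing a single cluster ID. Drop the `echo` to actually format.
KAFKA_BIN=/usr/local/bin                    # assumed Homebrew install prefix
CONF_DIR=/usr/local/etc/kafka/kraft
CLUSTER_ID="${1:-AAAAAAAAAAAAAAAAAAAAAA}"   # placeholder; use `kafka-storage random-uuid` output

# A KRaft cluster ID is a 22-character base64url string; fail fast otherwise.
if ! printf '%s' "$CLUSTER_ID" | grep -Eq '^[A-Za-z0-9_-]{22}$'; then
    echo "invalid cluster id: $CLUSTER_ID" >&2
    exit 1
fi

# Emit the format command for each node's properties file.
for props in server.properties server1.properties server2.properties; do
    echo "$KAFKA_BIN/kafka-storage format -t $CLUSTER_ID -c $CONF_DIR/$props"
done
```

Run it once with the real ID as the first argument, inspect the printed commands, then execute them (or remove the `echo`).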

/usr/local/etc/kafka/kraft/server.properties configuration

```shell
# This configuration file is intended for use in KRaft mode, where
# Apache ZooKeeper is not present.  See config/kraft/README.md for details.
#

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=0

# The connect string for the controller quorum
controller.quorum.voters=0@localhost:9082,1@localhost:9083,2@localhost:9084

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9082
inter.broker.listener.name=PLAINTEXT

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# Listener, host name, and port for the controller to advertise to the brokers. If
# this server is a controller, this listener must be configured.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-combined-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
```
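server1.properties and server2.properties follow the same template as server.properties; per the comments in the command listing above, only a few per-node values change. A sketch of those overrides (everything else stays identical):

```shell
# server1.properties -- per-node overrides
node.id=1
listeners=PLAINTEXT://:9093,CONTROLLER://:9083
advertised.listeners=PLAINTEXT://localhost:9093
log.dirs=/tmp/kraft-1-combined-logs

# server2.properties -- per-node overrides
node.id=2
listeners=PLAINTEXT://:9094,CONTROLLER://:9084
advertised.listeners=PLAINTEXT://localhost:9094
log.dirs=/tmp/kraft-2-combined-logs
```

Note that each node's `node.id` must match one of the `N@host:port` entries in `controller.quorum.voters`, and its CONTROLLER port must match the port listed there for that id.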
##### reference

1. [KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum](https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum)