English | [中文](connection_mode.zh_CN.md)

# tRPC-Go client connection mode

## Introduction

As the initiator of requests, the tRPC-Go client supports multiple connection modes: short connections, connection pools, and connection multiplexing. The client uses the connection pool mode by default, and users can choose a different mode according to their needs.

<font color="red">Note: the connection pool here refers to the connection pool implemented in tRPC-Go's transport layer. The database and HTTP plugins replace the transport with open-source libraries via the plugin mechanism and do not use this connection pool.</font>

## Principle and Implementation

### Short connection

The client creates a new connection for each request and destroys the connection once the request completes. Under a large volume of requests, this greatly reduces service throughput and causes significant performance loss.

Use cases: suitable for one-off requests, or for calling a legacy service that cannot receive multiple requests on a single connection.

### Connection pool

The client maintains a connection pool for each downstream IP. Each request first obtains an IP from the name service, then obtains the connection pool for that IP and retrieves a connection from it. After the request completes, the connection is returned to the pool; while the request is in flight, the connection is held exclusively and cannot be reused. Connections in the pool are destroyed and recreated according to certain strategies. Because each invocation binds one connection, a large upstream and downstream scale can produce a very large number of network connections, which creates substantial scheduling pressure and computational overhead.

Use cases: this mode can be used in almost all scenarios.
Note: since the connection pool queue uses a Last In First Out (LIFO) strategy, VIP addressing on the backend may lead to an uneven distribution of connections across instances. In that case, prefer addressing through the name service where possible, or switch the idle queue to FIFO as sketched below.
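
If uneven distribution does occur, one mitigation is to have the pool return idle connections to the tail of its queue (FIFO) so that reuse rotates across connections. This is a minimal sketch, not the definitive API: it assumes a `connpool.WithPushIdleConnToTail` option mirroring the `PushIdleConnToTail` field documented in the custom connection pool example below.

```go
import (
	"trpc.group/trpc-go/trpc-go/client"
	"trpc.group/trpc-go/trpc-go/pool/connpool"
)

// Sketch: with PushIdleConnToTail set to true, idle connections are returned
// to the tail of the queue (FIFO), so reuse rotates across all pooled
// connections instead of always picking the most recently used one (LIFO).
// WithPushIdleConnToTail is assumed to mirror the struct field documented below.
var fifoPool = connpool.NewConnectionPool(connpool.WithPushIdleConnToTail(true))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	client.WithPool(fifoPool),
}
```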

### Connection multiplexing

The client sends multiple requests concurrently over the same connection, distinguishing requests by a sequence number ID. The client establishes a long-lived connection to each downstream service node, and by default all requests are sent to the server over that connection. The server must support connection multiplexing. This mode greatly reduces the number of connections between services, but because of head-of-line blocking on TCP, very high concurrency on a single connection can add some latency (on the order of milliseconds). This can be alleviated to some extent by increasing the number of multiplexed connections (by default, two connections are established per IP), as shown in the custom multiplexing example below.

Use cases: suitable for scenarios with demanding requirements on stability and throughput. The server must support asynchronous concurrent processing on a single connection and be able to distinguish requests by sequence number ID, which requires certain server capabilities and protocol fields.

Warning:

- Because connection multiplexing maintains only a small, fixed number of connections to each backend node, if the backend uses VIP addressing (the client sees only one instance), connection multiplexing cannot be used; the connection pool mode must be used instead.
- The called server (note: not your current service, but the downstream service you invoke) must support connection multiplexing, i.e., it must process each request on a connection asynchronously, receiving and replying to multiple requests concurrently; otherwise, there will be a large number of timeout failures on the client side.

## Example

### Short connection

```go
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Disable the default connection pool; the short connection mode will be used.
	client.WithDisableConnectionPool(),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Infof("req:%v, rsp:%v, err:%v", req, rsp, err)
```

### Connection pool

```go
// The connection pool mode is used by default; no configuration is required.
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Infof("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection pool:

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

/*
Connection pool parameters:

type Options struct {
	MinIdle            int           // minimum number of idle connections, replenished periodically in the background; 0 means no replenishment
	MaxIdle            int           // maximum number of idle connections; 0 means no limit (framework default: 65535)
	MaxActive          int           // maximum number of connections concurrently available to users; 0 means no limit
	Wait               bool          // whether to wait when MaxActive connections are already in use; default false (do not wait)
	IdleTimeout        time.Duration // idle connection timeout; 0 means no limit (framework default: 50s)
	MaxConnLifetime    time.Duration // maximum lifetime of a connection; 0 means no limit
	DialTimeout        time.Duration // timeout for establishing a connection (framework default: 200ms)
	ForceClose         bool          // whether to close the connection after use; default false (return it to the pool)
	PushIdleConnToTail bool          // how connections are returned to the pool; default false (LIFO retrieval of idle connections)
}
*/

// Connection pool parameters can be set via options; see the trpc-go documentation for details.
// The connection pool needs to be declared as a global variable.
var pool = connpool.NewConnectionPool(connpool.WithMaxIdle(65535))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Set the custom connection pool.
	client.WithPool(pool),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Infof("req:%v, rsp:%v, err:%v", req, rsp, err)
```
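
The fields above can also be combined to bound client-side concurrency. A hedged sketch, assuming `WithMaxActive` and `WithWait` options mirroring the `MaxActive` and `Wait` fields in the Options struct above:

```go
// Sketch: allow at most 128 concurrent connections per address, and make
// callers wait for a free connection instead of failing immediately when the
// cap is reached. Both option names are assumptions mirroring the Options
// fields above, not confirmed API.
var boundedPool = connpool.NewConnectionPool(
	connpool.WithMaxActive(128),
	connpool.WithWait(true),
)

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	client.WithPool(boundedPool),
}
```

Bounding `MaxActive` protects downstream instances from connection storms; enabling `Wait` trades a little latency for that protection instead of surfacing errors under load.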

### Connection multiplexing

```go
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Enable connection multiplexing.
	client.WithMultiplexed(true),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Infof("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection multiplexing:

```go
import "trpc.group/trpc-go/trpc-go/pool/multiplexed"

/*
Connection multiplexing parameters:

type PoolOptions struct {
	connectNumber int  // number of connections established per address
	queueSize     int  // request queue length for each connection
	dropFull      bool // whether to discard requests when the queue is full
}
*/

// Connection multiplexing parameters can be set via options; see the trpc-go documentation for details.
// The multiplexed pool needs to be declared as a global variable.
var m = multiplexed.New(multiplexed.WithConnectNumber(16))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Enable connection multiplexing.
	client.WithMultiplexed(true),
	client.WithMultiplexedPool(m),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Infof("req:%v, rsp:%v, err:%v", req, rsp, err)
```
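
The queue-related fields of `PoolOptions` can presumably be tuned the same way. A hedged sketch, assuming `WithQueueSize` and `WithDropFull` options mirroring the `queueSize` and `dropFull` fields above, which makes overload fail fast instead of queueing:

```go
// Sketch: eight connections per address with bounded per-connection request
// queues; once a queue is full, new requests are dropped so overload shows up
// as fast failures rather than growing latency. WithQueueSize and WithDropFull
// are assumptions mirroring the PoolOptions fields above, not confirmed API.
var fastFailPool = multiplexed.New(
	multiplexed.WithConnectNumber(8),
	multiplexed.WithQueueSize(512),
	multiplexed.WithDropFull(true),
)
```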