
English | [中文](connection_mode.zh_CN.md)

# tRPC-Go client connection mode

## Introduction

The tRPC-Go client supports several connection modes for the initiator of a request: short connections, connection pooling, and connection multiplexing. The connection pool mode is used by default; users can choose a different mode according to their needs.

<font color="red">Note: the connection pool here refers to the connection pool implemented in tRPC-Go's transport layer. The database and HTTP plugins replace the transport with open-source libraries via the plugin mechanism and do not use this connection pool.</font>

## Principle and Implementation

### Short connection

The client creates a new connection for each request and destroys the connection once the request completes. Under a large volume of requests, the cost of constantly establishing and tearing down connections significantly reduces service throughput and causes notable performance loss.

Use cases: suitable for one-off requests, or for calling a legacy service that does not support receiving multiple requests on one connection.

### Connection pool

The client maintains a connection pool for each downstream IP. Each request first obtains an IP from the name service, then obtains the connection pool for that IP and retrieves a connection from it. After the request completes, the connection is returned to the pool. While a request is in flight, the connection is held exclusively and cannot be reused. Connections in the pool are destroyed and recreated according to a certain strategy. Because each invocation binds a connection, a large number of network connections may be created when both upstream and downstream are deployed at scale, which creates enormous scheduling pressure and computational overhead.

Use cases: this mode can be used in almost all scenarios.

Note: since the connection pool queue uses a Last In First Out (LIFO) strategy, if the backend is addressed through a VIP, the number of connections may be distributed unevenly across instances. In this case, address via the name service as much as possible, or switch the pool to FIFO as shown in the sketch below.
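
A minimal sketch of the FIFO alternative, assuming the `connpool` `With*` setters mirror the `Options` fields listed in the custom connection pool example later in this document (`WithPushIdleConnToTail` is the assumed name):

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

// A sketch: pushing idle connections back to the tail of the queue (FIFO)
// rotates requests through all pooled connections instead of always reusing
// the most recently returned one, which evens out per-instance connection use.
var fifoPool = connpool.NewConnectionPool(connpool.WithPushIdleConnToTail(true))
```

The pool is then passed to the client via `client.WithPool(fifoPool)`, as in the custom connection pool example below.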

### Connection multiplexing

The client sends multiple requests concurrently on the same connection, distinguishing each request by a serial-number ID. The client establishes one long-lived connection to each downstream service node, and by default all requests are sent to the server over this connection. The server must support this reuse mode. Connection multiplexing greatly reduces the number of connections between services, but because of TCP head-of-line blocking, very high concurrency on a single connection may add some latency (on the order of milliseconds). This can be alleviated to some extent by increasing the number of multiplexed connections (by default, two connections are established per IP).

Use cases: suitable for scenarios with extreme requirements on stability and throughput. The server must support asynchronous concurrent processing on a single connection and be able to distinguish requests by serial-number ID, which requires certain server capabilities and protocol fields.

Warning:

- Because connection multiplexing establishes only one connection to each backend node, if the backend uses VIP addressing (the client sees only a single instance), connection multiplexing cannot be used; the connection pool mode must be used instead.
- The callee server (note: not your current service, but the service you call) must support connection multiplexing, i.e., processing each request on a connection asynchronously, receiving and sending multiple requests concurrently; otherwise, a large number of requests will time out on the client side.

## Example

### Short connection

```go
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// If the default connection pool is disabled, the short connection mode will be used.
	client.WithDisableConnectionPool(),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

### Connection pool

```go
// The connection pool mode is used by default; no configuration is required.
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection pool:

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

/*
Connection pool parameters:
type Options struct {
	MinIdle            int           // minimum number of idle connections, replenished periodically in the background; 0 means no replenishment
	MaxIdle            int           // maximum number of idle connections; 0 means no limit, the framework default is 65535
	MaxActive          int           // maximum number of connections concurrently available to users; 0 means no limit
	Wait               bool          // whether to wait when MaxActive connections are already in use; default false, do not wait
	IdleTimeout        time.Duration // idle connection timeout; 0 means no limit, the framework default is 50s
	MaxConnLifetime    time.Duration // maximum lifetime of a connection; 0 means no limit
	DialTimeout        time.Duration // timeout for establishing a connection; the framework default is 200ms
	ForceClose         bool          // whether to close the connection after use instead of returning it to the pool; default false
	PushIdleConnToTail bool          // how connections are returned to the pool; default false, idle connections are fetched LIFO
}
*/

// Connection pool parameters can be set through options; refer to the trpc-go documentation for details.
// The connection pool needs to be defined as a global variable.
var pool = connpool.NewConnectionPool(connpool.WithMaxIdle(65535))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Set the custom connection pool.
	client.WithPool(pool),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```
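
The pool can also cap concurrency. A minimal sketch, again assuming the `With*` setters mirror the `Options` fields above (`WithMaxActive`, `WithWait`, and `WithDialTimeout` are the assumed names):

```go
import (
	"time"

	"trpc.group/trpc-go/trpc-go/pool/connpool"
)

// A sketch: this pool allows at most 128 connections per downstream address
// and blocks callers when all of them are in use, instead of opening new
// connections without bound.
var boundedPool = connpool.NewConnectionPool(
	connpool.WithMaxActive(128),           // cap concurrent connections per address
	connpool.WithWait(true),               // block instead of failing when the cap is reached
	connpool.WithDialTimeout(time.Second), // allow more time than the 200ms default to dial
)
```

As before, the pool is passed to the client via `client.WithPool(boundedPool)`.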

#### Setting Idle Connection Timeout

For the client's connection pool mode, the framework sets a default idle timeout of 50 seconds.

* For `go-net`, the connection pool maintains a list of idle connections. The idle timeout applies only to connections in this idle list, and it is checked only when a connection is next retrieved: connections that have been idle for longer than the timeout are closed at that point.
* For `tnet`, the idle timeout is implemented with a timer on each connection. Even if a connection is currently serving a client call, the timer fires and forcibly closes the connection if the downstream does not return a result within the idle timeout.

The methods to change the idle timeout are as follows:

* `go-net`

```go
import "trpc.group/trpc-go/trpc-go/pool/connpool"

func init() {
	connpool.DefaultConnectionPool = connpool.NewConnectionPool(
		connpool.WithIdleTimeout(0), // Setting to 0 disables it.
	)
}
```

* `tnet`

```go
import (
	"trpc.group/trpc-go/trpc-go/pool/connpool"
	tnettrans "trpc.group/trpc-go/trpc-go/transport/tnet"
)

func init() {
	tnettrans.DefaultConnPool = connpool.NewConnectionPool(
		connpool.WithDialFunc(tnettrans.Dial),
		connpool.WithIdleTimeout(0), // Setting to 0 disables it.
		connpool.WithHealthChecker(tnettrans.HealthChecker),
	)
}
```

**Note**: The server also has a default idle timeout, of 60 seconds. It is deliberately longer than the client's 50 seconds so that, under default settings, it is the client that closes an idle connection on timeout rather than the server forcibly cleaning it up. For how to change the server's idle timeout, see the server usage documentation.
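
For reference, a minimal sketch of adjusting the server side, assuming the `server` package's `WithIdleTimeout` option (see the server usage documentation for the authoritative method):

```go
import (
	"time"

	trpc "trpc.group/trpc-go/trpc-go"
	"trpc.group/trpc-go/trpc-go/server"
)

func main() {
	// Assumed option: extend the server idle timeout beyond its 60s default,
	// keeping it longer than the client's 50s so the client still closes first.
	s := trpc.NewServer(server.WithIdleTimeout(90 * time.Second))
	// ... register services on s and call s.Serve()
	_ = s
}
```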

### Connection multiplexing

```go
opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Enable connection multiplexing.
	client.WithMultiplexed(true),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```

Custom connection multiplexing:

```go
import "trpc.group/trpc-go/trpc-go/pool/multiplexed"

/*
Connection multiplexing parameters:
type PoolOptions struct {
	connectNumber int  // number of connections per address
	queueSize     int  // request queue length for each connection
	dropFull      bool // whether to discard requests when the queue is full
}
*/

// Connection multiplexing parameters can be set through options; refer to the trpc-go documentation for details.
// The multiplexed pool needs to be defined as a global variable.
var m = multiplexed.New(multiplexed.WithConnectNumber(16))

opts := []client.Option{
	client.WithNamespace("Development"),
	client.WithServiceName("trpc.app.server.service"),
	// Enable connection multiplexing.
	client.WithMultiplexed(true),
	client.WithMultiplexedPool(m),
}

clientProxy := pb.NewGreeterClientProxy(opts...)
req := &pb.HelloRequest{
	Msg: "hello",
}

rsp, err := clientProxy.SayHello(ctx, req)
if err != nil {
	log.Error(err.Error())
	return
}

log.Info("req:%v, rsp:%v, err:%v", req, rsp, err)
```