- Feature Name: schema_gossip
- Status: completed
- Start Date: 2015-07-20
- RFC PR: [#1743](https://github.com/cockroachdb/cockroach/pull/1743)
- Cockroach Issue:

# Summary

This RFC suggests implementing eventually-consistent replication of the SQL schema to all Cockroach nodes in a cluster using gossip.

# Motivation

To support performant SQL queries, each gateway node must be able to address the data requested by the query. Today this requires the node to read `TableDescriptor`s from the KV map; we believe (though we haven't measured) that this causes a substantial performance hit, which we can mitigate by actively gossiping the necessary metadata.

# Detailed design

Writes to the entire `keys.SystemConfigSpan` will be bifurcated into gossip (with a TTL to be determined), in the same way that `storage.(*Replica).maybeGossipConfigs()` works today. The gossip system will provide atomicity in propagating these modifications through the usual sequence-number mechanism.
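
For concreteness, here is a minimal sketch of the write-side hook. The types (`SystemConfig`, the toy `gossip` and its `AddInfo`) are entirely hypothetical stand-ins for the real gossip API; the actual change would live alongside `storage.(*Replica).maybeGossipConfigs()`:

```go
package main

import (
	"fmt"
	"time"
)

// SystemConfig is a hypothetical snapshot of every key/value pair in the
// system config span, tagged with a sequence number so that each update
// propagates through gossip atomically, never as a partial state.
type SystemConfig struct {
	Seq    int64
	Values map[string]string
}

// gossip is a toy stand-in for the gossip network: a map of infos with
// expiration times. The real implementation floods infos to peers.
type gossip struct {
	infos   map[string]SystemConfig
	expires map[string]time.Time
}

// AddInfo publishes cfg under key with the given TTL, refusing to regress
// to an older sequence number.
func (g *gossip) AddInfo(key string, cfg SystemConfig, ttl time.Duration) {
	if old, ok := g.infos[key]; ok && old.Seq >= cfg.Seq {
		return // never overwrite a newer snapshot with an older one
	}
	g.infos[key] = cfg
	g.expires[key] = time.Now().Add(ttl)
}

func main() {
	g := &gossip{infos: map[string]SystemConfig{}, expires: map[string]time.Time{}}
	// After a write to the system config span commits, republish the whole
	// span as a single info, mirroring how maybeGossipConfigs republishes
	// configs today.
	g.AddInfo("system-config", SystemConfig{
		Seq:    42,
		Values: map[string]string{"/Table/51": "<TableDescriptor bytes>"},
	}, 2*time.Minute)
	fmt.Printf("gossiped system config at seq=%d\n", g.infos["system-config"].Seq)
}
```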

Complete propagation of new metadata will take at most `numHops * gossipInterval`, where `numHops` is the maximum number of hops between any node and the publishing node, and `gossipInterval` is the maximum interval between sequential writes to the gossip network on a given node.
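
For illustration only (neither value is fixed by this RFC): if the farthest node is 3 hops from the publisher and each node writes to the gossip network at least every 2 seconds, a schema change becomes visible everywhere within `3 * 2s = 6s`.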

On the read side, metadata reads will be served from gossip rather than from the KV store. This will require plumbing a closure or a reference to the `Gossip` instance down to the `sql.Planner` instance.
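
A sketch of what that plumbing might look like, with hypothetical names (`descriptorSource`, `planner`, `planSelect`) standing in for the real `sql.Planner` machinery; only the shape of the closure is the point:

```go
package main

import (
	"errors"
	"fmt"
)

// TableDescriptor is a pared-down, hypothetical schema descriptor.
type TableDescriptor struct {
	ID   int64
	Name string
}

// descriptorSource abstracts where schema metadata comes from: today a KV
// read, under this RFC a lookup in the latest gossiped snapshot.
type descriptorSource func(name string) (*TableDescriptor, error)

// planner stands in for sql.Planner with the lookup plumbed in as a closure.
type planner struct {
	lookupDescriptor descriptorSource
}

func (p *planner) planSelect(table string) error {
	desc, err := p.lookupDescriptor(table)
	if err != nil {
		return err
	}
	fmt.Printf("planning SELECT over %q (id %d)\n", desc.Name, desc.ID)
	return nil
}

func main() {
	// A gossip-backed source: resolve table names against the most recent
	// gossiped system config instead of issuing a KV read.
	gossiped := map[string]*TableDescriptor{"users": {ID: 51, Name: "users"}}
	p := &planner{lookupDescriptor: func(name string) (*TableDescriptor, error) {
		if desc, ok := gossiped[name]; ok {
			return desc, nil
		}
		return nil, errors.New("descriptor not (yet) gossiped")
	}}
	if err := p.planSelect("users"); err != nil {
		fmt.Println(err)
	}
}
```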

# Drawbacks

This is slightly more complicated than the current implementation. Because the schema is eventually consistent, how do we know when migrations are done? We'll have to rely on the TTL, which feels a little dirty.

# Alternatives

We could augment the current implementation with some of:

- inconsistent reads
- time-bounded local caching

This will be strictly less performant than the gossip approach but more memory-efficient, as nodes will only cache the schema information that they themselves need. Note that at the time of this writing every node will likely need all schema information due to the current uniform distribution of ranges to nodes.
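
A minimal sketch of the time-bounded caching alternative, assuming a hypothetical `descriptorCache` in front of a stubbed KV read; hits within the TTL avoid the round trip, misses fall through to the authoritative read:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedDescriptor pairs a descriptor with the deadline after which it must
// be re-read from the KV store.
type cachedDescriptor struct {
	value   string // stand-in for a TableDescriptor
	expires time.Time
}

// descriptorCache is a time-bounded local cache of schema metadata.
type descriptorCache struct {
	mu   sync.Mutex
	ttl  time.Duration
	data map[string]cachedDescriptor
	read func(name string) string // the KV read, stubbed here
}

func (c *descriptorCache) get(name string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.data[name]; ok && time.Now().Before(e.expires) {
		return e.value // fresh enough: no KV round trip
	}
	v := c.read(name)
	c.data[name] = cachedDescriptor{value: v, expires: time.Now().Add(c.ttl)}
	return v
}

func main() {
	c := &descriptorCache{
		ttl:  30 * time.Second,
		data: map[string]cachedDescriptor{},
		read: func(name string) string { return "<descriptor for " + name + ">" },
	}
	fmt.Println(c.get("users")) // miss: reads from KV
	fmt.Println(c.get("users")) // hit: served locally until the TTL lapses
}
```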

We could have a special metadata range which all nodes have a replica of. This would probably result in unacceptable read and write times and induce lots of network traffic.

We could implement this on top of non-voting replicas (which we don't yet support). That would give us eventual consistency without having to go outside of raft, but enforcement of schema freshness remains an open question.

# Unresolved questions

How do we measure the performance gain? What should we set the TTL to?