## Streaming

Streaming is a new protocol of the swarm bzz bundle of protocols.
This protocol provides the basic logic for chunk-based data flow.
It implements simple retrieve requests and delivery using a priority queue.
A data exchange stream is a directional flow of chunks between peers.
The source of data chunks is the upstream peer, the receiver is called the
downstream peer. Each streaming protocol defines an outgoing streamer
and an incoming streamer, the former installed on the upstream peer,
the latter on the downstream peer.

Subscribe on StreamerPeer launches an incoming streamer that sends
a subscribe msg upstream. The streamer on the upstream peer
handles the subscribe msg by installing the relevant outgoing streamer.
The modules now engage in a process of the upstream peer sending a sequence of hashes of
chunks downstream (OfferedHashesMsg). The downstream peer evaluates which hashes are needed
and gets them delivered by sending back a msg (WantedHashesMsg).
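
The exchange can be pictured with the message shapes involved. The following is a
minimal sketch only; the field names and types are approximations for illustration and
may not match the actual wire messages of the stream package.

```go
// Sketch of the message sequence between downstream and upstream peers.
package stream

// SubscribeMsg is sent by the downstream peer to start a stream.
type SubscribeMsg struct {
	Stream   string // stream name, e.g. a syncing stream keyed by bin
	History  *Range // optional interval of historical items to sync
	Priority uint8  // delivery priority for this stream
}

// Range is a [From, To] interval of stream item indexes.
type Range struct {
	From, To uint64
}

// OfferedHashesMsg is the upstream peer's batch of chunk hashes.
type OfferedHashesMsg struct {
	Stream   string
	From, To uint64 // indexes of the first and last offered item
	Hashes   []byte // concatenated chunk hashes
}

// WantedHashesMsg is the downstream reply: one bit per offered hash,
// set if the chunk should be delivered.
type WantedHashesMsg struct {
	Stream   string
	From, To uint64
	Want     []byte // bitvector over the offered batch
}
```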

Historical syncing is supported (currently not the right abstraction):
state is kept across sessions by saving a series of intervals after their last
batch actually arrived.

Live streaming is also supported, by starting the session from the first item
after the subscription.

Provable data exchange: in case a stream represents a swarm document's data layer
or higher level chunks, streaming up to a certain index is always provable. It saves on
sending intermediate chunks.

Using the streamer logic, various stream types are easy to implement (see the
interface sketch after the list):

* light node requests:
  * url lookup with offset
  * document download
  * document upload
* syncing
  * live session syncing
  * historical syncing
* simple retrieve requests and deliveries
* swarm feeds streams
* receipting for finger pointing
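
As a rough illustration of what implementing a stream type entails, the sketch below
shows simplified server/client plug-in points. The interface and method names
(Server, Client, SetNextBatch, GetData, NeedData) are approximations of the stream
package's extension points, not its exact API.

```go
// Sketch of the plug-in points a new stream type needs to implement.
package stream

import "context"

// Server is implemented by the upstream (outgoing) side of a stream:
// it hands out consecutive batches of hashes and serves chunk data.
type Server interface {
	// SetNextBatch returns the hashes of the next batch in the interval
	// [from, to], blocking when there is nothing to offer yet.
	SetNextBatch(from, to uint64) (hashes []byte, f, t uint64, err error)
	// GetData returns the chunk data for an offered hash.
	GetData(ctx context.Context, hash []byte) ([]byte, error)
	Close()
}

// Client is implemented by the downstream (incoming) side of a stream:
// it decides which offered hashes are needed and processes deliveries.
type Client interface {
	// NeedData reports whether the chunk behind hash is wanted and returns
	// a wait function that blocks until the chunk has been delivered.
	NeedData(ctx context.Context, hash []byte) (wait func(context.Context) error)
	Close()
}
```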

## Syncing

Syncing is the process that makes sure storer nodes end up storing all and only the chunks that are requested from them.

### Requirements

- eventual consistency: every historical chunk should be syncable
- since the same chunk can and will arrive from many peers, network traffic should be
optimised: only one transfer of data per chunk
- explicit request deliveries should be prioritised higher than recent chunks received
during the ongoing session, which in turn should be higher than historical chunks
- insured chunks should get receipted for finger pointing litigation; the receipts storage
should be organised efficiently, and the upstream peer should also be able to find these
receipts for a deleted chunk easily to refute a challenge
- syncing should be resilient to cut connections; metadata should be persisted that
keeps track of syncing state across sessions, and historical syncing state should survive restarts
- extra data structures to support syncing should be kept to a minimum
- syncing is not organised separately for chunk types (Swarm feed updates vs regular content chunks)
- various types of streams should have common logic abstracted

Syncing is now entirely mediated by the localstore, i.e., no processes or memory leaks due to network contention.
When a new chunk is stored, its chunk hash is indexed by proximity bin.

Peers synchronise by getting the chunks closer to the downstream peer than to the upstream one.
Consequently peers just sync all stored items for the kademlia bin the receiving peer falls into.
The special case of nearest neighbour sets is handled by the downstream peer
indicating they want to sync all kademlia bins with proximity equal to or higher
than their depth.
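
A minimal sketch of the bin selection logic described above, assuming a proximity
order helper; the function names (po, binsToSync) and parameters (depth, maxPO) are
illustrative only.

```go
// Sketch of which bins a downstream peer asks an upstream peer to sync.
package main

import (
	"fmt"
	"math/bits"
)

// po returns the proximity order of two 32-byte addresses: the number of
// leading bits their XOR has in common.
func po(a, b []byte) int {
	for i := range a {
		if x := a[i] ^ b[i]; x != 0 {
			return i*8 + bits.LeadingZeros8(x)
		}
	}
	return len(a) * 8
}

// binsToSync picks the proximity bins to subscribe to. Outside the
// nearest-neighbour set this is just the single bin the peers fall into
// with respect to each other; within it (po >= depth), all bins from the
// peer's depth upward are requested.
func binsToSync(pivot, peer []byte, depth, maxPO int) []int {
	p := po(pivot, peer)
	if p < depth {
		return []int{p}
	}
	bins := []int{}
	for bin := depth; bin <= maxPO; bin++ {
		bins = append(bins, bin)
	}
	return bins
}

func main() {
	pivot := make([]byte, 32)
	peer := make([]byte, 32)
	peer[0] = 0x80 // differs in the first bit -> proximity order 0
	fmt.Println(binsToSync(pivot, peer, 8, 16)) // [0]
}
```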

This sync state represents the initial state of a sync connection session.
Retrieval is dictated by downstream peers simply using a special streamer protocol.

Syncing chunks created during the session by the upstream peer is called live session syncing,
while syncing of earlier chunks is historical syncing.

Once the relevant batch of hashes is retrieved, the downstream peer looks up all hash segments in its localstore
and sends the upstream peer a message with a bitvector indicating which chunks are missing,
i.e., which of the offered hashes are new items. In turn, the upstream peer sends the relevant
chunk data alongside their index.
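
The bitvector construction might look like the following sketch; inLocalstore is a
hypothetical stand-in for the actual localstore lookup.

```go
// Sketch: turn an offered batch of hashes into a bitvector of missing chunks.
package main

import "fmt"

const hashSize = 32

// missingBitvector sets bit i when the i-th offered hash is not found
// locally, i.e. when the chunk should be delivered by the upstream peer.
func missingBitvector(hashes []byte, inLocalstore func([]byte) bool) []byte {
	n := len(hashes) / hashSize
	want := make([]byte, (n+7)/8)
	for i := 0; i < n; i++ {
		h := hashes[i*hashSize : (i+1)*hashSize]
		if !inLocalstore(h) {
			want[i/8] |= 1 << uint(i%8)
		}
	}
	return want
}

func main() {
	batch := make([]byte, 4*hashSize) // four offered hashes
	// pretend chunks 0 and 2 are already stored locally
	have := map[int]bool{0: true, 2: true}
	i := -1
	bv := missingBitvector(batch, func(_ []byte) bool { i++; return have[i] })
	fmt.Printf("%08b\n", bv) // bits 1 and 3 set: those chunks are wanted
}
```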

On sending chunks there is a priority queue system. If, while looking up hashes in its localstore,
the downstream peer hits an open request, then a retrieve request is sent immediately to the upstream peer, indicating
that no extra round of checks is needed. If another peer's syncer hits the same open request, it is slightly unsafe not to ask
that peer too: if the first one disconnects before delivering or fails to deliver and therefore gets
disconnected, we should still be able to continue with the other. The minimal redundant traffic coming from such simultaneous
eventualities should be sufficiently rare not to warrant more complex treatment.
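
A toy sketch of such a priority queue, with explicit retrieve requests served before
live-session chunks, which in turn are served before historical chunks; the level names
and structure are illustrative, not the actual priority queue package.

```go
// Sketch of a multi-level priority queue for outgoing deliveries.
package main

import "fmt"

const (
	Low  = iota // historical syncing
	Mid         // live session syncing
	High        // explicit retrieve requests
	priorities
)

type priorityQueue struct {
	queues [priorities][]string // one FIFO per priority level
}

func (pq *priorityQueue) push(priority int, item string) {
	pq.queues[priority] = append(pq.queues[priority], item)
}

// pop returns the oldest item of the highest non-empty priority level.
func (pq *priorityQueue) pop() (string, bool) {
	for p := priorities - 1; p >= 0; p-- {
		if q := pq.queues[p]; len(q) > 0 {
			pq.queues[p] = q[1:]
			return q[0], true
		}
	}
	return "", false
}

func main() {
	var pq priorityQueue
	pq.push(Low, "historical chunk")
	pq.push(High, "requested chunk")
	pq.push(Mid, "session chunk")
	for item, ok := pq.pop(); ok; item, ok = pq.pop() {
		fmt.Println(item) // requested, then session, then historical
	}
}
```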

Session syncing involves the downstream peer requesting a new state on a bin from the upstream peer.
Using the new state, the range (of chunks) between the previous state and the new one is retrieved
and chunks are requested identically to the historical case. After receiving all the missing chunks
from the new hashes, the downstream peer will request a new range. If this happens before the upstream peer updates a new state,
we say that session syncing is live, or that the two peers are in sync. In general, the time interval passed since the downstream peer's request up to the current session cursor is a good indication of a permanent (probably increasing) lag.

If there is no historical backlog, and the downstream peer has an acceptable 'last synced' tag, then it is said to be fully synced with the upstream peer.
If a peer is fully synced with all its storer peers, it can advertise itself as globally fully synced.

The downstream peer persists the record of the last synced offset. When the two peers disconnect and
reconnect, syncing can start from there.
This situation however can also happen while historical syncing is not yet complete.
Effectively this means that the peer needs to persist a record of an arbitrary array of offset ranges covered.
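
A sketch of what such a persisted record could look like: a compact set of covered
offset ranges that gets merged as batches complete. The types here are hypothetical;
the actual intervals store may differ.

```go
// Sketch of persisted syncing state as a set of covered offset ranges.
package main

import "fmt"

type interval struct{ from, to uint64 }

type intervals []interval // kept non-overlapping

// add records that [from, to] has been synced, merging adjacent or
// overlapping ranges so the persisted state stays compact.
func (iv intervals) add(from, to uint64) intervals {
	merged := intervals{}
	for _, i := range iv {
		if i.to+1 < from || to+1 < i.from {
			merged = append(merged, i) // disjoint, keep as is
			continue
		}
		if i.from < from {
			from = i.from
		}
		if i.to > to {
			to = i.to
		}
	}
	return append(merged, interval{from, to})
}

func main() {
	var iv intervals
	iv = iv.add(0, 99)    // first historical batch
	iv = iv.add(200, 299) // session batch after a reconnect
	iv = iv.add(100, 199) // gap filled later
	fmt.Println(iv)       // [{0 299}]
}
```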

### Delivery requests

Once the appropriate ranges of the hash stream are retrieved and buffered, the downstream peer just scans the hashes, looks them up in the localstore and, if not found, creates a request entry.
The range is referenced by the chunk index. Alongside the name (indicating the stream, e.g., content chunks for bin 6) and the range,
the downstream peer sends a 128-long bitvector indicating which chunks are needed.
Newly created requests are bound together in a waitgroup which, when done (all requests satisfied), will prompt sending the next one.
To be able to do checking and storage concurrently, we keep a buffer of one, so we start with two batches of hashes.
If there is nothing to give, the upstream peer's SetNextBatch is blocking. Subscription ends with an unsubscribe, which removes the syncer from the map.
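
The batch pipeline described above could be sketched as follows, with a buffered
channel standing in for the one-batch buffer and a waitgroup prompting the next range;
have and fetch are hypothetical stand-ins for the localstore check and the delivery
request.

```go
// Sketch: check one batch while the next is already buffered, so checking
// and storing proceed concurrently.
package main

import (
	"fmt"
	"sync"
)

type batch []string // a batch of chunk hashes (hex strings for brevity)

// processBatches consumes batches from a channel with a buffer of one.
// For every hash not found locally a delivery is requested; a WaitGroup
// tracks the outstanding deliveries, and only when they are all done is
// the next batch requested.
func processBatches(batches <-chan batch, have func(string) bool, fetch func(string)) {
	for b := range batches {
		var wg sync.WaitGroup
		for _, h := range b {
			if have(h) {
				continue
			}
			wg.Add(1)
			go func(h string) {
				defer wg.Done()
				fetch(h) // request and store the chunk
			}(h)
		}
		wg.Wait() // batch done: prompt the next range
		fmt.Println("batch complete, requesting next range")
	}
}

func main() {
	batches := make(chan batch, 1) // buffer of one batch beyond the one in flight
	done := make(chan struct{})
	go func() {
		processBatches(batches, func(h string) bool { return h == "bb" }, func(string) {})
		close(done)
	}()
	batches <- batch{"aa", "bb", "cc"}
	batches <- batch{"dd"}
	close(batches)
	<-done
}
```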

Cancelling requests (for instance the late chunks of an erasure batch) should be signalled by closing a channel
on the request.

A simple request is also a subscription.
Different streaming protocols are different p2p protocols with the same message types.
The constructor is the Run function itself, which takes a StreamerPeer as argument.

### Provable streams

The swarm hash over the hash stream has many advantages. It implements a provable data transfer
and provides efficient storage for receipts in the form of inclusion proofs usable for finger pointing litigation.
When challenged on a missing chunk, the upstream peer will provide an inclusion proof of a chunk hash against the state of the
sync stream. In order to be able to generate such an inclusion proof, the upstream peer needs to store the hash index (counting consecutive hash-size segments) alongside the chunk data and preserve it even when the chunk data is deleted, until the chunk is no longer insured.
If there is no valid insurance on the files, the entry may be deleted.
As long as the chunk is preserved, no takeover proof will be needed since the node can respond to any challenge.
However, once the node needs to delete an insured chunk for capacity reasons, a receipt should be available to
refute the challenge by finger pointing to a downstream peer.
As part of the deletion protocol then, hashes of insured chunks to be removed are pushed to an infinite stream for every bin.
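
The bookkeeping this implies might be sketched as below: the chunk's index in the sync
stream is kept alongside (and outlives) the chunk data while the chunk is insured. The
record and store types are purely illustrative.

```go
// Sketch: keep the stream index of an insured chunk even after its data is
// deleted, so an inclusion proof against the signed stream state can still
// be produced or a challenge refuted.
package main

import "fmt"

type chunkRecord struct {
	data        []byte // nil once the chunk data has been deleted
	streamIndex uint64 // position of the hash in the sync stream
	insured     bool   // record must be kept while insurance is valid
}

type store map[string]*chunkRecord

// deleteData drops the chunk payload but keeps the index entry as long as
// the chunk is insured.
func (s store) deleteData(hash string) {
	rec, ok := s[hash]
	if !ok {
		return
	}
	rec.data = nil
	if !rec.insured {
		delete(s, hash) // no valid insurance: the whole entry may go
	}
}

func main() {
	s := store{"cafe": {data: []byte("chunk"), streamIndex: 42, insured: true}}
	s.deleteData("cafe")
	fmt.Println(s["cafe"].streamIndex) // 42: still provable by index
}
```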

The downstream peer, on the other hand, needs to make sure that they can only be finger pointed about a chunk they did receive and store.
For this the check of a state should be exhaustive. If historical syncing finishes on one state, all hashes before are covered with no
surprises. In other words, historical syncing is self-verifying. With session syncing, however, it is not enough to check going back, covering the range from the old offset to the new one. Continuity (i.e., that the new state is an extension of the old) needs to be verified: after the downstream peer reads the range into a buffer, it appends the buffer to the last known state at the last known offset and verifies that the resulting hash matches
the latest state. Past intervals of historical syncing are checked via the session root.
The upstream peer signs the states, which downstream peers can use as handover proofs.
Downstream peers sign off on a state together with an initial offset.
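
A simplified sketch of the continuity check: the real scheme uses the swarm hash over
the hash stream, for which a plain hash chain stands in here; extend and
verifyContinuity are hypothetical helpers.

```go
// Sketch: verify that the newly advertised state is an extension of the
// last verified state by the range just read.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// extend folds a range of chunk hashes into a running state.
func extend(state []byte, rng [][]byte) []byte {
	for _, h := range rng {
		sum := sha256.Sum256(append(state, h...))
		state = sum[:]
	}
	return state
}

// verifyContinuity checks that newState is reproduced by extending
// oldState with the given range.
func verifyContinuity(oldState, newState []byte, rng [][]byte) bool {
	return bytes.Equal(extend(oldState, rng), newState)
}

func main() {
	h1, h2 := []byte("hash-1"), []byte("hash-2")
	old := extend(nil, [][]byte{h1})
	latest := extend(old, [][]byte{h2}) // what the upstream peer advertises
	fmt.Println(verifyContinuity(old, latest, [][]byte{h2})) // true
}
```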

Once historical syncing is complete and the session does not lag, the downstream peer only preserves the latest upstream state and stores the signed version.

The upstream peer needs to keep the latest takeover states: each deleted chunk's hash should be covered by a takeover proof of at least one peer. If historical syncing is complete, the upstream peer typically will store only the latest takeover proof from the downstream peer.
Crucially, the structure is totally independent of the number of peers in the bin, so it scales extremely well.

## Implementation

The simplest protocol just involves the upstream peer prefixing the key with the kademlia proximity order (say 0-15 or 0-31)
and simply iterating on an index per bin when syncing with a peer.
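
A sketch of such a bin-prefixed index key, assuming a one-byte proximity order prefix
followed by a per-bin counter; the key layout is illustrative, not the actual
localstore schema.

```go
// Sketch: bin-prefixed index keys, so syncing a bin is an iteration over a
// contiguous key range.
package main

import (
	"encoding/binary"
	"fmt"
)

// binIndexKey builds the key under which a chunk hash is recorded when it
// is stored: one byte of proximity order (0-31 fits in a byte) and an
// 8-byte big-endian per-bin index that increases monotonically.
func binIndexKey(po uint8, index uint64) []byte {
	key := make([]byte, 9)
	key[0] = po
	binary.BigEndian.PutUint64(key[1:], index)
	return key
}

func main() {
	// Iterating keys between these two bounds yields, in order, every
	// chunk recorded for bin 6 from index 100 onwards.
	start := binIndexKey(6, 100)
	end := binIndexKey(7, 0)
	fmt.Printf("% x\n% x\n", start, end)
}
```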

Priority queues are used for sending chunks so that user-triggered requests are responded to first, session syncing second, and historical syncing with lower priority.
The request on chunks remains implemented as a dataless entry in the memory store.
The lifecycle of this object should be more carefully thought through, i.e., when it fails to retrieve, it should be removed.