github.com/ipld/go-ipld-prime@v0.21.0/linking/types.go

     1  package linking
     2  
     3  import (
     4  	"context"
     5  	"hash"
     6  	"io"
     7  
     8  	"github.com/ipld/go-ipld-prime/codec"
     9  	"github.com/ipld/go-ipld-prime/datamodel"
    10  )
    11  
    12  // LinkSystem is a struct that composes all the individual functions
    13  // needed to load and store content addressed data using IPLD --
    14  // encoding functions, hashing functions, and storage connections --
    15  // and then offers the operations a user wants -- Store and Load -- as methods.
    16  //
    17  // Typically, the functions which are fields of LinkSystem are not used
    18  // directly by users (except to set them, when creating the LinkSystem),
    19  // and it's the higher level operations such as Store and Load that user code then calls.
    20  //
    21  // The most typical way to get a LinkSystem is from the linking/cid package,
    22  // which has a factory function called DefaultLinkSystem.
    23  // The LinkSystem returned by that function will be based on CIDs,
    24  // and use the multicodec registry and multihash registry to select encodings and hashing mechanisms.
    25  // The BlockWriteOpener and BlockReadOpener must still be provided by the user;
    26  // otherwise, only the ComputeLink method will work.
    27  //
    28  // Some implementations of BlockWriteOpener and BlockReadOpener may be
    29  // found in the storage package.  Applications are also free to write their own.
    30  // Custom wrapping of BlockWriteOpener and BlockReadOpener is also common,
    31  // and may be reasonable if one wants to build application features that are block-aware.
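        //
        // As a rough usage sketch (written from the perspective of code importing this
        // package, and assuming the cidlink, memstore, and basicnode packages elsewhere
        // in this module, the go-cid package, and an imported dag-cbor codec -- none of
        // which are part of this file), a LinkSystem might be wired up and used like so:
        //
        //	lsys := cidlink.DefaultLinkSystem()
        //	store := &memstore.Store{}
        //	lsys.SetWriteStorage(store)
        //	lsys.SetReadStorage(store)
        //
        //	lp := cidlink.LinkPrototype{Prefix: cid.Prefix{
        //		Version:  1,
        //		Codec:    0x71, // dag-cbor multicodec code
        //		MhType:   0x12, // sha2-256 multihash code
        //		MhLength: 32,   // sha2-256 digest length, in bytes
        //	}}
        //	lnk, err := lsys.Store(linking.LinkContext{}, lp, someNode) // encode, hash, write, and commit someNode (any datamodel.Node)
        //	n, err := lsys.Load(linking.LinkContext{}, lnk, basicnode.Prototype.Any) // read, verify the hash, and decode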
    32  type LinkSystem struct {
    33  	EncoderChooser     func(datamodel.LinkPrototype) (codec.Encoder, error)
    34  	DecoderChooser     func(datamodel.Link) (codec.Decoder, error)
    35  	HasherChooser      func(datamodel.LinkPrototype) (hash.Hash, error)
    36  	StorageWriteOpener BlockWriteOpener
    37  	StorageReadOpener  BlockReadOpener
    38  	TrustedStorage     bool
    39  	NodeReifier        NodeReifier
    40  	KnownReifiers      map[string]NodeReifier
    41  }
    42  
    43  // The following three types are the key functionality we need from a "blockstore".
    44  //
    45  // Some libraries might provide a "blockstore" object that has these as methods;
    46  // it may also have more methods (like enumeration features, GC features, etc),
    47  // but IPLD doesn't generally concern itself with those.
    48  // We just need these key things, so we can "put" and "get".
    49  //
    50  // The functions are a tad more complicated than "put" and "get" so that they have good mechanical sympathy.
    51  // In particular, the writing/"put" side is broken into two phases, so that the abstraction
    52  // makes it easy to begin to write data before the hash that will identify it is fully computed.
    53  type (
    54  	// BlockReadOpener defines the shape of a function used to
    55  	// open a reader for a block of data.
    56  	//
    57  	// In a content-addressed system, the Link parameter should be the only
    58  	// determiner of what block body is returned.
    59  	//
    60  	// The LinkContext may be zero, or may be used to carry extra information:
    61  	// it may be used to carry info which hints at different storage pools;
    62  	// it may be used to carry authentication data; etc.
    63  	// (Any such behaviors are something that a BlockReadOpener implementation
    64  	// will need to document at a higher level of detail than this interface specifies.
    65  	// In this interface, we can only note that it is possible to pass such information opaquely
    66  	// via the LinkContext or by attachments to the general-purpose Context it contains.)
    67  	// The LinkContext should not have an effect on the block body returned, however;
    68  	// at most it should affect data availability
    69  	// (e.g. whether any block body is returned, versus an error).
    70  	//
    71  	// Reads are cancellable by cancelling the LinkContext.Ctx.
    72  	//
    73  	// Other parts of the IPLD library suite (such as the traversal package, and all its functions)
    74  	// will typically take a Context as a parameter or piece of config from the caller,
    75  	// and will pass that down through the LinkContext, meaning this can be used to
    76  	// carry information as well as cancellation control all the way through the system.
    77  	//
    78  	// BlockReadOpener is typically not used directly, but is instead
    79  	// composed in a LinkSystem and used via the methods of LinkSystem.
    80  	// LinkSystem methods will helpfully handle the entire process of opening block readers,
    81  	// verifying the hash of the data stream, and applying a Decoder to build Nodes -- all as one step.
    82  	//
    83  	// BlockReadOpener implementations are not required to validate that
    84  	// the contents which will be streamed out of the reader actually match
    85  	// the hash in the Link parameter before returning.
    86  	// (This is something that the LinkSystem composition will handle if you're using it.)
    87  	//
    88  	// BlockReadOpener can also be created out of storage.ReadableStorage and attached to a LinkSystem
    89  	// via the LinkSystem.SetReadStorage method.
    90  	//
    91  	// Users of a BlockReadOpener function should also check whether the io.Reader
    92  	// also implements the io.Closer interface, and call Close as appropriate if it does.
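        	//
        	// As a purely illustrative sketch (not drawn from any particular storage
        	// implementation), a minimal BlockReadOpener over a hypothetical in-memory map
        	// keyed by the binary form of the link might look like:
        	//
        	//	var blocks = map[string][]byte{}
        	//
        	//	var readOpener linking.BlockReadOpener = func(_ linking.LinkContext, lnk datamodel.Link) (io.Reader, error) {
        	//		data, ok := blocks[lnk.Binary()]
        	//		if !ok {
        	//			return nil, fmt.Errorf("block not found: %s", lnk)
        	//		}
        	//		return bytes.NewReader(data), nil
        	//	}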
    93  	BlockReadOpener func(LinkContext, datamodel.Link) (io.Reader, error)
    94  
    95  	// BlockWriteOpener defines the shape of a function used to open a writer
    96  	// into which data can be streamed, and which will eventually be "committed".
    97  	// Committing is done using the BlockWriteCommitter returned by the BlockWriteOpener,
    98  	// and finishes the write; committing also requires stating the Link which should identify this data for future reading.
    99  	//
   100  	// The LinkContext may be zero, or may be used to carry extra information:
   101  	// it may be used to carry info which hints at different storage pools;
   102  	// it may be used to carry authentication data; etc.
   103  	//
   104  	// Writes are cancellable by cancelling the LinkContext.Ctx.
   105  	//
   106  	// Other parts of the IPLD library suite (such as the traversal package, and all its functions)
   107  	// will typically take a Context as a parameter or piece of config from the caller,
   108  	// and will pass that down through the LinkContext, meaning this can be used to
   109  	// carry information as well as cancellation control all the way through the system.
   110  	//
   111  	// BlockWriteOpener is typically not used directly, but is instead
   112  	// composed in a LinkSystem and used via the methods of LinkSystem.
   113  	// LinkSystem methods will helpfully handle the entire process of traversing a Node tree,
   114  	// encoding this data, hashing it, streaming it to the writer, and committing it -- all as one step.
   115  	//
   116  	// BlockWriteOpener implementations are expected to start writing their content immediately,
   117  	// and the returned BlockWriteCommitter can in turn expect that
   118  	// the Link it is eventually given is a reasonable hash of the content.
   119  	// (To give an example of how this might be efficiently implemented:
   120  	// One might imagine that if implementing a disk storage mechanism,
   121  	// the io.Writer returned from a BlockWriteOpener will be writing a new tempfile,
   122  	// and when the BlockWriteCommitter is called, it will flush the writes
   123  	// and then use a rename operation to place the tempfile in a permanent path based on the Link.)
   124  	//
   125  	// BlockWriteOpener can also be created out of storage.WritableStorage and attached to a LinkSystem
   126  	// via the LinkSystem.SetWriteStorage method.
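        	//
        	// As a purely illustrative sketch of the two-phase write (pairing with the
        	// hypothetical in-memory "blocks" map from the BlockReadOpener sketch above):
        	//
        	//	var writeOpener linking.BlockWriteOpener = func(_ linking.LinkContext) (io.Writer, linking.BlockWriteCommitter, error) {
        	//		var buf bytes.Buffer // a disk-backed implementation might open a tempfile here instead
        	//		committer := func(lnk datamodel.Link) error {
        	//			blocks[lnk.Binary()] = buf.Bytes() // only at commit time is the Link known
        	//			return nil
        	//		}
        	//		return &buf, committer, nil
        	//	}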
   127  	BlockWriteOpener func(LinkContext) (io.Writer, BlockWriteCommitter, error)
   128  
   129  	// BlockWriteCommitter defines the shape of a function which, together
   130  	// with BlockWriteOpener, handles the writing and "committing" of a write
   131  	// to a content-addressable storage system.
   132  	//
   133  	// BlockWriteCommitter is a function which will be called at the end of a write process.
   134  	// It should flush any buffers and close the io.Writer which was
   135  	// made available earlier from the BlockWriteOpener call that also returned this BlockWriteCommitter.
   136  	//
   137  	// BlockWriteCommitter takes a Link parameter.
   138  	// This Link is expected to be a reasonable hash of the content,
   139  	// so that the BlockWriteCommitter can use this to commit the data to storage
   140  	// in a content-addressable fashion.
   141  	// See the documentation of BlockWriteOpener for more description of this
   142  	// and an example of how this is likely to be reduced to practice.
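        	//
        	// To sketch how the two halves meet (this is only the rough shape of what the
        	// LinkSystem methods do on your behalf; writeOpener, lnkCtx, lp, and hashSum are
        	// stand-ins, and error handling is elided):
        	//
        	//	w, commit, _ := writeOpener(lnkCtx) // phase 1: open, then stream the encoded bytes into w while hashing them
        	//	lnk := lp.BuildLink(hashSum)        // derive the Link from the finished hash
        	//	_ = commit(lnk)                     // phase 2: flush/close, and file the data under that Link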
   143  	BlockWriteCommitter func(datamodel.Link) error
   144  
   145  	// NodeReifier defines the shape of a function that, given a node with no schema
   146  	// or a basic schema, constructs an Advanced Data Layout (ADL) node.
   147  	//
   148  	// The LinkSystem itself is passed to the NodeReifier along with a LinkContext,
   149  	// because Node interface methods on an ADL may actually traverse links to other
   150  	// pieces of content-addressed data that need to be loaded with the LinkSystem.
   151  	//
   152  	// A NodeReifier returns one of three things:
   153  	// - original node, no error = no reification occurred, just use original node
   154  	// - reified node, no error = the simple node was converted to an ADL
   155  	// - nil, error = the simple node should have been converted to an ADL but something
   156  	// went wrong when we tried to do so
   157  	//
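        	// As a minimal illustrative sketch, a NodeReifier that performs no reification
        	// at all (the first outcome above) simply hands back the node it was given:
        	//
        	//	var passthrough linking.NodeReifier = func(_ linking.LinkContext, n datamodel.Node, _ *linking.LinkSystem) (datamodel.Node, error) {
        	//		return n, nil
        	//	}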
   158  	NodeReifier func(LinkContext, datamodel.Node, *LinkSystem) (datamodel.Node, error)
   159  )
   160  
   161  // LinkContext is a structure carrying ancillary information that may be used
   162  // while loading or storing data -- see its usage in BlockReadOpener, BlockWriteOpener,
   163  // and in the methods on LinkSystem which handle loading and storing data.
   164  //
   165  // A zero value for LinkContext is generally acceptable in any functions that use it.
   166  // In this case, any operations that need a context.Context will quietly use context.Background
   167  // (thus being uncancellable) and simply have no additional information to work with.
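        //
        // For example (a sketch; lsys, lnk, and np stand in for a LinkSystem, a Link,
        // and a NodePrototype obtained elsewhere), a caller can bound a Load with a
        // timeout by threading a context through the LinkContext:
        //
        //	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        //	defer cancel()
        //	n, err := lsys.Load(linking.LinkContext{Ctx: ctx}, lnk, np)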
   168  type LinkContext struct {
   169  	// Ctx is the familiar golang Context pattern.
   170  	// Use this for cancellation, or attaching additional info
   171  	// (for example, perhaps to pass auth tokens through to the storage functions).
   172  	Ctx context.Context
   173  
   174  	// Path where the link was encountered.  May be zero.
   175  	//
   176  	// Functions in the traversal package will set this automatically.
   177  	LinkPath datamodel.Path
   178  
   179  	// When traversing data or encoding: the Node containing the link --
   180  	// it may have additional type info, etc, that can be accessed.
   181  	// When building / decoding: not present.
   182  	//
   183  	// Functions in the traversal package will set this automatically.
   184  	LinkNode datamodel.Node
   185  
   186  	// When building data or decoding: the NodeAssembler that will be receiving the link --
   187  	// it may have additional type info, etc, that can be accessed.
   188  	// When traversing / encoding: not present.
   189  	//
   190  	// Functions in the traversal package will set this automatically.
   191  	LinkNodeAssembler datamodel.NodeAssembler
   192  
   193  	// Parent of the LinkNode.  May be zero.
   194  	//
   195  	// Functions in the traversal package will set this automatically.
   196  	ParentNode datamodel.Node
   197  
   198  	// REVIEW: ParentNode in LinkContext -- so far, this has only ever been hypothetically useful.  Keep or drop?
   199  }