Integrate BACKBEAT SDK and resolve KACHING license validation

Major integrations and fixes:
- Added BACKBEAT SDK integration for P2P operation timing
- Implemented beat-aware status tracking for distributed operations
- Added Docker secrets support for secure license management
- Resolved KACHING license validation via HTTPS/TLS
- Updated docker-compose configuration for clean stack deployment
- Disabled rollback policies to prevent deployment failures
- Added license credential storage (CHORUS-DEV-MULTI-001)

Technical improvements:
- BACKBEAT P2P operation tracking with phase management
- Enhanced configuration system with file-based secrets
- Improved error handling for license validation
- Clean separation of KACHING and CHORUS deployment stacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
anthonyrawlins
2025-09-06 07:56:26 +10:00
parent 543ab216f9
commit 9bdcbe0447
4730 changed files with 1480093 additions and 1916 deletions


@@ -0,0 +1,57 @@
Why does this package exist?
----------------------------
The `linking/cid` package bends the `github.com/ipfs/go-cid` package into conforming to the `ipld.Link` interface.
The `linking/cid` package also contains factory functions for `ipld.LinkSystem`.
These LinkSystems are constructed with `EncoderChooser`, `DecoderChooser`, and `HasherChooser` funcs,
which use the multicodec registries (for encoding and decoding) and the multihash registries (for hashing) respectively.
### Why not use go-cid directly?
We need a "Link" interface in the root `ipld` package or things just aren't definable.
But we don't want the root `ipld.Link` concept to directly map to `go-cid.Cid` for several reasons:
1. We might want to revisit the go-cid library. Possibly in the "significantly breaking changes" sense.
- It's also not clear when we might do this -- and if we do, the transition period will be *long* because it's a highly-depended-upon library.
- See below for some links to a gist that discusses why.
2. We might want to extend the concept of linking to more than just plain CIDs.
- This is hypothetical at present -- but an often-discussed example is "what if CID+Path was also a Link?"
3. We might sometimes want to use IPLD libraries without using any CID implementation at all.
- e.g. it's totally believable to want to use IPLD libraries for handling JSON and CBOR, even if you don't want IPLD linking.
- if the CID packages were cheap enough, maybe this concern would fade -- but right now, they're **definitely** not; the transitive dependency tree of go-cid is *huge*.
#### If go-cid is revisited, what might that look like?
No idea. (At least, not in a committal way.)
https://gist.github.com/warpfork/e871b7fee83cb814fb1f043089983bb3#existing-implementations
gathers some reflections on the problems that would be nice to solve, though.
https://gist.github.com/warpfork/e871b7fee83cb814fb1f043089983bb3#file-cid-go
contains a draft outline of what a revisited API could look like,
but note that at the time of writing, it is not strongly ratified nor in any way committed to.
At any rate, though, the operative question for this package is:
if we do revisit go-cid, how are we going to make the transition manageable?
It seems unlikely we'd be able to make the transition manageable without some interface, somewhere.
So we might as well draw that line at `ipld.Link`.
(I hypothesize that a transition story might involve two CID packages,
which could grow towards a shared interface,
doing so in a way that's purely additive in the established `go-cid` package.
We'd need two separate go modules to do this, since the aim is reducing dependency bloat for those that use the new one.
The shared interface in this story could have more info than `ipld.Link` does now,
but would nonetheless still certainly be an interface in order to support the separation of modules.)
### Why are LinkSystem factory functions here, instead of in the main IPLD package?
Same reason as why we don't use go-cid directly.
If we put these LinkSystem defaults in the root `ipld` package,
we'd bring on all the transitive dependencies of `go-cid` onto an user of `ipld` unconditionally...
and we don't want to do that.
You know that Weird Al song "It's all about the pentiums"?
Retune that in your mind to "It's all about dependencies".
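
For orientation, here is a minimal sketch of these factory functions in use, assuming go-ipld-prime's `storage/memstore` package for the storage side (any ReadableStorage/WritableStorage would do):

```go
package example

import (
	"github.com/ipld/go-ipld-prime/linking"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
	"github.com/ipld/go-ipld-prime/storage/memstore"
)

// newLinkSystem returns a CID-based LinkSystem wired to in-memory storage.
func newLinkSystem() linking.LinkSystem {
	lsys := cidlink.DefaultLinkSystem() // choosers resolve via the global registries
	store := &memstore.Store{}          // zero value is usable; it lazily initializes
	lsys.SetReadStorage(store)
	lsys.SetWriteStorage(store)
	return lsys
}
```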


@@ -0,0 +1,76 @@
package cidlink
import (
"fmt"
cid "github.com/ipfs/go-cid"
"github.com/ipld/go-ipld-prime/datamodel"
multihash "github.com/multiformats/go-multihash"
)
var (
_ datamodel.Link = Link{}
_ datamodel.LinkPrototype = LinkPrototype{}
)
// Link implements the datamodel.Link interface using a CID.
// See https://github.com/ipfs/go-cid for more information about CIDs.
//
// When using this value, typically you'll use it as `Link`, and not `*Link`.
// This includes when handling the value as a `datamodel.Link` interface -- the non-pointer form is typically preferable.
// This is because it's often desirable to use the datamodel.Link interface as a golang map key,
// and in that context, pointers would not result in the desired behavior.
type Link struct {
cid.Cid
}
func (lnk Link) Prototype() datamodel.LinkPrototype {
return LinkPrototype{lnk.Cid.Prefix()}
}
func (lnk Link) String() string {
return lnk.Cid.String()
}
func (lnk Link) Binary() string {
return lnk.Cid.KeyString()
}
type LinkPrototype struct {
cid.Prefix
}
func (lp LinkPrototype) BuildLink(hashsum []byte) datamodel.Link {
// Does this method body look surprisingly complex? I agree.
// We actually have to do all this work. The go-cid package doesn't expose a constructor that just lets us directly set the bytes and the prefix numbers next to each other.
// No, `cid.Prefix.Sum` is not the method you are looking for: that expects the whole data body.
// Most of the logic here is the same as the body of `cid.Prefix.Sum`; we just couldn't get at the relevant parts without copypasta.
// There is also some logic that's sort of folded in from the go-multihash module. This is really a mess.
// The go-cid package needs review. So does go-multihash. Their responsibilities are not well compartmentalized and they don't play well with other stdlib golang interfaces.
p := lp.Prefix
length := p.MhLength
if p.MhType == multihash.IDENTITY {
length = -1
}
if p.Version == 0 && (p.MhType != multihash.SHA2_256 ||
(p.MhLength != 32 && p.MhLength != -1)) {
panic(fmt.Errorf("invalid cid v0 prefix"))
}
if length != -1 {
hashsum = hashsum[:p.MhLength]
}
mh, err := multihash.Encode(hashsum, p.MhType)
if err != nil {
panic(err) // No longer possible, but multihash still returns an error for legacy reasons.
}
switch lp.Prefix.Version {
case 0:
return Link{cid.NewCidV0(mh)}
case 1:
return Link{cid.NewCidV1(p.Codec, mh)}
default:
panic(fmt.Errorf("invalid cid version"))
}
}
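
By way of illustration, a hedged sketch of calling BuildLink from user code with a precomputed digest (the input bytes and the dag-cbor codec number are illustrative choices, not mandated by the package):

package main

import (
	"crypto/sha256"
	"fmt"

	cid "github.com/ipfs/go-cid"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
	multihash "github.com/multiformats/go-multihash"
)

func main() {
	// A prototype describing CIDv1 + dag-cbor + sha2-256 (32-byte digest).
	lp := cidlink.LinkPrototype{Prefix: cid.Prefix{
		Version:  1,
		Codec:    0x71, // dag-cbor multicodec indicator
		MhType:   multihash.SHA2_256,
		MhLength: 32,
	}}
	digest := sha256.Sum256([]byte("example block body"))
	lnk := lp.BuildLink(digest[:]) // wraps the precomputed digest in a CID
	fmt.Println(lnk.String())
}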


@@ -0,0 +1,71 @@
package cidlink
import (
"fmt"
"hash"
"github.com/multiformats/go-multihash/core"
"github.com/ipld/go-ipld-prime/codec"
"github.com/ipld/go-ipld-prime/datamodel"
"github.com/ipld/go-ipld-prime/linking"
"github.com/ipld/go-ipld-prime/multicodec"
)
// DefaultLinkSystem returns a linking.LinkSystem which uses cidlink.Link for datamodel.Link.
// During selection of encoders, decoders, and hashers, it examines the multicodec indicator numbers and multihash indicator numbers from the CID,
// and uses the default global multicodec registry (see the go-ipld-prime/multicodec package) for resolving codec implementations,
// and the default global multihash registry (see the go-multihash/core package) for resolving multihash implementations.
//
// No storage functions are present in the returned LinkSystem.
// The caller can assign those themselves as desired.
func DefaultLinkSystem() linking.LinkSystem {
return LinkSystemUsingMulticodecRegistry(multicodec.DefaultRegistry)
}
// LinkSystemUsingMulticodecRegistry is similar to DefaultLinkSystem, but accepts a multicodec.Registry as a parameter.
//
// This can help create a LinkSystem which uses different multicodec implementations than the global registry.
// (Sometimes this can be desired if you want some parts of a program to support a more limited suite of codecs than other parts of the program,
// or need to use a different multicodec registry than the global one for synchronization purposes, etc.)
func LinkSystemUsingMulticodecRegistry(mcReg multicodec.Registry) linking.LinkSystem {
return linking.LinkSystem{
EncoderChooser: func(lp datamodel.LinkPrototype) (codec.Encoder, error) {
switch lp2 := lp.(type) {
case LinkPrototype:
fn, err := mcReg.LookupEncoder(lp2.GetCodec())
if err != nil {
return nil, err
}
return fn, nil
default:
return nil, fmt.Errorf("this encoderChooser can only handle cidlink.LinkPrototype; got %T", lp)
}
},
DecoderChooser: func(lnk datamodel.Link) (codec.Decoder, error) {
lp := lnk.Prototype()
switch lp2 := lp.(type) {
case LinkPrototype:
fn, err := mcReg.LookupDecoder(lp2.GetCodec())
if err != nil {
return nil, err
}
return fn, nil
default:
return nil, fmt.Errorf("this decoderChooser can only handle cidlink.LinkPrototype; got %T", lp)
}
},
HasherChooser: func(lp datamodel.LinkPrototype) (hash.Hash, error) {
switch lp2 := lp.(type) {
case LinkPrototype:
h, err := multihash.GetHasher(lp2.MhType)
if err != nil {
return nil, fmt.Errorf("no hasher registered for multihash indicator 0x%x: %w", lp2.MhType, err)
}
return h, nil
default:
return nil, fmt.Errorf("this hasherChooser can only handle cidlink.LinkPrototype; got %T", lp)
}
},
}
}
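
To illustrate the non-default path, a hedged sketch of a LinkSystem built over a private registry that only knows dag-cbor (the registry contents here are illustrative; this assumes the zero-value multicodec.Registry is usable, as with the default registry):

package example

import (
	"github.com/ipld/go-ipld-prime/codec/dagcbor"
	"github.com/ipld/go-ipld-prime/linking"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
	"github.com/ipld/go-ipld-prime/multicodec"
)

// dagCborOnlyLinkSystem refuses to encode or decode anything but dag-cbor.
func dagCborOnlyLinkSystem() linking.LinkSystem {
	reg := multicodec.Registry{}
	reg.RegisterEncoder(0x71, dagcbor.Encode) // 0x71 = dag-cbor indicator
	reg.RegisterDecoder(0x71, dagcbor.Decode)
	return cidlink.LinkSystemUsingMulticodecRegistry(reg)
}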


@@ -0,0 +1,56 @@
package cidlink
import (
"bytes"
"fmt"
"io"
"os"
"github.com/ipld/go-ipld-prime/datamodel"
"github.com/ipld/go-ipld-prime/linking"
)
// Memory is a simple in-memory storage for cidlinks. It's the same as `storage.Memory`,
// but uses the multihash semantics typically wanted when reading and writing cidlinks.
//
// Using the multihash as the storage key, rather than the whole CID, removes the
// distinction between a CIDv0 and its CIDv1 counterpart. It also removes the
// distinction between CIDs where the multihash is the same but the codec is
// different, e.g. a `dag-cbor` and a `raw` version of the same data.
type Memory struct {
Bag map[string][]byte
}
func (store *Memory) beInitialized() {
if store.Bag != nil {
return
}
store.Bag = make(map[string][]byte)
}
func (store *Memory) OpenRead(lnkCtx linking.LinkContext, lnk datamodel.Link) (io.Reader, error) {
store.beInitialized()
cl, ok := lnk.(Link)
if !ok {
return nil, fmt.Errorf("incompatible link type: %T", lnk)
}
data, exists := store.Bag[string(cl.Hash())]
if !exists {
return nil, os.ErrNotExist
}
return bytes.NewReader(data), nil
}
func (store *Memory) OpenWrite(lnkCtx linking.LinkContext) (io.Writer, linking.BlockWriteCommitter, error) {
store.beInitialized()
buf := bytes.Buffer{}
return &buf, func(lnk datamodel.Link) error {
cl, ok := lnk.(Link)
if !ok {
return fmt.Errorf("incompatible link type: %T", lnk)
}
store.Bag[string(cl.Hash())] = buf.Bytes()
return nil
}, nil
}
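
A sketch of wiring this store into a LinkSystem: its methods already match the opener callback signatures, so they plug in directly (the constructor function itself is illustrative):

package example

import (
	"github.com/ipld/go-ipld-prime/linking"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
)

// memoryLinkSystem returns a LinkSystem backed by the multihash-keyed Memory store.
func memoryLinkSystem() linking.LinkSystem {
	store := &cidlink.Memory{}
	lsys := cidlink.DefaultLinkSystem()
	lsys.StorageReadOpener = store.OpenRead   // satisfies linking.BlockReadOpener
	lsys.StorageWriteOpener = store.OpenWrite // satisfies linking.BlockWriteOpener
	return lsys
}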

vendor/github.com/ipld/go-ipld-prime/linking/errors.go

@@ -0,0 +1,30 @@
package linking
import (
"fmt"
"github.com/ipld/go-ipld-prime/datamodel"
)
// ErrLinkingSetup is returned by methods on LinkSystem when some part of the system is not set up correctly,
// or when one of the components refuses to handle a Link or LinkPrototype given.
// (It is not yielded for errors from the storage nor codec systems once they've started; those errors rise without interference.)
type ErrLinkingSetup struct {
Detail string // Perhaps an enum here as well, which states which internal function was to blame?
Cause error
}
func (e ErrLinkingSetup) Error() string { return fmt.Sprintf("%s: %v", e.Detail, e.Cause) }
func (e ErrLinkingSetup) Unwrap() error { return e.Cause }
// ErrHashMismatch is the error returned when loading data and verifying its hash
// and finding that the loaded data doesn't re-hash to the expected value.
// It is typically seen returned by functions like LinkSystem.Load or LinkSystem.Fill.
type ErrHashMismatch struct {
Actual datamodel.Link
Expected datamodel.Link
}
func (e ErrHashMismatch) Error() string {
return fmt.Sprintf("hash mismatch! %v (actual) != %v (expected)", e.Actual, e.Expected)
}
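
A sketch of how calling code might single out ErrHashMismatch (the surrounding helper function is illustrative):

package example

import (
	"errors"
	"fmt"

	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/linking"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

// loadChecked loads a node and reports hash mismatches distinctly from other failures.
func loadChecked(lsys *linking.LinkSystem, lnk datamodel.Link) (datamodel.Node, error) {
	n, err := lsys.Load(linking.LinkContext{}, lnk, basicnode.Prototype.Any)
	var mismatch linking.ErrHashMismatch
	if errors.As(err, &mismatch) {
		// The loaded bytes don't re-hash to the link we asked for:
		// treat as corruption or tampering, not as a transient failure.
		return nil, fmt.Errorf("refusing corrupt block %s: %w", lnk, err)
	}
	return n, err
}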


@@ -0,0 +1,290 @@
package linking
import (
"bytes"
"context"
"io"
"github.com/ipld/go-ipld-prime/datamodel"
)
// This file contains all the functions on LinkSystem.
// These are the helpful, user-facing functions we expect folks to use "most of the time" when loading and storing data.
// Variations:
// - Load vs Store vs ComputeLink
// - Load vs LoadPlusRaw
// - With or without LinkContext?
// - Brevity would be nice but I can't think of what to name the functions, so: everything takes LinkContext. Zero value is fine though.
// - [for load direction only]: Prototype (and return Node|error) or Assembler (and just return error)?
// - naming: Load vs Fill.
// - 'Must' variants.
// Can we get as far as a `QuickLoad(lnk Link) (Node, error)` function, which doesn't even ask you for a NodePrototype?
// No, not quite. (Alas.) If we tried to do so, and make it use `basicnode.Prototype`, we'd have import cycles; ded.
// Load looks up some data identified by a Link, and does everything necessary to turn it into usable data.
// In detail, that means it:
// brings that data into memory,
// verifies the hash,
// parses it into the Data Model using a codec,
// and returns an IPLD Node.
//
// Where the data will be loaded from is determined by the configuration of the LinkSystem
// (namely, the StorageReadOpener callback, which can either be set directly,
// or configured via the SetReadStorage function).
//
// The in-memory form used for the returned Node is determined by the given NodePrototype parameter.
// A new builder and a new node will be allocated, via NodePrototype.NewBuilder.
// (If you'd like more control over memory allocation, you may wish to see the Fill function instead.)
//
// A schema may also be used, and apply additional data validation during loading,
// by using a schema.TypedNodePrototype as the NodePrototype argument.
//
// The LinkContext parameter may be used to pass contextual information down to the loading layer.
//
// Which hashing function is used to validate the loaded data is determined by LinkSystem.HasherChooser.
// Which codec is used to parse the loaded data into the Data Model is determined by LinkSystem.DecoderChooser.
//
// The LinkSystem.NodeReifier callback is also applied before returning the Node,
// and so Load may also thereby return an ADL.
func (lsys *LinkSystem) Load(lnkCtx LinkContext, lnk datamodel.Link, np datamodel.NodePrototype) (datamodel.Node, error) {
nb := np.NewBuilder()
if err := lsys.Fill(lnkCtx, lnk, nb); err != nil {
return nil, err
}
nd := nb.Build()
if lsys.NodeReifier == nil {
return nd, nil
}
return lsys.NodeReifier(lnkCtx, nd, lsys)
}
// MustLoad is identical to Load, but panics in the case of errors.
//
// This function is meant for convenience of use in test and demo code, but should otherwise probably be avoided.
func (lsys *LinkSystem) MustLoad(lnkCtx LinkContext, lnk datamodel.Link, np datamodel.NodePrototype) datamodel.Node {
if n, err := lsys.Load(lnkCtx, lnk, np); err != nil {
panic(err)
} else {
return n
}
}
// LoadPlusRaw is similar to Load, but additionally retains and returns the byte slice of the raw data parsed.
//
// Be wary of using this with large data, since it will hold all data in memory at once.
// For more control over streaming, you may want to construct a LinkSystem where you wrap the storage opener callbacks,
// and thus can access the streams (and tee them, or whatever you need to do) as they're opened.
// This function is meant for convenience when data sizes are small enough that fitting them into memory at once is not a problem.
func (lsys *LinkSystem) LoadPlusRaw(lnkCtx LinkContext, lnk datamodel.Link, np datamodel.NodePrototype) (datamodel.Node, []byte, error) {
// Choose all the parts.
decoder, err := lsys.DecoderChooser(lnk)
if err != nil {
return nil, nil, ErrLinkingSetup{"could not choose a decoder", err}
}
// Use LoadRaw to get the data.
// If we're going to have everything in memory at once, we might as well do that first, and then give the codec and the hasher the whole thing at once.
block, err := lsys.LoadRaw(lnkCtx, lnk)
if err != nil {
return nil, block, err
}
// Create a NodeBuilder.
// Deploy the codec.
// Build the node.
nb := np.NewBuilder()
if err := decoder(nb, bytes.NewBuffer(block)); err != nil {
return nil, block, err
}
nd := nb.Build()
// Consider applying NodeReifier, if applicable.
if lsys.NodeReifier == nil {
return nd, block, nil
}
nd, err = lsys.NodeReifier(lnkCtx, nd, lsys)
return nd, block, err
}
// LoadRaw looks up some data identified by a Link, brings that data into memory,
// verifies the hash, and returns it directly as a byte slice.
//
// LoadRaw does not return a data model view of the data,
// nor does it verify that a codec can parse the data at all!
// Use this function at your own risk; it does not provide the same guarantees as the Load or Fill functions do.
func (lsys *LinkSystem) LoadRaw(lnkCtx LinkContext, lnk datamodel.Link) ([]byte, error) {
if lnkCtx.Ctx == nil {
lnkCtx.Ctx = context.Background()
}
// Choose all the parts.
hasher, err := lsys.HasherChooser(lnk.Prototype())
if err != nil {
return nil, ErrLinkingSetup{"could not choose a hasher", err}
}
if lsys.StorageReadOpener == nil {
return nil, ErrLinkingSetup{"no storage configured for reading", io.ErrClosedPipe} // REVIEW: better cause?
}
// Open storage: get the data.
// FUTURE: this could probably use storage.ReadableStorage.Get instead of streaming and a buffer, if we refactored LinkSystem to carry that interface through.
reader, err := lsys.StorageReadOpener(lnkCtx, lnk)
if err != nil {
return nil, err
}
if closer, ok := reader.(io.Closer); ok {
defer closer.Close()
}
var buf bytes.Buffer
if _, err := io.Copy(&buf, reader); err != nil {
return nil, err
}
// Compute the hash.
// (Then do a bit of a jig to build a link out of it -- because that's what we do the actual hash equality check on.)
hasher.Write(buf.Bytes())
hash := hasher.Sum(nil)
lnk2 := lnk.Prototype().BuildLink(hash)
if lnk2.Binary() != lnk.Binary() {
return nil, ErrHashMismatch{Actual: lnk2, Expected: lnk}
}
// No codec to deploy; this is the raw load function.
// So we're done.
return buf.Bytes(), nil
}
// Fill is similar to Load, but allows more control over memory allocations.
// Instead of taking a NodePrototype parameter, Fill takes a NodeAssembler parameter:
// this allows you to use your own NodeBuilder (and reset it, etc, thus controlling allocations),
// or, to fill in some part of a larger structure.
//
// Note that Fill does not regard NodeReifier, even if one has been configured.
// (This is in contrast to Load, which does regard a NodeReifier if one is configured, and thus may return an ADL node).
func (lsys *LinkSystem) Fill(lnkCtx LinkContext, lnk datamodel.Link, na datamodel.NodeAssembler) error {
if lnkCtx.Ctx == nil {
lnkCtx.Ctx = context.Background()
}
// Choose all the parts.
decoder, err := lsys.DecoderChooser(lnk)
if err != nil {
return ErrLinkingSetup{"could not choose a decoder", err}
}
hasher, err := lsys.HasherChooser(lnk.Prototype())
if err != nil {
return ErrLinkingSetup{"could not choose a hasher", err}
}
if lsys.StorageReadOpener == nil {
return ErrLinkingSetup{"no storage configured for reading", io.ErrClosedPipe} // REVIEW: better cause?
}
// Open storage; get a reader stream.
reader, err := lsys.StorageReadOpener(lnkCtx, lnk)
if err != nil {
return err
}
if closer, ok := reader.(io.Closer); ok {
defer closer.Close()
}
// TrustedStorage indicates the data coming out of this reader has already been hashed and verified earlier.
// As a result, we can skip rehashing it.
if lsys.TrustedStorage {
return decoder(na, reader)
}
// Tee the stream so that the hasher is fed as the unmarshal progresses through the stream.
tee := io.TeeReader(reader, hasher)
// The actual read is then dragged forward by the codec.
decodeErr := decoder(na, tee)
if decodeErr != nil {
// It is important to security to check the hash before returning any other observation about the content,
// so, if the decode process returns any error, we have several steps to take before potentially returning it.
// First, we try to copy any data remaining that wasn't already pulled through the TeeReader by the decoder,
// so that the hasher can reach the end of the stream.
// If _that_ errors, return the I/O level error.
// We hang onto decodeErr for a while: we can't return that until all the way after we check the hash equality.
_, err := io.Copy(hasher, reader)
if err != nil {
return err
}
}
// Compute the hash.
// (Then do a bit of a jig to build a link out of it -- because that's what we do the actual hash equality check on.)
hash := hasher.Sum(nil)
lnk2 := lnk.Prototype().BuildLink(hash)
if lnk2.Binary() != lnk.Binary() {
return ErrHashMismatch{Actual: lnk2, Expected: lnk}
}
// If we got all the way through IO and through the hash check:
// now, finally, if we did get an error from the codec, we can admit to that.
if decodeErr != nil {
return decodeErr
}
return nil
}
// MustFill is identical to Fill, but panics in the case of errors.
//
// This function is meant for convenience of use in test and demo code, but should otherwise probably be avoided.
func (lsys *LinkSystem) MustFill(lnkCtx LinkContext, lnk datamodel.Link, na datamodel.NodeAssembler) {
if err := lsys.Fill(lnkCtx, lnk, na); err != nil {
panic(err)
}
}
func (lsys *LinkSystem) Store(lnkCtx LinkContext, lp datamodel.LinkPrototype, n datamodel.Node) (datamodel.Link, error) {
if lnkCtx.Ctx == nil {
lnkCtx.Ctx = context.Background()
}
// Choose all the parts.
encoder, err := lsys.EncoderChooser(lp)
if err != nil {
return nil, ErrLinkingSetup{"could not choose an encoder", err}
}
hasher, err := lsys.HasherChooser(lp)
if err != nil {
return nil, ErrLinkingSetup{"could not choose a hasher", err}
}
if lsys.StorageWriteOpener == nil {
return nil, ErrLinkingSetup{"no storage configured for writing", io.ErrClosedPipe} // REVIEW: better cause?
}
// Open storage write stream, feed serial data to the storage and the hasher, and funnel the codec output into both.
writer, commitFn, err := lsys.StorageWriteOpener(lnkCtx)
if err != nil {
return nil, err
}
tee := io.MultiWriter(writer, hasher)
err = encoder(n, tee)
if err != nil {
return nil, err
}
lnk := lp.BuildLink(hasher.Sum(nil))
return lnk, commitFn(lnk)
}
func (lsys *LinkSystem) MustStore(lnkCtx LinkContext, lp datamodel.LinkPrototype, n datamodel.Node) datamodel.Link {
if lnk, err := lsys.Store(lnkCtx, lp, n); err != nil {
panic(err)
} else {
return lnk
}
}
// ComputeLink returns a Link for the given data, but doesn't do anything else
// (e.g. it doesn't try to store any of the serial-form data anywhere else).
func (lsys *LinkSystem) ComputeLink(lp datamodel.LinkPrototype, n datamodel.Node) (datamodel.Link, error) {
encoder, err := lsys.EncoderChooser(lp)
if err != nil {
return nil, ErrLinkingSetup{"could not choose an encoder", err}
}
hasher, err := lsys.HasherChooser(lp)
if err != nil {
return nil, ErrLinkingSetup{"could not choose a hasher", err}
}
err = encoder(n, hasher)
if err != nil {
return nil, err
}
return lp.BuildLink(hasher.Sum(nil)), nil
}
func (lsys *LinkSystem) MustComputeLink(lp datamodel.LinkPrototype, n datamodel.Node) datamodel.Link {
if lnk, err := lsys.ComputeLink(lp, n); err != nil {
panic(err)
} else {
return lnk
}
}
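
Pulling these methods together, a hedged round-trip sketch: store a node, get a Link back, and load it again. The node contents, codec, and hash choices are illustrative; `memstore` is go-ipld-prime's in-memory storage, and importing the top-level go-multihash package is assumed to bring the standard hasher registrations with it.

package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	_ "github.com/ipld/go-ipld-prime/codec/dagcbor" // registers dag-cbor in the default multicodec registry
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/fluent/qp"
	"github.com/ipld/go-ipld-prime/linking"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
	"github.com/ipld/go-ipld-prime/node/basicnode"
	"github.com/ipld/go-ipld-prime/storage/memstore"
	multihash "github.com/multiformats/go-multihash"
)

func main() {
	// Wire a CID-based LinkSystem to an in-memory store.
	lsys := cidlink.DefaultLinkSystem()
	store := &memstore.Store{}
	lsys.SetReadStorage(store)
	lsys.SetWriteStorage(store)

	// Build a small map node to store.
	n, err := qp.BuildMap(basicnode.Prototype.Any, 1, func(ma datamodel.MapAssembler) {
		qp.MapEntry(ma, "hello", qp.String("world"))
	})
	if err != nil {
		panic(err)
	}

	// Store: encodes with dag-cbor, hashes with sha2-256, writes to storage, returns the Link.
	lp := cidlink.LinkPrototype{Prefix: cid.Prefix{
		Version:  1,
		Codec:    0x71, // dag-cbor
		MhType:   multihash.SHA2_256,
		MhLength: 32,
	}}
	lnk, err := lsys.Store(linking.LinkContext{}, lp, n)
	if err != nil {
		panic(err)
	}

	// Load: reads from storage, verifies the hash, decodes back into a Node.
	n2, err := lsys.Load(linking.LinkContext{}, lnk, basicnode.Prototype.Any)
	if err != nil {
		panic(err)
	}
	fmt.Println(lnk.String(), n2.Length())
}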

vendor/github.com/ipld/go-ipld-prime/linking/setup.go

@@ -0,0 +1,41 @@
package linking
import (
"io"
"github.com/ipld/go-ipld-prime/datamodel"
"github.com/ipld/go-ipld-prime/storage"
)
// SetReadStorage configures how the LinkSystem will look for information to load,
// setting it to look at the given storage.ReadableStorage.
//
// This will overwrite the LinkSystem.StorageReadOpener field.
//
// This mechanism only supports setting exactly one ReadableStorage.
// If you would like to make a more complex configuration
// (for example, perhaps using information from a LinkContext to decide which storage area to use?)
// then you should set LinkSystem.StorageReadOpener to a custom callback of your own creation instead.
func (lsys *LinkSystem) SetReadStorage(store storage.ReadableStorage) {
lsys.StorageReadOpener = func(lctx LinkContext, lnk datamodel.Link) (io.Reader, error) {
return storage.GetStream(lctx.Ctx, store, lnk.Binary())
}
}
// SetWriteStorage configures how the LinkSystem will store information,
// setting it to write into the given storage.WritableStorage.
//
// This will overwrite the LinkSystem.StorageWriteOpener field.
//
// This mechanism only supports setting exactly one WritableStorage.
// If you would like to make a more complex configuration
// (for example, perhaps using information from a LinkContext to decide which storage area to use?)
// then you should set LinkSystem.StorageWriteOpener to a custom callback of your own creation instead.
func (lsys *LinkSystem) SetWriteStorage(store storage.WritableStorage) {
lsys.StorageWriteOpener = func(lctx LinkContext) (io.Writer, BlockWriteCommitter, error) {
wr, wrcommit, err := storage.PutStream(lctx.Ctx, store)
return wr, func(lnk datamodel.Link) error {
return wrcommit(lnk.Binary())
}, err
}
}
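
For the more complex case the comments above mention -- deciding which storage area to use from the LinkContext -- a hedged sketch of a hand-written opener (the context key and hot/cold split are illustrative). Assign it with `lsys.StorageReadOpener = routedReadOpener(hotStore, coldStore)`.

package example

import (
	"io"

	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/linking"
	"github.com/ipld/go-ipld-prime/storage"
)

type hotKey struct{} // illustrative context key marking "hot" reads

// routedReadOpener sends reads to one of two stores based on a context value.
func routedReadOpener(hot, cold storage.ReadableStorage) linking.BlockReadOpener {
	return func(lctx linking.LinkContext, lnk datamodel.Link) (io.Reader, error) {
		store := cold
		if lctx.Ctx != nil && lctx.Ctx.Value(hotKey{}) != nil {
			store = hot
		}
		return storage.GetStream(lctx.Ctx, store, lnk.Binary())
	}
}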

vendor/github.com/ipld/go-ipld-prime/linking/types.go

@@ -0,0 +1,199 @@
package linking
import (
"context"
"hash"
"io"
"github.com/ipld/go-ipld-prime/codec"
"github.com/ipld/go-ipld-prime/datamodel"
)
// LinkSystem is a struct that composes all the individual functions
// needed to load and store content-addressed data using IPLD --
// encoding functions, hashing functions, and storage connections --
// and then offers the operations a user wants -- Store and Load -- as methods.
//
// Typically, the functions which are fields of LinkSystem are not used
// directly by users (except to set them, when creating the LinkSystem),
// and it's the higher level operations such as Store and Load that user code then calls.
//
// The most typical way to get a LinkSystem is from the linking/cid package,
// which has a factory function called DefaultLinkSystem.
// The LinkSystem returned by that function will be based on CIDs,
// and use the multicodec registry and multihash registry to select encodings and hashing mechanisms.
// The BlockWriteOpener and BlockReadOpener must still be provided by the user;
// otherwise, only the ComputeLink method will work.
//
// Some implementations of BlockWriteOpener and BlockReadOpener may be
// found in the storage package. Applications are also free to write their own.
// Custom wrapping of BlockWriteOpener and BlockReadOpener is also common,
// and may be reasonable if one wants to build application features that are block-aware.
type LinkSystem struct {
EncoderChooser func(datamodel.LinkPrototype) (codec.Encoder, error)
DecoderChooser func(datamodel.Link) (codec.Decoder, error)
HasherChooser func(datamodel.LinkPrototype) (hash.Hash, error)
StorageWriteOpener BlockWriteOpener
StorageReadOpener BlockReadOpener
TrustedStorage bool
NodeReifier NodeReifier
KnownReifiers map[string]NodeReifier
}
// The following three types are the key functionality we need from a "blockstore".
//
// Some libraries might provide a "blockstore" object that has these as methods;
// it may also have more methods (like enumeration features, GC features, etc),
// but IPLD doesn't generally concern itself with those.
// We just need these key things, so we can "put" and "get".
//
// The functions are a tad more complicated than "put" and "get" so that they have good mechanical sympathy.
// In particular, the writing/"put" side is broken into two phases, so that the abstraction
// makes it easy to begin to write data before the hash that will identify it is fully computed.
type (
// BlockReadOpener defines the shape of a function used to
// open a reader for a block of data.
//
// In a content-addressed system, the Link parameter should be the only
// determiner of what block body is returned.
//
// The LinkContext may be zero, or may be used to carry extra information:
// it may be used to carry info which hints at different storage pools;
// it may be used to carry authentication data; etc.
// (Any such behaviors are something that a BlockReadOpener implementation
// will need to document at a higher level of detail than this interface specifies.
// In this interface, we can only note that it is possible to pass such information opaquely
// via the LinkContext or by attachments to the general-purpose Context it contains.)
// The LinkContext should not have an effect on the block body returned, however;
// at most it should only affect data availability
// (e.g. whether any block body is returned, versus an error).
//
// Reads are cancellable by cancelling the LinkContext.Context.
//
// Other parts of the IPLD library suite (such as the traversal package, and all its functions)
// will typically take a Context as a parameter or piece of config from the caller,
// and will pass that down through the LinkContext, meaning this can be used to
// carry information as well as cancellation control all the way through the system.
//
// BlockReadOpener is typically not used directly, but is instead
// composed in a LinkSystem and used via the methods of LinkSystem.
// LinkSystem methods will helpfully handle the entire process of opening block readers,
// verifying the hash of the data stream, and applying a Decoder to build Nodes -- all as one step.
//
// BlockReadOpener implementations are not required to validate that
// the contents which will be streamed out of the reader actually match
// the hash in the Link parameter before returning.
// (This is something that the LinkSystem composition will handle if you're using it.)
//
// BlockReadOpener can also be created out of storage.ReadableStorage and attached to a LinkSystem
// via the LinkSystem.SetReadStorage method.
//
// Users of a BlockReadOpener function should also check the io.Reader
// for matching the io.Closer interface, and use the Close function as appropriate if present.
BlockReadOpener func(LinkContext, datamodel.Link) (io.Reader, error)
// BlockWriteOpener defines the shape of a function used to open a writer
// into which data can be streamed, and which will eventually be "committed".
// Committing is done using the BlockWriteCommitter returned by the BlockWriteOpener;
// it finishes the write, and requires stating the Link which should identify this data for future reading.
//
// The LinkContext may be zero, or may be used to carry extra information:
// it may be used to carry info which hints at different storage pools;
// it may be used to carry authentication data; etc.
//
// Writes are cancellable by cancelling the LinkContext.Context.
//
// Other parts of the IPLD library suite (such as the traversal package, and all its functions)
// will typically take a Context as a parameter or piece of config from the caller,
// and will pass that down through the LinkContext, meaning this can be used to
// carry information as well as cancellation control all the way through the system.
//
// BlockWriteOpener is typically not used directly, but is instead
// composed in a LinkSystem and used via the methods of LinkSystem.
// LinkSystem methods will helpfully handle the entire process of traversing a Node tree,
// encoding this data, hashing it, streaming it to the writer, and committing it -- all as one step.
//
// BlockWriteOpener implementations are expected to start writing their content immediately,
// and later, the returned BlockWriteCommitter should also be able to expect that
// the Link which it is given is a reasonable hash of the content.
// (To give an example of how this might be efficiently implemented:
// One might imagine that if implementing a disk storage mechanism,
// the io.Writer returned from a BlockWriteOpener will be writing a new tempfile,
// and when the BlockWriteCommitter is called, it will flush the writes
// and then use a rename operation to place the tempfile in a permanent path based on the Link.)
//
// BlockWriteOpener can also be created out of storage.WritableStorage and attached to a LinkSystem
// via the LinkSystem.SetWriteStorage method.
BlockWriteOpener func(LinkContext) (io.Writer, BlockWriteCommitter, error)
// BlockWriteCommitter defines the shape of a function which, together
// with BlockWriteOpener, handles the writing and "committing" of a write
// to a content-addressable storage system.
//
// BlockWriteCommitter is a function which will be called at the end of a write process.
// It should flush any buffers and close the io.Writer which was
// made available earlier from the BlockWriteOpener call that also returned this BlockWriteCommitter.
//
// BlockWriteCommitter takes a Link parameter.
// This Link is expected to be a reasonable hash of the content,
// so that the BlockWriteCommitter can use this to commit the data to storage
// in a content-addressable fashion.
// See the documentation of BlockWriteOpener for more description of this
// and an example of how this is likely to be reduced to practice.
BlockWriteCommitter func(datamodel.Link) error
// NodeReifier defines the shape of a function that, given a node with no schema
// or a basic schema, constructs an Advanced Data Layout (ADL) node.
//
// The LinkSystem itself is passed to the NodeReifier along with a link context
// because Node interface methods on an ADL may actually traverse links to other
// pieces of content-addressed data that need to be loaded with the LinkSystem.
//
// A NodeReifier returns one of three things:
// - original node, no error = no reification occurred, just use original node
// - reified node, no error = the simple node was converted to an ADL
// - nil, error = the simple node should have been converted to an ADL but something
// went wrong when we tried to do so
//
NodeReifier func(LinkContext, datamodel.Node, *LinkSystem) (datamodel.Node, error)
)
// LinkContext is a structure carrying ancillary information that may be used
// while loading or storing data -- see its usage in BlockReadOpener, BlockWriteOpener,
// and in the methods on LinkSystem which handle loading and storing data.
//
// A zero value for LinkContext is generally acceptable in any functions that use it.
// In this case, any operations that need a context.Context will quietly use context.Background()
// (thus being uncancellable) and simply have no additional information to work with.
type LinkContext struct {
// Ctx is the familiar golang Context pattern.
// Use this for cancellation, or attaching additional info
// (for example, perhaps to pass auth tokens through to the storage functions).
Ctx context.Context
// Path where the link was encountered. May be zero.
//
// Functions in the traversal package will set this automatically.
LinkPath datamodel.Path
// When traversing data or encoding: the Node containing the link --
// it may have additional type info, etc, that can be accessed.
// When building / decoding: not present.
//
// Functions in the traversal package will set this automatically.
LinkNode datamodel.Node
// When building data or decoding: the NodeAssembler that will be receiving the link --
// it may have additional type info, etc, that can be accessed.
// When traversing / encoding: not present.
//
// Functions in the traversal package will set this automatically.
LinkNodeAssembler datamodel.NodeAssembler
// Parent of the LinkNode. May be zero.
//
// Functions in the traversal package will set this automatically.
ParentNode datamodel.Node
// REVIEW: ParentNode in LinkContext -- so far, this has only ever been hypothetically useful. Keep or drop?
}
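
To make the NodeReifier contract concrete, a hedged sketch of a reifier obeying the three-outcome rule documented above (the `_adl` marker is illustrative; a real implementation would construct an actual ADL, possibly loading further blocks via lsys). Installed via `lsys.NodeReifier = reifyIfMarked`; note that Load consults it but Fill does not.

package example

import (
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/linking"
)

// reifyIfMarked passes most nodes through untouched, and only "upgrades"
// maps carrying an illustrative "_adl" marker field.
func reifyIfMarked(lctx linking.LinkContext, n datamodel.Node, lsys *linking.LinkSystem) (datamodel.Node, error) {
	if n.Kind() != datamodel.Kind_Map {
		return n, nil // outcome 1: no reification; hand back the original node
	}
	if _, err := n.LookupByString("_adl"); err != nil {
		return n, nil // marker absent: again, just the original node
	}
	// Here a real reifier would build and return the ADL node (outcome 2),
	// or return (nil, err) if the upgrade was expected but failed (outcome 3).
	return n, nil
}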