Integrate BACKBEAT SDK and resolve KACHING license validation

Major integrations and fixes:
- Added BACKBEAT SDK integration for P2P operation timing
- Implemented beat-aware status tracking for distributed operations
- Added Docker secrets support for secure license management
- Resolved KACHING license validation via HTTPS/TLS
- Updated docker-compose configuration for clean stack deployment
- Disabled rollback policies to prevent deployment failures
- Added license credential storage (CHORUS-DEV-MULTI-001)

Technical improvements:
- BACKBEAT P2P operation tracking with phase management
- Enhanced configuration system with file-based secrets
- Improved error handling for license validation
- Clean separation of KACHING and CHORUS deployment stacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>


@@ -0,0 +1,125 @@
Storage Adapters
================
The go-ipld-prime storage APIs were introduced in the v0.14.x ranges of go-ipld-prime,
which happened in fall 2021.
There are many other pieces of code in the IPLD (and even more so, the IPFS) ecosystem
which predate this, and have interfaces that are very _similar_, but not quite exactly the same.
In order to keep using that code, we've built a series of adapters.
You can see these in packages beneath this one:
- `go-ipld-prime/storage/bsadapter` is an adapter to `github.com/ipfs/go-ipfs-blockstore`.
- `go-ipld-prime/storage/dsadapter` is an adapter to `github.com/ipfs/go-datastore`.
- `go-ipld-prime/storage/bsrvadapter` is an adapter to `github.com/ipfs/go-blockservice`.
Note that there are also other packages which implement the go-ipld-prime storage APIs,
but are not considered "adapters" -- these just implement the storage APIs directly:
- `go-ipld-prime/storage/memstore` is a simple in-memory storage system.
- `go-ipld-prime/storage/fsstore` is a simple filesystem-backed storage system
(comparable to, and compatible with [flatfs](https://pkg.go.dev/github.com/ipfs/go-ds-flatfs),
if you're familiar with that -- but higher efficiency).
Finally, note that there are some shared benchmarks across all this:
- check out `go-ipld-prime/storage/benchmarks`!
Why structured like this?
-------------------------
### Why is there adapter code at all?
The `go-ipld-prime/storage` interfaces are a newer generation.
A new generation of APIs was desirable because it unifies the old APIs,
and also because we were able to improve and update several things in the process.
(You can see some of the list of improvements in https://github.com/ipld/go-ipld-prime/pull/265,
where these APIs were first introduced.)
The new generation of APIs avoids several types present in the old APIs which forced otherwise-avoidable allocations.
(See notes later in this document about "which adapter should I use" for more on that.)
Finally, the new generation of APIs is carefully designed to support minimal implementations,
by carefully avoiding use of non-standard-library types in key API definitions,
and by keeping most advanced features behind a standardized convention of feature detection.
Because the newer generation of APIs is not exactly the same as the multiple older APIs we're unifying and updating,
some amount of adapter code is necessary.
(Fortunately, it's not much! But it's not "none", either.)
### Why have this code in a shared place?
The glue code to connect `go-datastore` and the other older APIs
to the new `go-ipld-prime/storage` APIs is fairly minimal...
but there's also no reason for anyone to write it twice,
so we want to put it somewhere easy to share.
### Why do the adapters have their own go modules?
A separate module is used because it's important that go-ipld-prime can be used
without forming a dependency on `go-datastore` (or the other relevant modules, per adapter).
We want this so that there's a reasonable deprecation pathway -- it must be
possible to write new code that doesn't take on transitive dependencies to old code.
(As a bonus, looking at the module dependency graphs makes an interestingly
clear statement about why minimal APIs that don't force transitive dependencies are a good idea!)
### Why is this code all together in this repo?
We put these separate modules in the same git repo as `go-ipld-prime`... because we can.
Technically, neither the storage adapter modules nor the `go-ipld-prime` module depend on each other --
they just have interfaces that are aligned with each other -- so it's very easy to
hold them as separate go modules in the same repo, even though that can otherwise sometimes be tricky.
You may want to make a point of pulling updated versions of the storage adapters that you use
when pulling updates to go-ipld-prime, though.
### Could we put these adapters upstream into the other relevant repos?
Certainly!
We started with them here because it seemed developmentally lower-friction.
That may change; these APIs could move.
This code is just interface satisfaction, so even having multiple copies of it is utterly harmless.
Which of `dsadapter` vs `bsadapter` vs `bsrvadapter` should I use?
------------------------------------------------------------------
None of them, ideally.
A direct implementation of the storage APIs will almost certainly be able to perform better than any of these adapters.
(Check out the `fsstore` package, for example.)
Failing that: use the adapter matching whatever you've got on hand in your code.
There is no correct choice.
`dsadapter` suffers avoidable excessive allocs in processing its key type,
due to choices in the interior of `github.com/ipfs/go-datastore`.
It is also unable to support streaming operation, should you desire it.
`bsadapter` and `bsrvadapter` both also suffer overhead due to their key type,
because they require a transformation back from the plain binary strings used in the storage API to the concrete go-cid type,
which spends some avoidable CPU time (and also, at present, causes avoidable allocs because of some interesting absences in `go-cid`).
Additionally, they suffer avoidable allocs because they wrap the raw binary data in a "block" type,
which is an interface, and thus heap-escapes; and we need none of that in the storage APIs, and just return the raw data.
They are also unable to support streaming operation, should you desire it.
It's best to choose the shortest path and use the adapter to whatever layer you need to get to --
for example, if you really want to use a `go-datastore` implementation,
*don't* use `bsadapter` and have it wrap a `go-blockstore` that wraps a `go-datastore` if you can help it:
instead, use `dsadapter` and wrap the `go-datastore` without any extra layers of indirection.
You should prefer this because most of the notes above about avoidable allocs are true when
the legacy interfaces are communicating with each other, as well...
so the less you use the internal layering of the legacy interfaces, the better off you'll be.
Using a direct implementation of the storage APIs will suffer none of these overheads,
and so will always be your best bet if possible.
If you have to use one of these adapters, hopefully the performance overheads fall within an acceptable margin.
If not: we'll be overjoyed to accept help porting things.
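
To make the "shortest path" advice concrete, here's a minimal sketch of wrapping a `go-datastore` directly with `dsadapter` and driving it through the storage package's helper functions. (The exact shape of the adapter type, assumed here to be a struct with a `Wrapped` field, is an assumption -- check the `dsadapter` package docs for the current API.)

```go
package main

import (
	"context"
	"fmt"

	datastore "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"

	"github.com/ipld/go-ipld-prime/storage"
	"github.com/ipld/go-ipld-prime/storage/dsadapter"
)

func main() {
	// Wrap the datastore directly -- no blockstore layer in between.
	ds := dssync.MutexWrap(datastore.NewMapDatastore())
	store := &dsadapter.Adapter{Wrapped: ds} // assumed field name; see package docs

	// Use the package-level helpers; they feature-detect fancier
	// interfaces automatically, and fall back to the basics otherwise.
	ctx := context.Background()
	if err := storage.Put(ctx, store, "somekey", []byte("hello")); err != nil {
		panic(err)
	}
	blob, err := storage.Get(ctx, store, "somekey")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", blob)
}
```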

vendor/github.com/ipld/go-ipld-prime/storage/api.go

@@ -0,0 +1,173 @@
package storage
import (
"context"
"io"
)
// --- basics --->
// Storage is one of the base interfaces in the storage APIs.
// This type is rarely seen by itself alone (and never useful to implement alone),
// but is included in both ReadableStorage and WritableStorage.
// Because it's included in both of the other two useful base interfaces,
// you can define functions that work on either one of them
// by using this type to describe your function's parameters.
//
// Library functions that work with storage systems should take either
// ReadableStorage, or WritableStorage, or Storage, as a parameter,
// depending on whether the function deals with the reading of data,
// or the writing of data, or may be found on either, respectively.
//
// An implementation of Storage may also support many other methods.
// At the very least, it should also support one of either ReadableStorage or WritableStorage.
// It may support even more interfaces beyond that for additional feature detection.
// See the package-wide docs for more discussion of this design.
//
// The Storage interface does not include much of use in itself alone,
// because ReadableStorage and WritableStorage are meant to be the most used types in declarations.
// However, it does include the Has function, because that function is reasonable to require ubiquitously from all implementations,
// and it serves as a reasonable marker to make sure the Storage interface is not trivially satisfied.
type Storage interface {
Has(ctx context.Context, key string) (bool, error)
}
// ReadableStorage is one of the base interfaces in the storage APIs;
// a storage system should implement at minimum either this, or WritableStorage,
// depending on whether it supports reading or writing.
// (One type may also implement both.)
//
// ReadableStorage implementations must at minimum provide
// a way to ask the store whether it contains a key,
// and a way to ask it to return the value.
//
// Library functions that work with storage systems should take either
// ReadableStorage, or WritableStorage, or Storage, as a parameter,
// depending on whether the function deals with the reading of data,
// or the writing of data, or may be found on either, respectively.
//
// An implementation of ReadableStorage may also support many other methods --
// for example, it may additionally match StreamingReadableStorage, or yet more interfaces.
// Usually, you should not need to check for this yourself; instead,
// you should use the storage package's functions to ask for the desired mode of interaction.
// Those functions will accept any ReadableStorage as an argument,
// detect the additional interfaces automatically and use them if present,
// or, fall back to synthesizing equivalent behaviors from the basics.
// See the package-wide docs for more discussion of this design.
type ReadableStorage interface {
Storage
Get(ctx context.Context, key string) ([]byte, error)
}
// WritableStorage is one of the base interfaces in the storage APIs;
// a storage system should implement at minimum either this, or ReadableStorage,
// depending on whether it supports reading or writing.
// (One type may also implement both.)
//
// WritableStorage implementations must at minimum provide
// a way to ask the store whether it contains a key,
// and a way to put a value into storage indexed by some key.
//
// Library functions that work with storage systems should take either
// ReadableStorage, or WritableStorage, or Storage, as a parameter,
// depending on whether the function deals with the reading of data,
// or the writing of data, or may be found on either, respectively.
//
// An implementation of WritableStorage may also support many other methods --
// for example, it may additionally match StreamingWritableStorage, or yet more interfaces.
// Usually, you should not need to check for this yourself; instead,
// you should use the storage package's functions to ask for the desired mode of interaction.
// Those functions will accept any WritableStorage as an argument,
// detect the additional interfaces automatically and use them if present,
// or, fall back to synthesizing equivalent behaviors from the basics.
// See the package-wide docs for more discussion of this design.
type WritableStorage interface {
Storage
Put(ctx context.Context, key string, content []byte) error
}
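// As an illustrative sketch (not part of the original file), a library
// function following the parameter-type advice above might look like:
//
//	// CopyKey reads one value from src and writes it to dst.
//	// It asks for only the capabilities it actually uses.
//	func CopyKey(ctx context.Context, src ReadableStorage, dst WritableStorage, key string) error {
//		blob, err := src.Get(ctx, key)
//		if err != nil {
//			return err
//		}
//		return dst.Put(ctx, key, blob)
//	}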
// --- streaming --->
// StreamingReadableStorage is a feature-detection interface that advertises support for streaming reads.
// It is normal for APIs to use ReadableStorage in their exported API surface,
// and then internally check if that value implements StreamingReadableStorage if they wish to use streaming operations.
//
// Streaming reads can be preferable to the all-in-one style of ReadableStorage.Get,
// because with streaming reads, the high water mark for memory usage can be kept lower.
type StreamingReadableStorage interface {
GetStream(ctx context.Context, key string) (io.ReadCloser, error)
}
// StreamingWritableStorage is a feature-detection interface that advertises support for streaming writes.
// It is normal for APIs to use WritableStorage in their exported API surface,
// and then internally check if that value implements StreamingWritableStorage if they wish to use streaming operations.
//
// Streaming writes can be preferable to the all-in-one style of writing of WritableStorage.Put,
// because with streaming writes, the high water mark for memory usage can be kept lower.
// On the other hand, streaming writes can incur slightly higher allocation counts,
// which may cause some performance overhead when handling many small writes in sequence.
//
// The PutStream function returns three parameters: an io.Writer (as you'd expect), another function, and an error.
// The function returned is called a "WriteCommitter".
// The final error value is as usual: it will contain an error value if the write could not be begun.
// ("WriteCommitter" will be refered to as such throughout the docs, but we don't give it a named type --
// unfortunately, this is important, because we don't want to force implementers of storage systems to import this package just for a type name.)
//
// The WriteCommitter function should be called when you're done writing,
// at which time you give it the key you want to commit the data as.
// It will close and flush any streams, and commit the data to its final location under this key.
// (If the io.Writer is also an io.WriteCloser, it is not necessary to call Close on it,
// because using the WriteCommitter will do this for you.)
//
// Because these storage APIs are meant to work well for content-addressed systems,
// the key argument is not provided at the start of the write -- it's provided at the end.
// (This gives the opportunity to be computing a hash of the contents as they're written to the stream.)
//
// As a special case, giving a key of the zero string to the WriteCommitter will
// instead close and remove any temp files, and store nothing.
// An error may still be returned from the WriteCommitter if there is an error cleaning up
// any temporary storage buffers that were created.
//
// Continuing to write to the io.Writer after calling the WriteCommitter function will result in errors.
// Calling the WriteCommitter function more than once will result in errors.
type StreamingWritableStorage interface {
PutStream(ctx context.Context) (io.Writer, func(key string) error, error)
}
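// An illustrative usage sketch (not part of the original file): driving
// PutStream, committing under a key computed only after the data is written.
//
//	w, commit, err := store.PutStream(ctx)
//	if err != nil {
//		return err
//	}
//	if _, err := w.Write(data); err != nil {
//		_ = commit("") // zero key: abandon the write and clean up.
//		return err
//	}
//	return commit(keyFromHash) // e.g. a hash accumulated while writing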
// --- other specializations --->
// VectorWritableStorage is an API for writing several slices of bytes at once into storage.
// It's meant as a feature-detection interface; not all storage implementations need to provide this feature.
// This kind of API can be useful for maximizing performance in scenarios where
// data is already loaded completely into memory, but scattered across several non-contiguous regions.
type VectorWritableStorage interface {
PutVec(ctx context.Context, key string, blobVec [][]byte) error
}
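// An illustrative usage sketch (not part of the original file), assuming
// header, body, and footer are byte slices already sitting in memory:
//
//	err := store.PutVec(ctx, key, [][]byte{header, body, footer})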
// PeekableStorage is a feature-detection interface which a storage implementation can use to advertise
// the ability to look at a piece of data, and return it in shared memory.
// The PeekableStorage.Peek method is essentially the same as ReadableStorage.Get --
// but by contrast, ReadableStorage is expected to return a safe copy.
// PeekableStorage can be used when the caller knows they will not mutate the returned slice.
//
// An io.Closer is returned along with the byte slice.
// The Close method on the Closer must be called when the caller is done with the byte slice;
// otherwise, memory leaks may result.
// (Implementers of this interface may be expecting to reuse the byte slice after Close is called.)
//
// Note that Peek does not imply that the caller can use the byte slice freely;
// doing so may result in storage corruption or other undefined behavior.
type PeekableStorage interface {
Peek(ctx context.Context, key string) ([]byte, io.Closer, error)
}
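// An illustrative usage sketch (not part of the original file): the Closer
// must be honored so the implementation can reclaim the buffer.
//
//	blob, closer, err := store.Peek(ctx, key)
//	if err != nil {
//		return err
//	}
//	defer closer.Close()
//	process(blob) // read-only: the slice may be shared memory.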
// the following are all hypothetical additional future interfaces (in varying degrees of speculativeness):
// FUTURE: an EnumerableStorage API, that lets you list all keys present?
// FUTURE: a cleanup API (for getting rid of tmp files that might've been left behind on rough shutdown)?
// FUTURE: a sync-forcing API?
// FUTURE: a delete API? sure. (just document carefully what its consistency model is -- i.e. basically none.)
// (hunch: if you do want some sort of consistency model -- consider offering a whole family of methods that have some sort of generation or sequencing number on them.)
// FUTURE: a force-overwrite API? (not useful for a content-address system. but maybe a gesture towards wider reusability is acceptable to have on offer.)
// FUTURE: a size estimation API? (unclear if we need to standardize this, but we could. an offer, anyway.)
// FUTURE: a GC API? (dubious -- doing it well probably crosses logical domains, and should not be tied down here.)

vendor/github.com/ipld/go-ipld-prime/storage/doc.go

@@ -0,0 +1,75 @@
// The storage package contains interfaces for storage systems, and functions for using them.
//
// These are very low-level storage primitives.
// The interfaces here deal only with raw keys and raw binary blob values.
//
// In IPLD, you can often avoid dealing with storage directly yourself,
// and instead use linking.LinkSystem to handle serialization, hashing, and storage all at once.
// (You'll hand some values that match interfaces from this package to LinkSystem when configuring it.)
// It's probably best to work at that level and above as much as possible.
// If you do need to interact with storage more directly, then read on.
//
// The most basic APIs are ReadableStorage and WritableStorage.
// When writing code that works with storage systems, these two interfaces should be seen in almost all situations:
// user code is recommended to think in terms of these types;
// functions provided by this package will accept parameters of these types and work on them;
// implementations are expected to provide these types first;
// and any new library code is recommended to keep with the theme: use these interfaces preferentially.
//
// Users should decide which actions they want to take using a storage system,
// find the appropriate function in this package (n.b., package function -- not a method on an interface!
// You will likely find one of each, with the same name: pick the package function!),
// and use that function, providing it the storage system (e.g. either ReadableStorage, WritableStorage, or sometimes just Storage)
// as a parameter.
// That function will then use feature-detection (checking for matches to the other,
// more advanced and more specific interfaces in this package) and choose the best way
// to satisfy the request; or, if it can't feature-detect any relevant features,
// the function will fall back to synthesizing the requested behavior out of the most basic API.
// Using the package functions, and letting them do the feature detection for you,
// should provide the most consistent user experience and minimize the amount of work you need to do.
// (Bonus: It also gives us a convenient place to smooth out any future library migrations for you!)
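//
// For example (an illustrative sketch), prefer the package function over a
// hand-rolled type assertion:
//
//	r, err := storage.GetStream(ctx, store, key) // feature-detects streaming for you
//
// rather than checking for StreamingReadableStorage yourself.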
//
// If writing new APIs that are meant to work reusably for any storage implementation:
// APIs should usually be designed around accepting ReadableStorage or WritableStorage as parameters
// (depending on which direction of data flow the API is regarding).
// and use the other interfaces (e.g. StreamingReadableStorage) thereafter internally for feature detection.
// For APIs which may sometimes be found relating to either a read or a write direction of data flow,
// the Storage interface may be used in order to define a function that should accept either ReadableStorage or WritableStorage.
// In other words: when writing reusable APIs, one should follow the same pattern as this package's own functions do.
//
// Similarly, implementers of storage systems should always implement either ReadableStorage or WritableStorage first.
// Only after satisfying one of those should the implementation then move on to further supporting
// additional interfaces in this package (all of which are meant to support feature-detection).
// Beyond one of the basic two, all the other interfaces are optional:
// you can implement them if you want to advertise additional features,
// or advertise fastpaths that your storage system supports;
// but you don't have to implement any of those additional interfaces if you don't want to,
// or if your implementation can't offer useful fastpaths for them.
//
// Storage systems as described by this package are allowed to make some interesting trades.
// Generally, write operations are allowed to be first-write-wins.
// Furthermore, there is no requirement that the system return an error if a subsequent write to the same key has different content.
// These rules are reasonable for a content-addressed storage system, and allow great optimizations to be made.
//
// Note that all of the interfaces in this package only use types that are present in the golang standard library.
// This is intentional, and was done very carefully.
// If implementing a storage system, you should find it possible to do so *without* importing this package.
// Because only standard library types are present in the interface contracts,
// it's possible to implement types that align with the interfaces without referring to them.
//
// Note that where keys are discussed in this package, they use the golang string type --
// however, they may be binary. (The golang string type allows arbitrary bytes in general,
// and here, we both use that, and explicitly disavow the usual "norm" that the string type implies UTF-8.
// This is roughly the same as the practical truth that appears when using e.g. os.OpenFile and other similar functions.)
// If you are creating a storage implementation where the underlying medium does not support arbitrary binary keys,
// then it is strongly recommended that your storage implementation support being configured with
// an "escaping function", which should typically simply be of the form `func(string) string`.
// Additionally, your storage implementation's documentation should also clearly describe its internal limitations,
// so that users have enough information to write an escaping function which
// maps their domain into the domain your storage implementation can handle.
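//
// For example (an illustrative sketch), an escaping function for a backend
// that only tolerates hex-safe key names might be (using encoding/hex):
//
//	func escapeKey(key string) string {
//		return hex.EncodeToString([]byte(key)) // any injective mapping works
//	}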
package storage
// also note:
// LinkContext stays *out* of this package. It's a chooser-related thing.
// LinkSystem can think about it (and your callbacks over there can think about it), and that's the end of its road.
// (Future: probably LinkSystem should have SetStorage and SetupStorageChooser methods for helping you set things up -- where the former doesn't discuss LinkContext at all.)

vendor/github.com/ipld/go-ipld-prime/storage/funcs.go

@@ -0,0 +1,122 @@
package storage
import (
"bytes"
"context"
"fmt"
"io"
)
/*
This file contains equivalents of every method that can be feature-detected on a storage system.
You can always call these functions, and give them the most basic storage interface,
and they'll attempt to feature-detect their way to the best possible implementation of the behavior,
or they'll fall back to synthesizing the same behavior from more basic interfaces.
Long story short: you can always use these functions as an end user, and get the behavior you want --
regardless of how much explicit support the storage implementation has for the exact behavior you requested.
*/
func Has(ctx context.Context, store Storage, key string) (bool, error) {
// Okay, not much going on here -- this function is only here for consistency of style.
return store.Has(ctx, key)
}
func Get(ctx context.Context, store ReadableStorage, key string) ([]byte, error) {
// Okay, not much going on here -- this function is only here for consistency of style.
return store.Get(ctx, key)
}
func Put(ctx context.Context, store WritableStorage, key string, content []byte) error {
// Okay, not much going on here -- this function is only here for consistency of style.
return store.Put(ctx, key, content)
}
// GetStream returns a streaming reader.
// This function will feature-detect the StreamingReadableStorage interface, and use that if possible;
// otherwise it will fall back to using basic ReadableStorage methods transparently
// (at the cost of loading all the data into memory at once and up front).
func GetStream(ctx context.Context, store ReadableStorage, key string) (io.ReadCloser, error) {
// Prefer the feature itself, first.
if streamable, ok := store.(StreamingReadableStorage); ok {
return streamable.GetStream(ctx, key)
}
// Fallback to basic.
blob, err := store.Get(ctx, key)
return noopCloser{bytes.NewReader(blob)}, err
}
// PutStream returns an io.Writer and a WriteCommitter callback.
// (See the docs on StreamingWritableStorage.PutStream for details on what that means.)
// This function will feature-detect the StreamingWritableStorage interface, and use that if possible;
// otherwise it will fall back to using basic WritableStorage methods transparently
// (at the cost of needing to buffer all of the content in memory while the write is in progress).
func PutStream(ctx context.Context, store WritableStorage) (io.Writer, func(key string) error, error) {
// Prefer the feature itself, first.
if streamable, ok := store.(StreamingWritableStorage); ok {
return streamable.PutStream(ctx)
}
// Fallback to basic.
var buf bytes.Buffer
var written bool
return &buf, func(key string) error {
if written {
return fmt.Errorf("WriteCommitter already used")
}
written = true
return store.Put(ctx, key, buf.Bytes())
}, nil
}
// PutVec is an API for writing several slices of bytes at once into storage.
// This kind of API can be useful for maximizing performance in scenarios where
// data is already loaded completely into memory, but scattered across several non-contiguous regions.
// This function will feature-detect the VectorWritableStorage interface, and use that if possible;
// otherwise it will fall back to using StreamingWritableStorage,
// or failing that, fall further back to basic WritableStorage methods, transparently.
func PutVec(ctx context.Context, store WritableStorage, key string, blobVec [][]byte) error {
// Prefer the feature itself, first.
if putvable, ok := store.(VectorWritableStorage); ok {
return putvable.PutVec(ctx, key, blobVec)
}
// Fallback to streaming mode.
// ... or, fallback to basic, and use emulated streaming. Still presumably preferable to doing a big giant memcopy.
// Conveniently, the PutStream function makes that transparent for our implementation, too.
wr, wrcommit, err := PutStream(ctx, store)
if err != nil {
return err
}
for _, blob := range blobVec {
_, err := wr.Write(blob)
if err != nil {
return err
}
}
return wrcommit(key)
}
// Peek accesses the same data as Get, but indicates that the caller promises not to mutate the returned byte slice.
// (By contrast, Get is expected to return a safe copy.)
// This function will feature-detect the PeekableStorage interface, and use that if possible;
// otherwise it will fall back to using basic ReadableStorage methods transparently
// (meaning that a no-copy fastpath simply wasn't available).
//
// An io.Closer is returned along with the byte slice.
// The Close method on the Closer must be called when the caller is done with the byte slice;
// otherwise, memory leaks may result.
// (Implementers of this interface may be expecting to reuse the byte slice after Close is called.)
func Peek(ctx context.Context, store ReadableStorage, key string) ([]byte, io.Closer, error) {
// Prefer the feature itself, first.
if peekable, ok := store.(PeekableStorage); ok {
return peekable.Peek(ctx, key)
}
// Fallback to basic.
bs, err := store.Get(ctx, key)
return bs, noopCloser{nil}, err
}
type noopCloser struct {
io.Reader
}
func (noopCloser) Close() error { return nil }
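// An illustrative end-user sketch (not part of the original file): the
// package functions above work against any store, streaming-capable or not.
//
//	r, err := storage.GetStream(ctx, store, key)
//	if err != nil {
//		return err
//	}
//	defer r.Close()
//	data, err := io.ReadAll(r)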