Integrate BACKBEAT SDK and resolve KACHING license validation
Major integrations and fixes:

- Added BACKBEAT SDK integration for P2P operation timing
- Implemented beat-aware status tracking for distributed operations
- Added Docker secrets support for secure license management
- Resolved KACHING license validation via HTTPS/TLS
- Updated docker-compose configuration for clean stack deployment
- Disabled rollback policies to prevent deployment failures
- Added license credential storage (CHORUS-DEV-MULTI-001)

Technical improvements:

- BACKBEAT P2P operation tracking with phase management
- Enhanced configuration system with file-based secrets
- Improved error handling for license validation
- Clean separation of KACHING and CHORUS deployment stacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
229  vendor/github.com/ipfs/boxo/LICENSE.md  (generated, vendored, new file)
@@ -0,0 +1,229 @@
The contents of this repository are Copyright (c) corresponding authors and
contributors, licensed under the `Permissive License Stack` meaning either of:

- Apache-2.0 Software License: https://www.apache.org/licenses/LICENSE-2.0
  ([...4tr2kfsq](https://dweb.link/ipfs/bafkreiankqxazcae4onkp436wag2lj3ccso4nawxqkkfckd6cg4tr2kfsq))

- MIT Software License: https://opensource.org/licenses/MIT
  ([...vljevcba](https://dweb.link/ipfs/bafkreiepofszg4gfe2gzuhojmksgemsub2h4uy2gewdnr35kswvljevcba))

You may not use the contents of this repository except in compliance
with one of the listed Licenses. For an extended clarification of the
intent behind the choice of Licensing please refer to
https://protocol.ai/blog/announcing-the-permissive-license-stack/

Unless required by applicable law or agreed to in writing, software
distributed under the terms listed in this notice is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied. See each License for the specific language
governing permissions and limitations under that License.

<!--- SPDX-License-Identifier: Apache-2.0 OR MIT -->
`SPDX-License-Identifier: Apache-2.0 OR MIT`

Verbatim copies of both licenses are included below:

<details><summary>Apache-2.0 Software License</summary>

```
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS
```
</details>

<details><summary>MIT Software License</summary>

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
</details>
31  vendor/github.com/ipfs/boxo/ipns/README.md  (generated, vendored, new file)
@@ -0,0 +1,31 @@
## Usage

To create a new IPNS record:

```go
import (
	"time"

	ipns "github.com/ipfs/boxo/ipns"
	crypto "github.com/libp2p/go-libp2p/core/crypto"
)

// Generate a private key to sign the IPNS record with. Most of the time,
// however, you'll want to retrieve an already-existing key from IPFS using the
// go-ipfs/core/coreapi CoreAPI.KeyAPI() interface.
privateKey, publicKey, err := crypto.GenerateKeyPair(crypto.RSA, 2048)
if err != nil {
	panic(err)
}

// Create an IPNS record that expires in one hour and points to the IPFS address
// /ipfs/Qme1knMqwt1hKZbc1BmQFmnm9f36nyQGwXxPGVpVJ9rMK5
ipnsRecord, err := ipns.Create(privateKey, []byte("/ipfs/Qme1knMqwt1hKZbc1BmQFmnm9f36nyQGwXxPGVpVJ9rMK5"), 0, time.Now().Add(1*time.Hour))
if err != nil {
	panic(err)
}
```

Once you have the record, you'll need to use IPFS to *publish* it.

There are several other major operations you can do with `go-ipns`. Check out the [API docs](https://pkg.go.dev/github.com/ipfs/boxo/ipns) or look at the tests in this repo for examples.
44  vendor/github.com/ipfs/boxo/ipns/errors.go  (generated, vendored, new file)
@@ -0,0 +1,44 @@
package ipns

import (
	"errors"
)

// ErrExpiredRecord should be returned when an ipns record is
// invalid due to being too old
var ErrExpiredRecord = errors.New("expired record")

// ErrUnrecognizedValidity is returned when an IpnsRecord has an
// unknown validity type.
var ErrUnrecognizedValidity = errors.New("unrecognized validity type")

// ErrInvalidPath should be returned when an ipns record path
// is not in a valid format
var ErrInvalidPath = errors.New("record path invalid")

// ErrSignature should be returned when an ipns record fails
// signature verification
var ErrSignature = errors.New("record signature verification failed")

// ErrKeyFormat should be returned when an ipns record key is
// incorrectly formatted (not a peer ID)
var ErrKeyFormat = errors.New("record key could not be parsed into peer ID")

// ErrPublicKeyNotFound should be returned when the public key
// corresponding to the ipns record path cannot be retrieved
// from the peer store
var ErrPublicKeyNotFound = errors.New("public key not found in peer store")

// ErrPublicKeyMismatch should be returned when the public key embedded in the
// record doesn't match the expected public key.
var ErrPublicKeyMismatch = errors.New("public key in record did not match expected pubkey")

// ErrBadRecord should be returned when an ipns record cannot be unmarshalled
var ErrBadRecord = errors.New("record could not be unmarshalled")

// 10 KiB limit defined in https://github.com/ipfs/specs/pull/319
const MaxRecordSize int = 10 << (10 * 1)

// ErrRecordSize should be returned when an ipns record is
// invalid due to being too big
var ErrRecordSize = errors.New("record exceeds allowed size limit")
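The `MaxRecordSize` constant above is written as a bit shift rather than a literal: shifting left by 10 bits multiplies by 1024, so `10 << (10 * 1)` is exactly 10 KiB. A trivial standalone check of that arithmetic (not part of the vendored file):

```go
package main

import "fmt"

func main() {
	// 10 << (10 * 1) shifts 10 left by 10 bits, i.e. 10 * 1024.
	const maxRecordSize = 10 << (10 * 1)
	fmt.Println(maxRecordSize)             // 10240
	fmt.Println(maxRecordSize == 10*1024)  // true
}
```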
419  vendor/github.com/ipfs/boxo/ipns/ipns.go  (generated, vendored, new file)
@@ -0,0 +1,419 @@
package ipns

import (
	"bytes"
	"fmt"
	"sort"
	"time"

	"github.com/multiformats/go-multicodec"
	"github.com/pkg/errors"

	"github.com/ipld/go-ipld-prime"
	_ "github.com/ipld/go-ipld-prime/codec/dagcbor" // used to import the DagCbor encoder/decoder
	ipldcodec "github.com/ipld/go-ipld-prime/multicodec"
	basicnode "github.com/ipld/go-ipld-prime/node/basic"

	"github.com/gogo/protobuf/proto"

	pb "github.com/ipfs/boxo/ipns/pb"

	u "github.com/ipfs/boxo/util"
	ic "github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

const (
	validity     = "Validity"
	validityType = "ValidityType"
	value        = "Value"
	sequence     = "Sequence"
	ttl          = "TTL"
)

// Create creates a new IPNS entry and signs it with the given private key.
//
// This function does not embed the public key. If you want to do that, use
// `EmbedPublicKey`.
func Create(sk ic.PrivKey, val []byte, seq uint64, eol time.Time, ttl time.Duration) (*pb.IpnsEntry, error) {
	entry := new(pb.IpnsEntry)

	entry.Value = val
	typ := pb.IpnsEntry_EOL
	entry.ValidityType = &typ
	entry.Sequence = &seq
	entry.Validity = []byte(u.FormatRFC3339(eol))

	ttlNs := uint64(ttl.Nanoseconds())
	entry.Ttl = proto.Uint64(ttlNs)

	cborData, err := createCborDataForIpnsEntry(entry)
	if err != nil {
		return nil, err
	}
	entry.Data = cborData

	// For now we still create V1 signatures. These are deprecated, and not
	// used during verification anymore (Validate func requires SignatureV2),
	// but setting it here allows legacy nodes (e.g., go-ipfs < v0.9.0) to
	// still resolve IPNS published by modern nodes.
	sig1, err := sk.Sign(ipnsEntryDataForSigV1(entry))
	if err != nil {
		return nil, errors.Wrap(err, "could not compute signature data")
	}
	entry.SignatureV1 = sig1

	sig2Data, err := ipnsEntryDataForSigV2(entry)
	if err != nil {
		return nil, err
	}
	sig2, err := sk.Sign(sig2Data)
	if err != nil {
		return nil, err
	}
	entry.SignatureV2 = sig2

	return entry, nil
}

func createCborDataForIpnsEntry(e *pb.IpnsEntry) ([]byte, error) {
	m := make(map[string]ipld.Node)
	var keys []string
	m[value] = basicnode.NewBytes(e.GetValue())
	keys = append(keys, value)

	m[validity] = basicnode.NewBytes(e.GetValidity())
	keys = append(keys, validity)

	m[validityType] = basicnode.NewInt(int64(e.GetValidityType()))
	keys = append(keys, validityType)

	m[sequence] = basicnode.NewInt(int64(e.GetSequence()))
	keys = append(keys, sequence)

	m[ttl] = basicnode.NewInt(int64(e.GetTtl()))
	keys = append(keys, ttl)

	sort.Sort(cborMapKeyString_RFC7049(keys))

	newNd := basicnode.Prototype__Map{}.NewBuilder()
	ma, err := newNd.BeginMap(int64(len(keys)))
	if err != nil {
		return nil, err
	}

	for _, k := range keys {
		if err := ma.AssembleKey().AssignString(k); err != nil {
			return nil, err
		}
		if err := ma.AssembleValue().AssignNode(m[k]); err != nil {
			return nil, err
		}
	}

	if err := ma.Finish(); err != nil {
		return nil, err
	}

	nd := newNd.Build()

	enc, err := ipldcodec.LookupEncoder(uint64(multicodec.DagCbor))
	if err != nil {
		return nil, err
	}

	buf := new(bytes.Buffer)
	if err := enc(nd, buf); err != nil {
		return nil, err
	}

	return buf.Bytes(), nil
}

// ValidateWithPeerID validates the given IPNS entry against the given peer ID.
func ValidateWithPeerID(pid peer.ID, entry *pb.IpnsEntry) error {
	pk, err := ExtractPublicKey(pid, entry)
	if err != nil {
		return err
	}

	return Validate(pk, entry)
}

// Validate validates the given IPNS entry against the given public key.
func Validate(pk ic.PubKey, entry *pb.IpnsEntry) error {
	// Make sure max size is respected
	if entry.Size() > MaxRecordSize {
		return ErrRecordSize
	}

	// Check the ipns record signature with the public key
	if entry.GetSignatureV2() == nil {
		// always error if no valid signature could be found
		return ErrSignature
	}

	sig2Data, err := ipnsEntryDataForSigV2(entry)
	if err != nil {
		return fmt.Errorf("could not compute signature data: %w", err)
	}
	if ok, err := pk.Verify(sig2Data, entry.GetSignatureV2()); err != nil || !ok {
		return ErrSignature
	}

	// TODO: If we switch from pb.IpnsEntry to a more generic IpnsRecord type then perhaps we should only check
	// this if there is no v1 signature. In the meanwhile this helps avoid some potential rough edges around people
	// checking the entry fields instead of doing CBOR decoding everywhere.
	// See https://github.com/ipfs/boxo/ipns/pull/42 for next steps here
	if err := validateCborDataMatchesPbData(entry); err != nil {
		return err
	}

	eol, err := GetEOL(entry)
	if err != nil {
		return err
	}
	if time.Now().After(eol) {
		return ErrExpiredRecord
	}
	return nil
}

// TODO: Most of this function could probably be replaced with codegen
func validateCborDataMatchesPbData(entry *pb.IpnsEntry) error {
	if len(entry.GetData()) == 0 {
		return fmt.Errorf("record data is missing")
	}

	dec, err := ipldcodec.LookupDecoder(uint64(multicodec.DagCbor))
	if err != nil {
		return err
	}

	ndbuilder := basicnode.Prototype__Map{}.NewBuilder()
	if err := dec(ndbuilder, bytes.NewReader(entry.GetData())); err != nil {
		return err
	}

	fullNd := ndbuilder.Build()
	nd, err := fullNd.LookupByString(value)
	if err != nil {
		return err
	}
	ndBytes, err := nd.AsBytes()
	if err != nil {
		return err
	}
	if !bytes.Equal(entry.GetValue(), ndBytes) {
		return fmt.Errorf("field \"%v\" did not match between protobuf and CBOR", value)
	}

	nd, err = fullNd.LookupByString(validity)
	if err != nil {
		return err
	}
	ndBytes, err = nd.AsBytes()
	if err != nil {
		return err
	}
	if !bytes.Equal(entry.GetValidity(), ndBytes) {
		return fmt.Errorf("field \"%v\" did not match between protobuf and CBOR", validity)
	}

	nd, err = fullNd.LookupByString(validityType)
	if err != nil {
		return err
	}
	ndInt, err := nd.AsInt()
	if err != nil {
		return err
	}
	if int64(entry.GetValidityType()) != ndInt {
		return fmt.Errorf("field \"%v\" did not match between protobuf and CBOR", validityType)
	}

	nd, err = fullNd.LookupByString(sequence)
	if err != nil {
		return err
	}
	ndInt, err = nd.AsInt()
	if err != nil {
		return err
	}

	if entry.GetSequence() != uint64(ndInt) {
		return fmt.Errorf("field \"%v\" did not match between protobuf and CBOR", sequence)
	}

	nd, err = fullNd.LookupByString("TTL")
	if err != nil {
		return err
	}
	ndInt, err = nd.AsInt()
	if err != nil {
		return err
	}
	if entry.GetTtl() != uint64(ndInt) {
		return fmt.Errorf("field \"%v\" did not match between protobuf and CBOR", ttl)
	}

	return nil
}

// GetEOL returns the EOL of this IPNS entry
//
// This function returns ErrUnrecognizedValidity if the validity type of the
// record isn't EOL. Otherwise, it returns an error if it can't parse the EOL.
func GetEOL(entry *pb.IpnsEntry) (time.Time, error) {
	if entry.GetValidityType() != pb.IpnsEntry_EOL {
		return time.Time{}, ErrUnrecognizedValidity
	}
	return u.ParseRFC3339(string(entry.GetValidity()))
}

// EmbedPublicKey embeds the given public key in the given ipns entry. While not
// strictly required, some nodes (e.g., DHT servers) may reject IPNS entries
// that don't embed their public keys as they may not be able to validate them
// efficiently.
func EmbedPublicKey(pk ic.PubKey, entry *pb.IpnsEntry) error {
	// Try extracting the public key from the ID. If we can, *don't* embed
	// it.
	id, err := peer.IDFromPublicKey(pk)
	if err != nil {
		return err
	}
	if _, err := id.ExtractPublicKey(); err != peer.ErrNoPublicKey {
		// Either a *real* error or nil.
		return err
	}

	// We failed to extract the public key from the peer ID, embed it in the
	// record.
	pkBytes, err := ic.MarshalPublicKey(pk)
	if err != nil {
		return err
	}
	entry.PubKey = pkBytes
	return nil
}

// UnmarshalIpnsEntry unmarshals an IPNS entry from a slice of bytes.
func UnmarshalIpnsEntry(data []byte) (*pb.IpnsEntry, error) {
	var entry pb.IpnsEntry
	err := proto.Unmarshal(data, &entry)
	if err != nil {
		return nil, err
	}

	return &entry, nil
}

// ExtractPublicKey extracts a public key matching `pid` from the IPNS record,
// if possible.
//
// This function returns (nil, nil) when no public key can be extracted and
// nothing is malformed.
func ExtractPublicKey(pid peer.ID, entry *pb.IpnsEntry) (ic.PubKey, error) {
	if entry.PubKey != nil {
		pk, err := ic.UnmarshalPublicKey(entry.PubKey)
		if err != nil {
			return nil, fmt.Errorf("unmarshaling pubkey in record: %s", err)
		}

		expPid, err := peer.IDFromPublicKey(pk)
		if err != nil {
			return nil, fmt.Errorf("could not regenerate peerID from pubkey: %s", err)
		}

		if pid != expPid {
			return nil, ErrPublicKeyMismatch
		}
		return pk, nil
	}

	return pid.ExtractPublicKey()
}

// Compare compares two IPNS entries. It returns:
//
//   - -1 if a is older than b
//   - 0 if a and b cannot be ordered (this doesn't mean that they are equal)
//   - +1 if a is newer than b
//
// It returns an error when either a or b are malformed.
//
// NOTE: It *does not* validate the records, the caller is responsible for calling
// `Validate` first.
//
// NOTE: If a and b cannot be ordered by this function, you can determine their
// order by comparing their serialized byte representations (using
// `bytes.Compare`). You must do this if you are implementing a libp2p record
// validator (or you can just use the one provided for you by this package).
func Compare(a, b *pb.IpnsEntry) (int, error) {
	aHasV2Sig := a.GetSignatureV2() != nil
	bHasV2Sig := b.GetSignatureV2() != nil

	// Having a newer signature version is better than an older signature version
	if aHasV2Sig && !bHasV2Sig {
		return 1, nil
	} else if !aHasV2Sig && bHasV2Sig {
		return -1, nil
	}

	as := a.GetSequence()
	bs := b.GetSequence()

	if as > bs {
		return 1, nil
	} else if as < bs {
		return -1, nil
	}

	at, err := u.ParseRFC3339(string(a.GetValidity()))
	if err != nil {
		return 0, err
	}

	bt, err := u.ParseRFC3339(string(b.GetValidity()))
	if err != nil {
		return 0, err
	}

	if at.After(bt) {
		return 1, nil
	} else if bt.After(at) {
		return -1, nil
	}

	return 0, nil
}

func ipnsEntryDataForSigV1(e *pb.IpnsEntry) []byte {
	return bytes.Join([][]byte{
		e.Value,
		e.Validity,
		[]byte(fmt.Sprint(e.GetValidityType())),
	},
		[]byte{})
}

func ipnsEntryDataForSigV2(e *pb.IpnsEntry) ([]byte, error) {
	dataForSig := []byte("ipns-signature:")
	dataForSig = append(dataForSig, e.Data...)

	return dataForSig, nil
}

type cborMapKeyString_RFC7049 []string

func (x cborMapKeyString_RFC7049) Len() int      { return len(x) }
func (x cborMapKeyString_RFC7049) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x cborMapKeyString_RFC7049) Less(i, j int) bool {
	li, lj := len(x[i]), len(x[j])
	if li == lj {
		return x[i] < x[j]
	}
	return li < lj
}

var _ sort.Interface = (cborMapKeyString_RFC7049)(nil)
11  vendor/github.com/ipfs/boxo/ipns/pb/Makefile  (generated, vendored, new file)
@@ -0,0 +1,11 @@
PB = $(wildcard *.proto)
GO = $(PB:.proto=.pb.go)

all: $(GO)

%.pb.go: %.proto
	protoc --proto_path=$(GOPATH)/src:. --gogofast_out=. $<

clean:
	rm -f *.pb.go
	rm -f *.go
992  vendor/github.com/ipfs/boxo/ipns/pb/ipns.pb.go  (generated, vendored, new file)
@@ -0,0 +1,992 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: ipns.proto

package ipns_pb

import (
	fmt "fmt"
	proto "github.com/gogo/protobuf/proto"
	io "io"
	math "math"
	math_bits "math/bits"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

type IpnsEntry_ValidityType int32

const (
	// setting an EOL says "this record is valid until..."
	IpnsEntry_EOL IpnsEntry_ValidityType = 0
)

var IpnsEntry_ValidityType_name = map[int32]string{
	0: "EOL",
}

var IpnsEntry_ValidityType_value = map[string]int32{
	"EOL": 0,
}

func (x IpnsEntry_ValidityType) Enum() *IpnsEntry_ValidityType {
	p := new(IpnsEntry_ValidityType)
	*p = x
	return p
}

func (x IpnsEntry_ValidityType) String() string {
	return proto.EnumName(IpnsEntry_ValidityType_name, int32(x))
}

func (x *IpnsEntry_ValidityType) UnmarshalJSON(data []byte) error {
	value, err := proto.UnmarshalJSONEnum(IpnsEntry_ValidityType_value, data, "IpnsEntry_ValidityType")
	if err != nil {
		return err
	}
	*x = IpnsEntry_ValidityType(value)
	return nil
}

func (IpnsEntry_ValidityType) EnumDescriptor() ([]byte, []int) {
	return fileDescriptor_4d5b16fb32bfe8ea, []int{0, 0}
}

type IpnsEntry struct {
	Value        []byte                  `protobuf:"bytes,1,opt,name=value" json:"value,omitempty"`
	SignatureV1  []byte                  `protobuf:"bytes,2,opt,name=signatureV1" json:"signatureV1,omitempty"`
	ValidityType *IpnsEntry_ValidityType `protobuf:"varint,3,opt,name=validityType,enum=ipns.v1.pb.IpnsEntry_ValidityType" json:"validityType,omitempty"`
	Validity     []byte                  `protobuf:"bytes,4,opt,name=validity" json:"validity,omitempty"`
	Sequence     *uint64                 `protobuf:"varint,5,opt,name=sequence" json:"sequence,omitempty"`
	Ttl          *uint64                 `protobuf:"varint,6,opt,name=ttl" json:"ttl,omitempty"`
	// in order for nodes to properly validate a record upon receipt, they need the public
	// key associated with it. For old RSA keys, its easiest if we just send this as part of
	// the record itself. For newer ed25519 keys, the public key can be embedded in the
	// peerID, making this field unnecessary.
	PubKey               []byte   `protobuf:"bytes,7,opt,name=pubKey" json:"pubKey,omitempty"`
	SignatureV2          []byte   `protobuf:"bytes,8,opt,name=signatureV2" json:"signatureV2,omitempty"`
	Data                 []byte   `protobuf:"bytes,9,opt,name=data" json:"data,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (m *IpnsEntry) Reset()         { *m = IpnsEntry{} }
func (m *IpnsEntry) String() string { return proto.CompactTextString(m) }
func (*IpnsEntry) ProtoMessage()    {}
func (*IpnsEntry) Descriptor() ([]byte, []int) {
	return fileDescriptor_4d5b16fb32bfe8ea, []int{0}
}
func (m *IpnsEntry) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *IpnsEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_IpnsEntry.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *IpnsEntry) XXX_Merge(src proto.Message) {
	xxx_messageInfo_IpnsEntry.Merge(m, src)
}
func (m *IpnsEntry) XXX_Size() int {
	return m.Size()
}
func (m *IpnsEntry) XXX_DiscardUnknown() {
	xxx_messageInfo_IpnsEntry.DiscardUnknown(m)
}

var xxx_messageInfo_IpnsEntry proto.InternalMessageInfo

func (m *IpnsEntry) GetValue() []byte {
	if m != nil {
		return m.Value
	}
	return nil
}

func (m *IpnsEntry) GetSignatureV1() []byte {
	if m != nil {
		return m.SignatureV1
	}
	return nil
}

func (m *IpnsEntry) GetValidityType() IpnsEntry_ValidityType {
	if m != nil && m.ValidityType != nil {
		return *m.ValidityType
	}
	return IpnsEntry_EOL
}

func (m *IpnsEntry) GetValidity() []byte {
	if m != nil {
		return m.Validity
	}
	return nil
}

func (m *IpnsEntry) GetSequence() uint64 {
	if m != nil && m.Sequence != nil {
		return *m.Sequence
	}
	return 0
}

func (m *IpnsEntry) GetTtl() uint64 {
	if m != nil && m.Ttl != nil {
		return *m.Ttl
	}
	return 0
}

func (m *IpnsEntry) GetPubKey() []byte {
	if m != nil {
		return m.PubKey
	}
	return nil
}

func (m *IpnsEntry) GetSignatureV2() []byte {
	if m != nil {
		return m.SignatureV2
	}
	return nil
}

func (m *IpnsEntry) GetData() []byte {
	if m != nil {
		return m.Data
	}
	return nil
}

type IpnsSignatureV2Checker struct {
	PubKey               []byte   `protobuf:"bytes,7,opt,name=pubKey" json:"pubKey,omitempty"`
	SignatureV2          []byte   `protobuf:"bytes,8,opt,name=signatureV2" json:"signatureV2,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}

func (m *IpnsSignatureV2Checker) Reset()         { *m = IpnsSignatureV2Checker{} }
func (m *IpnsSignatureV2Checker) String() string { return proto.CompactTextString(m) }
func (*IpnsSignatureV2Checker) ProtoMessage()    {}
func (*IpnsSignatureV2Checker) Descriptor() ([]byte, []int) {
	return fileDescriptor_4d5b16fb32bfe8ea, []int{1}
}
func (m *IpnsSignatureV2Checker) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *IpnsSignatureV2Checker) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_IpnsSignatureV2Checker.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *IpnsSignatureV2Checker) XXX_Merge(src proto.Message) {
	xxx_messageInfo_IpnsSignatureV2Checker.Merge(m, src)
}
func (m *IpnsSignatureV2Checker) XXX_Size() int {
	return m.Size()
}
func (m *IpnsSignatureV2Checker) XXX_DiscardUnknown() {
	xxx_messageInfo_IpnsSignatureV2Checker.DiscardUnknown(m)
}

var xxx_messageInfo_IpnsSignatureV2Checker proto.InternalMessageInfo

func (m *IpnsSignatureV2Checker) GetPubKey() []byte {
	if m != nil {
		return m.PubKey
	}
	return nil
}

func (m *IpnsSignatureV2Checker) GetSignatureV2() []byte {
	if m != nil {
		return m.SignatureV2
	}
	return nil
}

func init() {
	proto.RegisterEnum("ipns.v1.pb.IpnsEntry_ValidityType", IpnsEntry_ValidityType_name, IpnsEntry_ValidityType_value)
	proto.RegisterType((*IpnsEntry)(nil), "ipns.v1.pb.IpnsEntry")
	proto.RegisterType((*IpnsSignatureV2Checker)(nil), "ipns.v1.pb.IpnsSignatureV2Checker")
}

func init() { proto.RegisterFile("ipns.proto", fileDescriptor_4d5b16fb32bfe8ea) }

var fileDescriptor_4d5b16fb32bfe8ea = []byte{
	// 272 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0xca, 0x2c, 0xc8, 0x2b,
	0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x82, 0xb0, 0xcb, 0x0c, 0xf5, 0x0a, 0x92, 0x94, 0xf6,
	0x30, 0x71, 0x71, 0x7a, 0x16, 0xe4, 0x15, 0xbb, 0xe6, 0x95, 0x14, 0x55, 0x0a, 0x89, 0x70, 0xb1,
	0x96, 0x25, 0xe6, 0x94, 0xa6, 0x4a, 0x30, 0x2a, 0x30, 0x6a, 0xf0, 0x04, 0x41, 0x38, 0x42, 0x0a,
	0x5c, 0xdc, 0xc5, 0x99, 0xe9, 0x79, 0x89, 0x25, 0xa5, 0x45, 0xa9, 0x61, 0x86, 0x12, 0x4c, 0x60,
	0x39, 0x64, 0x21, 0x21, 0x37, 0x2e, 0x9e, 0xb2, 0xc4, 0x9c, 0xcc, 0x94, 0xcc, 0x92, 0xca, 0x90,
	0xca, 0x82, 0x54, 0x09, 0x66, 0x05, 0x46, 0x0d, 0x3e, 0x23, 0x25, 0x3d, 0x84, 0x45, 0x7a, 0x70,
	0x4b, 0xf4, 0xc2, 0x90, 0x54, 0x06, 0xa1, 0xe8, 0x13, 0x92, 0xe2, 0xe2, 0x80, 0xf1, 0x25, 0x58,
	0xc0, 0xd6, 0xc0, 0xf9, 0x20, 0xb9, 0xe2, 0xd4, 0xc2, 0xd2, 0xd4, 0xbc, 0xe4, 0x54, 0x09, 0x56,
	0x05, 0x46, 0x0d, 0x96, 0x20, 0x38, 0x5f, 0x48, 0x80, 0x8b, 0xb9, 0xa4, 0x24, 0x47, 0x82, 0x0d,
	0x2c, 0x0c, 0x62, 0x0a, 0x89, 0x71, 0xb1, 0x15, 0x94, 0x26, 0x79, 0xa7, 0x56, 0x4a, 0xb0, 0x83,
	0xcd, 0x81, 0xf2, 0x50, 0xfd, 0x62, 0x24, 0xc1, 0x81, 0xee, 0x17, 0x23, 0x21, 0x21, 0x2e, 0x96,
	0x94, 0xc4, 0x92, 0x44, 0x09, 0x4e, 0xb0, 0x14, 0x98, 0xad, 0x24, 0xce, 0xc5, 0x83, 0xec, 0x6a,
	0x21, 0x76, 0x2e, 0x66, 0x57, 0x7f, 0x1f, 0x01, 0x06, 0xa5, 0x20, 0x2e, 0x31, 0x90, 0xc7, 0x82,
	0x11, 0xfa, 0x9d, 0x33, 0x52, 0x93, 0xb3, 0x53, 0x8b, 0xc8, 0x77, 0x80, 0x93, 0xe8, 0x89, 0x47,
	0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e, 0x78, 0x24, 0xc7, 0x18, 0xc5, 0x0e, 0x0a, 0xc3, 0xf8,
	0x82, 0x24, 0x40, 0x00, 0x00, 0x00, 0xff, 0xff, 0xbd, 0x45, 0xdd, 0x1a, 0xc2, 0x01, 0x00, 0x00,
}
func (m *IpnsEntry) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *IpnsEntry) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *IpnsEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i -= len(m.XXX_unrecognized)
		copy(dAtA[i:], m.XXX_unrecognized)
	}
	if m.Data != nil {
		i -= len(m.Data)
		copy(dAtA[i:], m.Data)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.Data)))
		i--
		dAtA[i] = 0x4a
	}
	if m.SignatureV2 != nil {
		i -= len(m.SignatureV2)
		copy(dAtA[i:], m.SignatureV2)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.SignatureV2)))
		i--
		dAtA[i] = 0x42
	}
	if m.PubKey != nil {
		i -= len(m.PubKey)
		copy(dAtA[i:], m.PubKey)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.PubKey)))
		i--
		dAtA[i] = 0x3a
	}
	if m.Ttl != nil {
		i = encodeVarintIpns(dAtA, i, uint64(*m.Ttl))
		i--
		dAtA[i] = 0x30
	}
	if m.Sequence != nil {
		i = encodeVarintIpns(dAtA, i, uint64(*m.Sequence))
		i--
		dAtA[i] = 0x28
	}
	if m.Validity != nil {
		i -= len(m.Validity)
		copy(dAtA[i:], m.Validity)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.Validity)))
		i--
		dAtA[i] = 0x22
	}
	if m.ValidityType != nil {
		i = encodeVarintIpns(dAtA, i, uint64(*m.ValidityType))
		i--
		dAtA[i] = 0x18
	}
	if m.SignatureV1 != nil {
		i -= len(m.SignatureV1)
		copy(dAtA[i:], m.SignatureV1)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.SignatureV1)))
		i--
		dAtA[i] = 0x12
	}
	if m.Value != nil {
		i -= len(m.Value)
		copy(dAtA[i:], m.Value)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.Value)))
		i--
		dAtA[i] = 0xa
	}
	return len(dAtA) - i, nil
}
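The generated marshaler above fills the buffer back to front: `MarshalToSizedBuffer` starts at `len(dAtA)` and walks toward index 0, writing each field's payload first and its varint length prefix second, so no per-field sizing pass is needed during the write. A minimal stdlib-only sketch of that reverse-varint pattern (`encodeVarintReverse` and `sizeVarint` are illustrative names, not part of the generated API):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// sizeVarint returns how many bytes v occupies as a protobuf varint.
func sizeVarint(v uint64) int {
	n := 1
	for v >= 1<<7 {
		v >>= 7
		n++
	}
	return n
}

// encodeVarintReverse writes v so that it ends just before offset and
// returns the new offset, mirroring the generated encodeVarintIpns.
func encodeVarintReverse(buf []byte, offset int, v uint64) int {
	offset -= sizeVarint(v)
	base := offset
	for v >= 1<<7 {
		buf[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	buf[offset] = uint8(v)
	return base
}

func main() {
	buf := make([]byte, 16)
	i := len(buf)
	// Back-to-front: the last field in the stream is written first.
	i = encodeVarintReverse(buf, i, 300)
	i = encodeVarintReverse(buf, i, 1)
	out := buf[i:]
	fmt.Println(out) // [1 172 2]

	// Cross-check against the stdlib forward varint encoder.
	fwd := make([]byte, 16)
	n := binary.PutUvarint(fwd, 1)
	n += binary.PutUvarint(fwd[n:], 300)
	fmt.Println(bytes.Equal(out, fwd[:n])) // true
}
```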
func (m *IpnsSignatureV2Checker) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *IpnsSignatureV2Checker) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *IpnsSignatureV2Checker) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i -= len(m.XXX_unrecognized)
		copy(dAtA[i:], m.XXX_unrecognized)
	}
	if m.SignatureV2 != nil {
		i -= len(m.SignatureV2)
		copy(dAtA[i:], m.SignatureV2)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.SignatureV2)))
		i--
		dAtA[i] = 0x42
	}
	if m.PubKey != nil {
		i -= len(m.PubKey)
		copy(dAtA[i:], m.PubKey)
		i = encodeVarintIpns(dAtA, i, uint64(len(m.PubKey)))
		i--
		dAtA[i] = 0x3a
	}
	return len(dAtA) - i, nil
}

func encodeVarintIpns(dAtA []byte, offset int, v uint64) int {
	offset -= sovIpns(v)
	base := offset
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return base
}
func (m *IpnsEntry) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	if m.Value != nil {
		l = len(m.Value)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.SignatureV1 != nil {
		l = len(m.SignatureV1)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.ValidityType != nil {
		n += 1 + sovIpns(uint64(*m.ValidityType))
	}
	if m.Validity != nil {
		l = len(m.Validity)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.Sequence != nil {
		n += 1 + sovIpns(uint64(*m.Sequence))
	}
	if m.Ttl != nil {
		n += 1 + sovIpns(uint64(*m.Ttl))
	}
	if m.PubKey != nil {
		l = len(m.PubKey)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.SignatureV2 != nil {
		l = len(m.SignatureV2)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.Data != nil {
		l = len(m.Data)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

func (m *IpnsSignatureV2Checker) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	if m.PubKey != nil {
		l = len(m.PubKey)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.SignatureV2 != nil {
		l = len(m.SignatureV2)
		n += 1 + l + sovIpns(uint64(l))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

func sovIpns(x uint64) (n int) {
	return (math_bits.Len64(x|1) + 6) / 7
}
func sozIpns(x uint64) (n int) {
	return sovIpns(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
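`sovIpns` above derives a varint's encoded length directly from the value's bit length (the `x|1` keeps zero from producing a zero-length result), and `sozIpns` first applies protobuf's zigzag mapping so small negative sint values stay short. A self-contained sketch of both (function names here are illustrative):

```go
package main

import (
	"fmt"
	"math/bits"
)

// varintSize mirrors the generated sovIpns: the number of 7-bit groups
// needed to hold x's significant bits.
func varintSize(x uint64) int {
	return (bits.Len64(x|1) + 6) / 7
}

// zigzagSize mirrors sozIpns: zigzag-map a signed value so that small
// magnitudes, positive or negative, encode to few bytes.
func zigzagSize(x int64) int {
	return varintSize(uint64((x << 1) ^ (x >> 63)))
}

func main() {
	fmt.Println(varintSize(0), varintSize(127), varintSize(128)) // 1 1 2
	fmt.Println(zigzagSize(-1), zigzagSize(1))                   // 1 1
}
```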
func (m *IpnsEntry) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowIpns
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: IpnsEntry: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: IpnsEntry: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Value = append(m.Value[:0], dAtA[iNdEx:postIndex]...)
			if m.Value == nil {
				m.Value = []byte{}
			}
			iNdEx = postIndex
		case 2:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field SignatureV1", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.SignatureV1 = append(m.SignatureV1[:0], dAtA[iNdEx:postIndex]...)
			if m.SignatureV1 == nil {
				m.SignatureV1 = []byte{}
			}
			iNdEx = postIndex
		case 3:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field ValidityType", wireType)
			}
			var v IpnsEntry_ValidityType
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				v |= IpnsEntry_ValidityType(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			m.ValidityType = &v
		case 4:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Validity", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Validity = append(m.Validity[:0], dAtA[iNdEx:postIndex]...)
			if m.Validity == nil {
				m.Validity = []byte{}
			}
			iNdEx = postIndex
		case 5:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Sequence", wireType)
			}
			var v uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				v |= uint64(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			m.Sequence = &v
		case 6:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Ttl", wireType)
			}
			var v uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				v |= uint64(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			m.Ttl = &v
		case 7:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field PubKey", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.PubKey = append(m.PubKey[:0], dAtA[iNdEx:postIndex]...)
			if m.PubKey == nil {
				m.PubKey = []byte{}
			}
			iNdEx = postIndex
		case 8:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field SignatureV2", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.SignatureV2 = append(m.SignatureV2[:0], dAtA[iNdEx:postIndex]...)
			if m.SignatureV2 == nil {
				m.SignatureV2 = []byte{}
			}
			iNdEx = postIndex
		case 9:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
			if m.Data == nil {
				m.Data = []byte{}
			}
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipIpns(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthIpns
			}
			if (iNdEx + skippy) < 0 {
				return ErrInvalidLengthIpns
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}
func (m *IpnsSignatureV2Checker) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowIpns
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: IpnsSignatureV2Checker: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: IpnsSignatureV2Checker: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 7:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field PubKey", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.PubKey = append(m.PubKey[:0], dAtA[iNdEx:postIndex]...)
			if m.PubKey == nil {
				m.PubKey = []byte{}
			}
			iNdEx = postIndex
		case 8:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field SignatureV2", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthIpns
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthIpns
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.SignatureV2 = append(m.SignatureV2[:0], dAtA[iNdEx:postIndex]...)
			if m.SignatureV2 == nil {
				m.SignatureV2 = []byte{}
			}
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipIpns(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthIpns
			}
			if (iNdEx + skippy) < 0 {
				return ErrInvalidLengthIpns
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}
func skipIpns(dAtA []byte) (n int, err error) {
	l := len(dAtA)
	iNdEx := 0
	depth := 0
	for iNdEx < l {
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return 0, ErrIntOverflowIpns
			}
			if iNdEx >= l {
				return 0, io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		wireType := int(wire & 0x7)
		switch wireType {
		case 0:
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				iNdEx++
				if dAtA[iNdEx-1] < 0x80 {
					break
				}
			}
		case 1:
			iNdEx += 8
		case 2:
			var length int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowIpns
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				length |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if length < 0 {
				return 0, ErrInvalidLengthIpns
			}
			iNdEx += length
		case 3:
			depth++
		case 4:
			if depth == 0 {
				return 0, ErrUnexpectedEndOfGroupIpns
			}
			depth--
		case 5:
			iNdEx += 4
		default:
			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
		}
		if iNdEx < 0 {
			return 0, ErrInvalidLengthIpns
		}
		if depth == 0 {
			return iNdEx, nil
		}
	}
	return 0, io.ErrUnexpectedEOF
}

var (
	ErrInvalidLengthIpns        = fmt.Errorf("proto: negative length found during unmarshaling")
	ErrIntOverflowIpns          = fmt.Errorf("proto: integer overflow")
	ErrUnexpectedEndOfGroupIpns = fmt.Errorf("proto: unexpected end of group")
)
36 vendor/github.com/ipfs/boxo/ipns/pb/ipns.proto generated vendored Normal file
@@ -0,0 +1,36 @@
syntax = "proto2";

package ipns.v1.pb;

option go_package = "ipns_pb";

message IpnsEntry {
	enum ValidityType {
		// setting an EOL says "this record is valid until..."
		EOL = 0;
	}
	optional bytes value = 1;
	optional bytes signatureV1 = 2;

	optional ValidityType validityType = 3;
	optional bytes validity = 4;

	optional uint64 sequence = 5;

	optional uint64 ttl = 6;

	// in order for nodes to properly validate a record upon receipt, they need the public
	// key associated with it. For old RSA keys, its easiest if we just send this as part of
	// the record itself. For newer ed25519 keys, the public key can be embedded in the
	// peerID, making this field unnecessary.
	optional bytes pubKey = 7;

	optional bytes signatureV2 = 8;

	optional bytes data = 9;
}

message IpnsSignatureV2Checker {
	optional bytes pubKey = 7;
	optional bytes signatureV2 = 8;
}
126 vendor/github.com/ipfs/boxo/ipns/record.go generated vendored Normal file
@@ -0,0 +1,126 @@
|
||||
package ipns
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
|
||||
pb "github.com/ipfs/boxo/ipns/pb"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
logging "github.com/ipfs/go-log/v2"
|
||||
record "github.com/libp2p/go-libp2p-record"
|
||||
ic "github.com/libp2p/go-libp2p/core/crypto"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
pstore "github.com/libp2p/go-libp2p/core/peerstore"
|
||||
)
|
||||
|
||||
var log = logging.Logger("ipns")
|
||||
|
||||
var _ record.Validator = Validator{}
|
||||
|
||||
// RecordKey returns the libp2p record key for a given peer ID.
|
||||
func RecordKey(pid peer.ID) string {
|
||||
return "/ipns/" + string(pid)
|
||||
}
|
||||
|
||||
// Validator is an IPNS record validator that satisfies the libp2p record
|
||||
// validator interface.
|
||||
type Validator struct {
|
||||
// KeyBook, if non-nil, will be used to lookup keys for validating IPNS
|
||||
// records.
|
||||
KeyBook pstore.KeyBook
|
||||
}
|
||||
|
||||
// Validate validates an IPNS record.
|
||||
func (v Validator) Validate(key string, value []byte) error {
|
||||
ns, pidString, err := record.SplitKey(key)
|
||||
if err != nil || ns != "ipns" {
|
||||
return ErrInvalidPath
|
||||
}
|
||||
|
||||
// Parse the value into an IpnsEntry
|
||||
entry := new(pb.IpnsEntry)
|
||||
err = proto.Unmarshal(value, entry)
|
||||
if err != nil {
|
||||
return ErrBadRecord
|
||||
}
|
||||
|
||||
// Get the public key defined by the ipns path
|
||||
pid, err := peer.IDFromBytes([]byte(pidString))
|
||||
if err != nil {
|
||||
log.Debugf("failed to parse ipns record key %s into peer ID", pidString)
|
||||
return ErrKeyFormat
|
||||
}
|
||||
|
||||
pubk, err := v.getPublicKey(pid, entry)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return Validate(pubk, entry)
|
||||
}
|
||||
|
||||
func (v Validator) getPublicKey(pid peer.ID, entry *pb.IpnsEntry) (ic.PubKey, error) {
|
||||
switch pk, err := ExtractPublicKey(pid, entry); err {
|
||||
case peer.ErrNoPublicKey:
|
||||
case nil:
|
||||
return pk, nil
|
||||
default:
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if v.KeyBook == nil {
|
||||
log.Debugf("public key with hash %s not found in IPNS record and no peer store provided", pid)
|
||||
return nil, ErrPublicKeyNotFound
|
||||
}
|
||||
|
||||
pubk := v.KeyBook.PubKey(pid)
|
||||
if pubk == nil {
|
||||
log.Debugf("public key with hash %s not found in peer store", pid)
|
||||
return nil, ErrPublicKeyNotFound
|
||||
}
|
||||
return pubk, nil
|
||||
}
|
||||
|
||||
// Select selects the best record by checking which has the highest sequence
|
||||
// number and latest EOL.
|
||||
//
|
||||
// This function returns an error if any of the records fail to parse. Validate
|
||||
// your records first!
|
||||
func (v Validator) Select(k string, vals [][]byte) (int, error) {
|
||||
var recs []*pb.IpnsEntry
|
||||
for _, v := range vals {
|
||||
e := new(pb.IpnsEntry)
|
||||
if err := proto.Unmarshal(v, e); err != nil {
|
||||
return -1, err
|
||||
}
|
||||
recs = append(recs, e)
|
||||
}
|
||||
|
||||
return selectRecord(recs, vals)
|
||||
}
|
||||
|
||||
func selectRecord(recs []*pb.IpnsEntry, vals [][]byte) (int, error) {
|
||||
switch len(recs) {
|
||||
case 0:
|
||||
return -1, errors.New("no usable records in given set")
|
||||
case 1:
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
var i int
|
||||
for j := 1; j < len(recs); j++ {
|
||||
cmp, err := Compare(recs[i], recs[j])
|
||||
if err != nil {
|
||||
return -1, err
|
||||
}
|
||||
if cmp == 0 {
|
||||
cmp = bytes.Compare(vals[i], vals[j])
|
||||
}
|
||||
if cmp < 0 {
|
||||
i = j
|
||||
}
|
||||
}
|
||||
|
||||
return i, nil
|
||||
}
|
||||
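The record-selection rule above (highest sequence number wins, then latest EOL, then a raw byte comparison so every peer picks the same winner) can be sketched with stdlib types only. The `record` struct, its field names, and the string-encoded EOL below are illustrative assumptions, not the real protobuf types:

```go
package main

import (
	"bytes"
	"fmt"
)

// record is a stand-in for the parsed IPNS entry fields that matter
// for selection.
type record struct {
	Sequence uint64
	EOL      string // RFC3339 expiry in UTC; lexicographic order matches time order
	Raw      []byte
}

// better reports whether a should be preferred over b, mirroring the
// compare-then-tie-break order used by selectRecord.
func better(a, b record) bool {
	if a.Sequence != b.Sequence {
		return a.Sequence > b.Sequence
	}
	if a.EOL != b.EOL {
		return a.EOL > b.EOL
	}
	return bytes.Compare(a.Raw, b.Raw) > 0
}

// selectBest returns the index of the winning record, like Select does.
func selectBest(recs []record) int {
	best := 0
	for j := 1; j < len(recs); j++ {
		if better(recs[j], recs[best]) {
			best = j
		}
	}
	return best
}

func main() {
	recs := []record{
		{Sequence: 3, EOL: "2030-01-01T00:00:00Z", Raw: []byte("a")},
		{Sequence: 5, EOL: "2029-01-01T00:00:00Z", Raw: []byte("b")},
		{Sequence: 5, EOL: "2031-01-01T00:00:00Z", Raw: []byte("c")},
	}
	// Records 1 and 2 share the max sequence; record 2 has the later EOL.
	fmt.Println(selectBest(recs)) // 2
}
```

The deterministic byte-level tie-break matters in a distributed setting: without it, two peers holding records with equal sequence and EOL could disagree on the winner.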
1
vendor/github.com/ipfs/boxo/util/.gitignore
generated
vendored
Normal file
@@ -0,0 +1 @@
*.swp
12
vendor/github.com/ipfs/boxo/util/file.go
generated
vendored
Normal file
@@ -0,0 +1,12 @@
package util

import "os"

// FileExists checks whether the file with the given path exists.
func FileExists(filename string) bool {
	fi, err := os.Lstat(filename)
	if fi != nil || (err != nil && !os.IsNotExist(err)) {
		return true
	}
	return false
}
22
vendor/github.com/ipfs/boxo/util/time.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package util

import "time"

// TimeFormatIpfs is the format ipfs uses to represent time in string form.
var TimeFormatIpfs = time.RFC3339Nano

// ParseRFC3339 parses an RFC3339Nano-formatted time stamp and
// returns the UTC time.
func ParseRFC3339(s string) (time.Time, error) {
	t, err := time.Parse(TimeFormatIpfs, s)
	if err != nil {
		return time.Time{}, err
	}
	return t.UTC(), nil
}

// FormatRFC3339 returns the string representation of the
// UTC value of the given time in RFC3339Nano format.
func FormatRFC3339(t time.Time) string {
	return t.UTC().Format(TimeFormatIpfs)
}
158
vendor/github.com/ipfs/boxo/util/util.go
generated
vendored
Normal file
@@ -0,0 +1,158 @@
// Package util implements various utility functions used within ipfs
// that do not currently have a better place to live.
package util

import (
	"errors"
	"io"
	"math/rand"
	"os"
	"path/filepath"
	"runtime/debug"
	"strings"
	"time"

	b58 "github.com/mr-tron/base58/base58"
	mh "github.com/multiformats/go-multihash"
)

// DefaultIpfsHash is the current default hash function used by IPFS.
const DefaultIpfsHash = mh.SHA2_256

// Debug is a global flag for debugging.
var Debug bool

// ErrNotImplemented signifies a function has not been implemented yet.
var ErrNotImplemented = errors.New("error: not implemented yet")

// ErrTimeout implies that a timeout has been triggered
var ErrTimeout = errors.New("error: call timed out")

// ErrSearchIncomplete implies that a search type operation didn't
// find the expected node, but did find 'a' node.
var ErrSearchIncomplete = errors.New("error: search incomplete")

// ErrCast is returned when a cast fails AND the program should not panic.
func ErrCast() error {
	debug.PrintStack()
	return errCast
}

var errCast = errors.New("cast error")

// ExpandPathnames takes a set of paths and turns them into absolute paths
func ExpandPathnames(paths []string) ([]string, error) {
	var out []string
	for _, p := range paths {
		abspath, err := filepath.Abs(p)
		if err != nil {
			return nil, err
		}
		out = append(out, abspath)
	}
	return out, nil
}

type randGen struct {
	rand.Rand
}

// NewTimeSeededRand returns a random bytes reader
// which has been initialized with the current time.
func NewTimeSeededRand() io.Reader {
	src := rand.NewSource(time.Now().UnixNano())
	return &randGen{
		Rand: *rand.New(src),
	}
}

// NewSeededRand returns a random bytes reader
// initialized with the given seed.
func NewSeededRand(seed int64) io.Reader {
	src := rand.NewSource(seed)
	return &randGen{
		Rand: *rand.New(src),
	}
}

func (r *randGen) Read(p []byte) (n int, err error) {
	for i := 0; i < len(p); i++ {
		p[i] = byte(r.Rand.Intn(255))
	}
	return len(p), nil
}

// GetenvBool is the way to check an env var as a boolean
func GetenvBool(name string) bool {
	v := strings.ToLower(os.Getenv(name))
	return v == "true" || v == "t" || v == "1"
}

// MultiErr is a util to return multiple errors
type MultiErr []error

func (m MultiErr) Error() string {
	if len(m) == 0 {
		return "no errors"
	}

	s := "Multiple errors: "
	for i, e := range m {
		if i != 0 {
			s += ", "
		}
		s += e.Error()
	}
	return s
}

// Partition splits a subject into 3 parts: prefix, separator, suffix.
// The first occurrence of the separator will be matched.
// ie. Partition("Ready, steady, go!", ", ") -> ["Ready", ", ", "steady, go!"]
func Partition(subject string, sep string) (string, string, string) {
	if i := strings.Index(subject, sep); i != -1 {
		return subject[:i], subject[i : i+len(sep)], subject[i+len(sep):]
	}
	return subject, "", ""
}

// RPartition splits a subject into 3 parts: prefix, separator, suffix.
// The last occurrence of the separator will be matched.
// ie. RPartition("Ready, steady, go!", ", ") -> ["Ready, steady", ", ", "go!"]
func RPartition(subject string, sep string) (string, string, string) {
	if i := strings.LastIndex(subject, sep); i != -1 {
		return subject[:i], subject[i : i+len(sep)], subject[i+len(sep):]
	}
	return subject, "", ""
}

// Hash is the global IPFS hash function. uses multihash SHA2_256, 256 bits
func Hash(data []byte) mh.Multihash {
	h, err := mh.Sum(data, DefaultIpfsHash, -1)
	if err != nil {
		// this error can be safely ignored (panic) because multihash only fails
		// from the selection of hash function. If the fn + length are valid, it
		// won't error.
		panic("multihash failed to hash using SHA2_256.")
	}
	return h
}

// IsValidHash checks whether a given hash is valid (b58 decodable, len > 0)
func IsValidHash(s string) bool {
	out, err := b58.Decode(s)
	if err != nil {
		return false
	}
	_, err = mh.Cast(out)
	return err == nil
}

// XOR takes two byte slices, XORs them together, returns the resulting slice.
func XOR(a, b []byte) []byte {
	c := make([]byte, len(a))
	for i := 0; i < len(a); i++ {
		c[i] = a[i] ^ b[i]
	}
	return c
}
1
vendor/github.com/ipfs/go-cid/.gitignore
generated
vendored
Normal file
@@ -0,0 +1 @@
cid-fuzz.zip
21
vendor/github.com/ipfs/go-cid/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2016 Protocol Labs, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
5
vendor/github.com/ipfs/go-cid/Makefile
generated
vendored
Normal file
@@ -0,0 +1,5 @@
all: deps

deps:
	go get github.com/mattn/goveralls
	go get golang.org/x/tools/cmd/cover
115
vendor/github.com/ipfs/go-cid/README.md
generated
vendored
Normal file
@@ -0,0 +1,115 @@
go-cid
==================

[](http://ipn.io)
[](http://ipfs.io/)
[](http://webchat.freenode.net/?channels=%23ipfs)
[](https://github.com/RichardLitt/standard-readme)
[](https://godoc.org/github.com/ipfs/go-cid)
[](https://coveralls.io/github/ipfs/go-cid?branch=master)
[](https://travis-ci.org/ipfs/go-cid)

> A package to handle content IDs in Go.

This is an implementation in Go of the [CID spec](https://github.com/ipld/cid).
It is used in `go-ipfs` and related packages to refer to a typed hunk of data.

## Lead Maintainer

[Eric Myhre](https://github.com/warpfork)

## Table of Contents

- [Install](#install)
- [Usage](#usage)
- [API](#api)
- [Contribute](#contribute)
- [License](#license)

## Install

`go-cid` is a standard Go module which can be installed with:

```sh
go get github.com/ipfs/go-cid
```

## Usage

### Running tests

Run tests with `go test` from the directory root

```sh
go test
```

### Examples

#### Parsing string input from users

```go
// Create a cid from a marshaled string
c, err := cid.Decode("bafzbeigai3eoy2ccc7ybwjfz5r3rdxqrinwi4rwytly24tdbh6yk7zslrm")
if err != nil {...}

fmt.Println("Got CID: ", c)
```

#### Creating a CID from scratch

```go
import (
	cid "github.com/ipfs/go-cid"
	mc "github.com/multiformats/go-multicodec"
	mh "github.com/multiformats/go-multihash"
)

// Create a cid manually by specifying the 'prefix' parameters
pref := cid.Prefix{
	Version:  1,
	Codec:    uint64(mc.Raw),
	MhType:   mh.SHA2_256,
	MhLength: -1, // default length
}

// And then feed it some data
c, err := pref.Sum([]byte("Hello World!"))
if err != nil {...}

fmt.Println("Created CID: ", c)
```

#### Check if two CIDs match

```go
// To test if two cid's are equivalent, be sure to use the 'Equals' method:
if c1.Equals(c2) {
	fmt.Println("These two refer to the same exact data!")
}
```

#### Check if some data matches a given CID

```go
// To check if some data matches a given cid,
// Get your CIDs prefix, and use that to sum the data in question:
other, err := c.Prefix().Sum(mydata)
if err != nil {...}

if !c.Equals(other) {
	fmt.Println("This data is different.")
}
```

## Contribute

PRs are welcome!

Small note: If editing the Readme, please conform to the [standard-readme](https://github.com/RichardLitt/standard-readme) specification.

## License

MIT © Jeromy Johnson
74
vendor/github.com/ipfs/go-cid/builder.go
generated
vendored
Normal file
@@ -0,0 +1,74 @@
package cid

import (
	mh "github.com/multiformats/go-multihash"
)

type Builder interface {
	Sum(data []byte) (Cid, error)
	GetCodec() uint64
	WithCodec(uint64) Builder
}

type V0Builder struct{}

type V1Builder struct {
	Codec    uint64
	MhType   uint64
	MhLength int // MhLength <= 0 means the default length
}

func (p Prefix) GetCodec() uint64 {
	return p.Codec
}

func (p Prefix) WithCodec(c uint64) Builder {
	if c == p.Codec {
		return p
	}
	p.Codec = c
	if c != DagProtobuf {
		p.Version = 1
	}
	return p
}

func (p V0Builder) Sum(data []byte) (Cid, error) {
	hash, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		return Undef, err
	}
	return Cid{string(hash)}, nil
}

func (p V0Builder) GetCodec() uint64 {
	return DagProtobuf
}

func (p V0Builder) WithCodec(c uint64) Builder {
	if c == DagProtobuf {
		return p
	}
	return V1Builder{Codec: c, MhType: mh.SHA2_256}
}

func (p V1Builder) Sum(data []byte) (Cid, error) {
	mhLen := p.MhLength
	if mhLen <= 0 {
		mhLen = -1
	}
	hash, err := mh.Sum(data, p.MhType, mhLen)
	if err != nil {
		return Undef, err
	}
	return NewCidV1(p.Codec, hash), nil
}

func (p V1Builder) GetCodec() uint64 {
	return p.Codec
}

func (p V1Builder) WithCodec(c uint64) Builder {
	p.Codec = c
	return p
}
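A detail worth noting in `builder.go`: all the `WithCodec` methods use value receivers, so they mutate a copy and return it, leaving the original builder untouched. A stdlib-only sketch of that immutable-builder pattern, using plain SHA-256 in place of multihash (the `builder` type and codec numbers here are illustrative):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// builder is a stripped-down analogue of V1Builder: a small value type
// whose With-style methods return a modified copy.
type builder struct {
	codec uint64
}

// WithCodec mutates the receiver copy, not the caller's value, so existing
// configurations can be shared and derived from safely.
func (b builder) WithCodec(c uint64) builder {
	b.codec = c
	return b
}

// Sum hashes data and reports which codec the builder was configured with.
func (b builder) Sum(data []byte) (uint64, [32]byte) {
	return b.codec, sha256.Sum256(data)
}

func main() {
	base := builder{codec: 0x55}      // raw, per the deprecated const table
	dagpb := base.WithCodec(0x70)     // derived copy

	codec, _ := base.Sum([]byte("hi"))
	fmt.Printf("base codec: %#x\n", codec) // base is unchanged: 0x55
	codec, _ = dagpb.Sum([]byte("hi"))
	fmt.Printf("derived codec: %#x\n", codec) // 0x70
}
```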
817
vendor/github.com/ipfs/go-cid/cid.go
generated
vendored
Normal file
@@ -0,0 +1,817 @@
// Package cid implements the Content-IDentifiers specification
// (https://github.com/ipld/cid) in Go. CIDs are
// self-describing content-addressed identifiers useful for
// distributed information systems. CIDs are used in the IPFS
// (https://ipfs.io) project ecosystem.
//
// CIDs have two major versions. A CIDv0 corresponds to a multihash of type
// DagProtobuf, is deprecated and exists for compatibility reasons. Usually,
// CIDv1 should be used.
//
// A CIDv1 has four parts:
//
//	<cidv1> ::= <multibase-prefix><cid-version><multicodec-packed-content-type><multihash-content-address>
//
// As shown above, the CID implementation relies heavily on Multiformats,
// particularly Multibase
// (https://github.com/multiformats/go-multibase), Multicodec
// (https://github.com/multiformats/multicodec) and Multihash
// implementations (https://github.com/multiformats/go-multihash).
package cid

import (
	"bytes"
	"encoding"
	"encoding/binary"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"

	mbase "github.com/multiformats/go-multibase"
	mh "github.com/multiformats/go-multihash"
	varint "github.com/multiformats/go-varint"
)

// UnsupportedVersionString just holds an error message
const UnsupportedVersionString = "<unsupported cid version>"

// ErrInvalidCid is an error that indicates that a CID is invalid.
type ErrInvalidCid struct {
	Err error
}

func (e ErrInvalidCid) Error() string {
	return fmt.Sprintf("invalid cid: %s", e.Err)
}

func (e ErrInvalidCid) Unwrap() error {
	return e.Err
}

func (e ErrInvalidCid) Is(err error) bool {
	switch err.(type) {
	case ErrInvalidCid, *ErrInvalidCid:
		return true
	default:
		return false
	}
}

var (
	// ErrCidTooShort means that the cid passed to decode was not long
	// enough to be a valid Cid
	ErrCidTooShort = ErrInvalidCid{errors.New("cid too short")}

	// ErrInvalidEncoding means that selected encoding is not supported
	// by this Cid version
	ErrInvalidEncoding = errors.New("invalid base encoding")
)

// Consts below are DEPRECATED and left only for legacy reasons:
// <https://github.com/ipfs/go-cid/pull/137>
// Modern code should use consts from go-multicodec instead:
// <https://github.com/multiformats/go-multicodec>
const (
	// common ones
	Raw         = 0x55
	DagProtobuf = 0x70   // https://ipld.io/docs/codecs/known/dag-pb/
	DagCBOR     = 0x71   // https://ipld.io/docs/codecs/known/dag-cbor/
	DagJSON     = 0x0129 // https://ipld.io/docs/codecs/known/dag-json/
	Libp2pKey   = 0x72   // https://github.com/libp2p/specs/blob/master/peer-ids/peer-ids.md#peer-ids

	// other
	GitRaw                = 0x78
	DagJOSE               = 0x85 // https://ipld.io/specs/codecs/dag-jose/spec/
	EthBlock              = 0x90
	EthBlockList          = 0x91
	EthTxTrie             = 0x92
	EthTx                 = 0x93
	EthTxReceiptTrie      = 0x94
	EthTxReceipt          = 0x95
	EthStateTrie          = 0x96
	EthAccountSnapshot    = 0x97
	EthStorageTrie        = 0x98
	BitcoinBlock          = 0xb0
	BitcoinTx             = 0xb1
	ZcashBlock            = 0xc0
	ZcashTx               = 0xc1
	DecredBlock           = 0xe0
	DecredTx              = 0xe1
	DashBlock             = 0xf0
	DashTx                = 0xf1
	FilCommitmentUnsealed = 0xf101
	FilCommitmentSealed   = 0xf102
)

// tryNewCidV0 tries to convert a multihash into a CIDv0 CID and returns an
// error on failure.
func tryNewCidV0(mhash mh.Multihash) (Cid, error) {
	// Need to make sure hash is valid for CidV0 otherwise we will
	// incorrectly detect it as CidV1 in the Version() method
	dec, err := mh.Decode(mhash)
	if err != nil {
		return Undef, ErrInvalidCid{err}
	}
	if dec.Code != mh.SHA2_256 || dec.Length != 32 {
		return Undef, ErrInvalidCid{fmt.Errorf("invalid hash for cidv0 %d-%d", dec.Code, dec.Length)}
	}
	return Cid{string(mhash)}, nil
}

// NewCidV0 returns a Cid-wrapped multihash.
// They exist to allow IPFS to work with Cids while keeping
// compatibility with the plain-multihash format used in IPFS.
// NewCidV1 should be used preferentially.
//
// Panics if the multihash isn't sha2-256.
func NewCidV0(mhash mh.Multihash) Cid {
	c, err := tryNewCidV0(mhash)
	if err != nil {
		panic(err)
	}
	return c
}

// NewCidV1 returns a new Cid using the given multicodec-packed
// content type.
//
// Panics if the multihash is invalid.
func NewCidV1(codecType uint64, mhash mh.Multihash) Cid {
	hashlen := len(mhash)

	// Two 8 bytes (max) numbers plus hash.
	// We use strings.Builder to only allocate once.
	var b strings.Builder
	b.Grow(1 + varint.UvarintSize(codecType) + hashlen)

	b.WriteByte(1)

	var buf [binary.MaxVarintLen64]byte
	n := varint.PutUvarint(buf[:], codecType)
	b.Write(buf[:n])

	cn, _ := b.Write(mhash)
	if cn != hashlen {
		panic("copy hash length is inconsistent")
	}

	return Cid{b.String()}
}

var (
	_ encoding.BinaryMarshaler   = Cid{}
	_ encoding.BinaryUnmarshaler = (*Cid)(nil)
	_ encoding.TextMarshaler     = Cid{}
	_ encoding.TextUnmarshaler   = (*Cid)(nil)
)

// Cid represents a self-describing content addressed
// identifier. It is formed by a Version, a Codec (which indicates
// a multicodec-packed content type) and a Multihash.
type Cid struct{ str string }

// Undef can be used to represent a nil or undefined Cid, using Cid{}
// directly is also acceptable.
var Undef = Cid{}

// Defined returns true if a Cid is defined
// Calling any other methods on an undefined Cid will result in
// undefined behavior.
func (c Cid) Defined() bool {
	return c.str != ""
}

// Parse is a short-hand function to perform Decode, Cast etc... on
// a generic interface{} type.
func Parse(v interface{}) (Cid, error) {
	switch v2 := v.(type) {
	case string:
		if strings.Contains(v2, "/ipfs/") {
			return Decode(strings.Split(v2, "/ipfs/")[1])
		}
		return Decode(v2)
	case []byte:
		return Cast(v2)
	case mh.Multihash:
		return tryNewCidV0(v2)
	case Cid:
		return v2, nil
	default:
		return Undef, ErrInvalidCid{fmt.Errorf("can't parse %+v as Cid", v2)}
	}
}

// MustParse calls Parse but will panic on error.
func MustParse(v interface{}) Cid {
	c, err := Parse(v)
	if err != nil {
		panic(err)
	}
	return c
}

// Decode parses a Cid-encoded string and returns a Cid object.
// For CidV1, a Cid-encoded string is primarily a multibase string:
//
//	<multibase-type-code><base-encoded-string>
//
// The base-encoded string represents a:
//
//	<version><codec-type><multihash>
//
// Decode will also detect and parse CidV0 strings. Strings
// starting with "Qm" are considered CidV0 and treated directly
// as B58-encoded multihashes.
func Decode(v string) (Cid, error) {
	if len(v) < 2 {
		return Undef, ErrCidTooShort
	}

	if len(v) == 46 && v[:2] == "Qm" {
		hash, err := mh.FromB58String(v)
		if err != nil {
			return Undef, ErrInvalidCid{err}
		}

		return tryNewCidV0(hash)
	}

	_, data, err := mbase.Decode(v)
	if err != nil {
		return Undef, ErrInvalidCid{err}
	}

	return Cast(data)
}

// ExtractEncoding extracts the encoding from a Cid. If Decode on the same
// string did not return an error, neither will this function.
func ExtractEncoding(v string) (mbase.Encoding, error) {
	if len(v) < 2 {
		return -1, ErrCidTooShort
	}

	if len(v) == 46 && v[:2] == "Qm" {
		return mbase.Base58BTC, nil
	}

	encoding := mbase.Encoding(v[0])

	// check encoding is valid
	_, err := mbase.NewEncoder(encoding)
	if err != nil {
		return -1, ErrInvalidCid{err}
	}

	return encoding, nil
}

// Cast takes a Cid data slice, parses it and returns a Cid.
// For CidV1, the data buffer is in the form:
//
//	<version><codec-type><multihash>
//
// CidV0 is also supported. In particular, data buffers of length
// 34 that start with bytes [18,32...] are considered binary
// multihashes.
//
// Please use Decode when parsing a regular Cid string, as Cast does not
// expect multibase-encoded data. Cast accepts the output of Cid.Bytes().
func Cast(data []byte) (Cid, error) {
	nr, c, err := CidFromBytes(data)
	if err != nil {
		return Undef, ErrInvalidCid{err}
	}

	if nr != len(data) {
		return Undef, ErrInvalidCid{fmt.Errorf("trailing bytes in data buffer passed to cid Cast")}
	}

	return c, nil
}

// UnmarshalBinary is equivalent to Cast(). It implements the
// encoding.BinaryUnmarshaler interface.
func (c *Cid) UnmarshalBinary(data []byte) error {
	casted, err := Cast(data)
	if err != nil {
		return err
	}
	c.str = casted.str
	return nil
}

// UnmarshalText is equivalent to Decode(). It implements the
// encoding.TextUnmarshaler interface.
func (c *Cid) UnmarshalText(text []byte) error {
	decodedCid, err := Decode(string(text))
	if err != nil {
		return err
	}
	c.str = decodedCid.str
	return nil
}

// Version returns the Cid version.
func (c Cid) Version() uint64 {
	if len(c.str) == 34 && c.str[0] == 18 && c.str[1] == 32 {
		return 0
	}
	return 1
}

// Type returns the multicodec-packed content type of a Cid.
func (c Cid) Type() uint64 {
	if c.Version() == 0 {
		return DagProtobuf
	}
	_, n, _ := uvarint(c.str)
	codec, _, _ := uvarint(c.str[n:])
	return codec
}

// String returns the default string representation of a
// Cid. Currently, Base32 is used for CIDV1 as the encoding for the
// multibase string, Base58 is used for CIDV0.
func (c Cid) String() string {
	switch c.Version() {
	case 0:
		return c.Hash().B58String()
	case 1:
		mbstr, err := mbase.Encode(mbase.Base32, c.Bytes())
		if err != nil {
			panic("should not error with hardcoded mbase: " + err.Error())
		}

		return mbstr
	default:
		panic("not possible to reach this point")
	}
}

// StringOfBase returns the string representation of a Cid
// encoded in the selected base.
func (c Cid) StringOfBase(base mbase.Encoding) (string, error) {
	switch c.Version() {
	case 0:
		if base != mbase.Base58BTC {
			return "", ErrInvalidEncoding
		}
		return c.Hash().B58String(), nil
	case 1:
		return mbase.Encode(base, c.Bytes())
	default:
		panic("not possible to reach this point")
	}
}

// Encode return the string representation of a Cid in a given base
// when applicable. Version 0 Cid's are always in Base58 as they do
// not take a multibase prefix.
func (c Cid) Encode(base mbase.Encoder) string {
	switch c.Version() {
	case 0:
		return c.Hash().B58String()
	case 1:
		return base.Encode(c.Bytes())
	default:
		panic("not possible to reach this point")
	}
}

// Hash returns the multihash contained by a Cid.
func (c Cid) Hash() mh.Multihash {
	bytes := c.Bytes()

	if c.Version() == 0 {
		return mh.Multihash(bytes)
	}

	// skip version length
	_, n1, _ := varint.FromUvarint(bytes)
	// skip codec length
	_, n2, _ := varint.FromUvarint(bytes[n1:])

	return mh.Multihash(bytes[n1+n2:])
}

// Bytes returns the byte representation of a Cid.
// The output of bytes can be parsed back into a Cid
// with Cast().
//
// If c.Defined() == false, it returns a nil slice and may not
// be parsable with Cast().
func (c Cid) Bytes() []byte {
	if !c.Defined() {
		return nil
	}
	return []byte(c.str)
}

// ByteLen returns the length of the CID in bytes.
// It's equivalent to `len(c.Bytes())`, but works without an allocation,
// and should therefore be preferred.
//
// (See also the WriteTo method for other important operations that work without allocation.)
func (c Cid) ByteLen() int {
	return len(c.str)
}

// WriteBytes writes the CID bytes to the given writer.
// This method works without incurring any allocation.
//
// (See also the ByteLen method for other important operations that work without allocation.)
func (c Cid) WriteBytes(w io.Writer) (int, error) {
	n, err := io.WriteString(w, c.str)
	if err != nil {
		return n, err
	}
	if n != len(c.str) {
		return n, fmt.Errorf("failed to write entire cid string")
	}
	return n, nil
}

// MarshalBinary is equivalent to Bytes(). It implements the
// encoding.BinaryMarshaler interface.
func (c Cid) MarshalBinary() ([]byte, error) {
	return c.Bytes(), nil
}

// MarshalText is equivalent to String(). It implements the
// encoding.TextMarshaler interface.
func (c Cid) MarshalText() ([]byte, error) {
	return []byte(c.String()), nil
}

// Equals checks that two Cids are the same.
// In order for two Cids to be considered equal, the
// Version, the Codec and the Multihash must match.
func (c Cid) Equals(o Cid) bool {
	return c == o
}

// UnmarshalJSON parses the JSON representation of a Cid.
func (c *Cid) UnmarshalJSON(b []byte) error {
	if len(b) < 2 {
		return ErrInvalidCid{fmt.Errorf("invalid cid json blob")}
	}
	obj := struct {
		CidTarget string `json:"/"`
	}{}
	objptr := &obj
	err := json.Unmarshal(b, &objptr)
	if err != nil {
		return ErrInvalidCid{err}
	}
	if objptr == nil {
		*c = Cid{}
		return nil
	}

	if obj.CidTarget == "" {
		return ErrInvalidCid{fmt.Errorf("cid was incorrectly formatted")}
	}

	out, err := Decode(obj.CidTarget)
	if err != nil {
		return ErrInvalidCid{err}
	}

	*c = out

	return nil
}

// MarshalJSON produces a JSON representation of a Cid, which looks as follows:
//
//	{ "/": "<cid-string>" }
//
// Note that this formatting comes from the IPLD specification
// (https://github.com/ipld/specs/tree/master/ipld)
func (c Cid) MarshalJSON() ([]byte, error) {
	if !c.Defined() {
		return []byte("null"), nil
	}
	return []byte(fmt.Sprintf("{\"/\":\"%s\"}", c.String())), nil
}

// KeyString returns the binary representation of the Cid as a string
func (c Cid) KeyString() string {
	return c.str
}

// Loggable returns a Loggable (as defined by
// https://godoc.org/github.com/ipfs/go-log).
func (c Cid) Loggable() map[string]interface{} {
	return map[string]interface{}{
		"cid": c,
	}
}

// Prefix builds and returns a Prefix out of a Cid.
func (c Cid) Prefix() Prefix {
	if c.Version() == 0 {
		return Prefix{
			MhType:   mh.SHA2_256,
			MhLength: 32,
			Version:  0,
			Codec:    DagProtobuf,
		}
	}

	offset := 0
	version, n, _ := uvarint(c.str[offset:])
	offset += n
	codec, n, _ := uvarint(c.str[offset:])
	offset += n
	mhtype, n, _ := uvarint(c.str[offset:])
	offset += n
	mhlen, _, _ := uvarint(c.str[offset:])

	return Prefix{
		MhType:   mhtype,
		MhLength: int(mhlen),
		Version:  version,
		Codec:    codec,
	}
}

// Prefix represents all the metadata of a Cid,
// that is, the Version, the Codec, the Multihash type
// and the Multihash length. It does not contain
// any actual content information.
// NOTE: The use of -1 in MhLength to mean default length is deprecated;
// use the V0Builder or V1Builder structures instead.
type Prefix struct {
	Version  uint64
	Codec    uint64
	MhType   uint64
	MhLength int
}

// Sum uses the information in a prefix to perform a multihash.Sum()
// and return a newly constructed Cid with the resulting multihash.
func (p Prefix) Sum(data []byte) (Cid, error) {
	length := p.MhLength
	if p.MhType == mh.IDENTITY {
		length = -1
	}

	if p.Version == 0 && (p.MhType != mh.SHA2_256 ||
		(p.MhLength != 32 && p.MhLength != -1)) {

		return Undef, ErrInvalidCid{fmt.Errorf("invalid v0 prefix")}
	}

	hash, err := mh.Sum(data, p.MhType, length)
	if err != nil {
		return Undef, ErrInvalidCid{err}
	}

	switch p.Version {
	case 0:
		return NewCidV0(hash), nil
|
||||
case 1:
|
||||
return NewCidV1(p.Codec, hash), nil
|
||||
default:
|
||||
return Undef, ErrInvalidCid{fmt.Errorf("invalid cid version")}
|
||||
}
|
||||
}
|
||||
|
||||
// Bytes returns a byte representation of a Prefix. It looks like:
|
||||
//
|
||||
// <version><codec><mh-type><mh-length>
|
||||
func (p Prefix) Bytes() []byte {
|
||||
size := varint.UvarintSize(p.Version)
|
||||
size += varint.UvarintSize(p.Codec)
|
||||
size += varint.UvarintSize(p.MhType)
|
||||
size += varint.UvarintSize(uint64(p.MhLength))
|
||||
|
||||
buf := make([]byte, size)
|
||||
n := varint.PutUvarint(buf, p.Version)
|
||||
n += varint.PutUvarint(buf[n:], p.Codec)
|
||||
n += varint.PutUvarint(buf[n:], p.MhType)
|
||||
n += varint.PutUvarint(buf[n:], uint64(p.MhLength))
|
||||
if n != size {
|
||||
panic("size mismatch")
|
||||
}
|
||||
return buf
|
||||
}
|
||||
|
||||
// PrefixFromBytes parses a Prefix-byte representation onto a
|
||||
// Prefix.
|
||||
func PrefixFromBytes(buf []byte) (Prefix, error) {
|
||||
r := bytes.NewReader(buf)
|
||||
vers, err := varint.ReadUvarint(r)
|
||||
if err != nil {
|
||||
return Prefix{}, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
codec, err := varint.ReadUvarint(r)
|
||||
if err != nil {
|
||||
return Prefix{}, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
mhtype, err := varint.ReadUvarint(r)
|
||||
if err != nil {
|
||||
return Prefix{}, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
mhlen, err := varint.ReadUvarint(r)
|
||||
if err != nil {
|
||||
return Prefix{}, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
return Prefix{
|
||||
Version: vers,
|
||||
Codec: codec,
|
||||
MhType: mhtype,
|
||||
MhLength: int(mhlen),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func CidFromBytes(data []byte) (int, Cid, error) {
|
||||
if len(data) > 2 && data[0] == mh.SHA2_256 && data[1] == 32 {
|
||||
if len(data) < 34 {
|
||||
return 0, Undef, ErrInvalidCid{fmt.Errorf("not enough bytes for cid v0")}
|
||||
}
|
||||
|
||||
h, err := mh.Cast(data[:34])
|
||||
if err != nil {
|
||||
return 0, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
return 34, Cid{string(h)}, nil
|
||||
}
|
||||
|
||||
vers, n, err := varint.FromUvarint(data)
|
||||
if err != nil {
|
||||
return 0, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
if vers != 1 {
|
||||
return 0, Undef, ErrInvalidCid{fmt.Errorf("expected 1 as the cid version number, got: %d", vers)}
|
||||
}
|
||||
|
||||
_, cn, err := varint.FromUvarint(data[n:])
|
||||
if err != nil {
|
||||
return 0, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
mhnr, _, err := mh.MHFromBytes(data[n+cn:])
|
||||
if err != nil {
|
||||
return 0, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
l := n + cn + mhnr
|
||||
|
||||
return l, Cid{string(data[0:l])}, nil
|
||||
}
|
||||
|
||||
func toBufByteReader(r io.Reader, dst []byte) *bufByteReader {
|
||||
// If the reader already implements ByteReader, use it directly.
|
||||
// Otherwise, use a fallback that does 1-byte Reads.
|
||||
if br, ok := r.(io.ByteReader); ok {
|
||||
return &bufByteReader{direct: br, dst: dst}
|
||||
}
|
||||
return &bufByteReader{fallback: r, dst: dst}
|
||||
}
|
||||
|
||||
type bufByteReader struct {
|
||||
direct io.ByteReader
|
||||
fallback io.Reader
|
||||
|
||||
dst []byte
|
||||
}
|
||||
|
||||
func (r *bufByteReader) ReadByte() (byte, error) {
|
||||
// The underlying reader has ReadByte; use it.
|
||||
if br := r.direct; br != nil {
|
||||
b, err := br.ReadByte()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
r.dst = append(r.dst, b)
|
||||
return b, nil
|
||||
}
|
||||
|
||||
// Fall back to a one-byte Read.
|
||||
// TODO: consider reading straight into dst,
|
||||
// once we have benchmarks and if they prove that to be faster.
|
||||
var p [1]byte
|
||||
if _, err := io.ReadFull(r.fallback, p[:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
r.dst = append(r.dst, p[0])
|
||||
return p[0], nil
|
||||
}
|
||||
|
||||
// CidFromReader reads a precise number of bytes for a CID from a given reader.
|
||||
// It returns the number of bytes read, the CID, and any error encountered.
|
||||
// The number of bytes read is accurate even if a non-nil error is returned.
|
||||
//
|
||||
// It's recommended to supply a reader that buffers and implements io.ByteReader,
|
||||
// as CidFromReader has to do many single-byte reads to decode varints.
|
||||
// If the argument only implements io.Reader, single-byte Read calls are used instead.
|
||||
//
|
||||
// If the Reader is found to yield zero bytes, an io.EOF error is returned directly, in all
|
||||
// other error cases, an ErrInvalidCid, wrapping the original error, is returned.
|
||||
func CidFromReader(r io.Reader) (int, Cid, error) {
|
||||
// 64 bytes is enough for any CIDv0,
|
||||
// and it's enough for most CIDv1s in practice.
|
||||
// If the digest is too long, we'll allocate more.
|
||||
br := toBufByteReader(r, make([]byte, 0, 64))
|
||||
|
||||
// We read the first varint, to tell if this is a CIDv0 or a CIDv1.
|
||||
// The varint package wants a io.ByteReader, so we must wrap our io.Reader.
|
||||
vers, err := varint.ReadUvarint(br)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
// First-byte read in ReadUvarint errors with io.EOF, so reader has no data.
|
||||
// Subsequent reads with an EOF will return io.ErrUnexpectedEOF and be wrapped here.
|
||||
return 0, Undef, err
|
||||
}
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
// If we have a CIDv0, read the rest of the bytes and cast the buffer.
|
||||
if vers == mh.SHA2_256 {
|
||||
if n, err := io.ReadFull(r, br.dst[1:34]); err != nil {
|
||||
return len(br.dst) + n, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
br.dst = br.dst[:34]
|
||||
h, err := mh.Cast(br.dst)
|
||||
if err != nil {
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
return len(br.dst), Cid{string(h)}, nil
|
||||
}
|
||||
|
||||
if vers != 1 {
|
||||
return len(br.dst), Undef, ErrInvalidCid{fmt.Errorf("expected 1 as the cid version number, got: %d", vers)}
|
||||
}
|
||||
|
||||
// CID block encoding multicodec.
|
||||
_, err = varint.ReadUvarint(br)
|
||||
if err != nil {
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
// We could replace most of the code below with go-multihash's ReadMultihash.
|
||||
// Note that it would save code, but prevent reusing buffers.
|
||||
// Plus, we already have a ByteReader now.
|
||||
mhStart := len(br.dst)
|
||||
|
||||
// Multihash hash function code.
|
||||
_, err = varint.ReadUvarint(br)
|
||||
if err != nil {
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
// Multihash digest length.
|
||||
mhl, err := varint.ReadUvarint(br)
|
||||
if err != nil {
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
// Refuse to make large allocations to prevent OOMs due to bugs.
|
||||
const maxDigestAlloc = 32 << 20 // 32MiB
|
||||
if mhl > maxDigestAlloc {
|
||||
return len(br.dst), Undef, ErrInvalidCid{fmt.Errorf("refusing to allocate %d bytes for a digest", mhl)}
|
||||
}
|
||||
|
||||
// Fine to convert mhl to int, given maxDigestAlloc.
|
||||
prefixLength := len(br.dst)
|
||||
cidLength := prefixLength + int(mhl)
|
||||
if cidLength > cap(br.dst) {
|
||||
// If the multihash digest doesn't fit in our initial 64 bytes,
|
||||
// efficiently extend the slice via append+make.
|
||||
br.dst = append(br.dst, make([]byte, cidLength-len(br.dst))...)
|
||||
} else {
|
||||
// The multihash digest fits inside our buffer,
|
||||
// so just extend its capacity.
|
||||
br.dst = br.dst[:cidLength]
|
||||
}
|
||||
|
||||
if n, err := io.ReadFull(r, br.dst[prefixLength:cidLength]); err != nil {
|
||||
// We can't use len(br.dst) here,
|
||||
// as we've only read n bytes past prefixLength.
|
||||
return prefixLength + n, Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
// This simply ensures the multihash is valid.
|
||||
// TODO: consider removing this bit, as it's probably redundant;
|
||||
// for now, it helps ensure consistency with CidFromBytes.
|
||||
_, _, err = mh.MHFromBytes(br.dst[mhStart:])
|
||||
if err != nil {
|
||||
return len(br.dst), Undef, ErrInvalidCid{err}
|
||||
}
|
||||
|
||||
return len(br.dst), Cid{string(br.dst)}, nil
|
||||
}
|
||||
36
vendor/github.com/ipfs/go-cid/cid_fuzz.go
generated
vendored
Normal file
@@ -0,0 +1,36 @@
//go:build gofuzz

package cid

func Fuzz(data []byte) int {
	cid, err := Cast(data)
	if err != nil {
		return 0
	}

	_ = cid.Bytes()
	_ = cid.String()
	p := cid.Prefix()
	_ = p.Bytes()

	if !cid.Equals(cid) {
		panic("inequality")
	}

	// json loop
	json, err := cid.MarshalJSON()
	if err != nil {
		panic(err.Error())
	}
	cid2 := Cid{}
	err = cid2.UnmarshalJSON(json)
	if err != nil {
		panic(err.Error())
	}

	if !cid.Equals(cid2) {
		panic("json loop not equal")
	}

	return 1
}
3
vendor/github.com/ipfs/go-cid/codecov.yml
generated
vendored
Normal file
@@ -0,0 +1,3 @@
coverage:
  range: "50...100"
comment: off
28
vendor/github.com/ipfs/go-cid/deprecated.go
generated
vendored
Normal file
@@ -0,0 +1,28 @@
package cid

import (
	mh "github.com/multiformats/go-multihash"
)

// NewPrefixV0 returns a CIDv0 prefix with the specified multihash type.
// DEPRECATED: Use V0Builder
func NewPrefixV0(mhType uint64) Prefix {
	return Prefix{
		MhType:   mhType,
		MhLength: mh.DefaultLengths[mhType],
		Version:  0,
		Codec:    DagProtobuf,
	}
}

// NewPrefixV1 returns a CIDv1 prefix with the specified codec and multihash
// type.
// DEPRECATED: Use V1Builder
func NewPrefixV1(codecType uint64, mhType uint64) Prefix {
	return Prefix{
		MhType:   mhType,
		MhLength: mh.DefaultLengths[mhType],
		Version:  1,
		Codec:    codecType,
	}
}
65
vendor/github.com/ipfs/go-cid/set.go
generated
vendored
Normal file
@@ -0,0 +1,65 @@
package cid

// Set is an implementation of a set of Cids, that is, a structure
// that holds a single copy of every Cid that is added to it.
type Set struct {
	set map[Cid]struct{}
}

// NewSet initializes and returns a new Set.
func NewSet() *Set {
	return &Set{set: make(map[Cid]struct{})}
}

// Add puts a Cid in the Set.
func (s *Set) Add(c Cid) {
	s.set[c] = struct{}{}
}

// Has returns if the Set contains a given Cid.
func (s *Set) Has(c Cid) bool {
	_, ok := s.set[c]
	return ok
}

// Remove deletes a Cid from the Set.
func (s *Set) Remove(c Cid) {
	delete(s.set, c)
}

// Len returns how many elements the Set has.
func (s *Set) Len() int {
	return len(s.set)
}

// Keys returns the Cids in the set.
func (s *Set) Keys() []Cid {
	out := make([]Cid, 0, len(s.set))
	for k := range s.set {
		out = append(out, k)
	}
	return out
}

// Visit adds a Cid to the set only if it is
// not in it already.
func (s *Set) Visit(c Cid) bool {
	if !s.Has(c) {
		s.Add(c)
		return true
	}

	return false
}

// ForEach allows running a custom function on each
// Cid in the set.
func (s *Set) ForEach(f func(c Cid) error) error {
	for c := range s.set {
		err := f(c)
		if err != nil {
			return err
		}
	}
	return nil
}
37
vendor/github.com/ipfs/go-cid/varint.go
generated
vendored
Normal file
@@ -0,0 +1,37 @@
package cid

import (
	"github.com/multiformats/go-varint"
)

// Version of varint function that works with a string rather than
// []byte to avoid unnecessary allocation

// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license as given at https://golang.org/LICENSE

// uvarint decodes a uint64 from buf and returns that value and the
// number of bytes read (> 0). If an error occurred, then 0 is
// returned for both the value and the number of bytes read, and an
// error is returned.
func uvarint(buf string) (uint64, int, error) {
	var x uint64
	var s uint
	// we have a binary string so we can't use a range loop
	for i := 0; i < len(buf); i++ {
		b := buf[i]
		if b < 0x80 {
			if i > 9 || i == 9 && b > 1 {
				return 0, 0, varint.ErrOverflow
			}
			if b == 0 && i > 0 {
				return 0, 0, varint.ErrNotMinimal
			}
			return x | uint64(b)<<s, i + 1, nil
		}
		x |= uint64(b&0x7f) << s
		s += 7
	}
	return 0, 0, varint.ErrUnderflow
}
3
vendor/github.com/ipfs/go-cid/version.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "version": "v0.4.1"
}
1
vendor/github.com/ipfs/go-datastore/.gitignore
generated
vendored
Normal file
@@ -0,0 +1 @@
*.swp
21
vendor/github.com/ipfs/go-datastore/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
The MIT License

Copyright (c) 2016 Juan Batiz-Benet

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
47
vendor/github.com/ipfs/go-datastore/README.md
generated
vendored
Normal file
@@ -0,0 +1,47 @@
# go-datastore

[](http://ipn.io)
[](http://ipfs.io/)
[](http://webchat.freenode.net/?channels=%23ipfs)
[](https://github.com/RichardLitt/standard-readme)
[](https://godoc.org/github.com/ipfs/go-datastore)

> key-value datastore interfaces

## Lead Maintainer

[Steven Allen](https://github.com/Stebalien)

## Table of Contents

- [Background](#background)
- [Documentation](#documentation)
- [Contribute](#contribute)
- [License](#license)

## Background

Datastore is a generic layer of abstraction for data store and database access. It is a simple API with the aim of enabling application development in a datastore-agnostic way, allowing datastores to be swapped seamlessly without changing application code. Thus, one can leverage different datastores with different strengths without committing the application to one datastore throughout its lifetime.

In addition, grouped datastores significantly simplify interesting data access patterns (such as caching and sharding).

Based on [datastore.py](https://github.com/datastore/datastore).
## Documentation

https://godoc.org/github.com/ipfs/go-datastore

## Contribute

Feel free to join in. All welcome. Open an [issue](https://github.com/ipfs/go-datastore/issues)!

This repository falls under the IPFS [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md).

### Want to hack on IPFS?

[](https://github.com/ipfs/community/blob/master/contributing.md)

## License

MIT
19
vendor/github.com/ipfs/go-datastore/autobatch/README.md
generated
vendored
Normal file
@@ -0,0 +1,19 @@
# autobatch

Autobatch is an implementation of
[go-datastore](https://github.com/ipfs/go-datastore) that automatically batches
together writes by holding puts in memory until a certain threshold is met.
This can improve disk performance at the cost of memory in certain situations.

## Usage

Simply wrap your existing datastore in an autobatching layer like so:

```go
bds := NewAutoBatching(basedstore, 128)
```

And make all future calls through the autobatching object.
|
||||
## License
|
||||
MIT
|
||||
174
vendor/github.com/ipfs/go-datastore/autobatch/autobatch.go
generated
vendored
Normal file
@@ -0,0 +1,174 @@
// Package autobatch provides a go-datastore implementation that
// automatically batches together writes by holding puts in memory until
// a certain threshold is met.
package autobatch

import (
	"context"

	ds "github.com/ipfs/go-datastore"
	dsq "github.com/ipfs/go-datastore/query"
)

// Datastore implements a go-datastore.
type Datastore struct {
	child ds.Batching

	// TODO: discuss making ds.Batch implement the full ds.Datastore interface
	buffer           map[ds.Key]op
	maxBufferEntries int
}

var _ ds.Datastore = (*Datastore)(nil)
var _ ds.PersistentDatastore = (*Datastore)(nil)

type op struct {
	delete bool
	value  []byte
}

// NewAutoBatching returns a new datastore that automatically
// batches writes using the given Batching datastore. The size
// of the memory pool is given by size.
func NewAutoBatching(d ds.Batching, size int) *Datastore {
	return &Datastore{
		child:            d,
		buffer:           make(map[ds.Key]op, size),
		maxBufferEntries: size,
	}
}

// Delete deletes a key/value
func (d *Datastore) Delete(ctx context.Context, k ds.Key) error {
	d.buffer[k] = op{delete: true}
	if len(d.buffer) > d.maxBufferEntries {
		return d.Flush(ctx)
	}
	return nil
}

// Get retrieves a value given a key.
func (d *Datastore) Get(ctx context.Context, k ds.Key) ([]byte, error) {
	o, ok := d.buffer[k]
	if ok {
		if o.delete {
			return nil, ds.ErrNotFound
		}
		return o.value, nil
	}

	return d.child.Get(ctx, k)
}

// Put stores a key/value.
func (d *Datastore) Put(ctx context.Context, k ds.Key, val []byte) error {
	d.buffer[k] = op{value: val}
	if len(d.buffer) > d.maxBufferEntries {
		return d.Flush(ctx)
	}
	return nil
}

// Sync flushes all operations on keys at or under the prefix
// from the current batch to the underlying datastore
func (d *Datastore) Sync(ctx context.Context, prefix ds.Key) error {
	b, err := d.child.Batch(ctx)
	if err != nil {
		return err
	}

	for k, o := range d.buffer {
		if !(k.Equal(prefix) || k.IsDescendantOf(prefix)) {
			continue
		}

		var err error
		if o.delete {
			err = b.Delete(ctx, k)
		} else {
			err = b.Put(ctx, k, o.value)
		}
		if err != nil {
			return err
		}

		delete(d.buffer, k)
	}

	return b.Commit(ctx)
}

// Flush flushes the current batch to the underlying datastore.
func (d *Datastore) Flush(ctx context.Context) error {
	b, err := d.child.Batch(ctx)
	if err != nil {
		return err
	}

	for k, o := range d.buffer {
		var err error
		if o.delete {
			err = b.Delete(ctx, k)
		} else {
			err = b.Put(ctx, k, o.value)
		}
		if err != nil {
			return err
		}
	}
	// clear out buffer
	d.buffer = make(map[ds.Key]op, d.maxBufferEntries)

	return b.Commit(ctx)
}

// Has checks if a key is stored.
func (d *Datastore) Has(ctx context.Context, k ds.Key) (bool, error) {
	o, ok := d.buffer[k]
	if ok {
		return !o.delete, nil
	}

	return d.child.Has(ctx, k)
}

// GetSize implements Datastore.GetSize
func (d *Datastore) GetSize(ctx context.Context, k ds.Key) (int, error) {
	o, ok := d.buffer[k]
	if ok {
		if o.delete {
			return -1, ds.ErrNotFound
		}
		return len(o.value), nil
	}

	return d.child.GetSize(ctx, k)
}

// Query performs a query
func (d *Datastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
	err := d.Flush(ctx)
	if err != nil {
		return nil, err
	}

	return d.child.Query(ctx, q)
}

// DiskUsage implements the PersistentDatastore interface.
func (d *Datastore) DiskUsage(ctx context.Context) (uint64, error) {
	return ds.DiskUsage(ctx, d.child)
}

func (d *Datastore) Close() error {
	ctx := context.Background()
	err1 := d.Flush(ctx)
	err2 := d.child.Close()
	if err1 != nil {
		return err1
	}
	if err2 != nil {
		return err2
	}
	return nil
}
248
vendor/github.com/ipfs/go-datastore/basic_ds.go
generated
vendored
Normal file
@@ -0,0 +1,248 @@
package datastore
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
|
||||
dsq "github.com/ipfs/go-datastore/query"
|
||||
)
|
||||
|
||||
// Here are some basic datastore implementations.
|
||||
|
||||
// MapDatastore uses a standard Go map for internal storage.
|
||||
type MapDatastore struct {
|
||||
values map[Key][]byte
|
||||
}
|
||||
|
||||
var _ Datastore = (*MapDatastore)(nil)
|
||||
var _ Batching = (*MapDatastore)(nil)
|
||||
|
||||
// NewMapDatastore constructs a MapDatastore. It is _not_ thread-safe by
|
||||
// default, wrap using sync.MutexWrap if you need thread safety (the answer here
|
||||
// is usually yes).
|
||||
func NewMapDatastore() (d *MapDatastore) {
|
||||
return &MapDatastore{
|
||||
values: make(map[Key][]byte),
|
||||
}
|
||||
}
|
||||
|
||||
// Put implements Datastore.Put
|
||||
func (d *MapDatastore) Put(ctx context.Context, key Key, value []byte) (err error) {
|
||||
d.values[key] = value
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sync implements Datastore.Sync
|
||||
func (d *MapDatastore) Sync(ctx context.Context, prefix Key) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get implements Datastore.Get
|
||||
func (d *MapDatastore) Get(ctx context.Context, key Key) (value []byte, err error) {
|
||||
val, found := d.values[key]
|
||||
if !found {
|
||||
return nil, ErrNotFound
|
||||
}
|
||||
return val, nil
|
||||
}
|
||||
|
||||
// Has implements Datastore.Has
|
||||
func (d *MapDatastore) Has(ctx context.Context, key Key) (exists bool, err error) {
|
||||
_, found := d.values[key]
|
||||
return found, nil
|
||||
}
|
||||
|
||||
// GetSize implements Datastore.GetSize
|
||||
func (d *MapDatastore) GetSize(ctx context.Context, key Key) (size int, err error) {
|
||||
if v, found := d.values[key]; found {
|
||||
return len(v), nil
|
||||
}
|
||||
return -1, ErrNotFound
|
||||
}
|
||||
|
||||
// Delete implements Datastore.Delete
|
||||
func (d *MapDatastore) Delete(ctx context.Context, key Key) (err error) {
|
||||
delete(d.values, key)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Query implements Datastore.Query
|
||||
func (d *MapDatastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
|
||||
re := make([]dsq.Entry, 0, len(d.values))
|
||||
for k, v := range d.values {
|
||||
e := dsq.Entry{Key: k.String(), Size: len(v)}
|
||||
if !q.KeysOnly {
|
||||
e.Value = v
|
||||
}
|
||||
re = append(re, e)
|
||||
}
|
||||
r := dsq.ResultsWithEntries(q, re)
|
||||
r = dsq.NaiveQueryApply(q, r)
|
||||
return r, nil
|
||||
}
|
||||
|
||||
func (d *MapDatastore) Batch(ctx context.Context) (Batch, error) {
|
||||
return NewBasicBatch(d), nil
|
||||
}
|
||||
|
||||
func (d *MapDatastore) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// LogDatastore logs all accesses through the datastore.
|
||||
type LogDatastore struct {
|
||||
Name string
|
||||
child Datastore
|
||||
}
|
||||
|
||||
var _ Datastore = (*LogDatastore)(nil)
|
||||
var _ Batching = (*LogDatastore)(nil)
|
||||
var _ GCDatastore = (*LogDatastore)(nil)
|
||||
var _ PersistentDatastore = (*LogDatastore)(nil)
|
||||
var _ ScrubbedDatastore = (*LogDatastore)(nil)
|
||||
var _ CheckedDatastore = (*LogDatastore)(nil)
|
||||
var _ Shim = (*LogDatastore)(nil)
|
||||
|
||||
// Shim is a datastore which has a child.
|
||||
type Shim interface {
|
||||
Datastore
|
||||
|
||||
Children() []Datastore
|
||||
}
|
||||
|
||||
// NewLogDatastore constructs a log datastore.
|
||||
func NewLogDatastore(ds Datastore, name string) *LogDatastore {
|
||||
if len(name) < 1 {
|
||||
name = "LogDatastore"
|
||||
}
|
||||
return &LogDatastore{Name: name, child: ds}
|
||||
}
|
||||
|
||||
// Children implements Shim
|
||||
func (d *LogDatastore) Children() []Datastore {
|
||||
return []Datastore{d.child}
|
||||
}
|
||||
|
||||
// Put implements Datastore.Put
|
||||
func (d *LogDatastore) Put(ctx context.Context, key Key, value []byte) (err error) {
|
||||
log.Printf("%s: Put %s\n", d.Name, key)
|
||||
// log.Printf("%s: Put %s ```%s```", d.Name, key, value)
|
||||
return d.child.Put(ctx, key, value)
|
||||
}
|
||||
|
||||
// Sync implements Datastore.Sync
|
||||
func (d *LogDatastore) Sync(ctx context.Context, prefix Key) error {
|
||||
log.Printf("%s: Sync %s\n", d.Name, prefix)
|
||||
return d.child.Sync(ctx, prefix)
|
||||
}
|
||||
|
||||
// Get implements Datastore.Get
|
||||
func (d *LogDatastore) Get(ctx context.Context, key Key) (value []byte, err error) {
|
||||
log.Printf("%s: Get %s\n", d.Name, key)
|
||||
return d.child.Get(ctx, key)
|
||||
}
|
||||
|
||||
// Has implements Datastore.Has
|
||||
func (d *LogDatastore) Has(ctx context.Context, key Key) (exists bool, err error) {
|
||||
log.Printf("%s: Has %s\n", d.Name, key)
|
||||
return d.child.Has(ctx, key)
|
||||
}
|
||||
|
||||
// GetSize implements Datastore.GetSize
func (d *LogDatastore) GetSize(ctx context.Context, key Key) (size int, err error) {
	log.Printf("%s: GetSize %s\n", d.Name, key)
	return d.child.GetSize(ctx, key)
}

// Delete implements Datastore.Delete
func (d *LogDatastore) Delete(ctx context.Context, key Key) (err error) {
	log.Printf("%s: Delete %s\n", d.Name, key)
	return d.child.Delete(ctx, key)
}

// DiskUsage implements the PersistentDatastore interface.
func (d *LogDatastore) DiskUsage(ctx context.Context) (uint64, error) {
	log.Printf("%s: DiskUsage\n", d.Name)
	return DiskUsage(ctx, d.child)
}

// Query implements Datastore.Query
func (d *LogDatastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
	log.Printf("%s: Query\n", d.Name)
	log.Printf("%s: q.Prefix: %s\n", d.Name, q.Prefix)
	log.Printf("%s: q.KeysOnly: %v\n", d.Name, q.KeysOnly)
	log.Printf("%s: q.Filters: %d\n", d.Name, len(q.Filters))
	log.Printf("%s: q.Orders: %d\n", d.Name, len(q.Orders))
	log.Printf("%s: q.Offset: %d\n", d.Name, q.Offset)

	return d.child.Query(ctx, q)
}

// LogBatch logs all accesses through the batch.
type LogBatch struct {
	Name  string
	child Batch
}

var _ Batch = (*LogBatch)(nil)

func (d *LogDatastore) Batch(ctx context.Context) (Batch, error) {
	log.Printf("%s: Batch\n", d.Name)
	if bds, ok := d.child.(Batching); ok {
		b, err := bds.Batch(ctx)
		if err != nil {
			return nil, err
		}
		return &LogBatch{
			Name:  d.Name,
			child: b,
		}, nil
	}
	return nil, ErrBatchUnsupported
}

// Put implements Batch.Put
func (d *LogBatch) Put(ctx context.Context, key Key, value []byte) (err error) {
	log.Printf("%s: BatchPut %s\n", d.Name, key)
	// log.Printf("%s: Put %s ```%s```", d.Name, key, value)
	return d.child.Put(ctx, key, value)
}

// Delete implements Batch.Delete
func (d *LogBatch) Delete(ctx context.Context, key Key) (err error) {
	log.Printf("%s: BatchDelete %s\n", d.Name, key)
	return d.child.Delete(ctx, key)
}

// Commit implements Batch.Commit
func (d *LogBatch) Commit(ctx context.Context) (err error) {
	log.Printf("%s: BatchCommit\n", d.Name)
	return d.child.Commit(ctx)
}

func (d *LogDatastore) Close() error {
	log.Printf("%s: Close\n", d.Name)
	return d.child.Close()
}

func (d *LogDatastore) Check(ctx context.Context) error {
	if c, ok := d.child.(CheckedDatastore); ok {
		return c.Check(ctx)
	}
	return nil
}

func (d *LogDatastore) Scrub(ctx context.Context) error {
	if c, ok := d.child.(ScrubbedDatastore); ok {
		return c.Scrub(ctx)
	}
	return nil
}

func (d *LogDatastore) CollectGarbage(ctx context.Context) error {
	if c, ok := d.child.(GCDatastore); ok {
		return c.CollectGarbage(ctx)
	}
	return nil
}
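Every LogDatastore method above follows the same decorator pattern: log the call, then delegate to the wrapped child store. A minimal self-contained sketch of that pattern (using a hypothetical two-method `store` interface rather than this package's full Datastore) looks like:

```go
package main

import (
	"fmt"
	"log"
)

// store is a hypothetical minimal key-value interface standing in for Datastore.
type store interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// mapStore is a trivial in-memory backend.
type mapStore map[string][]byte

func (m mapStore) Put(key string, value []byte) error { m[key] = value; return nil }
func (m mapStore) Get(key string) ([]byte, error) {
	v, ok := m[key]
	if !ok {
		return nil, fmt.Errorf("not found: %s", key)
	}
	return v, nil
}

// logStore logs every access, then delegates to its child, mirroring LogDatastore.
type logStore struct {
	name  string
	child store
}

func (d logStore) Put(key string, value []byte) error {
	log.Printf("%s: Put %s", d.name, key)
	return d.child.Put(key, value)
}

func (d logStore) Get(key string) ([]byte, error) {
	log.Printf("%s: Get %s", d.name, key)
	return d.child.Get(key)
}

func main() {
	var s store = logStore{name: "demo", child: mapStore{}}
	s.Put("k", []byte("v"))
	v, err := s.Get("k")
	fmt.Println(string(v), err)
}
```

Because the wrapper satisfies the same interface as its child, decorators like this compose freely (logging around caching around disk), which is the design idea the package's doc comment describes.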
53
vendor/github.com/ipfs/go-datastore/batch.go
generated
vendored
Normal file
@@ -0,0 +1,53 @@
package datastore

import (
	"context"
)

type op struct {
	delete bool
	value  []byte
}

// basicBatch implements the transaction interface for datastores that do
// not have any sort of underlying transactional support.
type basicBatch struct {
	ops map[Key]op

	target Datastore
}

var _ Batch = (*basicBatch)(nil)

func NewBasicBatch(ds Datastore) Batch {
	return &basicBatch{
		ops:    make(map[Key]op),
		target: ds,
	}
}

func (bt *basicBatch) Put(ctx context.Context, key Key, val []byte) error {
	bt.ops[key] = op{value: val}
	return nil
}

func (bt *basicBatch) Delete(ctx context.Context, key Key) error {
	bt.ops[key] = op{delete: true}
	return nil
}

func (bt *basicBatch) Commit(ctx context.Context) error {
	var err error
	for k, op := range bt.ops {
		if op.delete {
			err = bt.target.Delete(ctx, k)
		} else {
			err = bt.target.Put(ctx, k, op.value)
		}
		if err != nil {
			break
		}
	}

	return err
}
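basicBatch buffers operations in a map keyed by Key, so a later Put or Delete on the same key silently replaces the earlier buffered op, and nothing touches the target until Commit. A self-contained sketch of those semantics over plain string keys (independent of this package's types):

```go
package main

import "fmt"

// op mirrors basicBatch's buffered operation: either a delete or a value to put.
type op struct {
	delete bool
	value  []byte
}

// batch buffers ops keyed by string; a later Put or Delete on the same key
// overwrites the earlier buffered op, so only the last one reaches the target.
type batch struct {
	ops    map[string]op
	target map[string][]byte
}

func (b *batch) Put(key string, val []byte) { b.ops[key] = op{value: val} }
func (b *batch) Delete(key string)          { b.ops[key] = op{delete: true} }

// Commit replays the buffered ops against the target. Like the real
// basicBatch, map iteration order means ops on different keys apply in
// arbitrary order.
func (b *batch) Commit() {
	for k, o := range b.ops {
		if o.delete {
			delete(b.target, k)
		} else {
			b.target[k] = o.value
		}
	}
}

func main() {
	target := map[string][]byte{"old": []byte("x")}
	b := &batch{ops: map[string]op{}, target: target}
	b.Put("a", []byte("1"))
	b.Put("a", []byte("2")) // overwrites the buffered Put above
	b.Delete("old")
	fmt.Println(len(target)) // target untouched until Commit
	b.Commit()
	fmt.Println(string(target["a"]))
}
```

This also illustrates why `Batch`es are not transactions: Commit applies ops one by one and stops at the first error, so a failed Commit can leave the target partially updated.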
237
vendor/github.com/ipfs/go-datastore/datastore.go
generated
vendored
Normal file
@@ -0,0 +1,237 @@
package datastore

import (
	"context"
	"errors"
	"io"

	query "github.com/ipfs/go-datastore/query"
)

/*
Datastore represents storage for any key-value pair.

Datastores are general enough to be backed by all kinds of different storage:
in-memory caches, databases, a remote datastore, flat files on disk, etc.

The general idea is to wrap a more complicated storage facility in a simple,
uniform interface, keeping the freedom of using the right tools for the job.
In particular, a Datastore can aggregate other datastores in interesting ways,
like sharded (to distribute load) or tiered access (caches before databases).

While Datastores should be written general enough to accept all sorts of
values, some implementations will undoubtedly have to be specific (e.g. SQL
databases where fields should be decomposed into columns), particularly to
support queries efficiently. Moreover, certain datastores may enforce certain
types of values (e.g. requiring an io.Reader, a specific struct, etc) or
serialization formats (JSON, Protobufs, etc).

IMPORTANT: No Datastore should ever Panic! This is a cross-module interface,
and thus it should behave predictably and handle exceptional conditions with
proper error reporting. Thus, all Datastore calls may return errors, which
should be checked by callers.
*/
type Datastore interface {
	Read
	Write
	// Sync guarantees that any Put or Delete calls under prefix that returned
	// before Sync(prefix) was called will be observed after Sync(prefix)
	// returns, even if the program crashes. If Put/Delete operations already
	// satisfy these requirements then Sync may be a no-op.
	//
	// If the prefix fails to Sync this method returns an error.
	Sync(ctx context.Context, prefix Key) error
	io.Closer
}

// Write is the write-side of the Datastore interface.
type Write interface {
	// Put stores the object `value` named by `key`.
	//
	// The generalized Datastore interface does not impose a value type,
	// allowing various datastore middleware implementations (which do not
	// handle the values directly) to be composed together.
	//
	// Ultimately, the lowest-level datastore will need to do some value checking
	// or risk getting incorrect values. It may also be useful to expose a more
	// type-safe interface to your application, and do the checking up-front.
	Put(ctx context.Context, key Key, value []byte) error

	// Delete removes the value for given `key`. If the key is not in the
	// datastore, this method returns no error.
	Delete(ctx context.Context, key Key) error
}

// Read is the read-side of the Datastore interface.
type Read interface {
	// Get retrieves the object `value` named by `key`.
	// Get will return ErrNotFound if the key is not mapped to a value.
	Get(ctx context.Context, key Key) (value []byte, err error)

	// Has returns whether the `key` is mapped to a `value`.
	// In some contexts, it may be much cheaper only to check for existence of
	// a value, rather than retrieving the value itself. (e.g. HTTP HEAD).
	// The default implementation is found in `GetBackedHas`.
	Has(ctx context.Context, key Key) (exists bool, err error)

	// GetSize returns the size of the `value` named by `key`.
	// In some contexts, it may be much cheaper to only get the size of the
	// value rather than retrieving the value itself.
	GetSize(ctx context.Context, key Key) (size int, err error)

	// Query searches the datastore and returns a query result. This function
	// may return before the query actually runs. To wait for the query:
	//
	//   result, _ := ds.Query(q)
	//
	//   // use the channel interface; result may come in at different times
	//   for entry := range result.Next() { ... }
	//
	//   // or wait for the query to be completely done
	//   entries, _ := result.Rest()
	//   for entry := range entries { ... }
	//
	Query(ctx context.Context, q query.Query) (query.Results, error)
}

// Batching datastores support deferred, grouped updates to the database.
// `Batch`es do NOT have transactional semantics: updates to the underlying
// datastore are not guaranteed to occur in the same iota of time. Similarly,
// batched updates will not be flushed to the underlying datastore until
// `Commit` has been called. `Txn`s from a `TxnDatastore` have all the
// capabilities of a `Batch`, but the reverse is NOT true.
type Batching interface {
	Datastore
	BatchingFeature
}

// ErrBatchUnsupported is returned by Batch if the Datastore doesn't
// actually support batching.
var ErrBatchUnsupported = errors.New("this datastore does not support batching")

// CheckedDatastore is an interface that should be implemented by datastores
// which may need checking on-disk data integrity.
type CheckedDatastore interface {
	Datastore
	CheckedFeature
}

// ScrubbedDatastore is an interface that should be implemented by datastores
// which want to provide a mechanism to check data integrity and/or
// error correction.
type ScrubbedDatastore interface {
	Datastore
	ScrubbedFeature
}

// GCDatastore is an interface that should be implemented by datastores which
// don't free disk space by just removing data from them.
type GCDatastore interface {
	Datastore
	GCFeature
}

// PersistentDatastore is an interface that should be implemented by datastores
// which can report disk usage.
type PersistentDatastore interface {
	Datastore
	PersistentFeature
}

// DiskUsage checks if a Datastore is a
// PersistentDatastore and returns its DiskUsage(),
// otherwise returns 0.
func DiskUsage(ctx context.Context, d Datastore) (uint64, error) {
	persDs, ok := d.(PersistentDatastore)
	if !ok {
		return 0, nil
	}
	return persDs.DiskUsage(ctx)
}

// TTLDatastore is an interface that should be implemented by datastores that
// support expiring entries.
type TTLDatastore interface {
	Datastore
	TTL
}

// Txn extends the Datastore type. Txns allow users to batch queries and
// mutations to the Datastore into atomic groups, or transactions. Actions
// performed on a transaction will not take hold until a successful call to
// Commit has been made. Likewise, transactions can be aborted by calling
// Discard before a successful Commit has been made.
type Txn interface {
	Read
	Write

	// Commit finalizes a transaction, attempting to commit it to the Datastore.
	// May return an error if the transaction has gone stale. The presence of an
	// error is an indication that the data was not committed to the Datastore.
	Commit(ctx context.Context) error
	// Discard throws away changes recorded in a transaction without committing
	// them to the underlying Datastore. Any calls made to Discard after Commit
	// has been successfully called will have no effect on the transaction and
	// state of the Datastore, making it safe to defer.
	Discard(ctx context.Context)
}

// TxnDatastore is an interface that should be implemented by datastores that
// support transactions.
type TxnDatastore interface {
	Datastore
	TxnFeature
}

// Errors

type dsError struct {
	error
	isNotFound bool
}

func (e *dsError) NotFound() bool {
	return e.isNotFound
}

// ErrNotFound is returned by Get and GetSize when a datastore does not map the
// given key to a value.
var ErrNotFound error = &dsError{error: errors.New("datastore: key not found"), isNotFound: true}

// GetBackedHas provides a default Datastore.Has implementation.
// It exists so Datastore.Has implementations can use it, like so:
//
//   func (d *SomeDatastore) Has(key Key) (exists bool, err error) {
//     return GetBackedHas(d, key)
//   }
func GetBackedHas(ctx context.Context, ds Read, key Key) (bool, error) {
	_, err := ds.Get(ctx, key)
	switch err {
	case nil:
		return true, nil
	case ErrNotFound:
		return false, nil
	default:
		return false, err
	}
}

// GetBackedSize provides a default Datastore.GetSize implementation.
// It exists so Datastore.GetSize implementations can use it, like so:
//
//   func (d *SomeDatastore) GetSize(key Key) (size int, err error) {
//     return GetBackedSize(d, key)
//   }
func GetBackedSize(ctx context.Context, ds Read, key Key) (int, error) {
	value, err := ds.Get(ctx, key)
	if err == nil {
		return len(value), nil
	}
	return -1, err
}

type Batch interface {
	Write

	Commit(ctx context.Context) error
}
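GetBackedHas derives Has from Get by comparing against the ErrNotFound sentinel: a nil error means the key exists, the sentinel means it cleanly does not, and anything else is a real failure to propagate. A self-contained sketch of that sentinel pattern (with a hypothetical minimal `reader` type, not this package's Read interface):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound mirrors ErrNotFound: a sentinel the read path compares against.
var errNotFound = errors.New("datastore: key not found")

type reader map[string][]byte

func (r reader) Get(key string) ([]byte, error) {
	v, ok := r[key]
	if !ok {
		return nil, errNotFound
	}
	return v, nil
}

// getBackedHas mirrors GetBackedHas: derive Has from Get, treating the
// not-found sentinel as "false, no error" and any other error as a failure.
func getBackedHas(r reader, key string) (bool, error) {
	_, err := r.Get(key)
	switch err {
	case nil:
		return true, nil
	case errNotFound:
		return false, nil
	default:
		return false, err
	}
}

func main() {
	r := reader{"present": []byte("v")}
	has, _ := getBackedHas(r, "present")
	missing, _ := getBackedHas(r, "absent")
	fmt.Println(has, missing)
}
```

The same shape works for GetBackedSize, which returns `len(value)` on a nil error and -1 otherwise.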
132
vendor/github.com/ipfs/go-datastore/features.go
generated
vendored
Normal file
@@ -0,0 +1,132 @@
package datastore

import (
	"context"
	"reflect"
	"time"
)

const (
	FeatureNameBatching    = "Batching"
	FeatureNameChecked     = "Checked"
	FeatureNameGC          = "GC"
	FeatureNamePersistent  = "Persistent"
	FeatureNameScrubbed    = "Scrubbed"
	FeatureNameTTL         = "TTL"
	FeatureNameTransaction = "Transaction"
)

type BatchingFeature interface {
	Batch(ctx context.Context) (Batch, error)
}

type CheckedFeature interface {
	Check(ctx context.Context) error
}

type ScrubbedFeature interface {
	Scrub(ctx context.Context) error
}

type GCFeature interface {
	CollectGarbage(ctx context.Context) error
}

type PersistentFeature interface {
	// DiskUsage returns the space used by a datastore, in bytes.
	DiskUsage(ctx context.Context) (uint64, error)
}

// TTL encapsulates the methods that deal with entries with time-to-live.
type TTL interface {
	PutWithTTL(ctx context.Context, key Key, value []byte, ttl time.Duration) error
	SetTTL(ctx context.Context, key Key, ttl time.Duration) error
	GetExpiration(ctx context.Context, key Key) (time.Time, error)
}

type TxnFeature interface {
	NewTransaction(ctx context.Context, readOnly bool) (Txn, error)
}

// Feature contains metadata about a datastore Feature.
type Feature struct {
	Name string
	// Interface is the nil interface of the feature.
	Interface interface{}
	// DatastoreInterface is the nil interface of the feature's corresponding datastore interface.
	DatastoreInterface interface{}
}

var featuresByName map[string]Feature

func init() {
	featuresByName = map[string]Feature{}
	for _, f := range Features() {
		featuresByName[f.Name] = f
	}
}

// Features returns a list of all known datastore features.
// This serves both to provide an authoritative list of features,
// and to define a canonical ordering of features.
func Features() []Feature {
	// for backwards compatibility, only append to this list
	return []Feature{
		{
			Name:               FeatureNameBatching,
			Interface:          (*BatchingFeature)(nil),
			DatastoreInterface: (*Batching)(nil),
		},
		{
			Name:               FeatureNameChecked,
			Interface:          (*CheckedFeature)(nil),
			DatastoreInterface: (*CheckedDatastore)(nil),
		},
		{
			Name:               FeatureNameGC,
			Interface:          (*GCFeature)(nil),
			DatastoreInterface: (*GCDatastore)(nil),
		},
		{
			Name:               FeatureNamePersistent,
			Interface:          (*PersistentFeature)(nil),
			DatastoreInterface: (*PersistentDatastore)(nil),
		},
		{
			Name:               FeatureNameScrubbed,
			Interface:          (*ScrubbedFeature)(nil),
			DatastoreInterface: (*ScrubbedDatastore)(nil),
		},
		{
			Name:               FeatureNameTTL,
			Interface:          (*TTL)(nil),
			DatastoreInterface: (*TTLDatastore)(nil),
		},
		{
			Name:               FeatureNameTransaction,
			Interface:          (*TxnFeature)(nil),
			DatastoreInterface: (*TxnDatastore)(nil),
		},
	}
}

// FeatureByName returns the feature with the given name, if known.
func FeatureByName(name string) (Feature, bool) {
	feat, known := featuresByName[name]
	return feat, known
}

// FeaturesForDatastore returns the features supported by the given datastore.
func FeaturesForDatastore(dstore Datastore) (features []Feature) {
	if dstore == nil {
		return nil
	}
	dstoreType := reflect.TypeOf(dstore)
	for _, f := range Features() {
		fType := reflect.TypeOf(f.Interface).Elem()
		if dstoreType.Implements(fType) {
			features = append(features, f)
		}
	}
	return
}
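FeaturesForDatastore relies on a Go reflection idiom: each Feature stores a typed nil pointer such as `(*BatchingFeature)(nil)`, so `reflect.TypeOf(f.Interface).Elem()` recovers the interface type itself, which can then be tested against the datastore's dynamic type with `Implements`. A self-contained sketch of the idiom with two hypothetical feature interfaces:

```go
package main

import (
	"fmt"
	"reflect"
)

// Two hypothetical feature interfaces, standing in for BatchingFeature, etc.
type checker interface{ Check() error }
type scrubber interface{ Scrub() error }

// ds implements only checker.
type ds struct{}

func (ds) Check() error { return nil }

// implementedFeatures mirrors FeaturesForDatastore: hold each feature as a
// typed nil pointer, recover the interface type via reflect.TypeOf(...).Elem(),
// and test whether the value's dynamic type implements it.
func implementedFeatures(v interface{}) []string {
	features := []struct {
		name  string
		iface interface{}
	}{
		{"Checked", (*checker)(nil)},
		{"Scrubbed", (*scrubber)(nil)},
	}
	t := reflect.TypeOf(v)
	var out []string
	for _, f := range features {
		if t.Implements(reflect.TypeOf(f.iface).Elem()) {
			out = append(out, f.name)
		}
	}
	return out
}

func main() {
	fmt.Println(implementedFeatures(ds{}))
}
```

The typed-nil trick is needed because `reflect.TypeOf` on a bare interface value reports the dynamic type, not the interface; wrapping in a pointer preserves the interface type for `Elem()` to unwrap.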
309
vendor/github.com/ipfs/go-datastore/key.go
generated
vendored
Normal file
@@ -0,0 +1,309 @@
package datastore

import (
	"encoding/json"
	"path"
	"strings"

	dsq "github.com/ipfs/go-datastore/query"

	"github.com/google/uuid"
)

/*
A Key represents the unique identifier of an object.
Our Key scheme is inspired by file systems and the Google App Engine key model.

Keys are meant to be unique across a system. Keys are hierarchical,
incorporating more and more specific namespaces. Thus keys can be deemed
'children' or 'ancestors' of other keys::

    Key("/Comedy")
    Key("/Comedy/MontyPython")

Also, every namespace can be parametrized to embed relevant object
information. For example, the Key `name` (most specific namespace) could
include the object type::

    Key("/Comedy/MontyPython/Actor:JohnCleese")
    Key("/Comedy/MontyPython/Sketch:CheeseShop")
    Key("/Comedy/MontyPython/Sketch:CheeseShop/Character:Mousebender")

*/
type Key struct {
	string
}

// NewKey constructs a key from string. It will clean the value.
func NewKey(s string) Key {
	k := Key{s}
	k.Clean()
	return k
}

// RawKey creates a new Key without safety checking the input. Use with care.
func RawKey(s string) Key {
	// accept an empty string and fix it to avoid special cases
	// elsewhere
	if len(s) == 0 {
		return Key{"/"}
	}

	// perform a quick sanity check that the key is in the correct
	// format, if it is not then it is a programmer error and it is
	// okay to panic
	if s[0] != '/' || (len(s) > 1 && s[len(s)-1] == '/') {
		panic("invalid datastore key: " + s)
	}

	return Key{s}
}

// KeyWithNamespaces constructs a key out of a namespace slice.
func KeyWithNamespaces(ns []string) Key {
	return NewKey(strings.Join(ns, "/"))
}

// Clean up a Key, using path.Clean.
func (k *Key) Clean() {
	switch {
	case len(k.string) == 0:
		k.string = "/"
	case k.string[0] == '/':
		k.string = path.Clean(k.string)
	default:
		k.string = path.Clean("/" + k.string)
	}
}

// String returns the string value of Key.
func (k Key) String() string {
	return k.string
}

// Bytes returns the string value of Key as a []byte
func (k Key) Bytes() []byte {
	return []byte(k.string)
}

// Equal checks equality of two keys
func (k Key) Equal(k2 Key) bool {
	return k.string == k2.string
}

// Less checks whether this key is sorted lower than another.
func (k Key) Less(k2 Key) bool {
	list1 := k.List()
	list2 := k2.List()
	for i, c1 := range list1 {
		if len(list2) < (i + 1) {
			return false
		}

		c2 := list2[i]
		if c1 < c2 {
			return true
		} else if c1 > c2 {
			return false
		}
		// c1 == c2, continue
	}

	// list1 is shorter or exactly the same.
	return len(list1) < len(list2)
}

// List returns the `list` representation of this Key.
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").List()
//   ["Comedy", "MontyPython", "Actor:JohnCleese"]
func (k Key) List() []string {
	return strings.Split(k.string, "/")[1:]
}

// Reverse returns the reverse of this Key.
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Reverse()
//   NewKey("/Actor:JohnCleese/MontyPython/Comedy")
func (k Key) Reverse() Key {
	l := k.List()
	r := make([]string, len(l))
	for i, e := range l {
		r[len(l)-i-1] = e
	}
	return KeyWithNamespaces(r)
}

// Namespaces returns the `namespaces` making up this Key.
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Namespaces()
//   ["Comedy", "MontyPython", "Actor:JohnCleese"]
func (k Key) Namespaces() []string {
	return k.List()
}

// BaseNamespace returns the "base" namespace of this key (path.Base(filename))
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").BaseNamespace()
//   "Actor:JohnCleese"
func (k Key) BaseNamespace() string {
	n := k.Namespaces()
	return n[len(n)-1]
}

// Type returns the "type" of this key (value of last namespace).
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Type()
//   "Actor"
func (k Key) Type() string {
	return NamespaceType(k.BaseNamespace())
}

// Name returns the "name" of this key (field of last namespace).
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Name()
//   "JohnCleese"
func (k Key) Name() string {
	return NamespaceValue(k.BaseNamespace())
}

// Instance returns an "instance" of this type key (appends value to namespace).
//   NewKey("/Comedy/MontyPython/Actor").Instance("JohnCleese")
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese")
func (k Key) Instance(s string) Key {
	return NewKey(k.string + ":" + s)
}

// Path returns the "path" of this key (parent + type).
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Path()
//   NewKey("/Comedy/MontyPython/Actor")
func (k Key) Path() Key {
	s := k.Parent().string + "/" + NamespaceType(k.BaseNamespace())
	return NewKey(s)
}

// Parent returns the `parent` Key of this Key.
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese").Parent()
//   NewKey("/Comedy/MontyPython")
func (k Key) Parent() Key {
	n := k.List()
	if len(n) == 1 {
		return RawKey("/")
	}
	return NewKey(strings.Join(n[:len(n)-1], "/"))
}

// Child returns the `child` Key of this Key.
//   NewKey("/Comedy/MontyPython").Child(NewKey("Actor:JohnCleese"))
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese")
func (k Key) Child(k2 Key) Key {
	switch {
	case k.string == "/":
		return k2
	case k2.string == "/":
		return k
	default:
		return RawKey(k.string + k2.string)
	}
}

// ChildString returns the `child` Key of this Key -- string helper.
//   NewKey("/Comedy/MontyPython").ChildString("Actor:JohnCleese")
//   NewKey("/Comedy/MontyPython/Actor:JohnCleese")
func (k Key) ChildString(s string) Key {
	return NewKey(k.string + "/" + s)
}

// IsAncestorOf returns whether this key is a prefix of `other`.
//   NewKey("/Comedy").IsAncestorOf("/Comedy/MontyPython")
//   true
func (k Key) IsAncestorOf(other Key) bool {
	// equivalent to HasPrefix(other, k.string + "/")

	if len(other.string) <= len(k.string) {
		// We're not long enough to be a child.
		return false
	}

	if k.string == "/" {
		// We're the root and the other key is longer.
		return true
	}

	// "other" starts with /k.string/
	return other.string[len(k.string)] == '/' && other.string[:len(k.string)] == k.string
}

// IsDescendantOf returns whether this key contains another as a prefix.
//   NewKey("/Comedy/MontyPython").IsDescendantOf("/Comedy")
//   true
func (k Key) IsDescendantOf(other Key) bool {
	return other.IsAncestorOf(k)
}

// IsTopLevel returns whether this key has only one namespace.
func (k Key) IsTopLevel() bool {
	return len(k.List()) == 1
}

// MarshalJSON implements the json.Marshaler interface;
// keys are represented as JSON strings.
func (k Key) MarshalJSON() ([]byte, error) {
	return json.Marshal(k.String())
}

// UnmarshalJSON implements the json.Unmarshaler interface;
// keys will parse any value specified as a key to a string.
func (k *Key) UnmarshalJSON(data []byte) error {
	var key string
	if err := json.Unmarshal(data, &key); err != nil {
		return err
	}
	*k = NewKey(key)
	return nil
}

// RandomKey returns a randomly (uuid) generated key.
//   RandomKey()
//   NewKey("/f98719ea086343f7b71f32ea9d9d521d")
func RandomKey() Key {
	return NewKey(strings.Replace(uuid.New().String(), "-", "", -1))
}

/*
A Key Namespace is like a path element.
A namespace can optionally include a type (delimited by ':')

    > NamespaceValue("Song:PhilosopherSong")
    PhilosopherSong
    > NamespaceType("Song:PhilosopherSong")
    Song
    > NamespaceType("Music:Song:PhilosopherSong")
    Music:Song
*/

// NamespaceType is the first component of a namespace. `foo` in `foo:bar`
func NamespaceType(namespace string) string {
	parts := strings.Split(namespace, ":")
	if len(parts) < 2 {
		return ""
	}
	return strings.Join(parts[0:len(parts)-1], ":")
}

// NamespaceValue returns the last component of a namespace. `baz` in `f:b:baz`
func NamespaceValue(namespace string) string {
	parts := strings.Split(namespace, ":")
	return parts[len(parts)-1]
}

// KeySlice attaches the methods of sort.Interface to []Key,
// sorting in increasing order.
type KeySlice []Key

func (p KeySlice) Len() int           { return len(p) }
func (p KeySlice) Less(i, j int) bool { return p[i].Less(p[j]) }
func (p KeySlice) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }

// EntryKeys returns the keys of the given query entries.
func EntryKeys(e []dsq.Entry) []Key {
	ks := make([]Key, len(e))
	for i, e := range e {
		ks[i] = NewKey(e.Key)
	}
	return ks
}
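Two behaviors carry most of the Key semantics above: Clean normalizes arbitrary input into a slash-rooted path via path.Clean, and IsAncestorOf tests whether one key is a strict path prefix of another. A self-contained sketch over plain strings (using strings.HasPrefix, which the original's byte comparison is equivalent to per its own comment):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// clean mirrors Key.Clean: force a leading slash and normalize with
// path.Clean, so "abc", "/abc/", and "/abc/./" all become "/abc".
func clean(s string) string {
	if s == "" {
		return "/"
	}
	if s[0] != '/' {
		s = "/" + s
	}
	return path.Clean(s)
}

// isAncestorOf mirrors Key.IsAncestorOf: true only when other starts with
// k + "/", so "/Comedy" is an ancestor of "/Comedy/MontyPython" but not of
// the sibling "/ComedyFilms".
func isAncestorOf(k, other string) bool {
	if len(other) <= len(k) {
		return false
	}
	if k == "/" {
		return true
	}
	return strings.HasPrefix(other, k+"/")
}

func main() {
	fmt.Println(clean("Comedy/MontyPython/"))
	fmt.Println(isAncestorOf("/Comedy", "/Comedy/MontyPython"))
	fmt.Println(isAncestorOf("/Comedy", "/ComedyFilms"))
}
```

The `"/ComedyFilms"` case is why the check requires a '/' at the boundary rather than a bare string prefix: sibling keys sharing leading characters must not count as descendants.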
120
vendor/github.com/ipfs/go-datastore/null_ds.go
generated
vendored
Normal file
@@ -0,0 +1,120 @@
package datastore

import (
	"context"

	dsq "github.com/ipfs/go-datastore/query"
)

// NullDatastore stores nothing, but conforms to the API.
// Useful to test with.
type NullDatastore struct{}

var _ Datastore = (*NullDatastore)(nil)
var _ Batching = (*NullDatastore)(nil)
var _ ScrubbedDatastore = (*NullDatastore)(nil)
var _ CheckedDatastore = (*NullDatastore)(nil)
var _ PersistentDatastore = (*NullDatastore)(nil)
var _ GCDatastore = (*NullDatastore)(nil)
var _ TxnDatastore = (*NullDatastore)(nil)

// NewNullDatastore constructs a null datastore.
func NewNullDatastore() *NullDatastore {
	return &NullDatastore{}
}

// Put implements Datastore.Put
func (d *NullDatastore) Put(ctx context.Context, key Key, value []byte) (err error) {
	return nil
}

// Sync implements Datastore.Sync
func (d *NullDatastore) Sync(ctx context.Context, prefix Key) error {
	return nil
}

// Get implements Datastore.Get
func (d *NullDatastore) Get(ctx context.Context, key Key) (value []byte, err error) {
	return nil, ErrNotFound
}

// Has implements Datastore.Has
func (d *NullDatastore) Has(ctx context.Context, key Key) (exists bool, err error) {
	return false, nil
}

// GetSize implements Datastore.GetSize
func (d *NullDatastore) GetSize(ctx context.Context, key Key) (size int, err error) {
	return -1, ErrNotFound
}

// Delete implements Datastore.Delete
func (d *NullDatastore) Delete(ctx context.Context, key Key) (err error) {
	return nil
}

func (d *NullDatastore) Scrub(ctx context.Context) error {
	return nil
}

func (d *NullDatastore) Check(ctx context.Context) error {
	return nil
}

// Query implements Datastore.Query
func (d *NullDatastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
	return dsq.ResultsWithEntries(q, nil), nil
}

func (d *NullDatastore) Batch(ctx context.Context) (Batch, error) {
	return NewBasicBatch(d), nil
}

func (d *NullDatastore) CollectGarbage(ctx context.Context) error {
	return nil
}

func (d *NullDatastore) DiskUsage(ctx context.Context) (uint64, error) {
	return 0, nil
}

func (d *NullDatastore) Close() error {
	return nil
}

func (d *NullDatastore) NewTransaction(ctx context.Context, readOnly bool) (Txn, error) {
	return &nullTxn{}, nil
}

type nullTxn struct{}

func (t *nullTxn) Get(ctx context.Context, key Key) (value []byte, err error) {
	return nil, nil
}

func (t *nullTxn) Has(ctx context.Context, key Key) (exists bool, err error) {
	return false, nil
}

func (t *nullTxn) GetSize(ctx context.Context, key Key) (size int, err error) {
	return 0, nil
}

func (t *nullTxn) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
	return dsq.ResultsWithEntries(q, nil), nil
}

func (t *nullTxn) Put(ctx context.Context, key Key, value []byte) error {
	return nil
}

func (t *nullTxn) Delete(ctx context.Context, key Key) error {
	return nil
}

func (t *nullTxn) Commit(ctx context.Context) error {
	return nil
}

func (t *nullTxn) Discard(ctx context.Context) {}
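NullDatastore is the null-object pattern: it satisfies every datastore interface with zero state, accepting writes and reporting every read as not found, which makes it a convenient stand-in for tests and wiring. A minimal self-contained sketch of the same pattern (with a hypothetical two-method `kv` interface):

```go
package main

import (
	"errors"
	"fmt"
)

// kv is a hypothetical minimal store interface.
type kv interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// nullKV mirrors NullDatastore: every write is accepted and discarded, and
// every read reports not-found, so it satisfies the interface with no state.
type nullKV struct{}

func (nullKV) Put(key string, value []byte) error { return nil }
func (nullKV) Get(key string) ([]byte, error)     { return nil, errors.New("key not found") }

func main() {
	var s kv = nullKV{} // usable anywhere a kv is expected
	fmt.Println(s.Put("k", []byte("v")))
	_, err := s.Get("k")
	fmt.Println(err)
}
```

Because callers only see the interface, a null implementation can be swapped in wherever a real store would go, without nil checks at every call site.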
102
vendor/github.com/ipfs/go-datastore/query/filter.go
generated
vendored
Normal file
@@ -0,0 +1,102 @@
|
||||
package query

import (
	"bytes"
	"fmt"
	"strings"
)

// Filter is an object that tests ResultEntries
type Filter interface {
	// Filter returns whether an entry passes the filter
	Filter(e Entry) bool
}

// Op is a comparison operator
type Op string

var (
	Equal              = Op("==")
	NotEqual           = Op("!=")
	GreaterThan        = Op(">")
	GreaterThanOrEqual = Op(">=")
	LessThan           = Op("<")
	LessThanOrEqual    = Op("<=")
)

// FilterValueCompare is used to signal to datastores they
// should apply internal comparisons. Unfortunately, there
// is no way to apply comparisons* to interface{} types in
// Go, so if the datastore doesn't have a special way to
// handle these comparisons, you must provide the
// TypedFilter to actually do filtering.
//
// [*] other than == and !=, which use reflect.DeepEqual.
type FilterValueCompare struct {
	Op    Op
	Value []byte
}

func (f FilterValueCompare) Filter(e Entry) bool {
	cmp := bytes.Compare(e.Value, f.Value)
	switch f.Op {
	case Equal:
		return cmp == 0
	case NotEqual:
		return cmp != 0
	case LessThan:
		return cmp < 0
	case LessThanOrEqual:
		return cmp <= 0
	case GreaterThan:
		return cmp > 0
	case GreaterThanOrEqual:
		return cmp >= 0
	default:
		panic(fmt.Errorf("unknown operation: %s", f.Op))
	}
}

func (f FilterValueCompare) String() string {
	return fmt.Sprintf("VALUE %s %q", f.Op, string(f.Value))
}

type FilterKeyCompare struct {
	Op  Op
	Key string
}

func (f FilterKeyCompare) Filter(e Entry) bool {
	switch f.Op {
	case Equal:
		return e.Key == f.Key
	case NotEqual:
		return e.Key != f.Key
	case GreaterThan:
		return e.Key > f.Key
	case GreaterThanOrEqual:
		return e.Key >= f.Key
	case LessThan:
		return e.Key < f.Key
	case LessThanOrEqual:
		return e.Key <= f.Key
	default:
		panic(fmt.Errorf("unknown op '%s'", f.Op))
	}
}

func (f FilterKeyCompare) String() string {
	return fmt.Sprintf("KEY %s %q", f.Op, f.Key)
}

type FilterKeyPrefix struct {
	Prefix string
}

func (f FilterKeyPrefix) Filter(e Entry) bool {
	return strings.HasPrefix(e.Key, f.Prefix)
}

func (f FilterKeyPrefix) String() string {
	return fmt.Sprintf("PREFIX(%q)", f.Prefix)
}
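The key filters above are plain string predicates over entry keys. A minimal, stdlib-only sketch of the prefix and ">=" comparison semantics — re-declaring a local `Entry` stand-in so the snippet runs without the go-datastore module — might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// Entry mirrors the query.Entry fields the key filters use.
type Entry struct {
	Key   string
	Value []byte
}

// keyHasPrefix mirrors FilterKeyPrefix.Filter: a simple strings.HasPrefix test.
func keyHasPrefix(e Entry, prefix string) bool {
	return strings.HasPrefix(e.Key, prefix)
}

// keyGreaterOrEqual mirrors FilterKeyCompare.Filter for the ">=" operator:
// plain lexicographic comparison of key strings.
func keyGreaterOrEqual(e Entry, key string) bool {
	return e.Key >= key
}

func main() {
	e := Entry{Key: "/foo/bar"}
	fmt.Println(keyHasPrefix(e, "/foo/")) // true
	fmt.Println(keyHasPrefix(e, "/foob")) // false: prefix must match exactly
	fmt.Println(keyGreaterOrEqual(e, "/foo/"))
}
```

Note that a bare `FilterKeyPrefix{"/foo"}` would also match `/foobar`; the vendored `NaiveQueryApply` avoids this by appending a trailing `/` to the cleaned prefix before filtering.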
94
vendor/github.com/ipfs/go-datastore/query/order.go
generated
vendored
Normal file
@@ -0,0 +1,94 @@
package query

import (
	"bytes"
	"sort"
	"strings"
)

// Order is an object used to order objects
type Order interface {
	Compare(a, b Entry) int
}

// OrderByFunction orders the results based on the result of the given function.
type OrderByFunction func(a, b Entry) int

func (o OrderByFunction) Compare(a, b Entry) int {
	return o(a, b)
}

func (OrderByFunction) String() string {
	return "FN"
}

// OrderByValue is used to signal to datastores they should apply internal
// orderings.
type OrderByValue struct{}

func (o OrderByValue) Compare(a, b Entry) int {
	return bytes.Compare(a.Value, b.Value)
}

func (OrderByValue) String() string {
	return "VALUE"
}

// OrderByValueDescending is used to signal to datastores they
// should apply internal orderings.
type OrderByValueDescending struct{}

func (o OrderByValueDescending) Compare(a, b Entry) int {
	return -bytes.Compare(a.Value, b.Value)
}

func (OrderByValueDescending) String() string {
	return "desc(VALUE)"
}

// OrderByKey orders entries by key, ascending.
type OrderByKey struct{}

func (o OrderByKey) Compare(a, b Entry) int {
	return strings.Compare(a.Key, b.Key)
}

func (OrderByKey) String() string {
	return "KEY"
}

// OrderByKeyDescending orders entries by key, descending.
type OrderByKeyDescending struct{}

func (o OrderByKeyDescending) Compare(a, b Entry) int {
	return -strings.Compare(a.Key, b.Key)
}

func (OrderByKeyDescending) String() string {
	return "desc(KEY)"
}

// Less returns true if a comes before b with the requested orderings.
func Less(orders []Order, a, b Entry) bool {
	for _, cmp := range orders {
		switch cmp.Compare(a, b) {
		case 0:
		case -1:
			return true
		case 1:
			return false
		}
	}

	// This gives us a *stable* sort for free. We don't care about
	// preserving the order from the underlying datastore
	// because it's undefined.
	return a.Key < b.Key
}

// Sort sorts the given entries using the given orders.
func Sort(orders []Order, entries []Entry) {
	sort.Slice(entries, func(i int, j int) bool {
		return Less(orders, entries[i], entries[j])
	})
}
426
vendor/github.com/ipfs/go-datastore/query/query.go
generated
vendored
Normal file
@@ -0,0 +1,426 @@
package query

import (
	"fmt"
	"time"

	goprocess "github.com/jbenet/goprocess"
)

/*
Query represents storage for any key-value pair.

tl;dr:

	queries are supported across datastores.
	Cheap on top of relational dbs, and expensive otherwise.
	Pick the right tool for the job!

In addition to the key-value store get and set semantics, datastore
provides an interface to retrieve multiple records at a time through
the use of queries. The datastore Query model gleans a common set of
operations performed when querying. To avoid pasting here years of
database research, let's summarize the operations datastore supports.

Query Operations, applied in-order:

  - prefix - scope the query to a given path prefix
  - filters - select a subset of values by applying constraints
  - orders - sort the results by applying sort conditions, hierarchically.
  - offset - skip a number of results (for efficient pagination)
  - limit - impose a numeric limit on the number of results

Datastore combines these operations into a simple Query class that allows
applications to define their constraints in a simple, generic way without
introducing datastore-specific calls, languages, etc.

However, take heed: not all datastores support efficiently performing these
operations. Pick a datastore based on your needs. If you need efficient look-ups,
go for a simple key/value store. If you need efficient queries, consider an SQL
backed datastore.

Notes:

  - Prefix: When a query filters by prefix, it selects keys that are strict
    children of the prefix. For example, a prefix "/foo" would select "/foo/bar"
    but not "/foobar" or "/foo".
  - Orders: Orders are applied hierarchically. Results are sorted by the first
    ordering, then entries equal under the first ordering are sorted with the
    second ordering, etc.
  - Limits & Offset: Limits and offsets are applied after everything else.
*/
type Query struct {
	Prefix            string   // namespaces the query to results whose keys have Prefix
	Filters           []Filter // filter results. apply sequentially
	Orders            []Order  // order results. apply hierarchically
	Limit             int      // maximum number of results
	Offset            int      // skip given number of results
	KeysOnly          bool     // return only keys.
	ReturnExpirations bool     // return expirations (see TTLDatastore)
	ReturnsSizes      bool     // always return sizes. If not set, datastore impl can return
	// it anyway if it doesn't involve a performance cost. If KeysOnly
	// is not set, Size should always be set.
}

// String returns a string representation of the Query for debugging/validation
// purposes. Do not use it for SQL queries.
func (q Query) String() string {
	s := "SELECT keys"
	if !q.KeysOnly {
		s += ",vals"
	}
	if q.ReturnExpirations {
		s += ",exps"
	}

	s += " "

	if q.Prefix != "" {
		s += fmt.Sprintf("FROM %q ", q.Prefix)
	}

	if len(q.Filters) > 0 {
		s += fmt.Sprintf("FILTER [%s", q.Filters[0])
		for _, f := range q.Filters[1:] {
			s += fmt.Sprintf(", %s", f)
		}
		s += "] "
	}

	if len(q.Orders) > 0 {
		s += fmt.Sprintf("ORDER [%s", q.Orders[0])
		for _, f := range q.Orders[1:] {
			s += fmt.Sprintf(", %s", f)
		}
		s += "] "
	}

	if q.Offset > 0 {
		s += fmt.Sprintf("OFFSET %d ", q.Offset)
	}

	if q.Limit > 0 {
		s += fmt.Sprintf("LIMIT %d ", q.Limit)
	}
	// Will always end with a space, strip it.
	return s[:len(s)-1]
}

// Entry is a query result entry.
type Entry struct {
	Key        string    // can't be ds.Key because of circular imports
	Value      []byte    // Will be nil if KeysOnly has been passed.
	Expiration time.Time // Entry expiration timestamp if requested and supported (see TTLDatastore).
	Size       int       // Might be -1 if the datastore doesn't support listing the size with KeysOnly
	// or if ReturnsSizes is not set
}

// Result is a special entry that includes an error, so that the client
// may be warned about internal errors. If Error is non-nil, Entry must be
// empty.
type Result struct {
	Entry

	Error error
}

// Results is a set of Query results. This is the interface for clients.
// Example:
//
//	qr, _ := myds.Query(q)
//	for r := range qr.Next() {
//		if r.Error != nil {
//			// handle.
//			break
//		}
//
//		fmt.Println(r.Entry.Key, r.Entry.Value)
//	}
//
// or, wait on all results at once:
//
//	qr, _ := myds.Query(q)
//	es, _ := qr.Rest()
//	for _, e := range es {
//		fmt.Println(e.Key, e.Value)
//	}
type Results interface {
	Query() Query             // the query these Results correspond to
	Next() <-chan Result      // returns a channel to wait for the next result
	NextSync() (Result, bool) // blocks and waits to return the next result, second parameter returns false when results are exhausted
	Rest() ([]Entry, error)   // waits till processing finishes, returns all entries at once.
	Close() error             // client may call Close to signal early exit

	// Process returns a goprocess.Process associated with these results.
	// Most users will not need this function (Close is all they want),
	// but it's here in case you want to connect the results to other
	// goprocess-friendly things.
	Process() goprocess.Process
}

// results implements Results
type results struct {
	query Query
	proc  goprocess.Process
	res   <-chan Result
}

func (r *results) Next() <-chan Result {
	return r.res
}

func (r *results) NextSync() (Result, bool) {
	val, ok := <-r.res
	return val, ok
}

func (r *results) Rest() ([]Entry, error) {
	var es []Entry
	for e := range r.res {
		if e.Error != nil {
			return es, e.Error
		}
		es = append(es, e.Entry)
	}
	<-r.proc.Closed() // wait till the processing finishes.
	return es, nil
}

func (r *results) Process() goprocess.Process {
	return r.proc
}

func (r *results) Close() error {
	return r.proc.Close()
}

func (r *results) Query() Query {
	return r.query
}

// ResultBuilder is what implementors use to construct results.
// Implementors of datastores and their clients must respect the
// Process of the Request:
//
//   - clients must call r.Process().Close() on an early exit, so
//     implementations can reclaim resources.
//   - if the Entries are read to completion (channel closed), Process
//     should be closed automatically.
//   - datastores must respect <-Process.Closing(), which intermediates
//     an early close signal from the client.
type ResultBuilder struct {
	Query   Query
	Process goprocess.Process
	Output  chan Result
}

// Results returns a Results to this builder.
func (rb *ResultBuilder) Results() Results {
	return &results{
		query: rb.Query,
		proc:  rb.Process,
		res:   rb.Output,
	}
}

const NormalBufSize = 1
const KeysOnlyBufSize = 128

func NewResultBuilder(q Query) *ResultBuilder {
	bufSize := NormalBufSize
	if q.KeysOnly {
		bufSize = KeysOnlyBufSize
	}
	b := &ResultBuilder{
		Query:  q,
		Output: make(chan Result, bufSize),
	}
	b.Process = goprocess.WithTeardown(func() error {
		close(b.Output)
		return nil
	})
	return b
}

// ResultsWithChan returns a Results object from a channel
// of Result entries.
//
// DEPRECATED: This iterator is impossible to cancel correctly. Canceling it
// will leave anything trying to write to the result channel hanging.
func ResultsWithChan(q Query, res <-chan Result) Results {
	return ResultsWithProcess(q, func(worker goprocess.Process, out chan<- Result) {
		for {
			select {
			case <-worker.Closing(): // client told us to close early
				return
			case e, more := <-res:
				if !more {
					return
				}

				select {
				case out <- e:
				case <-worker.Closing(): // client told us to close early
					return
				}
			}
		}
	})
}

// ResultsWithProcess returns a Results object with the results generated by the
// passed subprocess.
func ResultsWithProcess(q Query, proc func(goprocess.Process, chan<- Result)) Results {
	b := NewResultBuilder(q)

	// go consume all the entries and add them to the results.
	b.Process.Go(func(worker goprocess.Process) {
		proc(worker, b.Output)
	})

	go b.Process.CloseAfterChildren() //nolint
	return b.Results()
}

// ResultsWithEntries returns a Results object from a list of entries
func ResultsWithEntries(q Query, res []Entry) Results {
	i := 0
	return ResultsFromIterator(q, Iterator{
		Next: func() (Result, bool) {
			if i >= len(res) {
				return Result{}, false
			}
			next := res[i]
			i++
			return Result{Entry: next}, true
		},
	})
}

func ResultsReplaceQuery(r Results, q Query) Results {
	switch r := r.(type) {
	case *results:
		// note: not using field names to make sure all fields are copied
		return &results{q, r.proc, r.res}
	case *resultsIter:
		// note: not using field names to make sure all fields are copied
		lr := r.legacyResults
		if lr != nil {
			lr = &results{q, lr.proc, lr.res}
		}
		return &resultsIter{q, r.next, r.close, lr}
	default:
		panic("unknown results type")
	}
}

// ResultsFromIterator provides an alternative way to construct
// results without the use of channels.
func ResultsFromIterator(q Query, iter Iterator) Results {
	if iter.Close == nil {
		iter.Close = noopClose
	}
	return &resultsIter{
		query: q,
		next:  iter.Next,
		close: iter.Close,
	}
}

func noopClose() error {
	return nil
}

type Iterator struct {
	Next  func() (Result, bool)
	Close func() error // note: might be called more than once
}

type resultsIter struct {
	query         Query
	next          func() (Result, bool)
	close         func() error
	legacyResults *results
}

func (r *resultsIter) Next() <-chan Result {
	r.useLegacyResults()
	return r.legacyResults.Next()
}

func (r *resultsIter) NextSync() (Result, bool) {
	if r.legacyResults != nil {
		return r.legacyResults.NextSync()
	} else {
		res, ok := r.next()
		if !ok {
			r.close()
		}
		return res, ok
	}
}

func (r *resultsIter) Rest() ([]Entry, error) {
	var es []Entry
	for {
		e, ok := r.NextSync()
		if !ok {
			break
		}
		if e.Error != nil {
			return es, e.Error
		}
		es = append(es, e.Entry)
	}
	return es, nil
}

func (r *resultsIter) Process() goprocess.Process {
	r.useLegacyResults()
	return r.legacyResults.Process()
}

func (r *resultsIter) Close() error {
	if r.legacyResults != nil {
		return r.legacyResults.Close()
	} else {
		return r.close()
	}
}

func (r *resultsIter) Query() Query {
	return r.query
}

func (r *resultsIter) useLegacyResults() {
	if r.legacyResults != nil {
		return
	}

	b := NewResultBuilder(r.query)

	// go consume all the entries and add them to the results.
	b.Process.Go(func(worker goprocess.Process) {
		defer r.close()
		for {
			e, ok := r.next()
			if !ok {
				break
			}
			select {
			case b.Output <- e:
			case <-worker.Closing(): // client told us to close early
				return
			}
		}
	})

	go b.Process.CloseAfterChildren() //nolint

	r.legacyResults = b.Results().(*results)
}
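The iterator-based path (`ResultsFromIterator` / `ResultsWithEntries`) avoids channels entirely by pulling from a `Next` closure over an index. A stdlib-only sketch of that pull-style iteration — local stand-ins mirroring the shape of the vendored `Result`/`Iterator`, not the actual API:

```go
package main

import "fmt"

type Entry struct{ Key string }

// Result and Iterator mirror the shape of query.Result / query.Iterator.
type Result struct {
	Entry
	Err error
}

type Iterator struct {
	Next func() (Result, bool)
}

// entriesIterator mirrors ResultsWithEntries: a Next closure over an index
// into a fixed slice of entries.
func entriesIterator(res []Entry) Iterator {
	i := 0
	return Iterator{Next: func() (Result, bool) {
		if i >= len(res) {
			return Result{}, false
		}
		next := res[i]
		i++
		return Result{Entry: next}, true
	}}
}

// rest mirrors resultsIter.Rest: drain the iterator into a slice.
func rest(it Iterator) []Entry {
	var es []Entry
	for {
		r, ok := it.Next()
		if !ok {
			return es
		}
		es = append(es, r.Entry)
	}
}

func main() {
	es := rest(entriesIterator([]Entry{{"/a"}, {"/b"}}))
	fmt.Println(len(es)) // 2
}
```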
158
vendor/github.com/ipfs/go-datastore/query/query_impl.go
generated
vendored
Normal file
@@ -0,0 +1,158 @@
package query

import (
	"path"

	goprocess "github.com/jbenet/goprocess"
)

// NaiveFilter applies a filter to the results.
func NaiveFilter(qr Results, filter Filter) Results {
	return ResultsFromIterator(qr.Query(), Iterator{
		Next: func() (Result, bool) {
			for {
				e, ok := qr.NextSync()
				if !ok {
					return Result{}, false
				}
				if e.Error != nil || filter.Filter(e.Entry) {
					return e, true
				}
			}
		},
		Close: func() error {
			return qr.Close()
		},
	})
}

// NaiveLimit truncates the results to a given int limit
func NaiveLimit(qr Results, limit int) Results {
	if limit == 0 {
		// 0 means no limit
		return qr
	}
	closed := false
	return ResultsFromIterator(qr.Query(), Iterator{
		Next: func() (Result, bool) {
			if limit == 0 {
				if !closed {
					closed = true
					err := qr.Close()
					if err != nil {
						return Result{Error: err}, true
					}
				}
				return Result{}, false
			}
			limit--
			return qr.NextSync()
		},
		Close: func() error {
			if closed {
				return nil
			}
			closed = true
			return qr.Close()
		},
	})
}

// NaiveOffset skips a given number of results
func NaiveOffset(qr Results, offset int) Results {
	return ResultsFromIterator(qr.Query(), Iterator{
		Next: func() (Result, bool) {
			for ; offset > 0; offset-- {
				res, ok := qr.NextSync()
				if !ok || res.Error != nil {
					return res, ok
				}
			}
			return qr.NextSync()
		},
		Close: func() error {
			return qr.Close()
		},
	})
}

// NaiveOrder reorders results according to given orders.
// WARNING: this is the only non-stream friendly operation!
func NaiveOrder(qr Results, orders ...Order) Results {
	// Short circuit.
	if len(orders) == 0 {
		return qr
	}

	return ResultsWithProcess(qr.Query(), func(worker goprocess.Process, out chan<- Result) {
		defer qr.Close()
		var entries []Entry
	collect:
		for {
			select {
			case <-worker.Closing():
				return
			case e, ok := <-qr.Next():
				if !ok {
					break collect
				}
				if e.Error != nil {
					out <- e
					continue
				}
				entries = append(entries, e.Entry)
			}
		}

		Sort(orders, entries)

		for _, e := range entries {
			select {
			case <-worker.Closing():
				return
			case out <- Result{Entry: e}:
			}
		}
	})
}

func NaiveQueryApply(q Query, qr Results) Results {
	if q.Prefix != "" {
		// Clean the prefix as a key and append / so a prefix of /bar
		// only finds /bar/baz, not /barbaz.
		prefix := q.Prefix
		if len(prefix) == 0 {
			prefix = "/"
		} else {
			if prefix[0] != '/' {
				prefix = "/" + prefix
			}
			prefix = path.Clean(prefix)
		}
		// If the prefix is empty, ignore it.
		if prefix != "/" {
			qr = NaiveFilter(qr, FilterKeyPrefix{prefix + "/"})
		}
	}
	for _, f := range q.Filters {
		qr = NaiveFilter(qr, f)
	}
	if len(q.Orders) > 0 {
		qr = NaiveOrder(qr, q.Orders...)
	}
	if q.Offset != 0 {
		qr = NaiveOffset(qr, q.Offset)
	}
	if q.Limit != 0 {
		qr = NaiveLimit(qr, q.Limit)
	}
	return qr
}

func ResultEntriesFrom(keys []string, vals [][]byte) []Entry {
	re := make([]Entry, len(keys))
	for i, k := range keys {
		re[i] = Entry{Key: k, Size: len(vals[i]), Value: vals[i]}
	}
	return re
}
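`NaiveQueryApply` normalizes the prefix before filtering so that `/bar` matches `/bar/baz` but not `/barbaz`: it ensures a leading slash, runs `path.Clean`, and then filters on the cleaned prefix plus a trailing `/`. The cleaning step on its own, as a small runnable sketch (`cleanPrefix` is a local helper name, not part of the vendored API):

```go
package main

import (
	"fmt"
	"path"
)

// cleanPrefix mirrors the prefix normalization in NaiveQueryApply:
// ensure a leading slash, then path.Clean the result.
func cleanPrefix(prefix string) string {
	if len(prefix) == 0 {
		return "/"
	}
	if prefix[0] != '/' {
		prefix = "/" + prefix
	}
	return path.Clean(prefix)
}

func main() {
	fmt.Println(cleanPrefix("bar"))   // /bar
	fmt.Println(cleanPrefix("/bar/")) // /bar
	fmt.Println(cleanPrefix(""))      // /
	// The caller then filters on cleanPrefix(p) + "/", so "/barbaz"
	// is excluded while "/bar/baz" matches.
}
```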
185
vendor/github.com/ipfs/go-datastore/sync/sync.go
generated
vendored
Normal file
@@ -0,0 +1,185 @@
package sync

import (
	"context"
	"sync"

	ds "github.com/ipfs/go-datastore"
	dsq "github.com/ipfs/go-datastore/query"
)

// MutexDatastore contains a child datastore and a mutex.
// used for coarse sync
type MutexDatastore struct {
	sync.RWMutex

	child ds.Datastore
}

var _ ds.Datastore = (*MutexDatastore)(nil)
var _ ds.Batching = (*MutexDatastore)(nil)
var _ ds.Shim = (*MutexDatastore)(nil)
var _ ds.PersistentDatastore = (*MutexDatastore)(nil)
var _ ds.CheckedDatastore = (*MutexDatastore)(nil)
var _ ds.ScrubbedDatastore = (*MutexDatastore)(nil)
var _ ds.GCDatastore = (*MutexDatastore)(nil)

// MutexWrap constructs a datastore with a coarse lock around the entire
// datastore, for every single operation.
func MutexWrap(d ds.Datastore) *MutexDatastore {
	return &MutexDatastore{child: d}
}

// Children implements Shim
func (d *MutexDatastore) Children() []ds.Datastore {
	return []ds.Datastore{d.child}
}

// Put implements Datastore.Put
func (d *MutexDatastore) Put(ctx context.Context, key ds.Key, value []byte) (err error) {
	d.Lock()
	defer d.Unlock()
	return d.child.Put(ctx, key, value)
}

// Sync implements Datastore.Sync
func (d *MutexDatastore) Sync(ctx context.Context, prefix ds.Key) error {
	d.Lock()
	defer d.Unlock()
	return d.child.Sync(ctx, prefix)
}

// Get implements Datastore.Get
func (d *MutexDatastore) Get(ctx context.Context, key ds.Key) (value []byte, err error) {
	d.RLock()
	defer d.RUnlock()
	return d.child.Get(ctx, key)
}

// Has implements Datastore.Has
func (d *MutexDatastore) Has(ctx context.Context, key ds.Key) (exists bool, err error) {
	d.RLock()
	defer d.RUnlock()
	return d.child.Has(ctx, key)
}

// GetSize implements Datastore.GetSize
func (d *MutexDatastore) GetSize(ctx context.Context, key ds.Key) (size int, err error) {
	d.RLock()
	defer d.RUnlock()
	return d.child.GetSize(ctx, key)
}

// Delete implements Datastore.Delete
func (d *MutexDatastore) Delete(ctx context.Context, key ds.Key) (err error) {
	d.Lock()
	defer d.Unlock()
	return d.child.Delete(ctx, key)
}

// Query implements Datastore.Query
func (d *MutexDatastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) {
	d.RLock()
	defer d.RUnlock()

	// Apply the entire query while locked. Non-sync datastores may not
	// allow concurrent queries.

	results, err := d.child.Query(ctx, q)
	if err != nil {
		return nil, err
	}

	entries, err1 := results.Rest()
	err2 := results.Close()
	switch {
	case err1 != nil:
		return nil, err1
	case err2 != nil:
		return nil, err2
	}
	return dsq.ResultsWithEntries(q, entries), nil
}

func (d *MutexDatastore) Batch(ctx context.Context) (ds.Batch, error) {
	d.RLock()
	defer d.RUnlock()
	bds, ok := d.child.(ds.Batching)
	if !ok {
		return nil, ds.ErrBatchUnsupported
	}

	b, err := bds.Batch(ctx)
	if err != nil {
		return nil, err
	}
	return &syncBatch{
		batch: b,
		mds:   d,
	}, nil
}

func (d *MutexDatastore) Close() error {
	d.RWMutex.Lock()
	defer d.RWMutex.Unlock()
	return d.child.Close()
}

// DiskUsage implements the PersistentDatastore interface.
func (d *MutexDatastore) DiskUsage(ctx context.Context) (uint64, error) {
	d.RLock()
	defer d.RUnlock()
	return ds.DiskUsage(ctx, d.child)
}

type syncBatch struct {
	batch ds.Batch
	mds   *MutexDatastore
}

var _ ds.Batch = (*syncBatch)(nil)

func (b *syncBatch) Put(ctx context.Context, key ds.Key, val []byte) error {
	b.mds.Lock()
	defer b.mds.Unlock()
	return b.batch.Put(ctx, key, val)
}

func (b *syncBatch) Delete(ctx context.Context, key ds.Key) error {
	b.mds.Lock()
	defer b.mds.Unlock()
	return b.batch.Delete(ctx, key)
}

func (b *syncBatch) Commit(ctx context.Context) error {
	b.mds.Lock()
	defer b.mds.Unlock()
	return b.batch.Commit(ctx)
}

func (d *MutexDatastore) Check(ctx context.Context) error {
	if c, ok := d.child.(ds.CheckedDatastore); ok {
		d.RWMutex.Lock()
		defer d.RWMutex.Unlock()
		return c.Check(ctx)
	}
	return nil
}

func (d *MutexDatastore) Scrub(ctx context.Context) error {
	if c, ok := d.child.(ds.ScrubbedDatastore); ok {
		d.RWMutex.Lock()
		defer d.RWMutex.Unlock()
		return c.Scrub(ctx)
	}
	return nil
}

func (d *MutexDatastore) CollectGarbage(ctx context.Context) error {
	if c, ok := d.child.(ds.GCDatastore); ok {
		d.RWMutex.Lock()
		defer d.RWMutex.Unlock()
		return c.CollectGarbage(ctx)
	}
	return nil
}
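The `MutexDatastore` pattern — one `sync.RWMutex` guarding every operation of a wrapped child store, write lock for mutations, read lock for lookups — can be sketched over a plain map. `mapStore` and `lockedStore` are hypothetical local types for illustration, not the vendored API:

```go
package main

import (
	"fmt"
	"sync"
)

// mapStore is a stand-in child store.
type mapStore map[string][]byte

// lockedStore mirrors MutexDatastore: a coarse RWMutex around every call.
type lockedStore struct {
	mu    sync.RWMutex
	child mapStore
}

// Put takes the write lock, like MutexDatastore.Put.
func (s *lockedStore) Put(key string, val []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.child[key] = val
}

// Get takes only the read lock, so concurrent readers don't block each other.
func (s *lockedStore) Get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.child[key]
	return v, ok
}

func main() {
	s := &lockedStore{child: mapStore{}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Concurrent writes are safe because Put serializes on the lock.
			s.Put(fmt.Sprintf("/k%d", i), []byte{byte(i)})
		}(i)
	}
	wg.Wait()
	if v, ok := s.Get("/k3"); ok {
		fmt.Println(v[0]) // 3
	}
}
```

The trade-off, as with the vendored type, is simplicity over throughput: every operation contends on one lock, which is fine for correctness but serializes all writers.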
3
vendor/github.com/ipfs/go-datastore/version.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
	"version": "v0.6.0"
}
21
vendor/github.com/ipfs/go-log/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2014 Juan Batiz-Benet

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
79
vendor/github.com/ipfs/go-log/README.md
generated
vendored
Normal file
@@ -0,0 +1,79 @@
# go-log
|
||||
|
||||
[](http://ipn.io)
|
||||
[](http://ipfs.io/)
|
||||
[](http://webchat.freenode.net/?channels=%23ipfs)
|
||||
[](https://github.com/RichardLitt/standard-readme)
|
||||
[](https://godoc.org/github.com/ipfs/go-log)
|
||||
[](https://circleci.com/gh/ipfs/go-log)
|
||||
|
||||
<!---[](https://coveralls.io/github/ipfs/go-log?branch=master)--->
|
||||
|
||||
|
||||
> The logging library used by go-ipfs
|
||||
|
||||
It currently uses a modified version of [go-logging](https://github.com/whyrusleeping/go-logging) to implement the standard printf-style log output.
|
||||
|
||||
## Install
|
||||
|
||||
```sh
|
||||
go get github.com/ipfs/go-log
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
Once the package is imported under the name `logging`, an instance of `EventLogger` can be created like so:
|
||||
|
||||
```go
|
||||
var log = logging.Logger("subsystem name")
|
||||
```
|
||||
|
||||
It can then be used to emit log messages, either plain printf-style messages at six standard levels or structured messages using `Start`, `StartFromParentState`, `Finish` and `FinishWithErr` methods.
|
||||
|
||||
## Example
|
||||
|
||||
```go
|
||||
func (s *Session) GetBlock(ctx context.Context, c *cid.Cid) (blk blocks.Block, err error) {
|
||||
|
||||
// Starts Span called "Session.GetBlock", associates with `ctx`
|
||||
ctx = log.Start(ctx, "Session.GetBlock")
|
||||
|
||||
// defer so `blk` and `err` can be evaluated after call
|
||||
defer func() {
|
||||
// tag span associated with `ctx`
|
||||
log.SetTags(ctx, map[string]interface{}{
|
||||
"cid": c,
|
||||
"block", blk,
|
||||
})
|
||||
// if err is non-nil tag the span with an error
|
||||
log.FinishWithErr(ctx, err)
|
||||
}()
|
||||
|
||||
if shouldStartSomething() {
|
||||
// log message on span associated with `ctx`
|
||||
log.LogKV(ctx, "startSomething", true)
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
## Tracing

`go-log` wraps the [opentracing-go](https://github.com/opentracing/opentracing-go) methods `StartSpan`, `Finish`, `LogKV`, and `SetTag`.

`go-log` implements its own tracer, `loggabletracer`, based on the [basictracer-go](https://github.com/opentracing/basictracer-go) implementation. If there is an active [`WriterGroup`](https://github.com/ipfs/go-log/blob/master/writer/option.go), the `loggabletracer` will [record](https://github.com/ipfs/go-log/blob/master/tracer/recorder.go) span data to the `WriterGroup`. An example of this can be seen in the [`log tail`](https://github.com/ipfs/go-ipfs/blob/master/core/commands/log.go) command of `go-ipfs`.

Third-party tracers may be used by calling `opentracing.SetGlobalTracer()` with your desired tracing implementation. An example of this can be seen using the [`go-jaeger-plugin`](https://github.com/ipfs/go-jaeger-plugin) and the `go-ipfs` [tracer plugin](https://github.com/ipfs/go-ipfs/blob/master/plugin/tracer.go).
## Contribute

Feel free to join in. All welcome. Open an [issue](https://github.com/ipfs/go-log/issues)!

This repository falls under the IPFS [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md).

### Want to hack on IPFS?

[](https://github.com/ipfs/community/blob/master/contributing.md)

## License

MIT
38
vendor/github.com/ipfs/go-log/context.go
generated
vendored
Normal file
@@ -0,0 +1,38 @@
package log

import (
	"context"
	"errors"
)

type key int

const metadataKey key = 0

// ContextWithLoggable returns a derived context which contains the provided
// Loggable. Any Events logged with the derived context will include the
// provided Loggable.
func ContextWithLoggable(ctx context.Context, l Loggable) context.Context {
	existing, err := MetadataFromContext(ctx)
	if err != nil {
		// context does not contain meta. just set the new metadata
		child := context.WithValue(ctx, metadataKey, Metadata(l.Loggable()))
		return child
	}

	merged := DeepMerge(existing, l.Loggable())
	child := context.WithValue(ctx, metadataKey, merged)
	return child
}

// MetadataFromContext extracts Metadata from a given context's value.
func MetadataFromContext(ctx context.Context) (Metadata, error) {
	value := ctx.Value(metadataKey)
	if value != nil {
		metadata, ok := value.(Metadata)
		if ok {
			return metadata, nil
		}
	}
	return nil, errors.New("context contains no metadata")
}
7
vendor/github.com/ipfs/go-log/entry.go
generated
vendored
Normal file
@@ -0,0 +1,7 @@
package log

type entry struct {
	loggables []Loggable
	system    string
	event     string
}
30
vendor/github.com/ipfs/go-log/levels.go
generated
vendored
Normal file
@@ -0,0 +1,30 @@
package log

import (
	log2 "github.com/ipfs/go-log/v2"
)

// LogLevel represents a log severity level. Use the package variables as an
// enum.
type LogLevel = log2.LogLevel

var (
	LevelDebug  = log2.LevelDebug
	LevelInfo   = log2.LevelInfo
	LevelWarn   = log2.LevelWarn
	LevelError  = log2.LevelError
	LevelDPanic = log2.LevelDPanic
	LevelPanic  = log2.LevelPanic
	LevelFatal  = log2.LevelFatal
)

// LevelFromString parses a string-based level and returns the corresponding
// LogLevel.
//
// Supported strings are: DEBUG, INFO, WARN, ERROR, DPANIC, PANIC, FATAL, and
// their lower-case forms.
//
// The returned LogLevel must be discarded if error is not nil.
func LevelFromString(level string) (LogLevel, error) {
	return log2.LevelFromString(level)
}
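The delegation above forwards to go-log v2, but the shape of a `LevelFromString`-style parser is easy to see in isolation. A minimal stdlib-only sketch covering a subset of the levels (names here are hypothetical, not the library's):

```go
package main

import (
	"fmt"
	"strings"
)

type logLevel int

const (
	levelDebug logLevel = iota
	levelInfo
	levelWarn
	levelError
)

// levelFromString accepts upper- or lower-case level names, mirroring
// the DEBUG/INFO/WARN/ERROR subset of the documented behavior.
func levelFromString(s string) (logLevel, error) {
	switch strings.ToLower(s) {
	case "debug":
		return levelDebug, nil
	case "info":
		return levelInfo, nil
	case "warn":
		return levelWarn, nil
	case "error":
		return levelError, nil
	default:
		// the returned level must be discarded when err is non-nil
		return levelDebug, fmt.Errorf("unknown level %q", s)
	}
}

func main() {
	lvl, err := levelFromString("WARN")
	fmt.Println(lvl == levelWarn, err)
}
```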
420
vendor/github.com/ipfs/go-log/log.go
generated
vendored
Normal file
@@ -0,0 +1,420 @@
// Package log is the logging library used by IPFS
// (https://github.com/ipfs/go-ipfs). It uses a modified version of
// https://godoc.org/github.com/whyrusleeping/go-logging .
package log

import (
	"bytes"
	"context"
	"encoding/json"
	"path"
	"runtime"
	"time"

	log2 "github.com/ipfs/go-log/v2"
	writer "github.com/ipfs/go-log/writer"

	opentrace "github.com/opentracing/opentracing-go"
	otExt "github.com/opentracing/opentracing-go/ext"
	"go.uber.org/zap"
)

var log = Logger("eventlog")

// StandardLogger provides API compatibility with standard printf loggers
// e.g. go-logging
type StandardLogger interface {
	log2.StandardLogger
	// Deprecated: use Warn
	Warning(args ...interface{})
	// Deprecated: use Warnf
	Warningf(format string, args ...interface{})
}

// EventLogger extends the StandardLogger interface to allow for log items
// containing structured metadata
type EventLogger interface {
	StandardLogger

	// Event merges structured data from the provided inputs into a single
	// machine-readable log event.
	//
	// If the context contains metadata, a copy of this is used as the base
	// metadata accumulator.
	//
	// If one or more loggable objects are provided, these are deep-merged into the base blob.
	//
	// Next, the event name is added to the blob under the key "event". If
	// the key "event" already exists, it will be over-written.
	//
	// Finally the timestamp and package name are added to the accumulator and
	// the metadata is logged.
	//
	// Deprecated: Stop using go-log for event logging
	Event(ctx context.Context, event string, m ...Loggable)

	// Deprecated: Stop using go-log for event logging
	EventBegin(ctx context.Context, event string, m ...Loggable) *EventInProgress

	// Start starts an opentracing span with `name`, using
	// any Span found within `ctx` as a ChildOfRef. If no such parent could be
	// found, Start creates a root (parentless) Span.
	//
	// The return value is a context.Context object built around the
	// returned Span.
	//
	// Example usage:
	//
	//     SomeFunction(ctx context.Context, ...) {
	//         ctx := log.Start(ctx, "SomeFunction")
	//         defer log.Finish(ctx)
	//         ...
	//     }
	//
	// Deprecated: Stop using go-log for event logging
	Start(ctx context.Context, name string) context.Context

	// StartFromParentState starts an opentracing span with `name`, using
	// any Span found within `ctx` as a ChildOfRef. If no such parent could be
	// found, StartFromParentState creates a root (parentless) Span.
	//
	// StartFromParentState will attempt to deserialize a SpanContext from `parent`,
	// using any Span found within to continue the trace.
	//
	// The return value is a context.Context object built around the
	// returned Span.
	//
	// An error is returned when `parent` cannot be deserialized to a SpanContext.
	//
	// Example usage:
	//
	//     SomeFunction(ctx context.Context, bParent []byte) {
	//         ctx := log.StartFromParentState(ctx, "SomeFunction", bParent)
	//         defer log.Finish(ctx)
	//         ...
	//     }
	//
	// Deprecated: Stop using go-log for event logging
	StartFromParentState(ctx context.Context, name string, parent []byte) (context.Context, error)

	// Finish completes the span associated with `ctx`.
	//
	// Finish() must be the last call made to any span instance, and to do
	// otherwise leads to undefined behavior.
	// Finish will do its best to notify (log) when used incorrectly,
	// e.g. called twice, or called on a spanless `ctx`.
	// Deprecated: Stop using go-log for event logging
	Finish(ctx context.Context)

	// FinishWithErr completes the span associated with `ctx` and also calls
	// SetErr if `err` is non-nil.
	//
	// FinishWithErr() must be the last call made to any span instance, and to do
	// otherwise leads to undefined behavior.
	// FinishWithErr will do its best to notify (log) when used incorrectly,
	// e.g. called twice, or called on a spanless `ctx`.
	// Deprecated: Stop using go-log for event logging
	FinishWithErr(ctx context.Context, err error)

	// SetErr tags the span associated with `ctx` to reflect that an error occurred, and
	// logs the value `err` under key `error`.
	// Deprecated: Stop using go-log for event logging
	SetErr(ctx context.Context, err error)

	// LogKV records key:value logging data about an event stored in `ctx`.
	// Example:
	//     log.LogKV(
	//         "error", "resolve failure",
	//         "type", "cache timeout",
	//         "waited.millis", 1500)
	// Deprecated: Stop using go-log for event logging
	LogKV(ctx context.Context, alternatingKeyValues ...interface{})

	// SetTag tags key `k` and value `v` on the span associated with `ctx`.
	// Deprecated: Stop using go-log for event logging
	SetTag(ctx context.Context, key string, value interface{})

	// SetTags tags keys from the `tags` map on the span associated with `ctx`.
	// Example:
	//     log.SetTags(ctx, map[string]interface{}{
	//         "type": bizStruct,
	//         "request": req,
	//     })
	// Deprecated: Stop using go-log for event logging
	SetTags(ctx context.Context, tags map[string]interface{})

	// SerializeContext takes the SpanContext instance stored in `ctx` and serializes
	// it to bytes. An error is returned if the `ctx` cannot be serialized to
	// a byte array.
	// Deprecated: Stop using go-log for event logging
	SerializeContext(ctx context.Context) ([]byte, error)
}
var _ EventLogger = Logger("test-logger")

// Logger retrieves an event logger by name
func Logger(system string) *ZapEventLogger {
	if len(system) == 0 {
		setuplog := Logger("setup-logger")
		setuplog.Error("Missing name parameter")
		system = "undefined"
	}
	logger := log2.Logger(system)
	return &ZapEventLogger{system: system, SugaredLogger: logger.SugaredLogger}
}

// ZapEventLogger implements the EventLogger and wraps a go-logging Logger
type ZapEventLogger struct {
	zap.SugaredLogger

	system string
	// TODO add log-level
}

// Deprecated: use Warn
func (el *ZapEventLogger) Warning(args ...interface{}) {
	el.Warn(args...)
}

// Deprecated: use Warnf
func (el *ZapEventLogger) Warningf(format string, args ...interface{}) {
	el.Warnf(format, args...)
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) Start(ctx context.Context, operationName string) context.Context {
	span, ctx := opentrace.StartSpanFromContext(ctx, operationName)
	span.SetTag("system", el.system)
	return ctx
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) StartFromParentState(ctx context.Context, operationName string, parent []byte) (context.Context, error) {
	sc, err := deserializeContext(parent)
	if err != nil {
		return nil, err
	}

	// TODO: RPCServerOption is probably not the best tag, as this is likely from a peer
	span, ctx := opentrace.StartSpanFromContext(ctx, operationName, otExt.RPCServerOption(sc))
	span.SetTag("system", el.system)
	return ctx, nil
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) SerializeContext(ctx context.Context) ([]byte, error) {
	gTracer := opentrace.GlobalTracer()
	b := make([]byte, 0)
	carrier := bytes.NewBuffer(b)
	span := opentrace.SpanFromContext(ctx)
	if err := gTracer.Inject(span.Context(), opentrace.Binary, carrier); err != nil {
		return nil, err
	}
	return carrier.Bytes(), nil
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) LogKV(ctx context.Context, alternatingKeyValues ...interface{}) {
	span := opentrace.SpanFromContext(ctx)
	if span == nil {
		_, file, line, _ := runtime.Caller(1)
		log.Errorf("LogKV with no Span in context called on %s:%d", path.Base(file), line)
		return
	}
	span.LogKV(alternatingKeyValues...)
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) SetTag(ctx context.Context, k string, v interface{}) {
	span := opentrace.SpanFromContext(ctx)
	if span == nil {
		_, file, line, _ := runtime.Caller(1)
		log.Errorf("SetTag with no Span in context called on %s:%d", path.Base(file), line)
		return
	}
	span.SetTag(k, v)
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) SetTags(ctx context.Context, tags map[string]interface{}) {
	span := opentrace.SpanFromContext(ctx)
	if span == nil {
		_, file, line, _ := runtime.Caller(1)
		log.Errorf("SetTags with no Span in context called on %s:%d", path.Base(file), line)
		return
	}
	for k, v := range tags {
		span.SetTag(k, v)
	}
}

func (el *ZapEventLogger) setErr(ctx context.Context, err error, skip int) {
	span := opentrace.SpanFromContext(ctx)
	if span == nil {
		_, file, line, _ := runtime.Caller(skip)
		log.Errorf("SetErr with no Span in context called on %s:%d", path.Base(file), line)
		return
	}
	if err == nil {
		return
	}

	otExt.Error.Set(span, true)
	span.LogKV("error", err.Error())
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) SetErr(ctx context.Context, err error) {
	el.setErr(ctx, err, 1)
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) Finish(ctx context.Context) {
	span := opentrace.SpanFromContext(ctx)
	if span == nil {
		_, file, line, _ := runtime.Caller(1)
		log.Errorf("Finish with no Span in context called on %s:%d", path.Base(file), line)
		return
	}
	span.Finish()
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) FinishWithErr(ctx context.Context, err error) {
	el.setErr(ctx, err, 2)
	el.Finish(ctx)
}
func deserializeContext(bCtx []byte) (opentrace.SpanContext, error) {
	gTracer := opentrace.GlobalTracer()
	carrier := bytes.NewReader(bCtx)
	spanContext, err := gTracer.Extract(opentrace.Binary, carrier)
	if err != nil {
		log.Warningf("Failed to deserialize context %s", err)
		return nil, err
	}
	return spanContext, nil
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) EventBegin(ctx context.Context, event string, metadata ...Loggable) *EventInProgress {
	ctx = el.Start(ctx, event)

	for _, m := range metadata {
		for l, v := range m.Loggable() {
			el.LogKV(ctx, l, v)
		}
	}

	eip := &EventInProgress{}
	eip.doneFunc = func(additional []Loggable) {
		// anything added during the operation,
		// e.g. deprecated methods event.Append(...) or event.SetError(...)
		for _, m := range eip.loggables {
			for l, v := range m.Loggable() {
				el.LogKV(ctx, l, v)
			}
		}
		el.Finish(ctx)
	}
	return eip
}

// Deprecated: Stop using go-log for event logging
func (el *ZapEventLogger) Event(ctx context.Context, event string, metadata ...Loggable) {

	// short circuit if there's nothing to write to
	if !writer.WriterGroup.Active() {
		return
	}

	// Collect loggables for later logging
	var loggables []Loggable

	// get any existing metadata from the context
	existing, err := MetadataFromContext(ctx)
	if err != nil {
		existing = Metadata{}
	}
	loggables = append(loggables, existing)
	loggables = append(loggables, metadata...)

	e := entry{
		loggables: loggables,
		system:    el.system,
		event:     event,
	}

	accum := Metadata{}
	for _, loggable := range e.loggables {
		accum = DeepMerge(accum, loggable.Loggable())
	}

	// apply final attributes to reserved keys
	// TODO accum["level"] = level
	accum["event"] = e.event
	accum["system"] = e.system
	accum["time"] = FormatRFC3339(time.Now())

	var buf bytes.Buffer
	encoder := json.NewEncoder(&buf)
	encoder.SetEscapeHTML(false)
	err = encoder.Encode(accum)
	if err != nil {
		el.Errorf("ERROR FORMATTING EVENT ENTRY: %s", err)
		return
	}

	_, _ = writer.WriterGroup.Write(buf.Bytes())
}

// EventInProgress represents an event which is happening.
// Deprecated: Stop using go-log for event logging
type EventInProgress struct {
	loggables []Loggable
	doneFunc  func([]Loggable)
}

// Append adds loggables to be included in the call to Done.
// Deprecated: use `LogKV` or `SetTag`
func (eip *EventInProgress) Append(l Loggable) {
	eip.loggables = append(eip.loggables, l)
}

// SetError includes the provided error.
// Deprecated: use `SetErr(ctx, error)`
func (eip *EventInProgress) SetError(err error) {
	eip.loggables = append(eip.loggables, LoggableMap{
		"error": err.Error(),
	})
}

// Done creates a new Event entry that includes the duration and appended
// loggables.
// Deprecated: Stop using go-log for event logging
func (eip *EventInProgress) Done() {
	eip.doneFunc(eip.loggables) // create final event with extra data
}

// DoneWithErr creates a new Event entry that includes the duration and appended
// loggables. DoneWithErr accepts an error; if err is non-nil, it is set on
// the EventInProgress. Otherwise the logic is the same as the `Done()` method.
// Deprecated: use `FinishWithErr`
func (eip *EventInProgress) DoneWithErr(err error) {
	if err != nil {
		eip.SetError(err)
	}
	eip.doneFunc(eip.loggables)
}

// Close is an alias for Done.
// Deprecated: Stop using go-log for event logging
func (eip *EventInProgress) Close() error {
	eip.Done()
	return nil
}

// FormatRFC3339 returns the given time in UTC with RFC3339Nano format.
func FormatRFC3339(t time.Time) string {
	return t.UTC().Format(time.RFC3339Nano)
}
42
vendor/github.com/ipfs/go-log/loggable.go
generated
vendored
Normal file
@@ -0,0 +1,42 @@
package log

// Loggable describes objects that can be marshalled into Metadata for logging
type Loggable interface {
	Loggable() map[string]interface{}
}

// LoggableMap is just a generic map keyed by string. It
// implements the Loggable interface.
type LoggableMap map[string]interface{}

// Loggable implements the Loggable interface for LoggableMap
func (l LoggableMap) Loggable() map[string]interface{} {
	return l
}

// LoggableF converts a func into a Loggable
type LoggableF func() map[string]interface{}

// Loggable implements the Loggable interface by running
// the LoggableF function.
func (l LoggableF) Loggable() map[string]interface{} {
	return l()
}

// Deferred returns a LoggableF where the execution of the
// provided function is deferred.
func Deferred(key string, f func() string) Loggable {
	function := func() map[string]interface{} {
		return map[string]interface{}{
			key: f(),
		}
	}
	return LoggableF(function)
}

// Pair returns a Loggable where key is paired to Loggable.
func Pair(key string, l Loggable) Loggable {
	return LoggableMap{
		key: l,
	}
}
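The `LoggableF`/`Deferred` pair above is a lazy-evaluation trick: the map is only built when the value is actually logged. A stdlib-only sketch of the same pattern (lower-case names here are illustrative re-declarations, not the package's):

```go
package main

import "fmt"

// loggable mirrors the Loggable interface: anything that can
// render itself as a map for structured logging.
type loggable interface {
	Loggable() map[string]interface{}
}

// loggableF adapts a plain function to the interface, so the map is
// only built when (and if) the value is actually logged.
type loggableF func() map[string]interface{}

func (f loggableF) Loggable() map[string]interface{} { return f() }

// deferred delays an expensive string computation until log time.
func deferred(key string, f func() string) loggable {
	return loggableF(func() map[string]interface{} {
		return map[string]interface{}{key: f()}
	})
}

func main() {
	evaluations := 0
	l := deferred("peers", func() string { evaluations++; return "42 connected" })
	fmt.Println(evaluations)  // 0: nothing computed yet
	fmt.Println(l.Loggable()) // map[peers:42 connected]
	fmt.Println(evaluations)  // 1: computed on demand
}
```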
77
vendor/github.com/ipfs/go-log/metadata.go
generated
vendored
Normal file
@@ -0,0 +1,77 @@
package log

import (
	"encoding/json"
	"errors"
	"reflect"
)

// Metadata is a convenience type for generic maps
type Metadata map[string]interface{}

// DeepMerge merges the second Metadata parameter into the first.
// Nested Metadata are merged recursively. Primitives are over-written.
func DeepMerge(b, a Metadata) Metadata {
	out := Metadata{}
	for k, v := range b {
		out[k] = v
	}
	for k, v := range a {

		maybe, err := Metadatify(v)
		if err != nil {
			// if the new value is not meta. just overwrite the dest value
			if out[k] != nil {
				log.Debugf("Overwriting key: %s, old: %s, new: %s", k, out[k], v)
			}
			out[k] = v
			continue
		}

		// it is meta. What about dest?
		outv, exists := out[k]
		if !exists {
			// the new value is meta, but there's no dest value. just write it
			out[k] = v
			continue
		}

		outMetadataValue, err := Metadatify(outv)
		if err != nil {
			// the new value is meta and there's a dest value, but the dest
			// value isn't meta. just overwrite
			out[k] = v
			continue
		}

		// both are meta. merge them.
		out[k] = DeepMerge(outMetadataValue, maybe)
	}
	return out
}

// Loggable implements the Loggable interface.
func (m Metadata) Loggable() map[string]interface{} {
	// NB: method defined on value to avoid de-referencing nil Metadata
	return m
}

// JsonString returns the marshaled JSON string for the metadata.
func (m Metadata) JsonString() (string, error) {
	// NB: method defined on value
	b, err := json.Marshal(m)
	return string(b), err
}

// Metadatify converts maps into Metadata.
func Metadatify(i interface{}) (Metadata, error) {
	value := reflect.ValueOf(i)
	if value.Kind() == reflect.Map {
		m := map[string]interface{}{}
		for _, k := range value.MapKeys() {
			m[k.String()] = value.MapIndex(k).Interface()
		}
		return Metadata(m), nil
	}
	return nil, errors.New("is not a map")
}
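The recursive-merge semantics of `DeepMerge` above (nested maps merge, scalars overwrite) can be demonstrated with a compact stdlib-only sketch. Names and parameter order here are illustrative; this version merges its second argument over its first:

```go
package main

import "fmt"

type meta = map[string]interface{}

// deepMerge overlays b onto a copy of a: nested maps merge recursively,
// while scalars in b simply overwrite the value in a.
func deepMerge(a, b meta) meta {
	out := meta{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		bm, bok := v.(meta)
		am, aok := out[k].(meta)
		if bok && aok {
			out[k] = deepMerge(am, bm) // both sides are maps: recurse
			continue
		}
		out[k] = v // scalar, or type mismatch: overwrite
	}
	return out
}

func main() {
	a := meta{"peer": meta{"id": "peer-1", "latency": 10}}
	b := meta{"peer": meta{"latency": 25}}
	fmt.Println(deepMerge(a, b)) // peer keeps "id"; latency becomes 25
}
```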
68
vendor/github.com/ipfs/go-log/oldlog.go
generated
vendored
Normal file
@@ -0,0 +1,68 @@
package log

import (
	"os"

	tracer "github.com/ipfs/go-log/tracer"
	lwriter "github.com/ipfs/go-log/writer"

	opentrace "github.com/opentracing/opentracing-go"

	log2 "github.com/ipfs/go-log/v2"
)

func init() {
	SetupLogging()
}

// Logging environment variables
const (
	envTracingFile = "GOLOG_TRACING_FILE" // /path/to/file
)

func SetupLogging() {
	// We're importing V2. Given that we set up logging on init, we should be
	// fine skipping the rest of the initialization.

	// TracerPlugins are instantiated after this, so use the loggable tracer
	// by default; if a TracerPlugin is added it will override this.
	lgblRecorder := tracer.NewLoggableRecorder()
	lgblTracer := tracer.New(lgblRecorder)
	opentrace.SetGlobalTracer(lgblTracer)

	if tracingfp := os.Getenv(envTracingFile); len(tracingfp) > 0 {
		f, err := os.Create(tracingfp)
		if err != nil {
			log.Errorf("failed to create tracing file: %s", tracingfp)
		} else {
			lwriter.WriterGroup.AddWriter(f)
		}
	}
}

// SetDebugLogging calls SetAllLoggers with logging.DEBUG
func SetDebugLogging() {
	log2.SetDebugLogging()
}

// SetAllLoggers changes the logging level of all loggers to lvl
func SetAllLoggers(lvl LogLevel) {
	log2.SetAllLoggers(lvl)
}

// SetLogLevel changes the log level of a specific subsystem;
// name=="*" changes all subsystems
func SetLogLevel(name, level string) error {
	return log2.SetLogLevel(name, level)
}

// SetLogLevelRegex sets all loggers to level `l` that match expression `e`.
// An error is returned if `e` fails to compile.
func SetLogLevelRegex(e, l string) error {
	return log2.SetLogLevelRegex(e, l)
}

// GetSubsystems returns a slice containing the
// names of the current loggers
func GetSubsystems() []string {
	return log2.GetSubsystems()
}
41
vendor/github.com/ipfs/go-log/package.json
generated
vendored
Normal file
@@ -0,0 +1,41 @@
{
  "bugs": {
    "url": "https://github.com/ipfs/go-log"
  },
  "gx": {
    "dvcsimport": "github.com/ipfs/go-log"
  },
  "gxDependencies": [
    {
      "author": "whyrusleeping",
      "hash": "QmcaSwFc5RBg8yCq54QURwEU4nwjfCpjbpmaAm4VbdGLKv",
      "name": "go-logging",
      "version": "0.0.0"
    },
    {
      "author": "frist",
      "hash": "QmWLWmRVSiagqP15jczsGME1qpob6HDbtbHAY2he9W5iUo",
      "name": "opentracing-go",
      "version": "0.0.3"
    },
    {
      "author": "mattn",
      "hash": "QmTsHcKgTQ4VeYZd8eKYpTXeLW7KNwkRD9wjnrwsV2sToq",
      "name": "go-colorable",
      "version": "0.2.0"
    },
    {
      "author": "whyrusleeping",
      "hash": "QmddjPSGZb3ieihSseFeCfVRpZzcqczPNsD2DvarSwnjJB",
      "name": "gogo-protobuf",
      "version": "1.2.1"
    }
  ],
  "gxVersion": "0.12.1",
  "language": "go",
  "license": "",
  "name": "go-log",
  "releaseCmd": "git commit -a -m \"gx publish $VERSION\"",
  "version": "1.5.9"
}
21
vendor/github.com/ipfs/go-log/tracer/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2016 The OpenTracing Authors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
42
vendor/github.com/ipfs/go-log/tracer/context.go
generated
vendored
Normal file
@@ -0,0 +1,42 @@
package loggabletracer

// SpanContext holds the basic Span metadata.
type SpanContext struct {
	// A probabilistically unique identifier for a [multi-span] trace.
	TraceID uint64

	// A probabilistically unique identifier for a span.
	SpanID uint64

	// Whether the trace is sampled.
	Sampled bool

	// The span's associated baggage.
	Baggage map[string]string // initialized on first use
}

// ForeachBaggageItem belongs to the opentracing.SpanContext interface
func (c SpanContext) ForeachBaggageItem(handler func(k, v string) bool) {
	for k, v := range c.Baggage {
		if !handler(k, v) {
			break
		}
	}
}

// WithBaggageItem returns an entirely new loggabletracer SpanContext with the
// given key:value baggage pair set.
func (c SpanContext) WithBaggageItem(key, val string) SpanContext {
	var newBaggage map[string]string
	if c.Baggage == nil {
		newBaggage = map[string]string{key: val}
	} else {
		newBaggage = make(map[string]string, len(c.Baggage)+1)
		for k, v := range c.Baggage {
			newBaggage[k] = v
		}
		newBaggage[key] = val
	}
	// Use positional parameters so the compiler will help catch new fields.
	return SpanContext{c.TraceID, c.SpanID, c.Sampled, newBaggage}
}
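`WithBaggageItem` above is a copy-on-write map: each derived context gets a fresh map, so earlier span contexts are never mutated. A stdlib-only sketch of the same shape (the lower-case type is an illustrative stand-in, not the tracer's own):

```go
package main

import "fmt"

type spanContext struct {
	baggage map[string]string
}

// withBaggageItem copies the map before adding the new pair, so earlier
// contexts are never mutated; same copy-on-write shape as WithBaggageItem.
func (c spanContext) withBaggageItem(key, val string) spanContext {
	newBaggage := make(map[string]string, len(c.baggage)+1)
	for k, v := range c.baggage {
		newBaggage[k] = v
	}
	newBaggage[key] = val
	return spanContext{baggage: newBaggage}
}

func main() {
	root := spanContext{}
	child := root.withBaggageItem("user", "alice")
	grandchild := child.withBaggageItem("request", "r-1")

	fmt.Println(len(root.baggage))       // 0: root untouched
	fmt.Println(child.baggage["user"])   // alice
	fmt.Println(len(grandchild.baggage)) // 2
}
```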
78
vendor/github.com/ipfs/go-log/tracer/debug.go
generated
vendored
Normal file
@@ -0,0 +1,78 @@
package loggabletracer
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
)
|
||||
|
||||
const debugGoroutineIDTag = "_initial_goroutine"
|
||||
|
||||
type errAssertionFailed struct {
|
||||
span *spanImpl
|
||||
msg string
|
||||
}
|
||||
|
||||
// Error implements the error interface.
|
||||
func (err *errAssertionFailed) Error() string {
|
||||
return fmt.Sprintf("%s:\n%+v", err.msg, err.span)
|
||||
}
|
||||
|
||||
func (s *spanImpl) Lock() {
|
||||
s.Mutex.Lock()
|
||||
s.maybeAssertSanityLocked()
|
||||
}
|
||||
|
||||
func (s *spanImpl) maybeAssertSanityLocked() {
|
||||
if s.tracer == nil {
|
||||
s.Mutex.Unlock()
|
||||
panic(&errAssertionFailed{span: s, msg: "span used after call to Finish()"})
|
||||
}
|
||||
if s.tracer.options.DebugAssertSingleGoroutine {
|
||||
startID := curGoroutineID()
|
||||
curID, ok := s.raw.Tags[debugGoroutineIDTag].(uint64)
|
||||
if !ok {
|
||||
// This is likely invoked in the context of the SetTag which sets
|
||||
// debugGoroutineTag.
|
||||
return
|
||||
}
|
||||
if startID != curID {
|
||||
s.Mutex.Unlock()
|
||||
panic(&errAssertionFailed{
|
||||
span: s,
|
||||
msg: fmt.Sprintf("span started on goroutine %d, but now running on %d", startID, curID),
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var goroutineSpace = []byte("goroutine ")
|
||||
var littleBuf = sync.Pool{
|
||||
New: func() interface{} {
|
||||
buf := make([]byte, 64)
|
||||
return &buf
|
||||
},
|
||||
}
|
||||
|
||||
// Credit to @bradfitz:
|
||||
// https://github.com/golang/net/blob/master/http2/gotrack.go#L51
|
||||
func curGoroutineID() uint64 {
|
||||
bp := littleBuf.Get().(*[]byte)
|
||||
defer littleBuf.Put(bp)
|
||||
b := *bp
|
||||
b = b[:runtime.Stack(b, false)]
|
||||
// Parse the 4707 out of "goroutine 4707 ["
|
||||
b = bytes.TrimPrefix(b, goroutineSpace)
|
||||
i := bytes.IndexByte(b, ' ')
|
||||
if i < 0 {
|
||||
panic(fmt.Sprintf("No space found in %q", b))
|
||||
}
|
||||
b = b[:i]
|
||||
n, err := strconv.ParseUint(string(b), 10, 64)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("Failed to parse goroutine ID out of %q: %v", b, err))
|
||||
}
|
||||
return n
|
||||
}
|
||||
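The parsing trick used by `curGoroutineID` above — reading the numeric ID out of the first line of `runtime.Stack` output — can be exercised on its own. A minimal standalone sketch (the helper name `parseGoroutineID` is illustrative, not part of the vendored package):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// parseGoroutineID extracts the numeric ID from a stack header such as
// "goroutine 4707 [running]:", mirroring the logic of curGoroutineID.
func parseGoroutineID(stack []byte) (uint64, error) {
	stack = bytes.TrimPrefix(stack, []byte("goroutine "))
	i := bytes.IndexByte(stack, ' ')
	if i < 0 {
		return 0, fmt.Errorf("no space found in %q", stack)
	}
	return strconv.ParseUint(string(stack[:i]), 10, 64)
}

func main() {
	id, err := parseGoroutineID([]byte("goroutine 4707 [running]:"))
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // 4707
}
```

The vendored version additionally reuses a pooled 64-byte buffer so the hot path allocates nothing.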
62
vendor/github.com/ipfs/go-log/tracer/event.go
generated
vendored
Normal file
@@ -0,0 +1,62 @@
package loggabletracer

import "github.com/opentracing/opentracing-go"

// A SpanEvent is emitted when a mutating command is called on a Span.
type SpanEvent interface{}

// EventCreate is emitted when a Span is created.
type EventCreate struct{ OperationName string }

// EventTag is received when SetTag is called.
type EventTag struct {
	Key   string
	Value interface{}
}

// EventBaggage is received when SetBaggageItem is called.
type EventBaggage struct {
	Key, Value string
}

// EventLogFields is received when LogFields or LogKV is called.
type EventLogFields opentracing.LogRecord

// EventLog is received when Log (or one of its derivatives) is called.
//
// DEPRECATED
type EventLog opentracing.LogData

// EventFinish is received when Finish is called.
type EventFinish RawSpan

func (s *spanImpl) onCreate(opName string) {
	if s.event != nil {
		s.event(EventCreate{OperationName: opName})
	}
}
func (s *spanImpl) onTag(key string, value interface{}) {
	if s.event != nil {
		s.event(EventTag{Key: key, Value: value})
	}
}
func (s *spanImpl) onLog(ld opentracing.LogData) {
	if s.event != nil {
		s.event(EventLog(ld))
	}
}
func (s *spanImpl) onLogFields(lr opentracing.LogRecord) {
	if s.event != nil {
		s.event(EventLogFields(lr))
	}
}
func (s *spanImpl) onBaggage(key, value string) {
	if s.event != nil {
		s.event(EventBaggage{Key: key, Value: value})
	}
}
func (s *spanImpl) onFinish(sp RawSpan) {
	if s.event != nil {
		s.event(EventFinish(sp))
	}
}
61
vendor/github.com/ipfs/go-log/tracer/propagation.go
generated
vendored
Normal file
@@ -0,0 +1,61 @@
package loggabletracer

import opentracing "github.com/opentracing/opentracing-go"

type accessorPropagator struct {
	tracer *LoggableTracer
}

// DelegatingCarrier is a flexible carrier interface which can be implemented
// by types which have a means of storing the trace metadata and already know
// how to serialize themselves (for example, protocol buffers).
type DelegatingCarrier interface {
	SetState(traceID, spanID uint64, sampled bool)
	State() (traceID, spanID uint64, sampled bool)
	SetBaggageItem(key, value string)
	GetBaggage(func(key, value string))
}

func (p *accessorPropagator) Inject(
	spanContext opentracing.SpanContext,
	carrier interface{},
) error {
	dc, ok := carrier.(DelegatingCarrier)
	if !ok || dc == nil {
		return opentracing.ErrInvalidCarrier
	}
	sc, ok := spanContext.(SpanContext)
	if !ok {
		return opentracing.ErrInvalidSpanContext
	}
	dc.SetState(sc.TraceID, sc.SpanID, sc.Sampled)
	for k, v := range sc.Baggage {
		dc.SetBaggageItem(k, v)
	}
	return nil
}

func (p *accessorPropagator) Extract(
	carrier interface{},
) (opentracing.SpanContext, error) {
	dc, ok := carrier.(DelegatingCarrier)
	if !ok || dc == nil {
		return nil, opentracing.ErrInvalidCarrier
	}

	traceID, spanID, sampled := dc.State()
	sc := SpanContext{
		TraceID: traceID,
		SpanID:  spanID,
		Sampled: sampled,
		Baggage: nil,
	}
	dc.GetBaggage(func(k, v string) {
		if sc.Baggage == nil {
			sc.Baggage = map[string]string{}
		}
		sc.Baggage[k] = v
	})

	return sc, nil
}
178
vendor/github.com/ipfs/go-log/tracer/propagation_ot.go
generated
vendored
Normal file
@@ -0,0 +1,178 @@
package loggabletracer

import (
	"encoding/binary"
	"io"
	"strconv"
	"strings"

	"github.com/gogo/protobuf/proto"
	"github.com/ipfs/go-log/tracer/wire"
	opentracing "github.com/opentracing/opentracing-go"
)

type textMapPropagator struct{}
type binaryPropagator struct{}

const (
	prefixTracerState = "ot-tracer-"
	prefixBaggage     = "ot-baggage-"

	tracerStateFieldCount = 3
	fieldNameTraceID      = prefixTracerState + "traceid"
	fieldNameSpanID       = prefixTracerState + "spanid"
	fieldNameSampled      = prefixTracerState + "sampled"
)

func (p *textMapPropagator) Inject(
	spanContext opentracing.SpanContext,
	opaqueCarrier interface{},
) error {
	sc, ok := spanContext.(SpanContext)
	if !ok {
		return opentracing.ErrInvalidSpanContext
	}
	carrier, ok := opaqueCarrier.(opentracing.TextMapWriter)
	if !ok {
		return opentracing.ErrInvalidCarrier
	}
	carrier.Set(fieldNameTraceID, strconv.FormatUint(sc.TraceID, 16))
	carrier.Set(fieldNameSpanID, strconv.FormatUint(sc.SpanID, 16))
	carrier.Set(fieldNameSampled, strconv.FormatBool(sc.Sampled))

	for k, v := range sc.Baggage {
		carrier.Set(prefixBaggage+k, v)
	}
	return nil
}

func (p *textMapPropagator) Extract(
	opaqueCarrier interface{},
) (opentracing.SpanContext, error) {
	carrier, ok := opaqueCarrier.(opentracing.TextMapReader)
	if !ok {
		return nil, opentracing.ErrInvalidCarrier
	}
	requiredFieldCount := 0
	var traceID, spanID uint64
	var sampled bool
	var err error
	decodedBaggage := make(map[string]string)
	err = carrier.ForeachKey(func(k, v string) error {
		switch strings.ToLower(k) {
		case fieldNameTraceID:
			traceID, err = strconv.ParseUint(v, 16, 64)
			if err != nil {
				return opentracing.ErrSpanContextCorrupted
			}
		case fieldNameSpanID:
			spanID, err = strconv.ParseUint(v, 16, 64)
			if err != nil {
				return opentracing.ErrSpanContextCorrupted
			}
		case fieldNameSampled:
			sampled, err = strconv.ParseBool(v)
			if err != nil {
				return opentracing.ErrSpanContextCorrupted
			}
		default:
			lowercaseK := strings.ToLower(k)
			if strings.HasPrefix(lowercaseK, prefixBaggage) {
				decodedBaggage[strings.TrimPrefix(lowercaseK, prefixBaggage)] = v
			}
			// Balance off the requiredFieldCount++ just below...
			requiredFieldCount--
		}
		requiredFieldCount++
		return nil
	})
	if err != nil {
		return nil, err
	}
	if requiredFieldCount < tracerStateFieldCount {
		if requiredFieldCount == 0 {
			return nil, opentracing.ErrSpanContextNotFound
		}
		return nil, opentracing.ErrSpanContextCorrupted
	}

	return SpanContext{
		TraceID: traceID,
		SpanID:  spanID,
		Sampled: sampled,
		Baggage: decodedBaggage,
	}, nil
}

func (p *binaryPropagator) Inject(
	spanContext opentracing.SpanContext,
	opaqueCarrier interface{},
) error {
	sc, ok := spanContext.(SpanContext)
	if !ok {
		return opentracing.ErrInvalidSpanContext
	}
	carrier, ok := opaqueCarrier.(io.Writer)
	if !ok {
		return opentracing.ErrInvalidCarrier
	}

	state := wire.TracerState{}
	state.TraceId = sc.TraceID
	state.SpanId = sc.SpanID
	state.Sampled = sc.Sampled
	state.BaggageItems = sc.Baggage

	b, err := proto.Marshal(&state)
	if err != nil {
		return err
	}

	// Write the length of the marshalled binary to the writer.
	length := uint32(len(b))
	if err := binary.Write(carrier, binary.BigEndian, &length); err != nil {
		return err
	}

	_, err = carrier.Write(b)
	return err
}

func (p *binaryPropagator) Extract(
	opaqueCarrier interface{},
) (opentracing.SpanContext, error) {
	carrier, ok := opaqueCarrier.(io.Reader)
	if !ok {
		return nil, opentracing.ErrInvalidCarrier
	}

	// Read the length of marshalled binary. io.ReadAll isn't that performant
	// since it keeps resizing the underlying buffer as it encounters more bytes
	// to read. By reading the length, we can allocate a fixed sized buf and read
	// the exact amount of bytes into it.
	var length uint32
	if err := binary.Read(carrier, binary.BigEndian, &length); err != nil {
		return nil, opentracing.ErrSpanContextCorrupted
	}
	buf := make([]byte, length)
	if n, err := carrier.Read(buf); err != nil {
		if n > 0 {
			return nil, opentracing.ErrSpanContextCorrupted
		}
		return nil, opentracing.ErrSpanContextNotFound
	}

	ctx := wire.TracerState{}
	if err := proto.Unmarshal(buf, &ctx); err != nil {
		return nil, opentracing.ErrSpanContextCorrupted
	}

	return SpanContext{
		TraceID: ctx.TraceId,
		SpanID:  ctx.SpanId,
		Sampled: ctx.Sampled,
		Baggage: ctx.BaggageItems,
	}, nil
}
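The text-map propagator above encodes the trace and span IDs as hex strings under `ot-tracer-*` keys and baggage under `ot-baggage-*`. This standalone sketch emulates that wire format for a plain map carrier (the `inject`/`extract` helpers are illustrative; the real propagator works against the opentracing TextMapWriter/TextMapReader interfaces):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// inject writes span state into a map using the ot-tracer-* field names
// from the vendored propagator.
func inject(traceID, spanID uint64, sampled bool, baggage map[string]string) map[string]string {
	m := map[string]string{
		"ot-tracer-traceid": strconv.FormatUint(traceID, 16),
		"ot-tracer-spanid":  strconv.FormatUint(spanID, 16),
		"ot-tracer-sampled": strconv.FormatBool(sampled),
	}
	for k, v := range baggage {
		m["ot-baggage-"+k] = v
	}
	return m
}

// extract parses the same format back out, case-insensitively.
func extract(m map[string]string) (traceID, spanID uint64, sampled bool, baggage map[string]string, err error) {
	baggage = map[string]string{}
	for k, v := range m {
		switch k := strings.ToLower(k); {
		case k == "ot-tracer-traceid":
			traceID, err = strconv.ParseUint(v, 16, 64)
		case k == "ot-tracer-spanid":
			spanID, err = strconv.ParseUint(v, 16, 64)
		case k == "ot-tracer-sampled":
			sampled, err = strconv.ParseBool(v)
		case strings.HasPrefix(k, "ot-baggage-"):
			baggage[strings.TrimPrefix(k, "ot-baggage-")] = v
		}
		if err != nil {
			return
		}
	}
	return
}

func main() {
	m := inject(0xdeadbeef, 42, true, map[string]string{"user": "alice"})
	tid, sid, sampled, bag, err := extract(m)
	if err != nil {
		panic(err)
	}
	fmt.Println(tid == 0xdeadbeef, sid == 42, sampled, bag["user"]) // true true true alice
}
```

The binary propagator instead length-prefixes a protobuf `wire.TracerState` message, which lets the reader allocate an exact-size buffer up front.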
34
vendor/github.com/ipfs/go-log/tracer/raw.go
generated
vendored
Normal file
@@ -0,0 +1,34 @@
package loggabletracer

import (
	"time"

	opentracing "github.com/opentracing/opentracing-go"
)

// RawSpan encapsulates all state associated with a (finished) Span.
type RawSpan struct {
	// Those recording the RawSpan should also record the contents of its
	// SpanContext.
	Context SpanContext

	// The SpanID of this SpanContext's first intra-trace reference (i.e.,
	// "parent"), or 0 if there is no parent.
	ParentSpanID uint64

	// The name of the "operation" this span is an instance of. (Called a "span
	// name" in some implementations)
	Operation string

	// We store <start, duration> rather than <start, end> so that only
	// one of the timestamps has global clock uncertainty issues.
	Start    time.Time
	Duration time.Duration

	// Essentially an extension mechanism. Can be used for many purposes,
	// not to be enumerated here.
	Tags opentracing.Tags

	// The span's "microlog".
	Logs []opentracing.LogRecord
}
103
vendor/github.com/ipfs/go-log/tracer/recorder.go
generated
vendored
Normal file
@@ -0,0 +1,103 @@
package loggabletracer

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
	"time"

	writer "github.com/ipfs/go-log/writer"
	opentrace "github.com/opentracing/opentracing-go"
)

// A SpanRecorder handles all of the `RawSpan` data generated via an
// associated `Tracer` (see `NewStandardTracer`) instance. It also names
// the containing process and provides access to a straightforward tag map.
type SpanRecorder interface {
	// Implementations must determine whether and where to store `span`.
	RecordSpan(span RawSpan)
}

type LoggableSpanRecorder struct{}

// NewLoggableRecorder creates a new LoggableSpanRecorder.
func NewLoggableRecorder() *LoggableSpanRecorder {
	return new(LoggableSpanRecorder)
}

// LoggableSpan is the loggable representation of a span, treated as an event log.
type LoggableSpan struct {
	TraceID      uint64         `json:"TraceID"`
	SpanID       uint64         `json:"SpanID"`
	ParentSpanID uint64         `json:"ParentSpanID"`
	Operation    string         `json:"Operation"`
	Start        time.Time      `json:"Start"`
	Duration     time.Duration  `json:"Duration"`
	Tags         opentrace.Tags `json:"Tags"`
	Logs         []SpanLog      `json:"Logs"`
}

type SpanLog struct {
	Timestamp time.Time   `json:"Timestamp"`
	Field     []SpanField `json:"Fields"`
}

type SpanField struct {
	Key   string `json:"Key"`
	Value string `json:"Value"`
}

// RecordSpan implements the respective method of SpanRecorder.
func (r *LoggableSpanRecorder) RecordSpan(span RawSpan) {
	// Short-circuit if there's nothing to write to.
	if !writer.WriterGroup.Active() {
		return
	}

	sl := make([]SpanLog, len(span.Logs))
	for i := range span.Logs {
		sl[i].Timestamp = span.Logs[i].Timestamp
		sf := make([]SpanField, len(span.Logs[i].Fields))
		sl[i].Field = sf
		for j := range span.Logs[i].Fields {
			sf[j].Key = span.Logs[i].Fields[j].Key()
			sf[j].Value = fmt.Sprint(span.Logs[i].Fields[j].Value())
		}
	}

	tags := make(map[string]interface{}, len(span.Tags))
	for k, v := range span.Tags {
		switch vt := v.(type) {
		case bool, string, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
			tags[k] = v
		case []byte:
			tags[k] = base64.StdEncoding.EncodeToString(vt)
		default:
			tags[k] = fmt.Sprint(v)
		}
	}

	spanlog := &LoggableSpan{
		TraceID:      span.Context.TraceID,
		SpanID:       span.Context.SpanID,
		ParentSpanID: span.ParentSpanID,
		Operation:    span.Operation,
		Start:        span.Start,
		Duration:     span.Duration,
		Tags:         tags,
		Logs:         sl,
	}

	var buf bytes.Buffer
	encoder := json.NewEncoder(&buf)
	encoder.SetEscapeHTML(false)
	err := encoder.Encode(spanlog)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR FORMATTING SPAN ENTRY: %s\n", err)
		return
	}

	_, _ = writer.WriterGroup.Write(buf.Bytes())
}
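RecordSpan serializes each finished span with a `json.Encoder` and `SetEscapeHTML(false)`, so characters such as `<`, `>` and `&` stay literal in the emitted log line rather than being escaped to `\u003c` etc. A minimal standalone sketch of just that encoding step (the `encodeEntry` helper is illustrative, not part of the vendored package):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// encodeEntry marshals one log entry the way RecordSpan does: a json.Encoder
// with HTML escaping disabled, returning the entry plus a trailing newline.
func encodeEntry(entry map[string]string) (string, error) {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false) // keep <, > and & readable in log output
	if err := enc.Encode(entry); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	s, err := encodeEntry(map[string]string{"Operation": "fetch <block> & verify"})
	if err != nil {
		panic(err)
	}
	fmt.Print(s) // {"Operation":"fetch <block> & verify"}
}
```

With the default `json.Marshal`, the same entry would come out as `fetch \u003cblock\u003e \u0026 verify`, which is noisy in human-read log streams.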
274
vendor/github.com/ipfs/go-log/tracer/span.go
generated
vendored
Normal file
@@ -0,0 +1,274 @@
package loggabletracer

import (
	"sync"
	"time"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
	"github.com/opentracing/opentracing-go/log"
)

// Span provides access to the essential details of the span, for use
// by loggabletracer consumers. These methods may only be called prior
// to (*opentracing.Span).Finish().
type Span interface {
	opentracing.Span

	// Operation names the work done by this span instance
	Operation() string

	// Start indicates when the span began
	Start() time.Time
}

// Implements the `Span` interface. Created via LoggableTracer (see
// `loggabletracer.New()`).
type spanImpl struct {
	tracer     *LoggableTracer
	event      func(SpanEvent)
	sync.Mutex // protects the fields below
	raw        RawSpan
	// The number of logs dropped because of MaxLogsPerSpan.
	numDroppedLogs int
}

var spanPool = &sync.Pool{New: func() interface{} {
	return &spanImpl{}
}}

func (s *spanImpl) reset() {
	s.tracer, s.event = nil, nil
	// Note: Would like to do the following, but then the consumer of RawSpan
	// (the recorder) needs to make sure that they're not holding on to the
	// baggage or logs when they return (i.e. they need to copy if they care):
	//
	//     logs, baggage := s.raw.Logs[:0], s.raw.Baggage
	//     for k := range baggage {
	//         delete(baggage, k)
	//     }
	//     s.raw.Logs, s.raw.Baggage = logs, baggage
	//
	// That's likely too much to ask for. But there is some magic we should
	// be able to do with `runtime.SetFinalizer` to reclaim that memory into
	// a buffer pool when GC considers them unreachable, which should ease
	// some of the load. Hard to say how quickly that would be in practice
	// though.
	s.raw = RawSpan{
		Context: SpanContext{},
	}
}

func (s *spanImpl) SetOperationName(operationName string) opentracing.Span {
	s.Lock()
	defer s.Unlock()
	s.raw.Operation = operationName
	return s
}

func (s *spanImpl) trim() bool {
	return !s.raw.Context.Sampled && s.tracer.options.TrimUnsampledSpans
}

func (s *spanImpl) SetTag(key string, value interface{}) opentracing.Span {
	defer s.onTag(key, value)
	s.Lock()
	defer s.Unlock()
	if key == string(ext.SamplingPriority) {
		if v, ok := value.(uint16); ok {
			s.raw.Context.Sampled = v != 0
			return s
		}
	}
	if s.trim() {
		return s
	}

	if s.raw.Tags == nil {
		s.raw.Tags = opentracing.Tags{}
	}
	s.raw.Tags[key] = value
	return s
}

func (s *spanImpl) LogKV(keyValues ...interface{}) {
	fields, err := log.InterleavedKVToFields(keyValues...)
	if err != nil {
		s.LogFields(log.Error(err), log.String("function", "LogKV"))
		return
	}
	s.LogFields(fields...)
}

func (s *spanImpl) appendLog(lr opentracing.LogRecord) {
	maxLogs := s.tracer.options.MaxLogsPerSpan
	if maxLogs == 0 || len(s.raw.Logs) < maxLogs {
		s.raw.Logs = append(s.raw.Logs, lr)
		return
	}

	// We have too many logs. We don't touch the first numOld logs; we treat the
	// rest as a circular buffer and overwrite the oldest log among those.
	numOld := (maxLogs - 1) / 2
	numNew := maxLogs - numOld
	s.raw.Logs[numOld+s.numDroppedLogs%numNew] = lr
	s.numDroppedLogs++
}

func (s *spanImpl) LogFields(fields ...log.Field) {
	lr := opentracing.LogRecord{
		Fields: fields,
	}
	defer s.onLogFields(lr)
	s.Lock()
	defer s.Unlock()
	if s.trim() || s.tracer.options.DropAllLogs {
		return
	}
	if lr.Timestamp.IsZero() {
		lr.Timestamp = time.Now()
	}
	s.appendLog(lr)
}

func (s *spanImpl) LogEvent(event string) {
	s.Log(opentracing.LogData{
		Event: event,
	})
}

func (s *spanImpl) LogEventWithPayload(event string, payload interface{}) {
	s.Log(opentracing.LogData{
		Event:   event,
		Payload: payload,
	})
}

func (s *spanImpl) Log(ld opentracing.LogData) {
	defer s.onLog(ld)
	s.Lock()
	defer s.Unlock()
	if s.trim() || s.tracer.options.DropAllLogs {
		return
	}

	if ld.Timestamp.IsZero() {
		ld.Timestamp = time.Now()
	}

	s.appendLog(ld.ToLogRecord())
}

func (s *spanImpl) Finish() {
	s.FinishWithOptions(opentracing.FinishOptions{})
}

// rotateLogBuffer rotates the records in the buffer: records 0 to pos-1 move at
// the end (i.e. pos circular left shifts).
func rotateLogBuffer(buf []opentracing.LogRecord, pos int) {
	// This algorithm is described in:
	//    http://www.cplusplus.com/reference/algorithm/rotate
	for first, middle, next := 0, pos, pos; first != middle; {
		buf[first], buf[next] = buf[next], buf[first]
		first++
		next++
		if next == len(buf) {
			next = middle
		} else if first == middle {
			middle = next
		}
	}
}

func (s *spanImpl) FinishWithOptions(opts opentracing.FinishOptions) {
	finishTime := opts.FinishTime
	if finishTime.IsZero() {
		finishTime = time.Now()
	}
	duration := finishTime.Sub(s.raw.Start)

	s.Lock()
	defer s.Unlock()

	for _, lr := range opts.LogRecords {
		s.appendLog(lr)
	}
	for _, ld := range opts.BulkLogData {
		s.appendLog(ld.ToLogRecord())
	}

	if s.numDroppedLogs > 0 {
		// We dropped some log events, which means that we used part of Logs as a
		// circular buffer (see appendLog). De-circularize it.
		numOld := (len(s.raw.Logs) - 1) / 2
		numNew := len(s.raw.Logs) - numOld
		rotateLogBuffer(s.raw.Logs[numOld:], s.numDroppedLogs%numNew)

		// Replace the log in the middle (the oldest "new" log) with information
		// about the dropped logs. This means that we are effectively dropping one
		// more "new" log.
		numDropped := s.numDroppedLogs + 1
		s.raw.Logs[numOld] = opentracing.LogRecord{
			// Keep the timestamp of the last dropped event.
			Timestamp: s.raw.Logs[numOld].Timestamp,
			Fields: []log.Field{
				log.String("event", "dropped Span logs"),
				log.Int("dropped_log_count", numDropped),
				log.String("component", "loggabletracer"),
			},
		}
	}

	s.raw.Duration = duration

	s.onFinish(s.raw)
	s.tracer.options.Recorder.RecordSpan(s.raw)

	// Last chance to get options before the span is possibly reset.
	poolEnabled := s.tracer.options.EnableSpanPool
	if s.tracer.options.DebugAssertUseAfterFinish {
		// This makes it much more likely to catch a panic on any subsequent
		// operation since s.tracer is accessed on every call to `Lock`.
		// We don't call `reset()` here to preserve the logs in the Span
		// which are printed when the assertion triggers.
		s.tracer = nil
	}

	if poolEnabled {
		spanPool.Put(s)
	}
}

func (s *spanImpl) Tracer() opentracing.Tracer {
	return s.tracer
}

func (s *spanImpl) Context() opentracing.SpanContext {
	return s.raw.Context
}

func (s *spanImpl) SetBaggageItem(key, val string) opentracing.Span {
	s.onBaggage(key, val)
	if s.trim() {
		return s
	}

	s.Lock()
	defer s.Unlock()
	s.raw.Context = s.raw.Context.WithBaggageItem(key, val)
	return s
}

func (s *spanImpl) BaggageItem(key string) string {
	s.Lock()
	defer s.Unlock()
	return s.raw.Context.Baggage[key]
}

func (s *spanImpl) Operation() string {
	return s.raw.Operation
}

func (s *spanImpl) Start() time.Time {
	return s.raw.Start
}
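The bounded log buffer used by `appendLog` and de-circularized in `FinishWithOptions` is worth seeing in isolation: the oldest `(max-1)/2` records are kept verbatim, the remaining slots form a circular buffer for the newest records, and on finish the circular part is rotated back into chronological order. A standalone sketch with `int` standing in for `opentracing.LogRecord` (type and method names here are illustrative only; the real code also overwrites one slot with a "dropped Span logs" record, which this sketch omits):

```go
package main

import "fmt"

// logBuffer keeps at most max records: the oldest half verbatim, the rest
// as a circular buffer of the newest records.
type logBuffer struct {
	max     int
	logs    []int
	dropped int
}

func (b *logBuffer) append(v int) {
	if b.max == 0 || len(b.logs) < b.max {
		b.logs = append(b.logs, v)
		return
	}
	numOld := (b.max - 1) / 2
	numNew := b.max - numOld
	b.logs[numOld+b.dropped%numNew] = v
	b.dropped++
}

// finish restores chronological order in the circular tail, as
// FinishWithOptions does via rotateLogBuffer.
func (b *logBuffer) finish() []int {
	if b.dropped > 0 {
		numOld := (len(b.logs) - 1) / 2
		numNew := len(b.logs) - numOld
		rotate(b.logs[numOld:], b.dropped%numNew)
	}
	return b.logs
}

// rotate performs pos circular left shifts, mirroring rotateLogBuffer.
func rotate(buf []int, pos int) {
	for first, middle, next := 0, pos, pos; first != middle; {
		buf[first], buf[next] = buf[next], buf[first]
		first++
		next++
		if next == len(buf) {
			next = middle
		} else if first == middle {
			middle = next
		}
	}
}

func main() {
	b := &logBuffer{max: 5}
	for i := 1; i <= 9; i++ {
		b.append(i)
	}
	fmt.Println(b.finish()) // [1 2 7 8 9]: oldest two kept, newest three in order
}
```

Appending 1..9 with max=5 keeps records 1 and 2 untouched while 3..6 are overwritten in the circular tail, leaving 7, 8, 9 after rotation.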
280
vendor/github.com/ipfs/go-log/tracer/tracer.go
generated
vendored
Normal file
@@ -0,0 +1,280 @@
package loggabletracer

import (
	"time"

	writer "github.com/ipfs/go-log/writer"
	opentracing "github.com/opentracing/opentracing-go"
)

// Tracer extends the opentracing.Tracer interface with methods to
// probe implementation state, for use by loggabletracer consumers.
type Tracer interface {
	opentracing.Tracer

	// Options gets the Options used in New() or NewWithOptions().
	Options() Options
}

// Options allows creating a customized Tracer via NewWithOptions. The object
// must not be updated when there is an active tracer using it.
type Options struct {
	// ShouldSample is a function which is called when creating a new Span and
	// determines whether that Span is sampled. The randomized TraceID is supplied
	// to allow deterministic sampling decisions to be made across different nodes.
	// For example,
	//
	//   func(traceID uint64) { return traceID % 64 == 0 }
	//
	// samples every 64th trace on average.
	ShouldSample func(traceID uint64) bool
	// TrimUnsampledSpans turns potentially expensive operations on unsampled
	// Spans into no-ops. More precisely, tags and log events are silently
	// discarded. If NewSpanEventListener is set, the callbacks will still fire.
	TrimUnsampledSpans bool
	// Recorder receives Spans which have been finished.
	Recorder SpanRecorder
	// NewSpanEventListener can be used to enhance the tracer by effectively
	// attaching external code to trace events. See NetTraceIntegrator for a
	// practical example, and event.go for the list of possible events.
	NewSpanEventListener func() func(SpanEvent)
	// DropAllLogs turns log events on all Spans into no-ops.
	// If NewSpanEventListener is set, the callbacks will still fire.
	DropAllLogs bool
	// MaxLogsPerSpan limits the number of Logs in a span (if set to a nonzero
	// value). If a span has more logs than this value, logs are dropped as
	// necessary (and replaced with a log describing how many were dropped).
	//
	// About half of the MaxLogsPerSpan logs kept are the oldest logs, and about
	// half are the newest logs.
	//
	// If NewSpanEventListener is set, the callbacks will still fire for all log
	// events. This value is ignored if DropAllLogs is true.
	MaxLogsPerSpan int
	// DebugAssertSingleGoroutine internally records the ID of the goroutine
	// creating each Span and verifies that no operation is carried out on
	// it on a different goroutine.
	// Provided strictly for development purposes.
	// Passing Spans between goroutines without proper synchronization often
	// results in use-after-Finish() errors. For a simple example, consider the
	// following pseudocode:
	//
	//   func (s *Server) Handle(req http.Request) error {
	//       sp := s.StartSpan("server")
	//       defer sp.Finish()
	//       wait := s.queueProcessing(opentracing.ContextWithSpan(context.Background(), sp), req)
	//       select {
	//       case resp := <-wait:
	//           return resp.Error
	//       case <-time.After(10*time.Second):
	//           sp.LogEvent("timed out waiting for processing")
	//           return ErrTimedOut
	//       }
	//   }
	//
	// This looks reasonable at first, but a request which spends more than ten
	// seconds in the queue is abandoned by the main goroutine and its trace
	// finished, leading to use-after-finish when the request is finally
	// processed. Note also that even joining on to a finished Span via
	// StartSpanWithOptions constitutes an illegal operation.
	//
	// Code bases which do not require (or decide they do not want) Spans to
	// be passed across goroutine boundaries can run with this flag enabled in
	// tests to increase their chances of spotting wrong-doers.
	DebugAssertSingleGoroutine bool
	// DebugAssertUseAfterFinish is provided strictly for development purposes.
	// When set, it attempts to exacerbate issues emanating from use of Spans
	// after calling Finish by running additional assertions.
	DebugAssertUseAfterFinish bool
	// EnableSpanPool enables the use of a pool, so that the tracer reuses spans
	// after Finish has been called on it. Adds a slight performance gain as it
	// reduces allocations. However, if you have any use-after-finish race
	// conditions the code may panic.
	EnableSpanPool bool
}

// DefaultOptions returns an Options object with a 1 in 64 sampling rate and
// all options disabled. A Recorder needs to be set manually before using the
// returned object with a Tracer.
func DefaultOptions() Options {
	return Options{
		ShouldSample:   func(traceID uint64) bool { return traceID%64 == 0 },
		MaxLogsPerSpan: 100,
	}
}

// NewWithOptions creates a customized Tracer.
func NewWithOptions(opts Options) opentracing.Tracer {
	rval := &LoggableTracer{options: opts}
	rval.accessorPropagator = &accessorPropagator{rval}
	return rval
}

// New creates and returns a standard Tracer which defers completed Spans to
// `recorder`.
// Spans created by this Tracer support the ext.SamplingPriority tag: Setting
// ext.SamplingPriority causes the Span to be Sampled from that point on.
func New(recorder SpanRecorder) opentracing.Tracer {
	opts := DefaultOptions()
	opts.Recorder = recorder
	return NewWithOptions(opts)
}

// LoggableTracer implements the `Tracer` interface.
type LoggableTracer struct {
	options            Options
	textPropagator     *textMapPropagator
	binaryPropagator   *binaryPropagator
	accessorPropagator *accessorPropagator
}

func (t *LoggableTracer) StartSpan(
	operationName string,
	opts ...opentracing.StartSpanOption,
) opentracing.Span {
	if !writer.WriterGroup.Active() {
		return opentracing.NoopTracer.StartSpan(opentracing.NoopTracer{}, operationName)
	}

	sso := opentracing.StartSpanOptions{}
	for _, o := range opts {
		o.Apply(&sso)
	}
	return t.StartSpanWithOptions(operationName, sso)
}

func (t *LoggableTracer) getSpan() *spanImpl {
	if t.options.EnableSpanPool {
		sp := spanPool.Get().(*spanImpl)
		sp.reset()
		return sp
	}
	return &spanImpl{}
}

func (t *LoggableTracer) StartSpanWithOptions(
	operationName string,
	opts opentracing.StartSpanOptions,
) opentracing.Span {
	if !writer.WriterGroup.Active() {
		return opentracing.NoopTracer.StartSpan(opentracing.NoopTracer{}, operationName)
	}
	// Start time.
	startTime := opts.StartTime
	if startTime.IsZero() {
		startTime = time.Now()
	}

	// Tags.
	tags := opts.Tags

	// Build the new span. This is the only allocation: We'll return this as
	// an opentracing.Span.
	sp := t.getSpan()
	// Look for a parent in the list of References.
	//
	// TODO: would be nice if loggabletracer did something with all
	// References, not just the first one.
ReferencesLoop:
	for _, ref := range opts.References {
		switch ref.Type {
		case opentracing.ChildOfRef,
			opentracing.FollowsFromRef:

			refCtx, ok := ref.ReferencedContext.(SpanContext)
			if !ok {
				// Could be a noopSpanContext.
				// Ignore that parent.
				continue
			}
			sp.raw.Context.TraceID = refCtx.TraceID
			sp.raw.Context.SpanID = randomID()
			sp.raw.Context.Sampled = refCtx.Sampled
			sp.raw.ParentSpanID = refCtx.SpanID

			if l := len(refCtx.Baggage); l > 0 {
				sp.raw.Context.Baggage = make(map[string]string, l)
				for k, v := range refCtx.Baggage {
					sp.raw.Context.Baggage[k] = v
				}
			}
			break ReferencesLoop
		}
	}
	if sp.raw.Context.TraceID == 0 {
		// No parent Span found; allocate new trace and span ids and determine
		// the Sampled status.
		sp.raw.Context.TraceID, sp.raw.Context.SpanID = randomID2()
		sp.raw.Context.Sampled = t.options.ShouldSample(sp.raw.Context.TraceID)
	}

	return t.startSpanInternal(
		sp,
		operationName,
		startTime,
		tags,
	)
}

func (t *LoggableTracer) startSpanInternal(
	sp *spanImpl,
	operationName string,
	startTime time.Time,
	tags opentracing.Tags,
) opentracing.Span {
	sp.tracer = t
	if t.options.NewSpanEventListener != nil {
		sp.event = t.options.NewSpanEventListener()
	}
	sp.raw.Operation = operationName
	sp.raw.Start = startTime
	sp.raw.Duration = -1
	sp.raw.Tags = tags
	if t.options.DebugAssertSingleGoroutine {
		sp.SetTag(debugGoroutineIDTag, curGoroutineID())
	}
	defer sp.onCreate(operationName)
	return sp
}

type delegatorType struct{}

// Delegator is the format to use for DelegatingCarrier.
var Delegator delegatorType

func (t *LoggableTracer) Inject(sc opentracing.SpanContext, format interface{}, carrier interface{}) error {
	if !writer.WriterGroup.Active() {
		return opentracing.NoopTracer.Inject(opentracing.NoopTracer{}, sc, format, carrier)
	}
	switch format {
	case opentracing.TextMap, opentracing.HTTPHeaders:
		return t.textPropagator.Inject(sc, carrier)
	case opentracing.Binary:
		return t.binaryPropagator.Inject(sc, carrier)
	}
	if _, ok := format.(delegatorType); ok {
		return t.accessorPropagator.Inject(sc, carrier)
	}
	return opentracing.ErrUnsupportedFormat
}

func (t *LoggableTracer) Extract(format interface{}, carrier interface{}) (opentracing.SpanContext, error) {
	if !writer.WriterGroup.Active() {
		return opentracing.NoopTracer.Extract(opentracing.NoopTracer{}, format, carrier)
||||
}
|
||||
switch format {
|
||||
case opentracing.TextMap, opentracing.HTTPHeaders:
|
||||
return t.textPropagator.Extract(carrier)
|
||||
case opentracing.Binary:
|
||||
return t.binaryPropagator.Extract(carrier)
|
||||
}
|
||||
if _, ok := format.(delegatorType); ok {
|
||||
return t.accessorPropagator.Extract(carrier)
|
||||
}
|
||||
return nil, opentracing.ErrUnsupportedFormat
|
||||
}
|
||||
|
||||
func (t *LoggableTracer) Options() Options {
|
||||
return t.options
|
||||
}
|
||||
25
vendor/github.com/ipfs/go-log/tracer/util.go
generated
vendored
Normal file
@@ -0,0 +1,25 @@
package loggabletracer

import (
	"math/rand"
	"sync"
	"time"
)

var (
	seededIDGen = rand.New(rand.NewSource(time.Now().UnixNano()))
	// The golang rand generators are *not* intrinsically thread-safe.
	seededIDLock sync.Mutex
)

func randomID() uint64 {
	seededIDLock.Lock()
	defer seededIDLock.Unlock()
	return uint64(seededIDGen.Int63())
}

func randomID2() (uint64, uint64) {
	seededIDLock.Lock()
	defer seededIDLock.Unlock()
	return uint64(seededIDGen.Int63()), uint64(seededIDGen.Int63())
}
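The mutex above exists because a `*rand.Rand` is not safe for concurrent use. A minimal standalone sketch of the same serialize-through-a-mutex pattern (the names `idGen`, `idLock`, and `newID` are illustrative, not part of the vendored package):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// A *rand.Rand is not goroutine-safe, so every caller must go through the
// mutex -- the same pattern as seededIDGen/seededIDLock in util.go.
var (
	idGen  = rand.New(rand.NewSource(time.Now().UnixNano()))
	idLock sync.Mutex
)

func newID() uint64 {
	idLock.Lock()
	defer idLock.Unlock()
	return uint64(idGen.Int63())
}

func main() {
	var wg sync.WaitGroup
	ids := make([]uint64, 64)
	for i := range ids {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ids[i] = newID() // safe even under `go run -race` thanks to idLock
		}(i)
	}
	wg.Wait()
	fmt.Println(len(ids))
}
```

Without the lock, running this under the race detector would report a data race on the generator's internal state.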
6
vendor/github.com/ipfs/go-log/tracer/wire/Makefile
generated
vendored
Normal file
@@ -0,0 +1,6 @@
pbgos := $(patsubst %.proto,%.pb.go,$(wildcard *.proto))

all: $(pbgos)

%.pb.go: %.proto
	protoc --gogofaster_out=. --proto_path=$(GOPATH)/src:. $<
40
vendor/github.com/ipfs/go-log/tracer/wire/carrier.go
generated
vendored
Normal file
@@ -0,0 +1,40 @@
package wire

// ProtobufCarrier is a DelegatingCarrier that uses protocol buffers as the
// underlying data structure. The reason for implementing DelegatingCarrier
// is to allow end users to serialize the underlying protocol buffers using
// jsonpb or any other serialization form they want.
type ProtobufCarrier TracerState

// SetState sets the tracer state.
func (p *ProtobufCarrier) SetState(traceID, spanID uint64, sampled bool) {
	p.TraceId = traceID
	p.SpanId = spanID
	p.Sampled = sampled
}

// State returns the tracer state.
func (p *ProtobufCarrier) State() (traceID, spanID uint64, sampled bool) {
	traceID = p.TraceId
	spanID = p.SpanId
	sampled = p.Sampled
	return traceID, spanID, sampled
}

// SetBaggageItem sets a baggage item.
func (p *ProtobufCarrier) SetBaggageItem(key, value string) {
	if p.BaggageItems == nil {
		p.BaggageItems = map[string]string{key: value}
		return
	}

	p.BaggageItems[key] = value
}

// GetBaggage iterates over each baggage item and executes the callback with
// the key:value pair.
func (p *ProtobufCarrier) GetBaggage(f func(k, v string)) {
	for k, v := range p.BaggageItems {
		f(k, v)
	}
}
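The carrier above is a plain state holder with lazy map allocation. A self-contained sketch of the same round trip, using a hypothetical local `carrierState` struct in place of the generated `TracerState` (so it compiles without the protobuf package):

```go
package main

import "fmt"

// carrierState is a hypothetical stand-in for the generated TracerState,
// used only to illustrate the carrier round trip.
type carrierState struct {
	TraceId, SpanId uint64
	Sampled         bool
	BaggageItems    map[string]string
}

// SetState mirrors ProtobufCarrier.SetState.
func (p *carrierState) SetState(traceID, spanID uint64, sampled bool) {
	p.TraceId, p.SpanId, p.Sampled = traceID, spanID, sampled
}

// SetBaggageItem mirrors the lazy-allocation pattern in carrier.go.
func (p *carrierState) SetBaggageItem(key, value string) {
	if p.BaggageItems == nil { // first item: allocate the map
		p.BaggageItems = map[string]string{key: value}
		return
	}
	p.BaggageItems[key] = value
}

// GetBaggage invokes f for every baggage key/value pair.
func (p *carrierState) GetBaggage(f func(k, v string)) {
	for k, v := range p.BaggageItems {
		f(k, v)
	}
}

func main() {
	var c carrierState
	c.SetState(1, 2, true)
	c.SetBaggageItem("user", "alice")
	c.GetBaggage(func(k, v string) { fmt.Printf("%s=%s\n", k, v) })
}
```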
6
vendor/github.com/ipfs/go-log/tracer/wire/gen.go
generated
vendored
Normal file
@@ -0,0 +1,6 @@
package wire

//go:generate protoc --gogofaster_out=$GOPATH/src/github.com/ipfs/go-log/tracer/wire wire.proto

// Run `go get github.com/gogo/protobuf/protoc-gen-gogofaster` to install the
// gogofaster generator binary.
528
vendor/github.com/ipfs/go-log/tracer/wire/wire.pb.go
generated
vendored
Normal file
@@ -0,0 +1,528 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: wire.proto

package wire

import (
	encoding_binary "encoding/binary"
	fmt "fmt"
	proto "github.com/gogo/protobuf/proto"
	io "io"
	math "math"
	math_bits "math/bits"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

type TracerState struct {
	TraceId      uint64            `protobuf:"fixed64,1,opt,name=trace_id,json=traceId,proto3" json:"trace_id,omitempty"`
	SpanId       uint64            `protobuf:"fixed64,2,opt,name=span_id,json=spanId,proto3" json:"span_id,omitempty"`
	Sampled      bool              `protobuf:"varint,3,opt,name=sampled,proto3" json:"sampled,omitempty"`
	BaggageItems map[string]string `protobuf:"bytes,4,rep,name=baggage_items,json=baggageItems,proto3" json:"baggage_items,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
}

func (m *TracerState) Reset()         { *m = TracerState{} }
func (m *TracerState) String() string { return proto.CompactTextString(m) }
func (*TracerState) ProtoMessage()    {}
func (*TracerState) Descriptor() ([]byte, []int) {
	return fileDescriptor_f2dcdddcdf68d8e0, []int{0}
}
func (m *TracerState) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *TracerState) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_TracerState.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *TracerState) XXX_Merge(src proto.Message) {
	xxx_messageInfo_TracerState.Merge(m, src)
}
func (m *TracerState) XXX_Size() int {
	return m.Size()
}
func (m *TracerState) XXX_DiscardUnknown() {
	xxx_messageInfo_TracerState.DiscardUnknown(m)
}

var xxx_messageInfo_TracerState proto.InternalMessageInfo

func (m *TracerState) GetTraceId() uint64 {
	if m != nil {
		return m.TraceId
	}
	return 0
}

func (m *TracerState) GetSpanId() uint64 {
	if m != nil {
		return m.SpanId
	}
	return 0
}

func (m *TracerState) GetSampled() bool {
	if m != nil {
		return m.Sampled
	}
	return false
}

func (m *TracerState) GetBaggageItems() map[string]string {
	if m != nil {
		return m.BaggageItems
	}
	return nil
}

func init() {
	proto.RegisterType((*TracerState)(nil), "loggabletracer.wire.TracerState")
	proto.RegisterMapType((map[string]string)(nil), "loggabletracer.wire.TracerState.BaggageItemsEntry")
}

func init() { proto.RegisterFile("wire.proto", fileDescriptor_f2dcdddcdf68d8e0) }

var fileDescriptor_f2dcdddcdf68d8e0 = []byte{
	// 250 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x2a, 0xcf, 0x2c, 0x4a,
	0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12, 0xce, 0xc9, 0x4f, 0x4f, 0x4f, 0x4c, 0xca, 0x49,
	0x2d, 0x29, 0x4a, 0x4c, 0x4e, 0x2d, 0xd2, 0x03, 0x49, 0x29, 0x7d, 0x65, 0xe4, 0xe2, 0x0e, 0x01,
	0xf3, 0x83, 0x4b, 0x12, 0x4b, 0x52, 0x85, 0x24, 0xb9, 0x38, 0xc0, 0xd2, 0xf1, 0x99, 0x29, 0x12,
	0x8c, 0x0a, 0x8c, 0x1a, 0x6c, 0x41, 0xec, 0x60, 0xbe, 0x67, 0x8a, 0x90, 0x38, 0x17, 0x7b, 0x71,
	0x41, 0x62, 0x1e, 0x48, 0x86, 0x09, 0x2c, 0xc3, 0x06, 0xe2, 0x7a, 0xa6, 0x08, 0x49, 0x70, 0xb1,
	0x17, 0x27, 0xe6, 0x16, 0xe4, 0xa4, 0xa6, 0x48, 0x30, 0x2b, 0x30, 0x6a, 0x70, 0x04, 0xc1, 0xb8,
	0x42, 0xe1, 0x5c, 0xbc, 0x49, 0x89, 0xe9, 0xe9, 0x89, 0xe9, 0xa9, 0xf1, 0x99, 0x25, 0xa9, 0xb9,
	0xc5, 0x12, 0x2c, 0x0a, 0xcc, 0x1a, 0xdc, 0x46, 0x46, 0x7a, 0x58, 0x9c, 0xa2, 0x87, 0xe4, 0x0c,
	0x3d, 0x27, 0x88, 0x2e, 0x4f, 0x90, 0x26, 0xd7, 0xbc, 0x92, 0xa2, 0xca, 0x20, 0x9e, 0x24, 0x24,
	0x21, 0x29, 0x7b, 0x2e, 0x41, 0x0c, 0x25, 0x42, 0x02, 0x5c, 0xcc, 0xd9, 0xa9, 0x95, 0x60, 0x67,
	0x73, 0x06, 0x81, 0x98, 0x42, 0x22, 0x5c, 0xac, 0x65, 0x89, 0x39, 0xa5, 0xa9, 0x60, 0x07, 0x73,
	0x06, 0x41, 0x38, 0x56, 0x4c, 0x16, 0x8c, 0x4e, 0x72, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24,
	0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0xe3, 0x84, 0xc7, 0x72, 0x0c, 0x17, 0x1e, 0xcb, 0x31, 0xdc, 0x78,
	0x2c, 0xc7, 0x10, 0xc5, 0x02, 0x72, 0x4c, 0x12, 0x1b, 0x38, 0xcc, 0x8c, 0x01, 0x01, 0x00, 0x00,
	0xff, 0xff, 0xe4, 0x48, 0xf4, 0xf8, 0x41, 0x01, 0x00, 0x00,
}

func (m *TracerState) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *TracerState) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *TracerState) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if len(m.BaggageItems) > 0 {
		for k := range m.BaggageItems {
			v := m.BaggageItems[k]
			baseI := i
			i -= len(v)
			copy(dAtA[i:], v)
			i = encodeVarintWire(dAtA, i, uint64(len(v)))
			i--
			dAtA[i] = 0x12
			i -= len(k)
			copy(dAtA[i:], k)
			i = encodeVarintWire(dAtA, i, uint64(len(k)))
			i--
			dAtA[i] = 0xa
			i = encodeVarintWire(dAtA, i, uint64(baseI-i))
			i--
			dAtA[i] = 0x22
		}
	}
	if m.Sampled {
		i--
		if m.Sampled {
			dAtA[i] = 1
		} else {
			dAtA[i] = 0
		}
		i--
		dAtA[i] = 0x18
	}
	if m.SpanId != 0 {
		i -= 8
		encoding_binary.LittleEndian.PutUint64(dAtA[i:], uint64(m.SpanId))
		i--
		dAtA[i] = 0x11
	}
	if m.TraceId != 0 {
		i -= 8
		encoding_binary.LittleEndian.PutUint64(dAtA[i:], uint64(m.TraceId))
		i--
		dAtA[i] = 0x9
	}
	return len(dAtA) - i, nil
}

func encodeVarintWire(dAtA []byte, offset int, v uint64) int {
	offset -= sovWire(v)
	base := offset
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return base
}
func (m *TracerState) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	if m.TraceId != 0 {
		n += 9
	}
	if m.SpanId != 0 {
		n += 9
	}
	if m.Sampled {
		n += 2
	}
	if len(m.BaggageItems) > 0 {
		for k, v := range m.BaggageItems {
			_ = k
			_ = v
			mapEntrySize := 1 + len(k) + sovWire(uint64(len(k))) + 1 + len(v) + sovWire(uint64(len(v)))
			n += mapEntrySize + 1 + sovWire(uint64(mapEntrySize))
		}
	}
	return n
}

func sovWire(x uint64) (n int) {
	return (math_bits.Len64(x|1) + 6) / 7
}
func sozWire(x uint64) (n int) {
	return sovWire(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *TracerState) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowWire
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: TracerState: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: TracerState: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 1 {
				return fmt.Errorf("proto: wrong wireType = %d for field TraceId", wireType)
			}
			m.TraceId = 0
			if (iNdEx + 8) > l {
				return io.ErrUnexpectedEOF
			}
			m.TraceId = uint64(encoding_binary.LittleEndian.Uint64(dAtA[iNdEx:]))
			iNdEx += 8
		case 2:
			if wireType != 1 {
				return fmt.Errorf("proto: wrong wireType = %d for field SpanId", wireType)
			}
			m.SpanId = 0
			if (iNdEx + 8) > l {
				return io.ErrUnexpectedEOF
			}
			m.SpanId = uint64(encoding_binary.LittleEndian.Uint64(dAtA[iNdEx:]))
			iNdEx += 8
		case 3:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Sampled", wireType)
			}
			var v int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowWire
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				v |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			m.Sampled = bool(v != 0)
		case 4:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field BaggageItems", wireType)
			}
			var msglen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowWire
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				msglen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if msglen < 0 {
				return ErrInvalidLengthWire
			}
			postIndex := iNdEx + msglen
			if postIndex < 0 {
				return ErrInvalidLengthWire
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			if m.BaggageItems == nil {
				m.BaggageItems = make(map[string]string)
			}
			var mapkey string
			var mapvalue string
			for iNdEx < postIndex {
				entryPreIndex := iNdEx
				var wire uint64
				for shift := uint(0); ; shift += 7 {
					if shift >= 64 {
						return ErrIntOverflowWire
					}
					if iNdEx >= l {
						return io.ErrUnexpectedEOF
					}
					b := dAtA[iNdEx]
					iNdEx++
					wire |= uint64(b&0x7F) << shift
					if b < 0x80 {
						break
					}
				}
				fieldNum := int32(wire >> 3)
				if fieldNum == 1 {
					var stringLenmapkey uint64
					for shift := uint(0); ; shift += 7 {
						if shift >= 64 {
							return ErrIntOverflowWire
						}
						if iNdEx >= l {
							return io.ErrUnexpectedEOF
						}
						b := dAtA[iNdEx]
						iNdEx++
						stringLenmapkey |= uint64(b&0x7F) << shift
						if b < 0x80 {
							break
						}
					}
					intStringLenmapkey := int(stringLenmapkey)
					if intStringLenmapkey < 0 {
						return ErrInvalidLengthWire
					}
					postStringIndexmapkey := iNdEx + intStringLenmapkey
					if postStringIndexmapkey < 0 {
						return ErrInvalidLengthWire
					}
					if postStringIndexmapkey > l {
						return io.ErrUnexpectedEOF
					}
					mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
					iNdEx = postStringIndexmapkey
				} else if fieldNum == 2 {
					var stringLenmapvalue uint64
					for shift := uint(0); ; shift += 7 {
						if shift >= 64 {
							return ErrIntOverflowWire
						}
						if iNdEx >= l {
							return io.ErrUnexpectedEOF
						}
						b := dAtA[iNdEx]
						iNdEx++
						stringLenmapvalue |= uint64(b&0x7F) << shift
						if b < 0x80 {
							break
						}
					}
					intStringLenmapvalue := int(stringLenmapvalue)
					if intStringLenmapvalue < 0 {
						return ErrInvalidLengthWire
					}
					postStringIndexmapvalue := iNdEx + intStringLenmapvalue
					if postStringIndexmapvalue < 0 {
						return ErrInvalidLengthWire
					}
					if postStringIndexmapvalue > l {
						return io.ErrUnexpectedEOF
					}
					mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])
					iNdEx = postStringIndexmapvalue
				} else {
					iNdEx = entryPreIndex
					skippy, err := skipWire(dAtA[iNdEx:])
					if err != nil {
						return err
					}
					if (skippy < 0) || (iNdEx+skippy) < 0 {
						return ErrInvalidLengthWire
					}
					if (iNdEx + skippy) > postIndex {
						return io.ErrUnexpectedEOF
					}
					iNdEx += skippy
				}
			}
			m.BaggageItems[mapkey] = mapvalue
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipWire(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if (skippy < 0) || (iNdEx+skippy) < 0 {
				return ErrInvalidLengthWire
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}
func skipWire(dAtA []byte) (n int, err error) {
	l := len(dAtA)
	iNdEx := 0
	depth := 0
	for iNdEx < l {
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return 0, ErrIntOverflowWire
			}
			if iNdEx >= l {
				return 0, io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		wireType := int(wire & 0x7)
		switch wireType {
		case 0:
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowWire
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				iNdEx++
				if dAtA[iNdEx-1] < 0x80 {
					break
				}
			}
		case 1:
			iNdEx += 8
		case 2:
			var length int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowWire
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				length |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if length < 0 {
				return 0, ErrInvalidLengthWire
			}
			iNdEx += length
		case 3:
			depth++
		case 4:
			if depth == 0 {
				return 0, ErrUnexpectedEndOfGroupWire
			}
			depth--
		case 5:
			iNdEx += 4
		default:
			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
		}
		if iNdEx < 0 {
			return 0, ErrInvalidLengthWire
		}
		if depth == 0 {
			return iNdEx, nil
		}
	}
	return 0, io.ErrUnexpectedEOF
}

var (
	ErrInvalidLengthWire        = fmt.Errorf("proto: negative length found during unmarshaling")
	ErrIntOverflowWire          = fmt.Errorf("proto: integer overflow")
	ErrUnexpectedEndOfGroupWire = fmt.Errorf("proto: unexpected end of group")
)
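The `encodeVarintWire`/`sovWire` helpers and the decode loops above all implement protobuf base-128 varints: seven payload bits per byte, high bit set on every byte except the last. A self-contained sketch of that encoding (the helper names `sov`, `putVarint`, and `getVarint` are illustrative, not from the generated file):

```go
package main

import (
	"fmt"
	"math/bits"
)

// sov mirrors sovWire: the number of 7-bit groups needed to encode x.
func sov(x uint64) int {
	return (bits.Len64(x|1) + 6) / 7
}

// putVarint writes v as a base-128 varint and returns the byte count.
func putVarint(buf []byte, v uint64) int {
	n := 0
	for v >= 1<<7 {
		buf[n] = uint8(v&0x7f | 0x80) // low 7 bits, continuation bit set
		v >>= 7
		n++
	}
	buf[n] = uint8(v) // final byte has the continuation bit clear
	return n + 1
}

// getVarint mirrors the decode loops in Unmarshal/skipWire.
func getVarint(buf []byte) (uint64, int) {
	var v uint64
	for i, shift := 0, uint(0); i < len(buf); i, shift = i+1, shift+7 {
		b := buf[i]
		v |= uint64(b&0x7F) << shift
		if b < 0x80 { // continuation bit clear: done
			return v, i + 1
		}
	}
	return 0, 0 // truncated input
}

func main() {
	buf := make([]byte, 10)
	n := putVarint(buf, 300) // 300 encodes as 0xAC 0x02
	v, used := getVarint(buf[:n])
	fmt.Println(n, v, used, sov(300)) // 2 300 2 2
}
```

This is why `Size()` charges `sovWire(...)` bytes per length prefix: the prefix itself grows with the value it encodes.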
10
vendor/github.com/ipfs/go-log/tracer/wire/wire.proto
generated
vendored
Normal file
@@ -0,0 +1,10 @@
syntax = "proto3";
package loggabletracer.wire;
option go_package = "wire";

message TracerState {
    fixed64 trace_id = 1;
    fixed64 span_id = 2;
    bool sampled = 3;
    map<string, string> baggage_items = 4;
}
21
vendor/github.com/ipfs/go-log/v2/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2014 Juan Batiz-Benet

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
137
vendor/github.com/ipfs/go-log/v2/README.md
generated
vendored
Normal file
@@ -0,0 +1,137 @@
# go-log

[protocol.ai](https://protocol.ai)
[ipfs.io](https://ipfs.io/)
[pkg.go.dev](https://pkg.go.dev/github.com/ipfs/go-log/v2)

> The logging library used by go-ipfs

go-log wraps [zap](https://github.com/uber-go/zap) to provide a logging facade. go-log manages logging
instances and allows for their levels to be controlled individually.

## Install

```sh
go get github.com/ipfs/go-log
```

## Usage

Once the package is imported under the name `logging`, an instance of `EventLogger` can be created like so:

```go
var log = logging.Logger("subsystem name")
```

It can then be used to emit log messages in plain printf-style messages at seven standard levels: Debug, Info, Warn, Error, DPanic, Panic, and Fatal.

Levels may be set for all loggers:

```go
lvl, err := logging.LevelFromString("error")
if err != nil {
	panic(err)
}
logging.SetAllLoggers(lvl)
```

or individually:

```go
err := logging.SetLogLevel("net:pubsub", "info")
if err != nil {
	panic(err)
}
```

or by regular expression:

```go
err := logging.SetLogLevelRegex("net:.*", "info")
if err != nil {
	panic(err)
}
```

### Environment Variables

This package can be configured through various environment variables.

#### `GOLOG_LOG_LEVEL`

Specifies the log-level, both globally and on a per-subsystem basis.

For example, the following will set the global minimum log level to `error`, but reduce the minimum
log level for `subsystem1` to `info` and reduce the minimum log level for `subsystem2` to `debug`.

```bash
export GOLOG_LOG_LEVEL="error,subsystem1=info,subsystem2=debug"
```

`IPFS_LOGGING` is a deprecated alias for this environment variable.

#### `GOLOG_FILE`

Specifies that logs should be written to the specified file. If this option is _not_ specified, logs are written to standard error.

```bash
export GOLOG_FILE="/path/to/my/file.log"
```

#### `GOLOG_OUTPUT`

Specifies where logging output should be written. Can take one or more of the following values, combined with `+`:

- `stdout` -- write logs to standard out.
- `stderr` -- write logs to standard error.
- `file` -- write logs to the file specified by `GOLOG_FILE`

For example, if you want to log to both a file and standard error:

```bash
export GOLOG_FILE="/path/to/my/file.log"
export GOLOG_OUTPUT="stderr+file"
```

Setting _only_ `GOLOG_FILE` will prevent logs from being written to standard error.

#### `GOLOG_LOG_FMT`

Specifies the log message format. It supports the following values:

- `color` -- human readable, colorized (ANSI) output
- `nocolor` -- human readable, plain-text output.
- `json` -- structured JSON.

For example, to log structured JSON (for easier parsing):

```bash
export GOLOG_LOG_FMT="json"
```

The logging format defaults to `color` when the output is a terminal, and `nocolor` otherwise.

`IPFS_LOGGING_FMT` is a deprecated alias for this environment variable.

#### `GOLOG_LOG_LABELS`

Specifies a set of labels that should be added to all log messages as comma-separated key-value
pairs. For example, the following adds `{"app": "example_app", "dc": "sjc-1"}` to every log entry.

```bash
export GOLOG_LOG_LABELS="app=example_app,dc=sjc-1"
```

## Contribute

Feel free to join in. All welcome. Open an [issue](https://github.com/ipfs/go-log/issues)!

This repository falls under the IPFS [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md).

### Want to hack on IPFS?

[Contributing guidelines](https://github.com/ipfs/community/blob/master/CONTRIBUTING.md)

## License

MIT
121
vendor/github.com/ipfs/go-log/v2/core.go
generated
vendored
Normal file
@@ -0,0 +1,121 @@
package log

import (
	"reflect"
	"sync"

	"go.uber.org/multierr"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

var _ zapcore.Core = (*lockedMultiCore)(nil)

type lockedMultiCore struct {
	mu    sync.RWMutex // guards mutations to cores slice
	cores []zapcore.Core
}

func (l *lockedMultiCore) With(fields []zapcore.Field) zapcore.Core {
	l.mu.RLock()
	defer l.mu.RUnlock()
	sub := &lockedMultiCore{
		cores: make([]zapcore.Core, len(l.cores)),
	}
	for i := range l.cores {
		sub.cores[i] = l.cores[i].With(fields)
	}
	return sub
}

func (l *lockedMultiCore) Enabled(lvl zapcore.Level) bool {
	l.mu.RLock()
	defer l.mu.RUnlock()
	for i := range l.cores {
		if l.cores[i].Enabled(lvl) {
			return true
		}
	}
	return false
}

func (l *lockedMultiCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
	l.mu.RLock()
	defer l.mu.RUnlock()
	for i := range l.cores {
		ce = l.cores[i].Check(ent, ce)
	}
	return ce
}

func (l *lockedMultiCore) Write(ent zapcore.Entry, fields []zapcore.Field) error {
	l.mu.RLock()
	defer l.mu.RUnlock()
	var err error
	for i := range l.cores {
		err = multierr.Append(err, l.cores[i].Write(ent, fields))
	}
	return err
}

func (l *lockedMultiCore) Sync() error {
	l.mu.RLock()
	defer l.mu.RUnlock()
	var err error
	for i := range l.cores {
		err = multierr.Append(err, l.cores[i].Sync())
	}
	return err
}

func (l *lockedMultiCore) AddCore(core zapcore.Core) {
	l.mu.Lock()
	defer l.mu.Unlock()

	l.cores = append(l.cores, core)
}

func (l *lockedMultiCore) DeleteCore(core zapcore.Core) {
	l.mu.Lock()
	defer l.mu.Unlock()

	w := 0
	for i := 0; i < len(l.cores); i++ {
		if reflect.DeepEqual(l.cores[i], core) {
			continue
		}
		l.cores[w] = l.cores[i]
		w++
	}
	l.cores = l.cores[:w]
}

func (l *lockedMultiCore) ReplaceCore(original, replacement zapcore.Core) {
	l.mu.Lock()
	defer l.mu.Unlock()

	for i := 0; i < len(l.cores); i++ {
		if reflect.DeepEqual(l.cores[i], original) {
			l.cores[i] = replacement
		}
	}
}

func newCore(format LogFormat, ws zapcore.WriteSyncer, level LogLevel) zapcore.Core {
	encCfg := zap.NewProductionEncoderConfig()
	encCfg.EncodeTime = zapcore.ISO8601TimeEncoder

	var encoder zapcore.Encoder
	switch format {
	case PlaintextOutput:
		encCfg.EncodeLevel = zapcore.CapitalLevelEncoder
		encoder = zapcore.NewConsoleEncoder(encCfg)
	case JSONOutput:
		encoder = zapcore.NewJSONEncoder(encCfg)
	default:
		encCfg.EncodeLevel = zapcore.CapitalColorLevelEncoder
		encoder = zapcore.NewConsoleEncoder(encCfg)
	}

	return zapcore.NewCore(encoder, ws, zap.NewAtomicLevelAt(zapcore.Level(level)))
}
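`DeleteCore` above uses the classic in-place filter idiom: a write index `w` trails the read index `i`, keeping elements that survive and truncating the slice at the end, so no second slice is allocated. A minimal standalone sketch of the same idiom on `[]int` (the name `deleteInt` is illustrative):

```go
package main

import "fmt"

// deleteInt removes every occurrence of x from s in place, using the same
// read/write two-index pattern as lockedMultiCore.DeleteCore.
func deleteInt(s []int, x int) []int {
	w := 0
	for i := 0; i < len(s); i++ {
		if s[i] == x {
			continue // skip elements being deleted
		}
		s[w] = s[i] // compact survivors toward the front
		w++
	}
	return s[:w] // truncate to the surviving elements
}

func main() {
	fmt.Println(deleteInt([]int{1, 2, 3, 2, 4}, 2)) // [1 3 4]
}
```

The trade-off is that the tail of the original backing array still holds stale values; for a slice of interfaces like `cores` that is harmless because the truncated slice is immediately reassigned to the field.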
30
vendor/github.com/ipfs/go-log/v2/levels.go
generated
vendored
Normal file
@@ -0,0 +1,30 @@
|
||||
package log
|
||||
|
||||
import "go.uber.org/zap/zapcore"
|
||||
|
||||
// LogLevel represents a log severity level. Use the package variables as an
|
||||
// enum.
|
||||
type LogLevel zapcore.Level
|
||||
|
||||
var (
|
||||
LevelDebug = LogLevel(zapcore.DebugLevel)
|
||||
LevelInfo = LogLevel(zapcore.InfoLevel)
|
||||
LevelWarn = LogLevel(zapcore.WarnLevel)
|
||||
LevelError = LogLevel(zapcore.ErrorLevel)
|
||||
LevelDPanic = LogLevel(zapcore.DPanicLevel)
|
||||
LevelPanic = LogLevel(zapcore.PanicLevel)
|
||||
LevelFatal = LogLevel(zapcore.FatalLevel)
|
||||
)
|
||||
|
||||
// LevelFromString parses a string-based level and returns the corresponding
|
||||
// LogLevel.
|
||||
//
|
||||
// Supported strings are: DEBUG, INFO, WARN, ERROR, DPANIC, PANIC, FATAL, and
|
||||
// their lower-case forms.
|
||||
//
|
||||
// The returned LogLevel must be discarded if error is not nil.
|
||||
func LevelFromString(level string) (LogLevel, error) {
|
||||
lvl := zapcore.InfoLevel // zero value
|
||||
err := lvl.Set(level)
|
||||
return LogLevel(lvl), err
|
||||
}
|
||||
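`LevelFromString` above delegates the actual name-to-level mapping to `zapcore.Level.Set`. As a minimal, stdlib-only sketch of what that mapping amounts to (the function name `parseLevel` and the `int8` representation are illustrative assumptions, not part of go-log), the same idea can be written as:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLevel maps a (case-insensitive) level name to a numeric severity,
// mirroring zapcore's ordering where debug = -1, info = 0, and so on up
// to fatal = 5. Unknown names return an error and the result must be
// discarded, just like LevelFromString.
func parseLevel(s string) (int8, error) {
	switch strings.ToLower(s) {
	case "debug":
		return -1, nil
	case "info":
		return 0, nil
	case "warn":
		return 1, nil
	case "error":
		return 2, nil
	case "dpanic":
		return 3, nil
	case "panic":
		return 4, nil
	case "fatal":
		return 5, nil
	default:
		return 0, fmt.Errorf("unrecognized level: %q", s)
	}
}

func main() {
	lvl, err := parseLevel("ERROR")
	fmt.Println(lvl, err)
}
```

Accepting both upper- and lower-case forms keeps the parser forgiving about how operators spell levels in environment variables.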
94 vendor/github.com/ipfs/go-log/v2/log.go generated vendored Normal file
@@ -0,0 +1,94 @@
// Package log is the logging library used by IPFS & libp2p
// (https://github.com/ipfs/go-ipfs).
package log

import (
	"time"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// StandardLogger provides API compatibility with standard printf loggers,
// e.g. go-logging.
type StandardLogger interface {
	Debug(args ...interface{})
	Debugf(format string, args ...interface{})
	Error(args ...interface{})
	Errorf(format string, args ...interface{})
	Fatal(args ...interface{})
	Fatalf(format string, args ...interface{})
	Info(args ...interface{})
	Infof(format string, args ...interface{})
	Panic(args ...interface{})
	Panicf(format string, args ...interface{})
	Warn(args ...interface{})
	Warnf(format string, args ...interface{})
}

// EventLogger extends the StandardLogger interface to allow for log items
// containing structured metadata
type EventLogger interface {
	StandardLogger
}

// Logger retrieves an event logger by name
func Logger(system string) *ZapEventLogger {
	if len(system) == 0 {
		setuplog := getLogger("setup-logger")
		setuplog.Error("Missing name parameter")
		system = "undefined"
	}

	logger := getLogger(system)
	skipLogger := logger.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar()

	return &ZapEventLogger{
		system:        system,
		SugaredLogger: *logger,
		skipLogger:    *skipLogger,
	}
}

// ZapEventLogger implements the EventLogger and wraps a go-logging Logger
type ZapEventLogger struct {
	zap.SugaredLogger
	// used to fix the caller location when calling Warning and Warningf.
	skipLogger zap.SugaredLogger
	system     string
}

// Warning is for compatibility.
// Deprecated: use Warn(args ...interface{}) instead
func (logger *ZapEventLogger) Warning(args ...interface{}) {
	logger.skipLogger.Warn(args...)
}

// Warningf is for compatibility.
// Deprecated: use Warnf(format string, args ...interface{}) instead
func (logger *ZapEventLogger) Warningf(format string, args ...interface{}) {
	logger.skipLogger.Warnf(format, args...)
}

// FormatRFC3339 returns the given time in UTC with RFC3339Nano format.
func FormatRFC3339(t time.Time) string {
	return t.UTC().Format(time.RFC3339Nano)
}

// WithStacktrace returns a copy of the logger that emits a stack trace with
// every message at or above the given level.
func WithStacktrace(l *ZapEventLogger, level LogLevel) *ZapEventLogger {
	copyLogger := *l
	copyLogger.SugaredLogger = *copyLogger.SugaredLogger.Desugar().
		WithOptions(zap.AddStacktrace(zapcore.Level(level))).Sugar()
	copyLogger.skipLogger = *copyLogger.SugaredLogger.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar()
	return &copyLogger
}

// WithSkip returns a new logger that skips the specified number of stack frames when reporting the
// line/file.
func WithSkip(l *ZapEventLogger, skip int) *ZapEventLogger {
	copyLogger := *l
	copyLogger.SugaredLogger = *copyLogger.SugaredLogger.Desugar().
		WithOptions(zap.AddCallerSkip(skip)).Sugar()
	copyLogger.skipLogger = *copyLogger.SugaredLogger.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar()
	return &copyLogger
}
12 vendor/github.com/ipfs/go-log/v2/path_other.go generated vendored Normal file
@@ -0,0 +1,12 @@
//go:build !windows
// +build !windows

package log

import (
	"path/filepath"
)

func normalizePath(p string) (string, error) {
	return filepath.Abs(p)
}
36 vendor/github.com/ipfs/go-log/v2/path_windows.go generated vendored Normal file
@@ -0,0 +1,36 @@
//go:build windows
// +build windows

package log

import (
	"fmt"
	"path/filepath"
	"strings"
)

func normalizePath(p string) (string, error) {
	if p == "" {
		return "", fmt.Errorf("path empty")
	}
	p, err := filepath.Abs(p)
	if err != nil {
		return "", err
	}
	// Is this _really_ an absolute path?
	if !strings.HasPrefix(p, "\\\\") {
		// It's a drive: path!
		// Return a UNC path.
		p = "\\\\%3F\\" + p
	}

	// This will return file:////?/c:/foobar
	//
	// Why? Because:
	// 1. Go will choke on file://c:/ because the "domain" includes a :.
	// 2. Windows will choke on file:///c:/ because the path will be
	//    /c:/... which is _relative_ to the current drive.
	//
	// This path (a) has no "domain" and (b) starts with a slash. Yay!
	return "file://" + filepath.ToSlash(p), nil
}
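The transformation above is subtle enough to be worth tracing on a concrete input. A stdlib-only sketch (plain string operations so it runs on any OS; the real code uses `filepath.Abs`/`filepath.ToSlash`, and the helper name `toFileURL` is illustrative) of how a drive path becomes a `file://` URL:

```go
package main

import (
	"fmt"
	"strings"
)

// toFileURL sketches what normalizePath does on Windows: a drive path such
// as `c:\foobar` is first turned into a UNC-style path `\\%3F\c:\foobar`
// (%3F appears to be a URL-escaped '?', so later URL parsing of the output
// path does not treat it as the start of a query string), and then the
// backslashes are flipped to forward slashes for the file:// URL.
func toFileURL(p string) string {
	if !strings.HasPrefix(p, `\\`) {
		// It's a drive: path — prefix it into UNC form.
		p = `\\%3F\` + p
	}
	return "file://" + strings.ReplaceAll(p, `\`, "/")
}

func main() {
	fmt.Println(toFileURL(`c:\foobar`))
	fmt.Println(toFileURL(`\\host\share\log.txt`))
}
```

Paths that already start with `\\` are genuine UNC paths and skip the prefixing step, as in the second call above.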
90 vendor/github.com/ipfs/go-log/v2/pipe.go generated vendored Normal file
@@ -0,0 +1,90 @@
package log

import (
	"io"

	"go.uber.org/multierr"
	"go.uber.org/zap/zapcore"
)

// A PipeReader is a reader that reads from the logger. It is synchronous,
// so blocking on read will affect logging performance.
type PipeReader struct {
	r      *io.PipeReader
	closer io.Closer
	core   zapcore.Core
}

// Read implements the standard Read interface
func (p *PipeReader) Read(data []byte) (int, error) {
	return p.r.Read(data)
}

// Close unregisters the reader from the logger.
func (p *PipeReader) Close() error {
	if p.core != nil {
		loggerCore.DeleteCore(p.core)
	}
	return multierr.Append(p.core.Sync(), p.closer.Close())
}

// NewPipeReader creates a new in-memory reader that reads from all loggers.
// The caller must call Close on the returned reader when done.
//
// By default, it:
//
//  1. Logs JSON. This can be changed by passing the PipeFormat option.
//  2. Logs everything that would otherwise be logged to the "primary" log
//     output. That is, everything enabled by SetLogLevel. The minimum log level
//     can be increased by passing the PipeLevel option.
func NewPipeReader(opts ...PipeReaderOption) *PipeReader {
	opt := pipeReaderOptions{
		format: JSONOutput,
		level:  LevelDebug,
	}

	for _, o := range opts {
		o.setOption(&opt)
	}

	r, w := io.Pipe()

	p := &PipeReader{
		r:      r,
		closer: w,
		core:   newCore(opt.format, zapcore.AddSync(w), opt.level),
	}

	loggerCore.AddCore(p.core)

	return p
}

type pipeReaderOptions struct {
	format LogFormat
	level  LogLevel
}

type PipeReaderOption interface {
	setOption(*pipeReaderOptions)
}

type pipeReaderOptionFunc func(*pipeReaderOptions)

func (p pipeReaderOptionFunc) setOption(o *pipeReaderOptions) {
	p(o)
}

// PipeFormat sets the output format of the pipe reader
func PipeFormat(format LogFormat) PipeReaderOption {
	return pipeReaderOptionFunc(func(o *pipeReaderOptions) {
		o.format = format
	})
}

// PipeLevel sets the log level of logs sent to the pipe reader.
func PipeLevel(level LogLevel) PipeReaderOption {
	return pipeReaderOptionFunc(func(o *pipeReaderOptions) {
		o.level = level
	})
}
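`NewPipeReader`, `PipeFormat`, and `PipeLevel` are an instance of the functional-options pattern: options are values implementing a one-method interface, and the constructor folds them over a defaults struct. A self-contained sketch of the same pattern (all names here — `readerOptions`, `WithFormat`, `WithLevel` — are illustrative, not part of go-log):

```go
package main

import "fmt"

// readerOptions holds the configurable knobs with their defaults.
type readerOptions struct {
	format string
	level  int
}

// ReaderOption mirrors PipeReaderOption: a single-method interface so the
// option set can be extended without changing the constructor signature.
type ReaderOption interface {
	setOption(*readerOptions)
}

// readerOptionFunc adapts a plain function into a ReaderOption, exactly as
// pipeReaderOptionFunc does above.
type readerOptionFunc func(*readerOptions)

func (f readerOptionFunc) setOption(o *readerOptions) { f(o) }

func WithFormat(format string) ReaderOption {
	return readerOptionFunc(func(o *readerOptions) { o.format = format })
}

func WithLevel(level int) ReaderOption {
	return readerOptionFunc(func(o *readerOptions) { o.level = level })
}

// newReaderOptions plays the role of NewPipeReader's option-folding prologue:
// start from defaults, then apply each caller-supplied option in order.
func newReaderOptions(opts ...ReaderOption) readerOptions {
	opt := readerOptions{format: "json", level: -1} // defaults
	for _, o := range opts {
		o.setOption(&opt)
	}
	return opt
}

func main() {
	fmt.Println(newReaderOptions())
	fmt.Println(newReaderOptions(WithFormat("text"), WithLevel(2)))
}
```

The interface-plus-adapter-func shape (rather than a bare `func(*options)` type) keeps the exported API opaque: callers can pass options but cannot forge arbitrary mutations without going through the provided constructors.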
400 vendor/github.com/ipfs/go-log/v2/setup.go generated vendored Normal file
@@ -0,0 +1,400 @@
package log

import (
	"errors"
	"fmt"
	"os"
	"regexp"
	"strings"
	"sync"

	"github.com/mattn/go-isatty"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

var config Config

func init() {
	SetupLogging(configFromEnv())
}

// Logging environment variables
const (
	// IPFS_* prefixed env vars kept for backwards compatibility
	// for this release. They will not be available in the next
	// release.
	//
	// GOLOG_* env vars take precedence over IPFS_* env vars.
	envIPFSLogging    = "IPFS_LOGGING"
	envIPFSLoggingFmt = "IPFS_LOGGING_FMT"

	envLogging    = "GOLOG_LOG_LEVEL"
	envLoggingFmt = "GOLOG_LOG_FMT"

	envLoggingFile = "GOLOG_FILE" // /path/to/file
	envLoggingURL  = "GOLOG_URL"  // URL that will be processed by the zap sink

	envLoggingOutput = "GOLOG_OUTPUT"     // possible values: stdout|stderr|file; combine multiple values with '+'
	envLoggingLabels = "GOLOG_LOG_LABELS" // comma-separated key-value pairs, i.e. "app=example_app,dc=sjc-1"
)

type LogFormat int

const (
	ColorizedOutput LogFormat = iota
	PlaintextOutput
	JSONOutput
)

type Config struct {
	// Format overrides the format of the log output. Defaults to ColorizedOutput
	Format LogFormat

	// Level is the default minimum enabled logging level.
	Level LogLevel

	// SubsystemLevels are the default levels per-subsystem. When unspecified, defaults to Level.
	SubsystemLevels map[string]LogLevel

	// Stderr indicates whether logs should be written to stderr.
	Stderr bool

	// Stdout indicates whether logs should be written to stdout.
	Stdout bool

	// File is a path to a file that logs will be written to.
	File string

	// URL with schema supported by zap. Use zap.RegisterSink
	URL string

	// Labels is a set of key-values to apply to all loggers
	Labels map[string]string
}

// ErrNoSuchLogger is returned when the util pkg is asked for a non-existent logger
var ErrNoSuchLogger = errors.New("error: No such logger")

var loggerMutex sync.RWMutex // guards access to global logger state

// loggers is the set of loggers in the system
var loggers = make(map[string]*zap.SugaredLogger)
var levels = make(map[string]zap.AtomicLevel)

// primaryFormat is the format of the primary core used for logging
var primaryFormat LogFormat = ColorizedOutput

// defaultLevel is the default log level
var defaultLevel LogLevel = LevelError

// primaryCore is the primary logging core
var primaryCore zapcore.Core

// loggerCore is the base for all loggers created by this package
var loggerCore = &lockedMultiCore{}

// GetConfig returns a copy of the saved config. It can be inspected, modified,
// and re-applied using a subsequent call to SetupLogging().
func GetConfig() Config {
	return config
}

// SetupLogging will initialize the logger backend and set the flags.
// TODO calling this in `init` pushes all configuration to env variables
// - move it out of `init`? then we need to change all the code (js-ipfs, go-ipfs) to call this explicitly
// - have it look for a config file? need to define what that is
func SetupLogging(cfg Config) {
	loggerMutex.Lock()
	defer loggerMutex.Unlock()

	config = cfg

	primaryFormat = cfg.Format
	defaultLevel = cfg.Level

	outputPaths := []string{}

	if cfg.Stderr {
		outputPaths = append(outputPaths, "stderr")
	}
	if cfg.Stdout {
		outputPaths = append(outputPaths, "stdout")
	}

	// check if we log to a file
	if len(cfg.File) > 0 {
		if path, err := normalizePath(cfg.File); err != nil {
			fmt.Fprintf(os.Stderr, "failed to resolve log path %q, logging to %s\n", cfg.File, outputPaths)
		} else {
			outputPaths = append(outputPaths, path)
		}
	}
	if len(cfg.URL) > 0 {
		outputPaths = append(outputPaths, cfg.URL)
	}

	ws, _, err := zap.Open(outputPaths...)
	if err != nil {
		panic(fmt.Sprintf("unable to open logging output: %v", err))
	}

	newPrimaryCore := newCore(primaryFormat, ws, LevelDebug) // the main core needs to log everything.

	for k, v := range cfg.Labels {
		newPrimaryCore = newPrimaryCore.With([]zap.Field{zap.String(k, v)})
	}

	setPrimaryCore(newPrimaryCore)
	setAllLoggers(defaultLevel)

	for name, level := range cfg.SubsystemLevels {
		if leveler, ok := levels[name]; ok {
			leveler.SetLevel(zapcore.Level(level))
		} else {
			levels[name] = zap.NewAtomicLevelAt(zapcore.Level(level))
		}
	}
}

// SetPrimaryCore changes the primary logging core. If SetupLogging was
// called previously, the configured core will be replaced.
func SetPrimaryCore(core zapcore.Core) {
	loggerMutex.Lock()
	defer loggerMutex.Unlock()

	setPrimaryCore(core)
}

func setPrimaryCore(core zapcore.Core) {
	if primaryCore != nil {
		loggerCore.ReplaceCore(primaryCore, core)
	} else {
		loggerCore.AddCore(core)
	}
	primaryCore = core
}

// SetDebugLogging calls SetAllLoggers with logging.DEBUG
func SetDebugLogging() {
	SetAllLoggers(LevelDebug)
}

// SetAllLoggers changes the logging level of all loggers to lvl
func SetAllLoggers(lvl LogLevel) {
	loggerMutex.RLock()
	defer loggerMutex.RUnlock()

	setAllLoggers(lvl)
}

func setAllLoggers(lvl LogLevel) {
	for _, l := range levels {
		l.SetLevel(zapcore.Level(lvl))
	}
}

// SetLogLevel changes the log level of a specific subsystem;
// name=="*" changes all subsystems
func SetLogLevel(name, level string) error {
	lvl, err := LevelFromString(level)
	if err != nil {
		return err
	}

	// wildcard, change all
	if name == "*" {
		SetAllLoggers(lvl)
		return nil
	}

	loggerMutex.RLock()
	defer loggerMutex.RUnlock()

	// Check if we have a logger by that name
	if _, ok := levels[name]; !ok {
		return ErrNoSuchLogger
	}

	levels[name].SetLevel(zapcore.Level(lvl))

	return nil
}

// SetLogLevelRegex sets all loggers to level `l` that match expression `e`.
// An error is returned if `e` fails to compile.
func SetLogLevelRegex(e, l string) error {
	lvl, err := LevelFromString(l)
	if err != nil {
		return err
	}

	rem, err := regexp.Compile(e)
	if err != nil {
		return err
	}

	loggerMutex.Lock()
	defer loggerMutex.Unlock()
	for name := range loggers {
		if rem.MatchString(name) {
			levels[name].SetLevel(zapcore.Level(lvl))
		}
	}
	return nil
}

// GetSubsystems returns a slice containing the
// names of the current loggers
func GetSubsystems() []string {
	loggerMutex.RLock()
	defer loggerMutex.RUnlock()
	subs := make([]string, 0, len(loggers))

	for k := range loggers {
		subs = append(subs, k)
	}
	return subs
}

func getLogger(name string) *zap.SugaredLogger {
	loggerMutex.Lock()
	defer loggerMutex.Unlock()
	log, ok := loggers[name]
	if !ok {
		level, ok := levels[name]
		if !ok {
			level = zap.NewAtomicLevelAt(zapcore.Level(defaultLevel))
			levels[name] = level
		}
		log = zap.New(loggerCore).
			WithOptions(
				zap.IncreaseLevel(level),
				zap.AddCaller(),
			).
			Named(name).
			Sugar()

		loggers[name] = log
	}

	return log
}

// configFromEnv returns a Config with defaults populated using environment variables.
func configFromEnv() Config {
	cfg := Config{
		Format:          ColorizedOutput,
		Stderr:          true,
		Level:           LevelError,
		SubsystemLevels: map[string]LogLevel{},
		Labels:          map[string]string{},
	}

	format := os.Getenv(envLoggingFmt)
	if format == "" {
		format = os.Getenv(envIPFSLoggingFmt)
	}

	var noExplicitFormat bool

	switch format {
	case "color":
		cfg.Format = ColorizedOutput
	case "nocolor":
		cfg.Format = PlaintextOutput
	case "json":
		cfg.Format = JSONOutput
	default:
		if format != "" {
			fmt.Fprintf(os.Stderr, "ignoring unrecognized log format '%s'\n", format)
		}
		noExplicitFormat = true
	}

	lvl := os.Getenv(envLogging)
	if lvl == "" {
		lvl = os.Getenv(envIPFSLogging)
	}
	if lvl != "" {
		for _, kvs := range strings.Split(lvl, ",") {
			kv := strings.SplitN(kvs, "=", 2)
			lvl, err := LevelFromString(kv[len(kv)-1])
			if err != nil {
				fmt.Fprintf(os.Stderr, "error setting log level %q: %s\n", kvs, err)
				continue
			}
			switch len(kv) {
			case 1:
				cfg.Level = lvl
			case 2:
				cfg.SubsystemLevels[kv[0]] = lvl
			}
		}
	}

	cfg.File = os.Getenv(envLoggingFile)
	// Disable stderr logging when a file is specified
	// https://github.com/ipfs/go-log/issues/83
	if cfg.File != "" {
		cfg.Stderr = false
	}

	cfg.URL = os.Getenv(envLoggingURL)
	output := os.Getenv(envLoggingOutput)
	outputOptions := strings.Split(output, "+")
	for _, opt := range outputOptions {
		switch opt {
		case "stdout":
			cfg.Stdout = true
		case "stderr":
			cfg.Stderr = true
		case "file":
			if cfg.File == "" {
				fmt.Fprint(os.Stderr, "please specify a GOLOG_FILE value to write to")
			}
		case "url":
			if cfg.URL == "" {
				fmt.Fprint(os.Stderr, "please specify a GOLOG_URL value to write to")
			}
		}
	}

	// Check that neither of the requested Std* nor the file are TTYs
	// At this stage (configFromEnv) we do not have a uniform list to examine yet
	if noExplicitFormat &&
		!(cfg.Stdout && isTerm(os.Stdout)) &&
		!(cfg.Stderr && isTerm(os.Stderr)) &&
		// check this last: expensive
		!(cfg.File != "" && pathIsTerm(cfg.File)) {
		cfg.Format = PlaintextOutput
	}

	labels := os.Getenv(envLoggingLabels)
	if labels != "" {
		labelKVs := strings.Split(labels, ",")
		for _, label := range labelKVs {
			kv := strings.Split(label, "=")
			if len(kv) != 2 {
				fmt.Fprint(os.Stderr, "invalid label k=v: ", label)
				continue
			}
			cfg.Labels[kv[0]] = kv[1]
		}
	}

	return cfg
}

func isTerm(f *os.File) bool {
	return isatty.IsTerminal(f.Fd()) || isatty.IsCygwinTerminal(f.Fd())
}

func pathIsTerm(p string) bool {
	// !!!no!!! O_CREAT, if we fail - we fail
	f, err := os.OpenFile(p, os.O_WRONLY, 0)
	if f != nil {
		defer f.Close() // nolint:errcheck
	}
	return err == nil && isTerm(f)
}
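The `GOLOG_LOG_LEVEL` parsing loop in `configFromEnv` accepts a comma-separated list where a bare level sets the global default and `subsystem=level` entries override individual subsystems (e.g. `GOLOG_LOG_LEVEL="error,dht=debug"`). A self-contained sketch of just that parsing step (the function name `parseLevelSpec` is illustrative; levels are kept as strings here, whereas the real code runs them through `LevelFromString`):

```go
package main

import (
	"fmt"
	"strings"
)

// parseLevelSpec splits a spec like "error,dht=debug,bitswap=info" into a
// default level and a per-subsystem map, mirroring the switch on len(kv)
// in configFromEnv: one part means a bare default level, two parts mean a
// subsystem override.
func parseLevelSpec(spec string) (def string, subs map[string]string) {
	subs = map[string]string{}
	for _, kvs := range strings.Split(spec, ",") {
		kv := strings.SplitN(kvs, "=", 2)
		switch len(kv) {
		case 1:
			def = kv[0]
		case 2:
			subs[kv[0]] = kv[1]
		}
	}
	return def, subs
}

func main() {
	def, subs := parseLevelSpec("error,dht=debug,bitswap=info")
	fmt.Println(def, subs)
}
```

Using `SplitN(kvs, "=", 2)` rather than `Split` means a value containing `=` stays intact; the real code additionally validates each level string and skips entries that fail to parse.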
3 vendor/github.com/ipfs/go-log/v2/version.json generated vendored Normal file
@@ -0,0 +1,3 @@
{
	"version": "v2.5.0"
}
4 vendor/github.com/ipfs/go-log/writer/option.go generated vendored Normal file
@@ -0,0 +1,4 @@
package log

// WriterGroup is the global writer group for logs to output to
var WriterGroup = NewMirrorWriter()
251 vendor/github.com/ipfs/go-log/writer/writer.go generated vendored Normal file
@@ -0,0 +1,251 @@
package log

import (
	"fmt"
	"io"
	"sync"
	"sync/atomic"
)

// MaxWriterBuffer specifies how big the writer buffer can get before
// killing the writer.
var MaxWriterBuffer = 512 * 1024

// MirrorWriter implements a WriteCloser which syncs incoming bytes to multiple
// [buffered] WriteClosers. They can be added with AddWriter().
type MirrorWriter struct {
	active uint32

	// channel for incoming writers
	writerAdd chan *writerAdd

	// slices of writer/sync-channel pairs
	writers []*bufWriter

	// synchronization channel for incoming writes
	msgSync chan []byte
}

// NewMirrorWriter initializes and returns a MirrorWriter.
func NewMirrorWriter() *MirrorWriter {
	mw := &MirrorWriter{
		msgSync:   make(chan []byte, 64), // sufficiently large buffer to avoid callers waiting
		writerAdd: make(chan *writerAdd),
	}

	go mw.logRoutine()

	return mw
}

// Write broadcasts the written bytes to all Writers.
func (mw *MirrorWriter) Write(b []byte) (int, error) {
	mycopy := make([]byte, len(b))
	copy(mycopy, b)
	mw.msgSync <- mycopy
	return len(b), nil
}

// Close closes the MirrorWriter
func (mw *MirrorWriter) Close() error {
	// it is up to the caller to ensure that write is not called during or
	// after close is called.
	close(mw.msgSync)
	return nil
}

func (mw *MirrorWriter) doClose() {
	for _, w := range mw.writers {
		w.writer.Close()
	}
}

func (mw *MirrorWriter) logRoutine() {
	// rebind to avoid races on nilling out struct fields
	msgSync := mw.msgSync
	writerAdd := mw.writerAdd

	defer mw.doClose()

	for {
		select {
		case b, ok := <-msgSync:
			if !ok {
				return
			}

			// write to all writers
			dropped := mw.broadcastMessage(b)

			// consolidate the slice
			if dropped {
				mw.clearDeadWriters()
			}
		case wa := <-writerAdd:
			mw.writers = append(mw.writers, newBufWriter(wa.w))

			atomic.StoreUint32(&mw.active, 1)
			close(wa.done)
		}
	}
}

// broadcastMessage sends the given message to every writer.
// If any writer is killed during the send, 'true' is returned.
func (mw *MirrorWriter) broadcastMessage(b []byte) bool {
	var dropped bool
	for i, w := range mw.writers {
		_, err := w.Write(b)
		if err != nil {
			mw.writers[i] = nil
			dropped = true
		}
	}
	return dropped
}

func (mw *MirrorWriter) clearDeadWriters() {
	writers := mw.writers
	mw.writers = nil
	for _, w := range writers {
		if w != nil {
			mw.writers = append(mw.writers, w)
		}
	}
	if len(mw.writers) == 0 {
		atomic.StoreUint32(&mw.active, 0)
	}
}

type writerAdd struct {
	w    io.WriteCloser
	done chan struct{}
}

// AddWriter attaches a new WriteCloser to this MirrorWriter.
// The new writer will start getting any bytes written to the mirror.
func (mw *MirrorWriter) AddWriter(w io.WriteCloser) {
	wa := &writerAdd{
		w:    w,
		done: make(chan struct{}),
	}
	mw.writerAdd <- wa
	<-wa.done
}

// Active returns whether there is at least one Writer
// attached to this MirrorWriter.
func (mw *MirrorWriter) Active() (active bool) {
	return atomic.LoadUint32(&mw.active) == 1
}

func newBufWriter(w io.WriteCloser) *bufWriter {
	bw := &bufWriter{
		writer:   w,
		incoming: make(chan []byte, 1),
	}

	go bw.loop()
	return bw
}

// bufWriter writes incoming messages to a buffer and, when it fills
// up, writes them to the writer.
type bufWriter struct {
	writer io.WriteCloser

	incoming chan []byte

	deathLock sync.Mutex
	dead      bool
}

var errDeadWriter = fmt.Errorf("writer is dead")

func (bw *bufWriter) Write(b []byte) (int, error) {
	bw.deathLock.Lock()
	dead := bw.dead
	bw.deathLock.Unlock()
	if dead {
		if bw.incoming != nil {
			close(bw.incoming)
			bw.incoming = nil
		}
		return 0, errDeadWriter
	}

	bw.incoming <- b
	return len(b), nil
}

func (bw *bufWriter) die() {
	bw.deathLock.Lock()
	bw.dead = true
	bw.writer.Close()
	bw.deathLock.Unlock()
}

func (bw *bufWriter) loop() {
	bufsize := 0
	bufBase := make([][]byte, 0, 16) // some initial memory
	buffered := bufBase
	nextCh := make(chan []byte)

	var nextMsg []byte

	go func() {
		for b := range nextCh {
			_, err := bw.writer.Write(b)
			if err != nil {
				// TODO: need a way to notify there was an error here;
				// we wouldn't want to log it, as that could cause an infinite loop
				bw.die()
				return
			}
		}
	}()

	// collect and buffer messages
	incoming := bw.incoming
	for {
		if nextMsg == nil || nextCh == nil {
			// nextCh == nil implies we are 'dead' and draining the incoming channel
			// until the caller notices and closes it for us
			b, ok := <-incoming
			if !ok {
				return
			}
			nextMsg = b
		}

		select {
		case b, ok := <-incoming:
			if !ok {
				return
			}
			bufsize += len(b)
			buffered = append(buffered, b)
			if bufsize > MaxWriterBuffer {
				// if we have too many messages buffered, kill the writer
				bw.die()
				if nextCh != nil {
					close(nextCh)
				}
				nextCh = nil
				// explicitly keep going here to drain incoming
			}
		case nextCh <- nextMsg:
			nextMsg = nil
			if len(buffered) > 0 {
				nextMsg = buffered[0]
				buffered = buffered[1:]
				bufsize -= len(nextMsg)
			}

			if len(buffered) == 0 {
				// reset slice position
				buffered = bufBase[:0]
			}
		}
	}
}