| Commit message | Author | Files | Lines |
|
The existing append-based implementation left a hanging reference to
the last tx.
For example, if db.txs was:
[]*Tx{0x1, 0x2, 0x3, 0x4, 0x5}
and we removed the second element, db.txs would now be:
[]*Tx{0x1, 0x3, 0x4, 0x5, 0x5}[:4]
The garbage collector cannot reclaim anything referenced from anywhere
in a slice's backing array, even by pointers between its len and cap,
because the len can always be extended up to the cap.
This hanging reference to the Tx could last indefinitely,
and since the Tx has a reference to user-provided functions,
which could be closures, this bug could prevent arbitrary
amounts of user garbage from being collected.
Since db.txs is unordered anyway, switch to a simpler--and O(1) instead
of O(n)--implementation. Swap the last element into the spot to be
deleted, nil out the original last element, and shrink the slice.
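A minimal sketch of the swap-delete approach, assuming the package's DB and Tx types (the method name removeTx and the omitted locking are placeholders):

    // removeTx drops tx from the database's list of open transactions.
    // Because db.txs is unordered, the last element can be swapped into the
    // vacated slot; nilling out the old last slot removes the trailing
    // reference so the GC can reclaim the Tx and anything it points to.
    func (db *DB) removeTx(tx *Tx) {
        for i, t := range db.txs {
            if t == tx {
                last := len(db.txs) - 1
                db.txs[i] = db.txs[last] // swap the last element into the hole
                db.txs[last] = nil       // drop the dangling reference
                db.txs = db.txs[:last]   // shrink the slice
                break
            }
        }
    }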
|
|
freelist.size did not account for the extra
fake freelist item used to hold the number of
elements when the freelist is large.
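A sketch of the corrected size calculation, assuming bolt's freelist internals (freelist.count, pgid, pageHeaderSize) and the unsafe package:

    // size returns the byte size of the freelist page after serialization.
    func (f *freelist) size() int {
        n := f.count()
        if n >= 0xFFFF {
            // The list overflows the 16-bit page count field, so the first
            // element holds the real count; reserve one extra slot for it.
            n++
        }
        return pageHeaderSize + (int(unsafe.Sizeof(pgid(0))) * n)
    }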
|
|
This recovers the slight alloc regression in #636.
|
|
freelist.lenall duplicated freelist.count.
freelist.copyall and mergepgids docs had typos.
|
|
Add limitation about multiple processes opening databases concurrently.
|
|
Using a large (50GB) database with a read-write-delete heavy load,
nearly 100% of allocated space came from freelists.
1/3 came from freelist.release, 1/3 from freelist.write,
and 1/3 came from tx.allocate to make space for freelist.write.
In the case of freelist.write, the newly allocated giant slice gets
copied to the space prepared by tx.allocate and then discarded.
To avoid this, add func mergepgids that accepts a destination slice,
and use it in freelist.write.
This has a mild negative impact on the existing benchmarks,
but cuts allocated space in my real world db by over 30%.
name old time/op new time/op delta
_FreelistRelease10K-8 18.7µs ±10% 18.2µs ± 4% ~ (p=0.548 n=5+5)
_FreelistRelease100K-8 233µs ± 5% 258µs ±20% ~ (p=0.151 n=5+5)
_FreelistRelease1000K-8 3.34ms ± 8% 3.13ms ± 8% ~ (p=0.151 n=5+5)
_FreelistRelease10000K-8 32.3ms ± 1% 32.2ms ± 7% ~ (p=0.690 n=5+5)
DBBatchAutomatic-8 2.18ms ± 3% 2.19ms ± 4% ~ (p=0.421 n=5+5)
DBBatchSingle-8 140ms ± 6% 140ms ± 4% ~ (p=0.841 n=5+5)
DBBatchManual10x100-8 4.41ms ± 2% 4.37ms ± 3% ~ (p=0.548 n=5+5)
name old alloc/op new alloc/op delta
_FreelistRelease10K-8 82.5kB ± 0% 82.5kB ± 0% ~ (all samples are equal)
_FreelistRelease100K-8 805kB ± 0% 805kB ± 0% ~ (all samples are equal)
_FreelistRelease1000K-8 8.05MB ± 0% 8.05MB ± 0% ~ (all samples are equal)
_FreelistRelease10000K-8 80.4MB ± 0% 80.4MB ± 0% ~ (p=1.000 n=5+5)
DBBatchAutomatic-8 384kB ± 0% 384kB ± 0% ~ (p=0.095 n=5+5)
DBBatchSingle-8 17.2MB ± 1% 17.2MB ± 1% ~ (p=0.310 n=5+5)
DBBatchManual10x100-8 908kB ± 0% 902kB ± 1% ~ (p=0.730 n=4+5)
name old allocs/op new allocs/op delta
_FreelistRelease10K-8 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
_FreelistRelease100K-8 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
_FreelistRelease1000K-8 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
_FreelistRelease10000K-8 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
DBBatchAutomatic-8 10.2k ± 0% 10.2k ± 0% +0.07% (p=0.032 n=5+5)
DBBatchSingle-8 58.6k ± 0% 59.6k ± 0% +1.70% (p=0.008 n=5+5)
DBBatchManual10x100-8 6.02k ± 0% 6.03k ± 0% +0.17% (p=0.029 n=4+4)
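A simplified, self-contained sketch of the destination-slice idea described above; mergeInto is a hypothetical stand-in for mergepgids, which operates on bolt's pgids type and merges run-wise:

    package main

    import "fmt"

    // mergeInto writes the sorted merge of a and b into dst, which the caller
    // has sized to exactly len(a)+len(b). Because the destination is supplied
    // by the caller, it can be space already prepared by tx.allocate, so the
    // merge itself allocates nothing.
    func mergeInto(dst, a, b []uint64) {
        i, j := 0, 0
        for k := range dst {
            if i < len(a) && (j >= len(b) || a[i] <= b[j]) {
                dst[k] = a[i]
                i++
            } else {
                dst[k] = b[j]
                j++
            }
        }
    }

    func main() {
        a := []uint64{1, 3, 7}
        b := []uint64{2, 4, 5, 9}
        dst := make([]uint64, len(a)+len(b))
        mergeInto(dst, a, b)
        fmt.Println(dst) // [1 2 3 4 5 7 9]
    }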
|
|
The example is correct in isolation, but if people just copy the loop, it will go into an infinite loop when given an empty byte slice.
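Assuming this refers to the README's cursor prefix-scan example, a guarded version would look roughly like this (imports bytes, fmt, and github.com/boltdb/bolt; the bucket name is illustrative):

    // scanPrefix prints every key in "MyBucket" that starts with prefix.
    // The k != nil guard is what stops the loop: once the cursor is exhausted,
    // Next keeps returning nil, and bytes.HasPrefix(nil, prefix) is still true
    // when prefix is empty, so an unguarded loop never terminates.
    func scanPrefix(db *bolt.DB, prefix []byte) error {
        return db.View(func(tx *bolt.Tx) error {
            b := tx.Bucket([]byte("MyBucket"))
            if b == nil {
                return nil // bucket does not exist yet
            }
            c := b.Cursor()
            for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
                fmt.Printf("key=%s, value=%s\n", k, v)
            }
            return nil
        })
    }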
|
|
|
|
glusterfs
|
|
|
|
|
|
The variable `brokenUnaligned` was missing for ppc64.
|
|
windows file
Signed-off-by: nick <nicholasjamesrusso@gmail.com>
|
|
|
|
[bolter](https://github.com/hasit/bolter) is a command-line app for viewing BoltDB files in your terminal using [tablewriter](https://github.com/olekukonko/tablewriter).
|
|
|
|
|
|
The subtraction for `TxN` was previously transposed which caused
the result to be a negative number. This change alters the order
to return the correct (positive) result.
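Assuming the field lives in Stats.Sub, the change amounts to flipping the operands; a trimmed sketch with the other diff fields omitted:

    // Sub returns the difference between two stats snapshots, where s is the
    // later snapshot and other the earlier one.
    func (s *Stats) Sub(other *Stats) Stats {
        if other == nil {
            return *s
        }
        var diff Stats
        diff.TxN = s.TxN - other.TxN // was transposed as other.TxN - s.TxN
        return diff
    }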
|
|
Add warning to README.md that keys and values in `ForEach()` are
invalid outside of the transaction.
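An illustrative helper (not part of bolt) showing the copy that the warning calls for; the k and v slices passed to ForEach point into the mmap and are only valid until View returns:

    // collectKeys copies each key before retaining it beyond the transaction.
    func collectKeys(db *bolt.DB, bucket []byte) ([][]byte, error) {
        var keys [][]byte
        err := db.View(func(tx *bolt.Tx) error {
            b := tx.Bucket(bucket)
            if b == nil {
                return nil // nothing to collect
            }
            return b.ForEach(func(k, v []byte) error {
                keys = append(keys, append([]byte(nil), k...)) // copy before keeping
                return nil
            })
        })
        return keys, err
    }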
|
|
Added note to README that the file format is fixed.
|
|
|
|
Here is a profile taken from etcd.
Before:
10924 10924 (flat, cum) 4.99% of Total
. . 230:
. . 231:// reindex rebuilds the free cache based on available and pending free lists.
. . 232:func (f *freelist) reindex() {
. . 233: f.cache = make(map[pgid]bool)
. . 234: for _, id := range f.ids {
10924 10924 235: f.cache[id] = true
. . 236: }
. . 237: for _, pendingIDs := range f.pending {
. . 238: for _, pendingID := range pendingIDs {
. . 239: f.cache[pendingID] = true
. . 240: }
After:
1 1 (flat, cum) 0.0017% of Total
. . 228: f.reindex()
. . 229: }
. . 230:
. . 231:// reindex rebuilds the free cache based on available and pending free lists.
. . 232:func (f *freelist) reindex() {
1 1 233: f.cache = make(map[pgid]bool, len(f.ids))
. . 234: for _, id := range f.ids {
. . 235: f.cache[id] = true
. . 236: }
. . 237: for _, pendingIDs := range f.pending {
. . 238: for _, pendingID := range pendingIDs {
|
|
Add anacrolix/torrent to users.
|
|
This commit fixes a bug where page end-of-header pointers were being
converted to byte slices even when the pointer did not point to
allocated memory. This occurs with pages that have a `page.count`
of zero.
Note: This was not an issue in Go 1.6 but the new Go 1.7 SSA backend
handles `nil` checks differently.
See https://github.com/golang/go/issues/16772
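A condensed sketch of the guard in the spirit of freelist.read, using bolt's internal names (page, pgid, maxAllocSize); not the verbatim fix:

    // read initializes the freelist from a freelist page. With a zero count
    // there is nothing stored after the page header, so skip the unsafe
    // pointer-to-slice conversion instead of forming a slice over memory
    // that was never allocated.
    func (f *freelist) read(p *page) {
        idx, count := 0, int(p.count)
        if count == 0xFFFF { // overflow marker: real count is in the first element
            idx = 1
            count = int(((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[0])
        }
        if count == 0 {
            f.ids = nil
        } else {
            ids := ((*[maxAllocSize]pgid)(unsafe.Pointer(&p.ptr)))[idx:count]
            f.ids = make([]pgid, len(ids))
            copy(f.ids, ids)
        }
        f.reindex()
    }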
|
|
|
|
armv5 devices and older (i.e. <= arm9 generation) require addresses that are
stored to and loaded from to be 4-byte aligned.
If this is not the case the lower 2 bits of the address are cleared and the load
is performed in an unexpected order, including up to 3 bytes of data located
prior to the address.
Inlined buckets are stored after their key in a page and since there is no
guarantee that the key will be of a length that is a multiple of 4, it is
possible for unaligned load/stores to occur when they are cast back to bucket
and page pointer types.
The fix adds a new field to track whether the current architecture exhibits this
issue, sets it on module load for ARM architectures, and then on bucket open, if
this field is set and the address is unaligned, a byte-by-byte copy of the
inlined bucket is performed.
Ref: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka15414.html
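A sketch of the alignment check described above; alignInlineBucket is a hypothetical helper name, and brokenUnaligned is the new field set during module load on ARM:

    // alignInlineBucket returns a 4-byte-aligned copy of an inline bucket's
    // value when the architecture cannot handle unaligned loads, and the
    // value unchanged otherwise.
    func alignInlineBucket(value []byte) []byte {
        if !brokenUnaligned || uintptr(unsafe.Pointer(&value[0]))&3 == 0 {
            return value
        }
        tmp := make([]byte, len(value)) // a fresh heap allocation is aligned
        copy(tmp, value)                // byte-by-byte copy of the inline bucket
        return tmp
    }

The copy is only taken on the affected architectures, so other platforms keep pointing directly into the page.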
|
|
|
|
|
|
|
|
I think that SkyDB is over; I could not find any link to the project.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Bolt stores the two latest transactions' metadata, but previously did
not recover from validation failures in the latest by using the second
latest. Fix this by correctly handling validation failures in db.go, as
well as returning the metadata with highest txid which is also valid in
DB.meta().
Signed-off-by: Aleksa Sarai <asarai@suse.de>
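A sketch of the recovery logic as described, assuming bolt's meta type with txid and validate():

    // meta returns the meta page with the highest txid that also passes
    // validation, falling back to the older meta when the newer one is corrupt.
    func (db *DB) meta() *meta {
        metaA, metaB := db.meta0, db.meta1
        if db.meta1.txid > db.meta0.txid {
            metaA, metaB = db.meta1, db.meta0
        }
        if err := metaA.validate(); err == nil {
            return metaA
        }
        if err := metaB.validate(); err == nil {
            return metaB
        }
        // Should be unreachable: both metas were validated when the DB was opened.
        panic("bolt.DB.meta(): invalid meta pages")
    }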
|
|
The Windows version of funlock needs the db.path to delete the
corresponding .lock file.
|
|
Added GoWebApp as a project that uses Bolt.
|
|
This commit sets the capacity on slices returned from
`Bucket.Get()` to match the slice length. Previously
the capacity would be the size of the mmap max size.
This does not cause any backwards-compatibility issues; however,
it does allow users to safely `append()` to the returned slice,
since that will cause Go to reallocate a new slice on the heap
instead of writing into the memory-mapped data.
Fixes #544
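The underlying technique is Go's full (three-index) slice expression; an illustrative, standalone example rather than the exact bolt change:

    package main

    import "fmt"

    func main() {
        // Pretend this backing array is the mmap'd database region.
        backing := make([]byte, 16)

        // Two-index slice: cap runs to the end of the backing array, so
        // append() could write into memory the caller does not own.
        loose := backing[4:8]
        fmt.Println(len(loose), cap(loose)) // 4 12

        // Full slice expression: cap == len, so append() must allocate a
        // fresh slice on the heap and leave the backing array untouched.
        capped := backing[4:8:8]
        fmt.Println(len(capped), cap(capped)) // 4 4
        _ = append(capped, 0xFF)              // reallocates; backing unchanged
    }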
|
|
Remove the Drone.IO badge while setting up new test infrastructure.
|
|
RFC3339 is sortable, but RFC3339Nano is not, because it does not use a fixed number of digits after the decimal.
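An illustrative counterexample (hypothetical values, not from the commit) showing why the trimmed fractional seconds break byte ordering, which matters when timestamps are used as keys:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // RFC3339Nano trims trailing zeros from the fractional seconds, so
        // encoded timestamps vary in length and byte order can disagree
        // with time order.
        earlier := time.Date(2017, 1, 1, 0, 0, 0, 120000000, time.UTC) // .12s
        later := time.Date(2017, 1, 1, 0, 0, 0, 123000000, time.UTC)   // .123s

        a := earlier.Format(time.RFC3339Nano) // 2017-01-01T00:00:00.12Z
        b := later.Format(time.RFC3339Nano)   // 2017-01-01T00:00:00.123Z
        fmt.Println(earlier.Before(later))    // true
        fmt.Println(a < b)                    // false: 'Z' sorts after '3'
    }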
|
|
This commit fixes a rare issue where a page can become accessible
when it has already been freed. This occurs when the first two
child pages of a parent both have deletions and the first page
has 1 remaining child and the second page has 2 remaining
children. During rebalancing the first page pulls an element from
the second page and then the second page pulls the same element
back from the first. The child page was not being freed properly.
I resolved this issue by removing this part of the rebalancing.
I made this choice for two reasons:
1. Moving a single item between pages has negligible benefit. The
page will eventually be cleaned up when it reaches zero elements.
2. This is an infrequently executed branch of code which increases
the likelihood of bugs occurring and it makes it more difficult
to test properly.
Fixes #348
|
|
This commit fixes a timing bug where `DB.StrictMode` can panic
before the goroutine reading the database can finish. If an error
is found in strict mode then it now finishes reading the entire
database before panicking.
|
|
|
|
This commit changes `Tx.WriteTo()` to use the transaction's
in-memory meta page instead of copying from the disk. This is
needed because the transaction uses the size from its own meta page
but previously copied the meta page currently on disk, which may
reference additional pages allocated since the transaction started.
Fixes #513
|
|
|