Replies: 1 comment
- I personally don't like the pre-allocation obligation imposed by the proposed approach. It means the …
Problem statement
We don't have a field map in read views, and as a result accessing tuple fields is slow (see also #10429). The issue is that read views use the `in_gc` field of `struct memtx_tuple`, which overlaps with the `format_id` field of `struct tuple`.
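Roughly, the overlap looks like this; the field sets and widths below are a simplified sketch, not the exact Tarantool definitions:

```c
#include <stdint.h>

/* Single-linked list entry, as in Tarantool's stailq (one pointer, 8 bytes). */
struct stailq_entry {
	struct stailq_entry *next;
};

/* Simplified sketch of struct tuple: only the header fields relevant here. */
struct tuple {
	uint16_t refs;      /* reference counter */
	uint16_t format_id; /* identifies the tuple format (and thus the field map) */
	uint32_t data_offset_and_flags;
	/* ... MsgPack data follows ... */
};

struct memtx_tuple {
	union {
		struct {
			uint32_t version; /* snapshot generation version */
			struct tuple base;
		};
		/*
		 * The 8-byte GC list link covers `version` (4 bytes) plus
		 * the first 4 bytes of `base`, so linking a tuple into a
		 * read view's GC list clobbers `format_id`, and the format
		 * (hence the field map) can no longer be resolved.
		 */
		struct stailq_entry in_gc;
	};
};
```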
Solution
Use a 48-bit (6-byte) pointer for `in_gc`. A 48-bit limit is quite large (256 TiB), plenty to address memtx memory. So we can make the memtx arena a single chunk of memory and then link tuples in `in_gc` by offsets inside the arena instead of full pointers.

Currently the memtx arena is not a single chunk in general. After the first `box.cfg{}` it is a single chunk of `memtx_size`, but if we increase `memtx_size` afterwards, this property is lost. We can allocate for the memtx arena a chunk of address space sized close to the available physical memory at box creation and only increase the quota when memtx memory is increased. This is possible in terms of the OS memory management API; see the sketch below.

The solution was suggested by @locker.
Alternatives
1. Increase memtx tuple size by 2 bytes
memtx_tuple changes (2 bytes of padding so that the 8-byte `in_gc` link no longer reaches `format_id`):

```diff
  uint32_t version;
+ uint16_t padding;
  struct tuple base;
```
Unfortunately, memory usage increases. I tested insertion of 1M tuples consisting of 1, 2, 4, or 8 32-bit values, and also tuples consisting of a 32-bit value and a string of length from 10 to 100 bytes. The memory usage was recorded on a fresh Tarantool start for each variant.
Test results.
test script
2. Do not use intrusive linking for tuples in read views
We can use a mempool to store pairs of a pointer to the tuple and a stailq link, as sketched below.
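A minimal sketch of such a non-intrusive link node, assuming Tarantool's `small` mempool and `stailq` primitives; `tuple_gc_node` and `read_view_retain` are hypothetical names:

```c
#include <small/mempool.h>
#include <salad/stailq.h>

/*
 * Instead of threading the GC list through the tuple itself (which
 * clobbers its header), allocate a small node from a mempool that
 * points at the tuple.
 */
struct tuple_gc_node {
	/** Link in the read view's garbage collection list. */
	struct stailq_entry in_gc;
	/** The tuple retained by the read view. */
	struct tuple *tuple;
};

/* Put a tuple on a read view's GC list without touching its header. */
static int
read_view_retain(struct mempool *pool, struct stailq *gc_list,
		 struct tuple *tuple)
{
	struct tuple_gc_node *node = mempool_alloc(pool);
	if (node == NULL)
		return -1;
	node->tuple = tuple;
	stailq_add_tail_entry(gc_list, node, in_gc);
	return 0;
}
```

The trade-off is an extra small allocation per retained tuple and one more pointer dereference during GC, in exchange for leaving the tuple header (and `format_id`) intact.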
Pros and cons: