Each chain carries 50+ GB, with ~90M pure event records per term that can be compressed to ~500k weekly.
| list / chain | alexandria babylon | antioch sumer | giza | olympia | rhodes |
| --- | --- | --- | --- | --- | --- |
| blocks, events | + | + | + | + | + |
| eras with account balances | + | + | + | + | + |
| memberships | + | + | + | + | + |
| council elections / referenda | + | + | + | + | + |
| proposals, posts, votes | + | + | + | + | + |
| openings | + | + | + | + | + |
| workers | + | + | + | + | + |
| forum | + | + | + |  |  |
| media content (videos, channels) | + | + | + | + | + |
| persons, playlists, series | + |  |  |  |  |
| nfts | + |  |  |  |  |

`+` wanted, `X` done
design
To avoid remote indexing, each chain requires a dedicated, temporary (for the duration of indexing) non-validating library node with a local stats indexer.
Each indexer transfers era stats with blocks, events, and new/updated referenced items to a cache uplink:
- memberships
- proposals (votes, posts)
- forum (posts, threads, categories)
- openings, reward relationships (pre-olympia)
- media
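The per-era transfer described above could be sketched as a typed payload builder. All type and field names (`EraStatsPayload`, `buildEraPayload`) and the event-section-to-category mapping are illustrative assumptions, not an existing jscache API:

```typescript
// Sketch: the per-era payload an indexer could push to a cache uplink.
// Names and the section->category mapping are assumptions for illustration.

interface IndexedEvent {
  block: number
  section: string
  method: string
  id?: number // id of the referenced item, when the event carries one
}

interface EraStatsPayload {
  chain: string
  era: number
  blocks: number[]
  events: IndexedEvent[]
  updated: { memberships: number[]; proposals: number[]; forumPosts: number[] }
}

function buildEraPayload(
  chain: string,
  era: number,
  blocks: number[],
  events: IndexedEvent[]
): EraStatsPayload {
  const updated = { memberships: [] as number[], proposals: [] as number[], forumPosts: [] as number[] }
  for (const e of events) {
    if (e.id === undefined) continue
    // bucket referenced items into the categories listed above
    if (e.section === 'members') updated.memberships.push(e.id)
    else if (e.section === 'proposalsEngine') updated.proposals.push(e.id)
    else if (e.section === 'forum') updated.forumPosts.push(e.id)
  }
  return { chain, era, blocks, events, updated }
}
```

The indexer would then send one such payload per era to the uplink, e.g. via a REST POST.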
Each cache
- stores submitted data
- delivers requested data via REST API and ws (socket.io)
- shares status reports with other nodes to exchange new data
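The status-report exchange could work as a simple set difference: each node advertises the entity ids it holds, and comparing two reports yields what to request from the peer. The `StatusReport` shape is an assumption; the note only says that nodes share status reports to exchange new data:

```typescript
// Sketch: computing which entities to fetch from a peer after a
// status-report exchange. The report shape is an assumption.

type StatusReport = Record<string, number[]> // category -> known entity ids

function missingFromLocal(local: StatusReport, remote: StatusReport): StatusReport {
  const missing: StatusReport = {}
  for (const [category, remoteIds] of Object.entries(remote)) {
    const known = new Set(local[category] ?? [])
    const diff = remoteIds.filter((id) => !known.has(id))
    if (diff.length > 0) missing[category] = diff
  }
  return missing
}
```

A node receiving a peer's report over socket.io would then fetch the missing ids through the peer's REST API.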
tasks
- connect to a pg database (obsoletes validator-report-backend/ and validator-report-ui)
- automatically load blocks from the cache folder on startup
- automatically index missing blocks
- export / import all models to json (usable by jsreport)
- keep cached data in sync with the chain (react to proposal updates etc.)
- save council collections with all events and era stats (validators, transactions) via REST API
- kpi model: save stats associated to council terms
- socket: provide data to jsbot, jscache, jsmonitor via socket
- socket: save speed results and hits per channel from DP
- socket: give known status updates with ids of saved entities to known jscache backends
- jscache backends exchange known public validator endpoints and known jscache backends (one configured trusted uplink required)
- one jscache uplink (authority) saves speed/popularity results on chain via remark extrinsic; jscache backends import them from chain
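Publishing results via remark needs an agreed serialization, which the note does not specify. One possible sketch, with an assumed `jscache:v1:` prefix and JSON body: the authority uplink would submit the encoded string through `api.tx.system.remark`, and backends would scan blocks for the prefix to import results:

```typescript
// Sketch: serializing speed/popularity results for a system.remark
// payload. The prefix and JSON body are assumptions, not a spec.

interface ChannelStats {
  channel: number
  speedKbps: number
  hits: number
}

const PREFIX = 'jscache:v1:'

function encodeRemark(stats: ChannelStats[]): string {
  return PREFIX + JSON.stringify(stats)
}

function decodeRemark(remark: string): ChannelStats[] | null {
  if (!remark.startsWith(PREFIX)) return null // not a jscache remark
  try {
    return JSON.parse(remark.slice(PREFIX.length))
  } catch {
    return null // malformed payload
  }
}
```

A versioned prefix keeps unrelated remarks out and lets the format evolve between runtimes.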
optional
- use migrations to avoid re-seeds (export/import) on model changes
- old chains: connect to a non-validating (if possible) local node to sync data (as above)
- model: to save different chains to the same db, all models require a chain identifier field (string) that can be looked up in the chain model with metadata (vs. associating every entry to it - test implementation for comparison?)
- sync foreign chains from peers
- types TEXT field per chain to reflect changes between runtimes
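The chain identifier idea above might look like this: every row carries a plain string key, and chain metadata (including the per-runtime types TEXT field) lives once in the chain model and is resolved by lookup instead of associating every entry. Model and field names here are illustrative assumptions:

```typescript
// Sketch: a string chain identifier on each model, resolved against a
// single chain-metadata model. Names are illustrative, not a schema.

interface Chain {
  id: string          // chain identifier, e.g. "olympia"
  genesisHash: string
  types: string       // TEXT: type definitions for this runtime
}

// stands in for the chain table
const chains = new Map<string, Chain>()

interface Membership {
  chain: string // looked up in the chain model, no per-entry association
  memberId: number
  handle: string
}

function chainOf(entry: { chain: string }): Chain | undefined {
  return chains.get(entry.chain) // one lookup instead of a join per entry
}
```

The comparison test the note suggests would measure this lookup against a real foreign-key association on the same dataset.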
predecessor: jstats REST API
rationale
Query nodes may at some point store all relevant data; until then, a community tool is needed.
scope
- chain.era.block
- JSS: simple block/time converter #645

out of scope
consumers
wanted
- chain archives (#597, Joystream/joystream#3012)