FIP: Introducing Ordering #193
18 comments · 79 replies
-
Ordering makes sense. A variation of per-account ordering could be per-signer ordering (easier for each app to maintain).
-
On the problem of rate-limiting enforcement causing desyncs: I'd suggest looking at Rate-Limiting Nullifiers (RLNv1 spec here, RLNv2 spec here). They would eliminate the problem of hubs locally enforcing rate limits, because it becomes the responsibility of clients to prove that they are respecting the specified rate limits. In such a model, rate limits would no longer be per-(hub, account), but only per account (or per app key in the "account ordering" model). Hubs would check a message's RLN proof on receipt. Hubs having divergent views of rate-limiting state would no longer be a thing.

To avoid the proof-checking overhead on each message (for fully-synced hubs), hubs could continue to keep track of a local rate limit as they already do, but have that rate be slightly stricter (i.e. slower) than the rate that the RLN would allow. Then a hub only needs to actually check the RLN proof if its view of the local rate limit is exceeded. For the vast majority of users, who do not spam, this would remove the need to check RLN proofs at all. If an invalid proof is ever sent, the hub can simply "ban" the account from ever sending a message again by remembering to refuse all further messages from it. Farcaster already enforces some degree of mass-account-creation sybil resistance through paid registration, so this may be enough of a deterrent without requiring a complex slashing scheme involving posting the bad proof onchain.
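A minimal sketch of this lazy-verification flow, assuming hypothetical message shapes, limits, and a `verifyRlnProof` stub; none of these are real Hub or RLN APIs:

```typescript
type Message = { fid: number; rlnProof: Uint8Array };

// Local limit is deliberately stricter than the RLN-enforced limit (say, 100),
// so the expensive proof check only runs for accounts that exceed it.
const LOCAL_LIMIT_PER_EPOCH = 90;

const seenThisEpoch = new Map<number, number>(); // fid -> messages this epoch
const banned = new Set<number>();                // fids that ever sent a bad proof

// Stand-in for the expensive zero-knowledge proof verification.
function verifyRlnProof(proof: Uint8Array): boolean {
  return proof.length > 0; // placeholder only
}

function onMessage(msg: Message): boolean {
  if (banned.has(msg.fid)) return false; // refuse all further messages

  const seen = (seenThisEpoch.get(msg.fid) ?? 0) + 1;
  seenThisEpoch.set(msg.fid, seen);

  // Fast path: under the stricter local limit, skip proof checking entirely.
  if (seen <= LOCAL_LIMIT_PER_EPOCH) return true;

  // Slow path: only now pay the cost of checking the RLN proof.
  if (verifyRlnProof(msg.rlnProof)) return true;

  banned.add(msg.fid); // invalid proof => permanently refuse this account
  return false;
}
```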
Downsides:
-
It sounds like this is a solution to the problem that is rate limiting? Or what am I missing? Shouldn't the Merkle sync still do well over a large set of messages otherwise? Can you speak to why you're enforcing local rate limiting? You have framed timestamping in the past as discretionary to users. You said an FC account is a "user's blog." You also ask users to pay for storage. Why then do users have to be policed with rate limiting? Is it because they can still create an infinite amount of deltas, which cause cost to the system? So rate limiting plus sharding per user can isolate load? The cryptocurrency dev in me says that whatever causes a lot of load to the system must be taxed by introducing a financial cost, not by optimizing the system for better throughput.
-
The benefit of packaging messages into blocks would be that these blocks could get historically sealed. I'm not exactly sure about the FC algorithm at the moment, but for Kiwi one vector for spam/manipulation is that while the node can reject gossiped messages whose timestamp is not within ±5 minutes of the wall clock, these messages can easily be inserted during the Merkle sync. So people can travel back in time. In user space, this is an issue when someone makes an outrageous claim and then pretends to have "said this already in the past", but it also seems to be a vector for circumventing rate limits, no? If messages were finalized in blocks that had to be signed/sealed, then this back-traveling couldn't happen anymore, but getting to consensus on block order and what blocks should include would probably be a rather complex undertaking.
-
Between Account and Global ordering, maybe we could consider Application ordering? If each app had its own "chain", then the app is the sequencer.
The app already is a point of failure for the users that use it.
Users already trust the app.
Not sure, but it seems simpler than global ordering. Plus, each app only has to maintain one nonce for all of its users.
-
This is a great idea! The Account ordering idea specifically is a great one, solving many of the current bottlenecks with sync. I like it more than Global ordering because it has additional advantages (depending on how it's implemented).
-
I understand that some (many?) of the problems occur due to pruning (and we haven't seen large-scale pruning due to expired storage units yet). Here is a wild idea: what if pruning was performed by a single hub (or a select few) and propagated across the network as a strictly ordered blockchain? We can add checks that would allow hubs to reject these pruning blocks if the pruning "sequencer" misbehaved (due to a bug or a malicious takeover). But it would 1) take some load off the hubs and 2) provide a universal point of reference on what has to be pruned, with no conflicts.
-
If we have ordering, how do we ensure that hubs maintain the whole state? Would it be possible for a hub to keep just N blocks and still appear as an active hub? Is this good or bad?
-
I want to dig into the "cleaner solutions" advantage of Global Ordering:
Pruning
Syncing a client - reading
Syncing a client - writing
Removing the need for tombstones and message compaction
It also seems the benefits mentioned around rate limits and cross-account constraints are more significant than I initially realized. With total ordering it becomes more practical to do things like:
-
questions on Global Ordering:
-
comments and questions on Account Ordering:
-
Has this FIP been put on hold? If not, it would be great to hear your thoughts and the direction you're moving.
-
tl;dr - we're leaning towards global ordering after exhaustively exploring the options so far. We haven't made a final decision yet and will present a draft of our proposal at the dev call.

We've explored four different options over the last week. I'll try to summarize each of them quickly and present the tradeoffs in this post. In a follow-up post, I'll go into our proposed solution in a lot more detail. A big thank you to @sanjayprabhu @CassOnMars @aditiharini @deodad @danromero @adityapk00 and @vrypan who have made significant contributions.

1. Do Nothing
We stay with the current model and try to fix the problems with smaller patches. The main benefit is that this requires the least work of all the options discussed, and the team could instead focus on other features that might be more beneficial to Farcaster. The problem is that if the network grows by 10x we'll start seeing problems syncing between apps that we might not be able to patch over. We would have to pursue some larger refactor anyway, and we'd be doing it under pressure and while the user experience is degraded for weeks or months. This doesn't seem viable for that reason.

2. Account Ordering
We introduce ordering at the account level, which is described above in some detail. An account's app key (signer) must add a sequence number to every message and ensure that it is monotonically increasing. Sync is quick because you can just compare sequence numbers and download missing messages instead of comparing all message ids. Gossip also becomes more reliable because dropped messages can be identified by non-monotonic sequence changes. The biggest problem is that we can't easily retrofit sequence numbers onto existing data. We need users to come back online and re-sign all their messages with sequence ids, and we'll need to handle the case where users never return. The other problem is that we've only mitigated the sync issues; they still persist in milder forms. For example, sync still won't complete and result in two hubs reaching the same state, because it's likely that a hub is "ahead" for some accounts and "behind" for others. One way to think about this is that we've made everything 60% better but we still have to deal with all the edge cases. This doesn't seem worth it relative to the time invested and the additional complexity of handling the sequence id problem.

3. App Ordering
@vrypan's proposal sidesteps the need to modify messages by making apps do the ordering with sequenced bundles. We really liked this because it's like each app having its own ordered bundles of messages. Sync is even faster than account ordering because you only have to do the sequence number comparison once per app instead of once per user. You can broadly think about this change as further dampening the sync problems by reducing the number of distinct sequences we care about. In (1) we had infinite items to check, in (2) we had to do a check per account, and in (3) we have to do a check per app. In other words, it eliminates something like 80-90% of our sync problems, where option (2) only eliminated 60%. A problem is that conflicts with bundles are really hard to resolve. Imagine the case where an app re-issues bundle number 1, invalidating its entire sequence history, but then re-introduces some part of that history again. Hitting such edge cases is quite likely, and it's hard to prevent them from happening.
Another challenge is that you can't allow everyone to create apps and publish messages, or this will just degrade to the performance of (2) with additional downsides. Choosing which apps get to publish messages is not a trivial problem and must be done thoughtfully to prevent centralization of control.

4. Global Ordering
We introduce global ordering by grouping messages using a "sequencer". There is only one sequence to worry about, and this is like the model that Ethereum L2s use to produce blocks. Sync is trivial and many of the sync problems like rate limits are totally eliminated. The only thing you need to do is compare sequence numbers and download anything you've missed. The big problem with this approach is that you have to figure out how to get multiple sequencers working. The good news is that it will be a lot easier than decentralizing blockchain sequencers because state changes are independent; the bad news is that it's still a lot of work. This is a cleaner solution than the other approaches but is probably a lot more work to get right.

We think (4) is our best bet - if we're going to embark on a fairly large rewrite of our system, we want something that changes the paradigm and eliminates entire classes of sync issues instead of something that just mitigates them. I'll share a more detailed proposal soon.
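To make the option (4) sync model concrete, here is a minimal sketch of the compare-and-fetch loop; the `Peer` interface and block shape are hypothetical, not an actual hub API:

```typescript
interface Block { seq: number; messages: unknown[] }

interface Peer {
  latestSeq(): Promise<number>;
  getBlocks(from: number, to: number): Promise<Block[]>;
}

// Under global ordering, sync reduces to comparing one number per peer
// and downloading the gap; no per-account or per-message diffing.
async function syncWithPeer(
  localSeq: number,
  peer: Peer,
  apply: (b: Block) => void,
): Promise<number> {
  const remoteSeq = await peer.latestSeq();
  if (remoteSeq <= localSeq) return localSeq; // already caught up

  const missing = await peer.getBlocks(localSeq + 1, remoteSeq);
  for (const block of missing) apply(block); // apply in sequence order
  return remoteSeq;
}
```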
-
Since writing the Snapchain doc, @sanjayprabhu has been doing a lot more research into decentralization and scalability. We want the next iteration of Farcaster to last at least 5 years, which means supporting 100x growth. Back-of-the-envelope math puts that at 5,000 TPS and 100 GB of state growth per day. We also want it to be sufficiently decentralized. Snapchain doesn't change read access, but it does reduce the number of writers to improve scalability. We think it's important to get to a point where write access is distributed across at least 10 entities spread geographically. That prevents anyone from stopping two people who want to communicate, which is our test for sufficient decentralization.

Snapchain v2
Our proposal is to modify Snapchain to start with sharding and multiple writers. It's still a WIP, but we wanted to share a draft early for feedback. The core elements remain unchanged - we have onchain events to create accounts, deltas to add state to accounts, and hubs to store and sync this data. The main improvement is a blockchain-like mechanism to order data, inspired by NEAR's Nightshade.

Snap Production
Users create deltas and submit them to hubs, which broadcast them into a mempool via some rpc or gossip network. Special hubs called validators read deltas out of the mempool and publish snaps every few seconds. A validator is assigned to a set of fids (e.g. fids ending with 0) and keeps track of their onchain events and deltas. Each fid's items are placed in a separate store and their hashes in a trie. The validator maintains a trie-of-tries (validator trie) whose leaves are the merkle roots of every fid's trie. When onchain events and deltas are received, they are applied according to the state transition rules. For example, a "delete cast" delta removes the deleted cast delta from the user's store and its hash from the trie. This modifies the fid trie's merkle root, which also affects the validator trie. Validators periodically produce a chunk which contains all the onchain events and deltas since the last chunk, plus the root of the validator trie and a signature over this data. A leader is responsible for collecting chunks from the validators and producing a snap. The snap contains all the chunks and some metadata that links it to previous snaps. The leader then streams this out to the rest of the network. The snap is self-verifying - any hub that receives it can validate the data and apply the state transition rules.
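A toy sketch of the per-fid stores and trie-of-tries bookkeeping described above, using a flat hash of sorted hashes as a stand-in for real merkle tries; all names and the hashing scheme here are illustrative assumptions, not Snapchain's actual structures:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// fid -> hashes of the deltas currently in that account's store
const fidStores = new Map<number, Set<string>>();

// Root over one fid's delta hashes (stand-in for that fid's trie root).
function fidRoot(fid: number): string {
  const hashes = [...(fidStores.get(fid) ?? [])].sort();
  return sha256(hashes.join(""));
}

// Root over all fid roots (stand-in for the validator trie root).
function validatorRoot(): string {
  const roots = [...fidStores.keys()].sort((a, b) => a - b).map(fidRoot);
  return sha256(roots.join(""));
}

// Applying a delta mutates the fid's store, which changes that fid's root,
// which in turn changes the validator root.
function applyAddCast(fid: number, deltaHash: string): void {
  if (!fidStores.has(fid)) fidStores.set(fid, new Set());
  fidStores.get(fid)!.add(deltaHash);
}

function applyDeleteCast(fid: number, deletedDeltaHash: string): void {
  fidStores.get(fid)?.delete(deletedDeltaHash);
}
```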
Chunks and Validators
Chunking the network like this is trivial because operations on one account do not depend on any other account. It allows us to scale snap production by reducing the load on each validator. Using multiple validators per chunk helps protect against downtime of any single validator. Ensuring that validators are distributed across different entities also improves censorship resistance. We expect to add more chunks or validators in the future. The network - via a hard fork - can issue a special snap that reassigns accounts across chunks and remaps validators to specific chunks.

Validator Election
If all the validators for a chunk were controlled by a single entity, they could block some users from publishing data. It's important that the three validators are run by different entities spread across geographies. We also plan to increase the number of validators up to 10 to further improve decentralization. Validators are elected by voters who represent the interests of developers and users in the Farcaster ecosystem. An example system might be something like:
Voters must be capable of assessing whether an applicant has the technical skills, finances, and motivation to run a validator. Otherwise we may need to do an emergency vote to remove a validator who drops out. We might start with votes recorded as offchain signatures for simplicity, and trigger the change in validators with a hard fork. Over time we would want to get to a point where votes happen onchain and the validators can rotate automatically without a network update. Voters can also trigger an emergency round to kick out a malfunctioning validator at any time. We may consider adding stakes so that validators have more skin in the game, but this may be unnecessary to start with.

Leader Election
A malfunctioning leader could stop snap production or exclude chunks from snaps. To prevent this, we have a system for leader election that can handle fallbacks. It also periodically rotates leaders to minimize the impact of a malicious leader until voting can remove them as a validator. The leader rotation function might start with a simple round-robin system beginning with validator0. This validator is responsible for epoch 1, which is the first 1,000 snaps. The function also defines the next leaders - validator1 and validator2 in this case - who take over in successive epochs or if this leader fails during the epoch. (A sketch of such a rotation function appears below, after the Deltas section.)

Pruning
When a snap's chunks are processed, the deltas are applied to each user's account and stored there. The chunks themselves can be discarded after a delay to save space. This lets hubs be very efficient and only store the current state of the network ("Alice liked X") instead of all of the past history that led to that state ("Alice liked X, unliked X and then liked X again"). Pruning will be important to ensure that the network scales, because users will generate a lot of state changes on social networks. Introducing a delay before pruning is important since we want the majority of the network to process and validate the state transitions before they are discarded. If a leader produces an invalid snap, hubs will detect it while processing and reject the block, requiring the error to be fixed. We propose a delay of one week for pruning so that even hubs that are down overnight or over a weekend can catch back up to the state.

Sync
A hub subscribes to a gossip topic and gets alerts about new snaps from the validator or another hub. It pings that hub to compare snap heights and downloads any missing ones. Failures in gossip can be handled by waiting for the next message or by triggering random manual syncs. This is significantly simpler than our current sync model and is the main benefit of this design.

Deltas
We don't need to change deltas very much in this world. The core mechanic of users signing messages to add and remove casts, likes, and follows remains the same as it does today.
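As referenced under Leader Election, a minimal sketch of an epoch-based round-robin rotation with fallback; the epoch length matches the text, and everything else is an illustrative assumption:

```typescript
const EPOCH_LENGTH = 1_000; // snaps per epoch

// validator0 leads the first epoch, validator1 the next, and so on. If the
// scheduled leader fails mid-epoch, the next validator in the rotation takes
// over without waiting for a vote.
function currentLeader(
  snapHeight: number,
  validators: string[],
  failuresThisEpoch = 0,
): string {
  const epoch = Math.floor(snapHeight / EPOCH_LENGTH);
  return validators[(epoch + failuresThisEpoch) % validators.length];
}

// currentLeader(500,  ["validator0", "validator1", "validator2"])     // "validator0"
// currentLeader(1500, ["validator0", "validator1", "validator2"])     // "validator1"
// currentLeader(1500, ["validator0", "validator1", "validator2"], 1)  // "validator2"
```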
Attack Vectors
-
Thanks for sharing. Can you share a bit about the expected costs and hardware requirements to run a validator? What types of organizations do you expect to run these machines, and what are their incentives to do so?

For example, running a Hub is feasible today due to the relatively low hardware costs and the economic benefit of access to direct social data. Becoming a Validator seems to have a much higher set of requirements around availability and upgrade synchronization, so I'm curious who these actors are.
On Sat, Sep 21, 2024 at 02:21, Panayotis Vryonis wrote:

>> App ordering does not seem to solve the sync problem we set out to solve, which makes it a non-starter. Unless there is some clever modification to the design that we are missing. See my post from earlier.
>
> Yes, this is clear, I'm not pitching the idea again :-) Just bringing it up as a model that showcases how incentives could be aligned in a natural way: an app with many users/messages would have to use a sequencer with higher specs than an app with a single user.
-
Please visit https://gist.github.com/vrypan/1ae6a60ecb3741ab031b5b06c974acab for the latest version. This version is outdated.

This is a proposal inspired by a Bitcoin-like blockchain. The main advantage of this solution is that the ordering process is decentralized. TLDR: there is a mempool where every hub submits deltas. Miners (nothing special, spec-wise, compared to normal hubs) propose new blocks, and each block is linked to the previous one in a blockchain. There may be forks, but they are reasonably easy to resolve. The design provides better censorship resistance, and I think it scales nicely.

Mempool
There is a mempool where any hub can submit messages.

Miners
There are special nodes that we will call miners. Miners read messages from the mempool and bundle them into blocks.
We call "block summary" the contents of a block without the message bodies (only block header and the message hashes). Miners do not need "Block score" = number_of_messages / percentage_of_blocks_contributed_by_miner_in_last_100_blocks "Chain volume" is the number of messages inluded in the chain, ie. sum( count(messages) for messages in all blocks ). Volume is important, more important than chain length. The best chain has more messages, not necesserily more blocks. HubsHubs work like they work today, with the following changes:
Note: Hubs check that messages in a block are new (they must be) by comparing them to the block summaries, not their state.

Problem Handling

Blockchain forks
It is important to keep in mind that forks do not invalidate messages. A valid message submitted to the network should eventually reach all hubs. So forks will usually be "better ways to organize the network history", and one fork does not invalidate messages in another. It is assumed that two "honest" forks will eventually converge (needs more thought). So even if a hub follows one chain and after X blocks there is a block reordering, the messages that may not be present in the reordering are valid and should be part of the hub state. In theory, these messages will have to be included in a future block, and we can also consider letting a hub submit these "missing" messages to the mempool as a security measure. Assuming that a hub is on chain A and wants to switch to chain B, it has to follow these steps. First, identify the fork block, block N. Then, starting from block N+1 on chain A, replay all blocks with the following rules:
Next, group all messages in Chain B blocks M > N:
If the process aborts, then replay all Signer Remove messages in chain A, in blocks M > N.

Censorship
A miner with privileged access to new messages (for example Warpcast or Neynar) can try to censor specific messages. Since (additional thought must be put into this attack vector).

Malicious attack: new, longer chain with missing messages
An attacker can generate a chain with 10,000 blocks where each block contains one (valid) message. However, this chain is

Two opinionated miners that keep different messages out of their chains
There may be a situation where miner A wants to leave message m1 out of the chain and miner B wants to leave message m2 out of the chain. They keep generating competing blocks; their chains have the same volume (each missing one message). There is not much point in doing this. Hubs will switch from one chain to the other (as block score changes and favors one over the other), but this means that both m1 and m2 will be part of their state (reordering does not invalidate messages).
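As referenced earlier, a small sketch of the "block score" and "chain volume" definitions; the block shape is a hypothetical stand-in, and only the two formulas come from this proposal:

```typescript
interface MinedBlock { miner: string; messageCount: number }

// "Block score" = number_of_messages /
//   percentage_of_blocks_contributed_by_miner_in_last_100_blocks
function blockScore(block: MinedBlock, last100: MinedBlock[]): number {
  const mined = last100.filter((b) => b.miner === block.miner).length;
  const pct = mined / Math.max(last100.length, 1);
  // A miner with no recent blocks gets an unbounded score, favoring new miners.
  return pct === 0 ? Number.POSITIVE_INFINITY : block.messageCount / pct;
}

// "Chain volume" = sum(count(messages) for messages in all blocks).
// The best chain is the one with the most messages, not the most blocks.
function chainVolume(chain: MinedBlock[]): number {
  return chain.reduce((sum, b) => sum + b.messageCount, 0);
}
```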
-
A general question that relates to any of the proposed ordering solutions. If we use ordering, hubs are free to keep or throw away any deltas they want, and the network will never know. This can have multiple implications, I think mostly positive, but I'd like to hear thoughts on it. For example: an archival hub that keeps everything and never applies historical pruning; another one, used by a bot farm, that throws away everything; a pair of hubs that shard deltas between them based on odd/even fids; and a dev node that only keeps my fid's deltas. This is more apparent in my "miners" proposal, where hubs only need to keep the block summary. But I think it applies to every ordering solution: blocks are announced to hubs, but we don't know what they do with them. Thoughts? Other implications?
-
Hello, I have some questions about hubs. After connecting to the boot node, is it necessary (or even feasible) for a hub run by an individual to establish connections with other personal nodes? If so, how does the current hub get the addresses of other hubs? Does it passively accept connection requests from other hubs, or is there a way to actively discover their addresses?
-
Problem
Farcaster hubs are finding it harder to stay in sync. The problem is that user data changes at a very rapid pace. If you aren’t familiar with Farcaster’s sync model, read this before continuing.
It's easiest to think of a hub as a bag of items or "deltas". When a user creates a new cast, they add a delta into the bag. Hubs sync by comparing bags and copying new deltas (a toy sketch of this follows below). This works in the simple case, but we run into problems because:
Hubs practically become inconsistent — sync may require multiple retries or waiting for a while, but that delay may cause more state changes which exacerbates the problem.
Hubs also find it hard to stay in sync with databases — apps like Warpcast face similar problems keeping in sync with the Hubs and end up with complex reconciliation processes.
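As a toy illustration of the bag model sketched above: with no ordering, reconciliation can only diff full sets of message ids, and any write that lands mid-sync re-creates a difference, so hubs chase a moving target. The types here are illustrative:

```typescript
type Bag = Set<string>; // the message ids (hashes) a hub currently holds

// Every sync round has to scan the entire remote bag; there is no single
// sequence number that summarizes "how far along" a hub is.
function missingFrom(local: Bag, remote: Bag): string[] {
  return [...remote].filter((id) => !local.has(id));
}
```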
Beliefs
Decentralized, global state is necessary. If there is no source of truth outside Warpcast, developers will be far less likely to invest time into Farcaster.
Sync should be easy for apps. It must be easy for developers to keep their applications in sync with the state of hubs.
Posting should be instant and without fees. Requiring onchain transactions or any kind of per-post fee creates too much friction for the network to reach escape velocity.
Storage and bandwidth must be constrained. If users are allowed freedom in either dimension we will just end up with a lot of spam.
Turing-completeness is undesirable. If people can build anything and there is no fee to use the network, spam of all kinds is inevitable.
Ideas
If deltas had stronger ordering, many of our problems would become simpler. There are two ways we can approach this:
Account Ordering
Alice adds a sequence number to each delta, creating a strict order or chain of deltas. She has a separate chain for each app key so that her apps don't need to coordinate. Hubs enforce that sequence numbers are monotonic and handle conflicts by picking the delta with the lowest lexicographic hash value. The system is partially ordered, but not totally ordered.
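A minimal sketch of these rules; the delta shape and store layout are hypothetical stand-ins, while the monotonic-sequence check and lowest-hash tiebreak come from the text:

```typescript
interface Delta { appKey: string; seq: number; hash: string }

const latestSeq = new Map<string, number>();  // appKey -> highest seq accepted
const acceptedAt = new Map<string, Delta>();  // `${appKey}:${seq}` -> winner

function applyDelta(d: Delta): "accepted" | "rejected" | "replaced" {
  const key = `${d.appKey}:${d.seq}`;
  const existing = acceptedAt.get(key);

  if (existing) {
    // Conflict: two deltas claim the same sequence number for the same
    // app key. The delta with the lowest lexicographic hash wins.
    if (d.hash < existing.hash) {
      acceptedAt.set(key, d);
      return "replaced";
    }
    return "rejected";
  }

  // Sequence numbers must only move forward within an app key's chain.
  const prev = latestSeq.get(d.appKey) ?? 0;
  if (d.seq <= prev) return "rejected";

  latestSeq.set(d.appKey, d.seq);
  acceptedAt.set(key, d);
  return "accepted";
}
```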
Partial ordering makes many of our sync problems an order of magnitude easier:
The pros of account ordering are:
The cons of this approach are:
Global Ordering
Alice submits deltas to a hub, which puts them in the mempool. A sequencer periodically pulls messages out of the mempool and organizes them into blocks. Hubs fetch blocks from the sequencer and can also fetch them from each other in a p2p manner. The system is totally ordered across all messages.
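A minimal sketch of that sequencer loop; the block shape and hash-linking details are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

interface SeqBlock {
  height: number;
  prevHash: string;
  deltas: string[]; // serialized deltas pulled from the mempool
  hash: string;
}

const mempool: string[] = [];
const chain: SeqBlock[] = [];

// Called periodically: drain the mempool into a block linked to the
// previous one, so every hub sees the same total order of deltas.
function produceBlock(): SeqBlock {
  const prev = chain[chain.length - 1];
  const prevHash = prev?.hash ?? "genesis";
  const deltas = mempool.splice(0, mempool.length);
  const hash = createHash("sha256")
    .update(`${chain.length}:${prevHash}:${deltas.join(",")}`)
    .digest("hex");
  const block = { height: chain.length, prevHash, deltas, hash };
  chain.push(block);
  return block; // hubs fetch this from the sequencer or from each other
}
```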
Total ordering trivializes many of the problems we discussed:
The benefits of global ordering are:
The downsides of global ordering are: