Change BucketList size -> Total Soroban Size for fee calculations and Soroban read optimization #1569
Thinking a bit more about increasing read limits, I think there are a few safety and DoS issues to consider.

**What happens if Soroban state grows too fast?**

The target Soroban data size protects the network from a short-term, state-based DoS attack. I.e., if the target size is 4 GB, Soroban state will not grow beyond roughly 4.5–5 GB in any case. This protects us from malicious DoS attacks. However, if legitimate use cases increase Soroban adoption suddenly, the network may need to raise the Soroban size limit faster than intended. Suppose the size limit is 4 GB, and tier 1 validator SKUs are only equipped to handle 5 GB of Soroban cache. Should legitimate usage grow such that 6 GB of Soroban state is required, the tier 1 network may not be able or willing to migrate to higher-memory SKUs in time to raise the limit. If the network hits the limit due to legitimate usage, Soroban is essentially disabled until the limit is increased. While eviction prevents long-term malicious DoS (an attacker cannot sustain rent payments indefinitely), if legitimate usage increases and users are willing to pay rent, only increasing the size window can "unstick" Soroban.

In this case, we might be able to have a disk fallback: Soroban temporarily falls back to disk reads, the network increases the limit, and ledgers close more slowly but still process Soroban transactions while tier 1 upgrades hardware. The alternative is to not increase the limit until tier 1 is ready to maintain current ledger close times with more RAM, during which time Soroban is effectively disabled. We probably can't decrease read limits, since contracts would be broken.

I think either of these scenarios is unlikely. While read limits can increase, write limits do not. Should state be growing, its growth is bounded and predictable.
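The back-pressure idea above can be sketched as a write-fee multiplier that ramps up as Soroban state approaches the target size. This is a minimal illustration only; the ramp shape, the 50% threshold, and the fee values are assumptions for the sketch, not stellar-core's actual fee model:

```python
def write_fee_multiplier(current_size: int, target_size: int,
                         base_fee: int = 100, max_fee: int = 10_000) -> int:
    """Hypothetical back-pressure curve: flat fee up to 50% of the target
    size, then a linear ramp to max_fee at 100% utilization."""
    utilization = current_size / target_size
    if utilization <= 0.5:
        return base_fee
    # Linear ramp from base_fee at 50% utilization to max_fee at 100%.
    ramp = (utilization - 0.5) / 0.5
    return int(base_fee + ramp * (max_fee - base_fee))

# With a 4 GB target: cheap while state is small, expensive near the cap.
print(write_fee_multiplier(2 * 10**9, 4 * 10**9))  # at 50%: base fee
print(write_fee_multiplier(4 * 10**9, 4 * 10**9))  # at 100%: max fee
```

The point of a curve like this is that fees rise well before the hard stop, so legitimate users feel the pressure gradually rather than hitting a cliff.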
Also, should network activity increase so much that this is required, the network would be in a very good place usage-wise, and I imagine tier 1 would be motivated to upgrade hardware.

**Handling Classic Entry Disk Ops**

While all Soroban entries in this proposal are in memory, Soroban transactions can still contain classic Account and Trustline entries. Since classic does not have state archival, these entry types cannot be entirely cached in memory. However, they are relatively small and have a known max size, so DoS protection can be enforced by setting the read-entry limits as follows. Let `max_disk_bytes` be the maximum disk IO we can do in a ledger, and `max_memory_bytes` the maximum in-memory reads. Our settings can be set as follows:

`read_entry_max_count = max_disk_bytes / maxSize(account, trustline)`

`read_entry_max_bytes = max_memory_bytes`

This would still allow us to significantly increase both max read bytes and max read entries while protecting from disk-based DoS. Alternatively, we could have separate limits for classic vs. Soroban reads, but this adds complexity that is probably not necessary.
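The limit derivation above can be written out concretely. All numbers here are illustrative placeholders (real budgets and max entry sizes would come from network settings and the XDR definitions), but the structure matches the text: entry count bounds worst-case disk IO, byte count bounds in-memory reads.

```python
# Illustrative budgets and entry sizes -- NOT real stellar-core values.
MAX_DISK_BYTES = 10 * 1024 * 1024      # assumed per-ledger disk read budget
MAX_MEMORY_BYTES = 200 * 1024 * 1024   # assumed per-ledger in-memory read budget
ACCOUNT_ENTRY_MAX_SIZE = 400           # hypothetical max serialized AccountEntry size
TRUSTLINE_ENTRY_MAX_SIZE = 600         # hypothetical max serialized TrustLineEntry size

# Even if every read in a ledger hits disk and is a worst-case classic
# entry, total disk bytes stay under MAX_DISK_BYTES.
read_entry_max_count = MAX_DISK_BYTES // max(ACCOUNT_ENTRY_MAX_SIZE,
                                             TRUSTLINE_ENTRY_MAX_SIZE)

# Total read volume (served from memory for Soroban entries) is bounded
# separately and can be much larger.
read_entry_max_bytes = MAX_MEMORY_BYTES

print(read_entry_max_count, read_entry_max_bytes)
```

Note how the two limits decouple: `read_entry_max_count` protects the disk even in the worst case, while `read_entry_max_bytes` can be raised aggressively because Soroban reads come from memory.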
Currently, Stellar Classic state dominates the BucketList size. The intention of the target BucketList size in Soroban is to provide a hard stop on runaway state growth. BucketList size related write fees should provide back pressure on growth before reaching the hard stop, while state archival eviction gradually reduces BucketList size as well.
The issue is that Stellar Classic TXs do not pay BucketList-related fees, yet contribute the majority of BucketList state. The result is that Soroban users are punished because of classic, and the back-pressure / state-eviction measures do not actually achieve their goal of reducing BucketList size.
I propose we change BucketList size used in Soroban fees to "Total Soroban Size", where this value is the sum of the size of all CONTRACT_CODE and CONTRACT_DATA entries in the BucketList. The target BucketList size would then become the target Soroban size.
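The proposed metric is simple to state precisely: walk the BucketList and sum only the Soroban entry types. A minimal sketch, with a hypothetical entry representation (type tag plus serialized size in bytes):

```python
def total_soroban_size(bucket_list_entries) -> int:
    """Sum the sizes of CONTRACT_CODE and CONTRACT_DATA entries,
    ignoring all classic entry types in the BucketList."""
    soroban_types = {"CONTRACT_CODE", "CONTRACT_DATA"}
    return sum(size for entry_type, size in bucket_list_entries
               if entry_type in soroban_types)

# Classic entries (ACCOUNT, TRUSTLINE) no longer count toward the target.
entries = [("ACCOUNT", 120), ("TRUSTLINE", 110),
           ("CONTRACT_CODE", 5000), ("CONTRACT_DATA", 300)]
print(total_soroban_size(entries))  # 5300, not 5530
```

In practice this total would be maintained incrementally as entries are added and evicted rather than recomputed by scanning, but the definition is the same.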
While this change would not allow us to limit the total BucketList size growth, the current solution does not achieve this either, as state growth is still dominated by classic. This change provides fairer fees to Soroban TXs and also opens the door for an "in-memory" Soroban optimization.
One of the advantages of State Archival is that the total database size of Soroban entries can be bounded effectively. This means we can bound the Soroban database size to something modest like 4 GB and store all Soroban entries in memory, greatly increasing Soroban read capacity. This would not impact Soroban write capacity, as writes would still have to be persisted to the on-disk BucketList to produce the ledger header hash.
The advantage is significantly higher Soroban read limits. The primary drawback is increased memory consumption in stellar-core. Current consumption is approximately 2.5 GB. Increased consumption likely will not impact validators, since the recommended SKU currently has 16 GB, but it may impact services that run captive-core, such as RPC and Horizon. Finally, this may make core easier to DoS, as there is less memory headroom against OOM-based attacks.