
increase max lsig program size #5990

Open · joe-p opened this issue May 4, 2024 · 6 comments

@joe-p (Contributor) commented May 4, 2024

I'm creating this issue to continue the discussion on lsig size raised in #5943 by @giuliop. Below is his original post:

> We keep, of course, a limit on the program size for lsigs and apps. The apps one is fine, as discussed above.
> The lsig one feels very limited. For instance, there is a limit of 2048 bytes on app call arguments and a limit of 1024 bytes on note fields, both larger than the 1000-byte limit on lsigs.
> I think it would be more balanced to set the lsig program limit to 4096 bytes (half the max app TEAL size), or at least 2048 bytes, i.e. one page of app program.
> In the future we can optimize how we store those transactions in the ledger by caching the lsigs and storing only their hash, if we don't do that already. I guess we could also optimize how we broadcast them by sending only their hash once they are "cached" by the network.
> This, and the P2P gossip network, are to me the most important changes to the AVM right now: the latter because decentralization is the only reason we are all here, after all, and this one for the developer experience.

I personally think 4k is reasonable, considering that with two transactions (one lsig + one appl) you can already get access to more than 10k bytes. The lsig program itself is 1k, and the second transaction can provide byte storage in its note, args, and programs.
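For concreteness, here is a rough tally of those limits. The constants below are assumed current consensus values (LogicSigMaxSize, MaxTxnNoteBytes, MaxAppTotalArgLen, and an app-create call carrying approval + clear programs with three extra pages), quoted from memory rather than read from go-algorand:

```go
package main

import "fmt"

// Back-of-the-envelope tally of on-chain bytes available to a two-transaction
// group (one lsig txn + one app-create call). Assumed consensus values.
func main() {
	const (
		lsigMax     = 1000 // LogicSigMaxSize: lsig program + args
		noteMax     = 1024 // MaxTxnNoteBytes, per transaction
		appArgsMax  = 2048 // MaxAppTotalArgLen, summed over all app args
		programsMax = 8192 // approval + clear with 3 extra program pages
	)
	fmt.Println(lsigMax + 2*noteMax + appArgsMax + programsMax) // 13288
}
```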

We could also potentially "pool" the size together, meaning I could have a group with one lsig that is 15k alongside 15 other lsigs that are only a couple of bytes each.

@giuliop (Contributor) commented May 5, 2024

👍

I think the easiest thing is to simply raise the lsig TEAL max program size to 4k, which is more in line with the other limits of the AVM.

The other alternative is to go down the rabbit hole of variable transaction fees that scale with transaction size, enforcing only that transactions (or transaction groups) fit in a block. I would resist that as long as possible to keep things simple.

@jannotti (Contributor) commented May 15, 2024

I do not think this is a good idea, since it makes it easy to bloat transaction sizes. This is already a vector to annoy the network. Thankfully nobody has bothered yet, but it would be annoying for someone to fill blocks with cheap transactions that max out every possible field. Adding 3k to each transaction would only help them out.

I claim that lsigs can and should be short. The main remaining use for lsigs is their high compute budget. From what I understand, that's why they're being brought up here: to use the expensive elliptic curve opcodes that enable checking zk proofs.

Suppose, for example, that you need to use ec_pairing_check. It costs many thousands in opcode budget, so you surely want to do it in lsigs. You might try to put related code in the lsig, so you might run up against the 1k length quickly. I would propose that instead you separate your on-chain code into two pieces: the part that figures out what needs to be fed to ec_pairing_check, and the logicsig that actually calls ec_pairing_check.

The app can't feed data to the lsig, of course, so instead you determine what the app will want checked offline, and put those inputs in the application args. The app runs and simply confirms that the args supplied match what it wants checked. The lsig "dumbly" uses ec_pairing_check on the application args, not really knowing why. So it has already confirmed that if the args match the app's intent, they pass ec_pairing_check. It's a bit convoluted, as you're now doing parts of the work in three places: offline, app, and lsig. But it allows each thing to be done where it can be done well.
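A minimal sketch of the "dumb" lsig in this split, assuming the app call sits at group index 0 and the offline step placed the pairing inputs in its first two application args. The TEAL here is illustrative, not a vetted verifier:

```go
package main

import "fmt"

// dumbLsigTeal sketches the division of labor described above: the lsig
// pairing-checks whatever inputs the app call carries, while the app
// separately confirms those inputs match its intent.
const dumbLsigTeal = `#pragma version 10
// Assume the app call sits at group index 0 and the offline step placed
// the proof points in its first two application args.
gtxna 0 ApplicationArgs 0   // concatenated G1 points
gtxna 0 ApplicationArgs 1   // concatenated G2 points
ec_pairing_check BN254g1    // approve iff the pairing product is the identity
`

func main() { fmt.Print(dumbLsigTeal) }
```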

@giuliop (Contributor) commented May 15, 2024

> I do not think this is a good idea, since it makes it easy to bloat transaction sizes. This is already a vector to annoy the network. Thankfully nobody has bothered yet, but it would be annoying for someone to fill blocks with cheap transactions that max out every possible field. Adding 3k to each transaction would only help them out.

But you can already send a transaction group with 16 lsigs of 1KB each, for a total of 16KB.
If we are happy with this, we should be happy with a transaction group with a single lsig of 4KB.

Logically, we could approach it like the opcode budget and pool the lsig size up to a transaction-group limit we are comfortable with, with an appropriate transaction cost per KB of size.
And just as we are discussing for the opcode budget, instead of imposing a limit per transaction and forcing the creation of dummy transactions, we could simply check the limit, and measure the cost, at the transaction-group level, allowing in the extreme case a single transaction that absorbs the whole limit and bears all the cost.

> I claim that lsigs can and should be short. The main remaining use for lsigs is their high compute budget.
> (...)
> It's a bit convoluted, as you're now doing parts of the work in three places: offline, app, and lsig. But it allows each thing to be done where it can be done well.

I'll give you my specific use case and explain why I would not do that and what I would do instead.

The zk verifiers created by AlgoPlonk implement zk proof verification compatible with proofs generated by gnark.
They take as input the proof (~30 32-byte arrays) and the public inputs (a circuit-dependent number of 32-byte arrays) and either succeed or fail, marking the proof valid or not.
A bn254 verifier consumes ~145,000 opcode budget, a bls12-381 one ~187,000.
I am building an application that manages a merkle tree and needs to compute the tree root onchain concurrently with a zk proof verification. For a 32-level merkle tree using the upcoming mimc opcode, the root computation alone takes over 32 * 2 * 650 = 41,600 opcode budget.
Add some extra logic and you cannot fit proof verification and root computation in one transaction group as app calls.
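To put rough numbers on this (the budget constants below are assumed consensus values, quoted from memory): even using every pooled inner transaction, the app-call side tops out around 190,400, so the bn254 verifier plus the root computation leaves almost no headroom, while the pooled lsig budget of 320,000 goes unused:

```go
package main

import "fmt"

// Budget arithmetic behind the paragraph above. All constants are assumed
// consensus values, not read from go-algorand.
func main() {
	const (
		appCallBudget = 700          // opcode budget per app call, pooled
		maxGroupSize  = 16           // top-level transactions per group
		maxInnerTxns  = 256          // inner transactions, pooled per group
		lsigBudget    = 20000        // per lsig, pooled across the group
		verifierCost  = 145000       // AlgoPlonk bn254 verifier (~187k for bls12-381)
		rootCost      = 32 * 2 * 650 // 32-level merkle root via mimc = 41,600
	)
	appBudget := appCallBudget * (maxGroupSize + maxInnerTxns) // 190,400
	need := verifierCost + rootCost                            // 186,600 for bn254
	fmt.Println(appBudget - need)          // 3,800 left for "extra logic"; bls12-381 never fits
	fmt.Println(lsigBudget * maxGroupSize) // the 320,000 pooled lsig budget sitting idle
}
```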

As a selfish application developer, the ideal solution would be the ability to pool the entire ~500,000 opcode budget in app calls. I understand that's not possible because lsigs run in parallel to app calls, so they are entitled to a separate budget.
The second best solution would be to invert the budgets: give 320,000 to app calls, which are more flexible, and the smaller budget to lsigs.
The third would be to make the verifier a lsig, so that it consumes the lsig budget and leaves the app budget intact for other operations. That would be possible in my case with the 4096-byte limit.
The fourth, what I am currently doing, is to do the proof verification in one transaction group, save a success receipt onchain, and have the user submit a second transaction group to complete the processing. Not the smoothest user experience.

This last option I find preferable to your suggestion because of complexity, maintainability, and the risk of mistakes.
The verifier runs through a long series of arithmetic and elliptic curve operations (only one pairing check though, at the very end!). To reduce the size of the lsig, I would need to identify a chain of intermediate results that I can compute offline, create a series of lsigs/apps that compute the intermediate results starting from the initial or other intermediate results, and check all the inputs to them to make sure they are correct.
And then convince myself that I did not accidentally introduce a new attack vector, without the benefit of a peer-reviewed reference implementation for this scheme.
And then, any time there is an improvement to the Plonk protocol that is then implemented in gnark, I would have to repeat the whole process.

This is a very specific example, but the general issue is that we have a 320,000 opcode budget that is very difficult to use, and we should find a way to unlock it.

The most logical approach to me is to pool a transaction group size limit, paying the appropriate fees, but not having to create dummy transactions.

@jannotti (Contributor)

I'm mostly convinced by this:

> But you can already send a transaction group with 16 lsigs of 1KB each, for a total of 16KB.
> If we are happy with this, we should be happy with a transaction group with a single lsig of 4KB.

I think we should probably pool the total allowable lsig size. I don't think we should try to avoid the creation of "dummy" transactions by using fee accounting, but then I'm not convinced we should do it for apps either. For those who do want to do it, and are unconvinced of the complexity, think about how the following things interact: you paid extra because you want your apps to run longer; you paid extra because you want your lsigs to run longer (or be bigger); you paid extra because of congestion; you paid extra to register for incentives. I don't think we should try to untangle that in the short, or even medium, term.

So, with that in mind, when I say I'm "mostly" convinced by the argument I quoted, I don't mean we should allow a big lsig in a group containing a single txn that pays a large fee. But I'd be happy saying that the total number of lsig bytes in a group should be 1000*numTxnsInGroup.
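A sketch of what such a pooled check could look like at group evaluation time; the types and names here are illustrative stand-ins, not go-algorand's actual API:

```go
package main

import "fmt"

// Illustrative stand-in types; not go-algorand's real definitions.
type LogicSig struct {
	Logic []byte
	Args  [][]byte
}

type SignedTxn struct {
	Lsig LogicSig
}

// checkPooledLsigSize enforces the rule proposed above: the lsig bytes in a
// group may total at most perTxnLimit * numTxnsInGroup, however they are
// distributed among the group's transactions.
func checkPooledLsigSize(group []SignedTxn, perTxnLimit int) error {
	total := 0
	for _, st := range group {
		total += len(st.Lsig.Logic)
		for _, arg := range st.Lsig.Args {
			total += len(arg) // args are assumed to count, as in the per-txn limit
		}
	}
	if limit := perTxnLimit * len(group); total > limit {
		return fmt.Errorf("pooled lsig size %d exceeds group limit %d", total, limit)
	}
	return nil
}

func main() {
	big := SignedTxn{Lsig: LogicSig{Logic: make([]byte, 4096)}}
	pad := SignedTxn{} // contributes its 1000-byte allowance, carries no lsig
	group := []SignedTxn{big, pad, pad, pad, pad}
	fmt.Println(checkPooledLsigSize(group, 1000)) // <nil>: 4096 <= 5 * 1000
}
```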

@giuliop (Contributor) commented May 17, 2024

I'd be happy too if we can have the total number of lsig bytes in a group be 1000*numTxnsInGroup.
It achieves the same end result as one big lsig, with just a bit of extra complexity pushed to the front-end in constructing transaction groups with dummy lsig calls.
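The front-end arithmetic for that is at least straightforward; for example, under the assumed 1000-byte per-transaction allowance:

```go
package main

import "fmt"

// groupSizeFor returns the minimum number of transactions a group needs so
// that a single lsig of lsigBytes fits under a pooled cap of 1000 bytes per
// transaction (the 1000*numTxnsInGroup rule discussed above).
func groupSizeFor(lsigBytes int) int {
	const perTxn = 1000
	return (lsigBytes + perTxn - 1) / perTxn // ceiling division
}

func main() {
	// A 4096-byte verifier lsig needs a group of at least 5 transactions,
	// four of them dummy lsig calls.
	fmt.Println(groupSizeFor(4096)) // 5
}
```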

As for the complexity of fee accounting, would you feel the same way if we had to implement everything from scratch instead of modifying the existing code? I am asking because I am struggling to see the complexity you see, and I wonder whether that's because I don't know the existing code and how complex it is to change, or because I have some logical gaps in my thinking.

@giuliop (Contributor) commented Jul 8, 2024

I've created a PR to implement pooling across a transaction group as suggested.
