Why do the testing results of EZKL for ReLU layers not conform to the expected overhead of KZG commitment? #775
Unanswered · ExcellentHH asked this question in Q&A
Recently, I have been testing EZKL's performance on single ReLU layers at different scales. Using the tutorial.py script, I generated models containing only one ReLU layer of varying sizes, obtained constraint systems of different sizes via ezkl's gen-settings and calibrate-settings commands, and then ran setup, prove, and verify. The results are as follows:
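For reference, the model-generation step can be sketched roughly as below. This is my own minimal reconstruction of what tutorial.py amounts to, not the exact script, and the size parameter `n` is a hypothetical knob I introduce for illustration:

```python
import json
import torch

class SingleReLU(torch.nn.Module):
    """A model consisting of exactly one ReLU layer."""
    def forward(self, x):
        return torch.relu(x)

# Hypothetical scale parameter: varying n varies the number of ReLU
# activations, and hence the number of lookup-constrained rows.
n = 4096
dummy_input = torch.randn(1, n)

# Export the model to ONNX for ezkl.
torch.onnx.export(SingleReLU(), dummy_input, "network.onnx",
                  input_names=["input"], output_names=["output"])

# ezkl expects the calibration data as JSON under the "input_data" key.
with open("input.json", "w") as f:
    json.dump({"input_data": [dummy_input.reshape(-1).tolist()]}, f)

# The settings were then produced with the CLI (exact flags may differ
# across ezkl versions):
#   ezkl gen-settings -M network.onnx
#   ezkl calibrate-settings -M network.onnx -D input.json
# followed by ezkl setup, prove, and verify at each scale.
```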
These results have left me quite confused. I used the KZG commitment scheme, which should ideally give O(1) verification time and O(1) proof size. Why, then, do the EZKL results show verification time and proof size growing with the number of constraints? Could this be attributed to the use of lookup tables, and if so, what is the underlying mechanism? My understanding is that a lookup table only needs to be loaded once into fixed rows and columns, so why does this behavior occur (the lookup range is identical in every settings file)? Could you offer some insight, or perhaps point me to reference materials?
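In case it helps to pin down the discrepancy, this is roughly how I measured verification time and proof size. It is a sketch using the ezkl Python bindings; the exact verify signature varies between releases (and is async in some of them), and the artifact paths are placeholders from an earlier setup/prove run:

```python
import os
import time

import ezkl  # ezkl Python bindings; API details differ across versions

# Placeholder artifact paths produced by earlier setup/prove steps.
proof_path = "proof.pf"
settings_path = "settings.json"
vk_path = "vk.key"

# Time a single verification and report the on-disk proof size.
start = time.perf_counter()
ok = ezkl.verify(proof_path, settings_path, vk_path)
elapsed = time.perf_counter() - start

print(f"verified:          {ok}")
print(f"verification time: {elapsed:.3f} s")
print(f"proof size:        {os.path.getsize(proof_path)} bytes")
```

Both numbers grow with the constraint count across my settings files, which is what I would not expect from a KZG-based scheme.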
Many thanks!