Hi,
For reasons, I found myself reading this again a couple years later, and found myself getting stuck on the same conceptual difficulty I had much earlier. Namely:
(Before beginning, a typo, but a relevant one: there is a missing $G$ at the end of the first sentence of the Point 3 mentioned below.)
Point 3 in Lemma 1 is the part I was always wondering about. We have, crudely, $P = P^{\alpha} + \mathbb{H}(P^{\beta},P^{\alpha})G$, and the adversary is to produce $P = P_{2}^{\alpha} + \mathbb{H}(P_{2}^{\beta},P_{2}^{\alpha})G$. The text states that "$(P^{\alpha}, P^{\beta})=(P_{2}^{\alpha}, P_{2}^{\beta})$ except with negligible probability, since $\mathbb{H}$ is modelled as a random oracle", but I feel like this is begging the question. To illustrate my claim that this isn't enough detail, consider the insecure variant of the definition, $P = P^{\alpha} + \mathbb{H}(P^{\beta})G$: this is clearly not viable since, at the very least, it allows equivocation by the owner of the key $\alpha$ (and conceivably other shenanigans, even forgery, depending on exactly how Sign2 verification (or, in taproot world, script path verification) is defined). What I am getting at is: shouldn't we be able to turn a black-box adversary who successfully forges Sign2 signatures into a break of some specific hash function security property (whether collision or 1st/2nd preimage finding)? Knowing this, or some more complex statement relating ECDLP and a hash break, could be very useful, at least in theory. Or am I barking up the wrong tree?
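To make the equivocation concern concrete, here is a toy sketch of my own (not from the paper) in the additive group $\mathbb{Z}_q$ with $G = 1$, where discrete logs are trivial, so only the algebra is meaningful. With the insecure variant $P = P^{\alpha} + \mathbb{H}(P^{\beta})G$, the owner of $\alpha$ can reopen the same $P$ to a different "script" commitment:

```python
import hashlib

q = 2**61 - 1  # toy prime group order (stand-in for the curve order; assumption)
G = 1          # "generator" of the toy additive group Z_q

def H(*args):
    # hash-to-scalar, standing in for the random oracle \mathbb{H}
    h = hashlib.sha256(b"".join(a.to_bytes(32, "big") for a in args)).digest()
    return int.from_bytes(h, "big") % q

# Insecure variant: P = P_alpha + H(P_beta) * G
alpha = 1234567
P_beta = 42 * G % q                      # stand-in for a script commitment
P = (alpha * G + H(P_beta) * G) % q

# Equivocation: the key owner later swaps in a different script commitment,
# shifting alpha by the difference of the two hash tweaks
P_beta2 = 99 * G % q
alpha2 = (alpha + H(P_beta) - H(P_beta2)) % q

# Same P, different claimed script, and the owner knows the new secret key
assert P == (alpha2 * G + H(P_beta2) * G) % q
```

With the actual construction $\mathbb{H}(P^{\beta}, P^{\alpha})$, the same trick would require solving $P^{\alpha'} = P - \mathbb{H}(P^{\beta'}, P^{\alpha'})G$ for $P^{\alpha'}$, i.e. finding a fixed point of the random oracle, each query of which succeeds only with probability about $1/q$.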
I don't find it completely obvious how this models taproot. We posit a distinct signature scheme $\beta$ and use it as an internal algorithm in Sign2; but as defined, Sign2 does not seem to really map to taproot script path spending. I think I understand your general approach: we can abstract away the details of, say, Bitcoin Script, and argue that "if the set of stuff done in Bitcoin Script has the functionality of a signature scheme, then this argument applies". But - does it? Obviously most non-insane script path spend constructions will have either exactly the non-forgery properties of a signature scheme, or something close to them, but any "gap" there might be important for making definitive security statements. Or again I might be barking up another branch of the wrong tree, if I didn't understand exactly what you are modelling with Sign2. Either way, the paper might clarify the connection to taproot script path (and key path) spending, since it's nominally about that topic (and is cited in BIP341).
Fwiw, the way that Lemma 2 works, using the linearity of the tweak to map one forgery into the other, makes sense to me. Also there is a typo "resonds" instead of "responds" somewhere.
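The linearity that makes the Lemma 2 forgery mapping work can be sketched in the same toy group (again my own sketch: the challenge $e$ is supplied externally here, whereas BIP340 hashes the public key into the challenge, which the actual reduction has to account for via the random oracle):

```python
import hashlib

q = 2**61 - 1  # toy prime group order; "points" are scalars mod q (assumption)
G = 1

def H(*args):
    h = hashlib.sha256(b"".join(a.to_bytes(32, "big") for a in args)).digest()
    return int.from_bytes(h, "big") % q

def verify(P, R, s, e):
    # Schnorr-style check s*G == R + e*P, with the challenge e given externally
    return s * G % q == (R + e * P) % q

x = 31337                   # inner secret key
P = x * G % q               # inner public key
t = H(P, 777)               # a public tweak (e.g. committing to a script hash)
Pt = (P + t * G) % q        # tweaked (output) key

# A valid signature (R, s) under the tweaked key Pt for message m:
r, m = 5555, 123
R = r * G % q
e = H(R, m)
s = (r + e * (x + t)) % q
assert verify(Pt, R, s, e)

# Linearity of the tweak: shift s by e*t and the same (R, e) pair
# verifies against the inner key P -- the forgery maps across
assert verify(P, R, (s - e * t) % q, e)
```

The second assertion is just the identity $sG = R + e(P + tG) \iff (s - et)G = R + eP$, which is the algebraic core of mapping a forgery on the tweaked key into one on the untweaked key.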
I agree that this sentence probably needs more exposition. I don't think it's begging the question; it's just making a claim with no explicit justification (or maybe "details left to the reader" levels of justification :)).
As for "how does this model Taproot", you do understand my goal correctly -- I couldn't model Bitcoin Script very well, so I just described it as a signature scheme and said "if you can 'forge' a script, you can forge a taproot". In some cases, e.g. if your script is OP_TRUE, it's trivial to "forge", so it's hard to make any stronger claim here. I think what you're getting at is that I can't really model Script as a black box this way ... you could imagine, for example, that within the script I have leaked the secret key or something. So to model this correctly, I probably need another algorithm that is responsible for "script generation" and takes the taproot public key (and not the secret key) as input, or something. Alternately, we could just require that the script be generated before the keypair, and that the keypair be uniformly random and therefore independent of the script.
This would exclude from the proof any key-reuse situation where the inner Taproot key is repeated inside the script, though, which is a heavy price to pay.
I haven't looked at this repo in the 3 years since I last updated it, but in retrospect I think that an academic-style PDF is the wrong format for this argument, because there's so much complexity that turns into a sea of symbols, and the connection to the actual Taproot scheme is impossible to figure out. Maybe a better author than me could figure it out :).