Blob status API schema discussion: replicas array has no obvious uniqueness property #216
One discussion point here is in the translation of "deal" to "replica": is there a 1:1 mapping?

I wonder if, for "replicas", we should be filtering out anything that's not an actual replica, probably everything except `active`. Here are the states we can have:

```go
ModelDealStateProposed        ModelDealState = "proposed"
ModelDealStatePublished       ModelDealState = "published"
ModelDealStateActive          ModelDealState = "active"
ModelDealStateExpired         ModelDealState = "expired"
ModelDealStateProposalExpired ModelDealState = "proposal_expired"
ModelDealStateRejected        ModelDealState = "rejected"
ModelDealStateSlashed         ModelDealState = "slashed"
ModelDealStateError           ModelDealState = "error"
```
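As a rough sketch of that filtering (the `Deal` shape and its field names here are illustrative assumptions, not the actual Singularity model), everything but `active` would be dropped:

```go
package replicas

// ModelDealState mirrors the deal-state constants above.
type ModelDealState string

const ModelDealStateActive ModelDealState = "active"

// Deal is a hypothetical shape for one deal record; the real
// Singularity model has more fields.
type Deal struct {
	PieceCID string         `json:"piece_cid"`
	Provider string         `json:"provider"`
	State    ModelDealState `json:"state"`
}

// activeReplicas keeps only the deals that represent an actual
// replica, i.e. those whose state is "active".
func activeReplicas(deals []Deal) []Deal {
	var out []Deal
	for _, d := range deals {
		if d.State == ModelDealStateActive {
			out = append(out, d)
		}
	}
	return out
}
```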
Does this deal with the case where a single replica spans multiple deals (because it exceeds the size of a sector)?
Still, there is a small chance that a single piece gets proposed multiple times to the same provider. I don't think it's a big problem: the consumer of this JSON blob should count unique provider IDs that have state == "active".
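A minimal consumer-side sketch of that counting rule, assuming each entry carries a `provider` field (an assumption; the field names are illustrative):

```go
package replicas

// ReplicaStatus is an assumed shape for one entry in the blob
// status response; the provider field is an assumption here.
type ReplicaStatus struct {
	Provider string `json:"provider"`
	State    string `json:"state"`
}

// countActiveReplicas counts unique providers holding an active
// deal, so a piece proposed twice to the same provider, or a
// stale proposal_expired row sitting next to an active one, is
// only counted once.
func countActiveReplicas(entries []ReplicaStatus) int {
	providers := make(map[string]struct{})
	for _, e := range entries {
		if e.State == "active" {
			providers[e.Provider] = struct{}{}
		}
	}
	return len(providers)
}
```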
See below for an example of what we're getting back for different "replica" statuses; in particular, note how the same piece with the same SP appears twice even though we don't have two active deals with them: one is `proposal_expired` and one is `proposed`.

Singularity deals with this by giving a unique integer ID to "deals", but we strip that out and just return `{ end_epoch, last_verified_at, piece_cid, state }`. As a consumer of this information externally, it's difficult to track replicas over time without a uniqueness property, or perhaps a guarantee that the array is fixed in order and will only ever append over time.

I'm thinking that just adding `"id": Integer` might be the way to address this. Is there a reason we wouldn't want to expose an internal ID sequence number? The Singularity table this comes out of looks like this:

[screenshot of the Singularity deals table]
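As a concrete sketch of that proposal, a response entry with the `id` exposed could look like the struct below; the Go types are assumptions, and only the `id` field is new relative to the current shape:

```go
package replicas

// ReplicaEntry is a hypothetical response shape with the proposed
// stable integer ID exposed alongside the existing four fields.
type ReplicaEntry struct {
	ID             int64  `json:"id"` // proposed: Singularity's internal deal ID
	EndEpoch       int64  `json:"end_epoch"`
	LastVerifiedAt string `json:"last_verified_at"`
	PieceCID       string `json:"piece_cid"`
	State          string `json:"state"`
}
```

With a stable `id`, a consumer can correlate entries across polls even when ordering changes or when duplicate proposals to the same provider show up.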