Json to recordset #2542
Conversation
@steve-chavez we should certainly wait for the next minor release before merging this or #2523, to reduce the risk of breaking things and to first get to a release which can be upgraded to from previous major versions.
Regarding the code coverage on …
I'd say: drop everything we don't need. Let's keep it at the minimum, to have better coverage reports. Then add those instances later, once you need them.
Actually, it's not. The record field accessor … You'll get reports for lines that were uncovered before your changes, too, so some of those codecov warnings can't really be avoided.
Would it be possible to fall back to …? For example, this could happen if a new table is added and the schema cache is not refreshed. We remained operational in these cases before; what would happen now?
The new behaviour is that you get an error when trying to insert data into a table or column we believe does not exist. Personally I think that's actually an improvement over the old behaviour: clear and immediate messaging for a situation that is more likely to be an error than not. How often do you expect to make API calls to tables so fresh off the presses that they're not even in the cache? The only time I can imagine is if you're live-developing your schema. In that case, can't you do …? What's the Venn diagram between people live-editing their schema yet unwilling to …?
All that said, happy to change it. I'm really excited about this feature and I think it'll be a great addition to PostgREST, so I'm keen to get it merged. We're going to be using this in probably every single API on our end.
I absolutely agree. I think we should strive to become more strict. Hopefully this will also lead people to learn about reloading the schema cache much earlier, and not only when they try to make sense of a rather complicated topic like embeddings. Making a basic request, seeing it fail, reloading the schema cache and then seeing it pass should give users (and us) much more confidence that they can at least successfully reload that cache.
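For reference, a minimal sketch of the schema cache reload mentioned above, assuming PostgREST's default `db-channel` of `pgrst`:

```sql
-- Ask the connected PostgREST instance(s) to re-read the schema cache.
NOTIFY pgrst, 'reload schema';
```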
Yeah, same here - just too much on my plate to keep reviewing immediately. But I see use cases for it in all my projects, too :)
Cool, I agree that becoming more strict is good overall. Not sure if stuffing schema cache info on the … How about we add a special header on the endpoints to get their schema cache entry (discussed before on #1421)?

```
GET /projects
Accept: application/vnd.pgrst.schema-cache

{
  "columns": [".."],
  "relationships": [".."]
}
```

The above can be done in another PR, but I'd consider it a must for a stable release after this feature is merged.
Having a per-relation info API would be a nice addition. As you rightfully point out in the referenced PR, the schema cache is already visible through the root endpoint, so perhaps such an API would be more of a nice-to-have? Maybe there's something we can do there to present more information about the available data representations for each column. I haven't fully thought that through yet, but if we add support for CSV and binary data representations down the line, it might be nice to see for which columns custom formatters and/or parsers are available, and which ones will rely on the base type's native PostgreSQL casting.
How about making a request to the root endpoint with a different Accept header, i.e. the one you mentioned, and then returning the schema cache instead of the OpenAPI output? Filtering the root endpoint could then be done the same way for both.
Yeah, that could be another option. I've branched off the discussion to #1421 (comment) to not make this thread longer.
Could you please do the following to make it easier for me: … I will then have another look. Thanks!
The essence of the error is the same, but the error message now has a PostgREST error code and indeed happens without hitting the database.
I have most of my suggested changes locally already. If you want to save some effort, you can check the "allow edits of maintainers" box (not sure what the name is exactly). Then I can push those to your branch.
I believe all feedback up to this point has been addressed.
This returns an error for trying to update or insert into invalid columns, without hitting the database. This change also switches these operations from `json_populate_recordset` to `json_to_recordset`, which should make no functional difference except allowing future flexibility.

- New: store columns in a map; grab true column types.
- New: `json_populate_recordset` -> `json_to_recordset`. This lays the groundwork for data representations, but should make no functional difference at this stage, except that we now have an explicit error for trying to mutate tables or columns that don't exist (according to the schema cache).
- New: test missing column errors.
- Drop unused `Ord` to increase test coverage. The `ColumnMap` `Ord` was only there because `Table` derived `Ord`. It doesn't seem like that's used anywhere for `Table` to begin with, so both were dropped.
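For context, a minimal SQL sketch of the difference between the two functions, using a hypothetical `projects` table (not the PR's actual generated queries):

```sql
-- json_populate_recordset infers its columns from an existing composite
-- type, here the row type of an assumed "projects" table:
SELECT * FROM json_populate_recordset(null::projects,
  '[{"id": 1, "name": "first"}]');

-- json_to_recordset instead takes an explicit column definition list,
-- so the caller controls each column's name and type:
SELECT * FROM json_to_recordset('[{"id": 1, "name": "first"}]')
  AS t(id int, name text);
```

The explicit definition list is what allows the future flexibility mentioned above: each column's type can be chosen independently of the table's row type.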
This led to unnecessary type conversions.
It was only used to put `TypedField` into sets, which we no longer do after the previous commit.
This PR is now rebased on the latest main.
Co-authored-by: Steve Chavez <[email protected]>
LGTM! The error message is now as clear as the one from PostgreSQL!
@wolfgangwalther Do you have more input on this PR? I think it's looking good to merge.
Don't have the time right now to dive into it, sorry. It's on my list - as well as a lot of other things ;). Feel free to merge if you think it's good.
Oh, no problem!
I hesitate because I'm a bit lost on #2523 (which, IIUC, is the main goal of the PR), so I'll let you review and merge this one.
Anything I can do to help explain or detail it for you? If you'd like, I could discuss it over a video call.
@aljungberg Sorry for the delay! Let's move forward with this one - we'll also need it for #2594, so I've just merged it. (I'll start reviewing #2523 and I'll let you know any doubts I have there.)
Switches from `json_populate_recordset` to `json_to_recordset`, which we will need for data representations. See #2523 for details.

The impossible panic we discussed in #2523 has been eliminated by creating a `TypedField` type which doesn't allow that situation.
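A hedged illustration of why pairing each field with its type matters (a sketch with an assumed `projects` table, not the actual SQL PostgREST generates): `json_to_recordset` requires a type for every column in its `AS` definition list, so a field whose type was never resolved simply cannot be rendered, and carrying the name and type together removes that failure mode by construction.

```sql
-- Every column in the AS clause must carry a type; a column name without
-- a resolved type could not appear here at all.
INSERT INTO projects (id, name)
SELECT id, name
FROM json_to_recordset('[{"id": 1, "name": "first"}]')
  AS t(id int, name text);
```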