Replies: 3 comments 1 reply
-
Hi @fredro84, any update?
-
You can use the pg_plan_filter module to limit query cost; it is included in Supabase.
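The original code sample appears to have been lost in extraction. A minimal sketch of what a pg_plan_filter setup typically looks like, assuming the standard `plan_filter.statement_cost_limit` setting and Supabase's `authenticated` role (the cost value is an arbitrary placeholder you would tune to your workload):

```sql
-- Reject any statement whose planner cost estimate exceeds the limit.
-- 1e5 is a placeholder; profile your real queries to pick a ceiling.
ALTER ROLE authenticated SET plan_filter.statement_cost_limit = 100000;
```

Queries over the limit fail with an error instead of running, so this acts as a guard against accidentally expensive plans rather than a runtime timeout.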
-
I just ran into this issue as well, using the GraphQL API. I'm building a chat interface, and right now the maximum number of rows returned for my messages is 30, even though I have

EDIT: I just answered my own question! This needs to be set at the pg_graphql level: https://supabase.github.io/pg_graphql/configuration/#max-rows
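For anyone landing here: per the linked configuration page, pg_graphql reads its settings from comment directives. A sketch of raising the row cap on the `public` schema (100 is an arbitrary example value):

```sql
-- Raise pg_graphql's default per-query row limit via a schema comment directive.
comment on schema public is e'@graphql({"max_rows": 100})';
```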
-
Recently I have been migrating all of my GraphQL queries from a custom PostGraphile API to the Supabase GraphQL API. During the migration I was investigating which safety measures I can implement against large (accidental) queries. I found that the maximum number of rows returned is 30 and that totalCount is disabled by default, but I can't find whether the maximum query depth of a GraphQL query can be configured. The row limit means 30 rows at each level of the query, so if a query can go 20 levels deep, the response will still be huge.
I found the default statement timeout settings for users: 3 s (anon), 8 s (authenticated). But an authenticated user can probably use up quite a lot of resources in 8 seconds if I don't limit the query depth.
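For reference, those role-level timeouts can be tightened with plain `ALTER ROLE`; a sketch assuming Supabase's standard `anon` and `authenticated` roles (the values shown are just the defaults quoted above):

```sql
-- Per-role statement timeouts; lower these to cap worst-case query time.
alter role anon set statement_timeout = '3s';
alter role authenticated set statement_timeout = '8s';
```

This caps wall-clock time but not response size, so it complements rather than replaces a depth limit.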
Does anyone know what a good approach is here?