Add possibility to pass engine on table reflection #875
base: master
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##           master     #875   +/-   ##
=======================================
  Coverage   92.11%   92.11%
=======================================
  Files         107      107
  Lines        8105     8106    +1
=======================================
+ Hits         7466     7467    +1
  Misses        639      639
```
@ArtsiomAntropau Thanks for the PR. I don't know if this was intended, but I can see the value of this feature if you don't use […]

```python
# piccolo_conf.py in src folder
from piccolo.engine import PostgresEngine

from config import CONFIG

DB = PostgresEngine(config={"dsn": CONFIG.dsn_by_name["local"]})
```
```python
# app.py in src folder
import asyncio

from blacksheep import Application
from blacksheep.plugins import json
from orjson import dumps, loads
from piccolo.engine import PostgresEngine
from piccolo.table_reflection import TableStorage
from piccolo_api.crud.endpoints import PiccoloCRUD
from piccolo_api.rate_limiting.middleware import (
    InMemoryLimitProvider,
    RateLimitingMiddleware,
)

from config import CONFIG
from piccolo_conf import DB

__all__ = ("build_app",)

json.use(
    loads=loads,
    dumps=lambda *args, **kwargs: dumps(*args, **kwargs).decode("utf-8"),
    pretty_dumps=lambda *args, **kwargs: dumps(*args, **kwargs).decode(
        "utf-8"
    ),
)


def _add_db_pool(app: Application, engine: PostgresEngine) -> None:
    app.on_start(
        lambda *_, **__: engine.start_connection_pool(
            min_size=CONFIG.db_pool_min_size, max_size=CONFIG.db_pool_max_size
        )
    )
    app.on_stop(lambda *_, **__: engine.close_connection_pool())


def _wrap_rate_limit(app: Application) -> RateLimitingMiddleware:
    return RateLimitingMiddleware(
        app,
        provider=InMemoryLimitProvider(
            limit=CONFIG.rate_limiting_limit,
            timespan=CONFIG.rate_limiting_timespan,
            block_duration=CONFIG.rate_limiting_block_duration,
        ),
    )


def build_app() -> Application:
    app = Application()
    for dsn_name, dsn in CONFIG.dsn_by_name.items():
        dsn_app = Application()
        _add_db_pool(dsn_app, DB)
        dsn_tables_store = TableStorage()
        asyncio.run(dsn_tables_store.reflect())
        for dsn_table_name, dsn_table in dsn_tables_store.tables.items():
            dsn_table._meta.db = DB
            # Mount each reflected table's CRUD endpoints on the per-DSN
            # sub-application, so they sit behind its pool and rate limiter.
            dsn_app.mount(
                f"/{dsn_table_name}",
                PiccoloCRUD(
                    table=dsn_table, read_only=False, allow_bulk_delete=True
                ),
            )
        app.mount(f"/{dsn_name}", _wrap_rate_limit(dsn_app))
    return app
```

Sorry if I missed your point.
@sinisaos So, I see several reasons […]

I hope we're not Django, forcing folks to do that.