feat(search): Adds logic to download search results #4893
Conversation
Force-pushed 535f37a to e8f6fd3
Force-pushed b3ba8d5 to 8721c41
Semgrep found 3 findings from the `avoid-pickle` rule: avoid using `pickle`. Semgrep found 1 finding: detected direct use of jinja2. If not done properly, this may bypass HTML escaping, which opens up the application to cross-site scripting (XSS) vulnerabilities. Prefer using the Flask method `render_template()` and templates with a `.html` extension in order to prevent XSS.
Force-pushed 8721c41 to ae29bba
This commit refactors the search module by moving helper functions from `view.py` to `search_utils.py`. This improves code organization and makes these helper functions reusable across different modules.
Force-pushed ae29bba to 92cddf5
I gave this a once-over and it feels about right. Concerns I'll highlight for you all to consider:
- Memory: We're putting the CSV in memory, which sure is handy. I think this is fine b/c it'll be pretty small (a couple hundred KB, right?). This must be fine, but it's on my mind.
- The fields in the result might be annoying, with columns that aren't normalized to human values (like SOURCE: CR or something, and local_path: /recap/gov.xxxx.pdf instead of https://storage.courtlistener.com/recap/gov.xxx.pdf). I didn't see code to fix that, but it's probably something we should do if we can. This CSV is supposed to be for humans, in theory.
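The column-normalization concern above could be handled with a small mapping step applied to each row before it's written to the CSV. Here's a minimal sketch of that idea; note that the source-code mapping, the storage URL prefix, and the `humanize_row` helper are all hypothetical illustrations, not code from this PR:

```python
# Hypothetical sketch: normalize raw result fields into human-readable
# CSV values. The label mapping and URL prefix below are assumptions
# made for illustration only.
SOURCE_LABELS = {
    "CR": "Court website",  # assumed meaning of the "CR" code
}
STORAGE_PREFIX = "https://storage.courtlistener.com/"  # assumed prefix

def humanize_row(row: dict) -> dict:
    """Return a copy of `row` with machine codes replaced by human values."""
    out = dict(row)
    if "source" in out:
        # Fall back to the raw code if we don't have a label for it.
        out["source"] = SOURCE_LABELS.get(out["source"], out["source"])
    if out.get("local_path"):
        # Turn a relative storage path into a clickable absolute URL.
        out["local_path"] = STORAGE_PREFIX + out["local_path"].lstrip("/")
    return out
```

Running each result through a helper like this before serialization would keep the export readable without touching the underlying search code.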
I appreciate the refactor, but I'd suggest doing it in a separate PR in the future, so it's not mixed in.
But this looks about right to me otherwise. :)
This PR implements the backend logic for exporting search results (#599).
Key changes:
- Introduces a new rate limiter to throttle CSV export requests to 5 per day.
- Adds a new setting named `MAX_SEARCH_RESULTS_EXPORTED` (default: 250) to control the maximum number of rows included in the generated CSV file.
- Refactors the `view.py` file within the search module. Helper functions related to fetching Elasticsearch results have been moved to `search_utils.py` for better organization and clarity.
- Introduces two new helper functions, `fetch_es_results_for_csv` and `get_headers_for_search_export`.
- Adds a new task that takes the `user_id` and the `query` string as input. It then sends an email with a CSV file containing at most `MAX_SEARCH_RESULTS_EXPORTED` rows.
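To make the CSV-building step of the task described above concrete, here is a minimal sketch of an in-memory CSV serializer capped at the export limit. The function name `build_csv` and its signature are assumptions for illustration; only the `MAX_SEARCH_RESULTS_EXPORTED` default of 250 comes from the PR description:

```python
import csv
import io

# Default from the PR description; the real value comes from settings.
MAX_SEARCH_RESULTS_EXPORTED = 250

def build_csv(headers: list[str], rows: list[dict]) -> str:
    """Serialize result rows to a CSV string, capped at the export limit.

    Hypothetical sketch: the PR's task would fetch rows for the user's
    query, build the CSV in memory like this, and email it as an
    attachment.
    """
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=headers, extrasaction="ignore")
    writer.writeheader()
    for row in rows[:MAX_SEARCH_RESULTS_EXPORTED]:
        writer.writerow(row)
    return buffer.getvalue()
```

Building the whole file in a `StringIO` buffer matches the reviewer's "CSV in memory" observation: at a few hundred capped rows, the buffer stays in the hundreds-of-KB range at most.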