
return_progress as arg for run_query #601

Merged 41 commits into master on Nov 20, 2023

Conversation

burnout87
Collaborator

No description provided.

@burnout87 burnout87 linked an issue Oct 11, 2023 that may be closed by this pull request
@codecov

codecov bot commented Oct 11, 2023

Codecov Report

Attention: 9 lines in your changes are missing coverage. Please review.

Comparison is base (5b8c558) 60.96% compared to head (67d9b16) 61.21%.

Files Patch % Lines
cdci_data_analysis/analysis/queries.py 76.92% 6 Missing ⚠️
...sis/plugins/dummy_plugin/data_server_dispatcher.py 94.11% 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #601      +/-   ##
==========================================
+ Coverage   60.96%   61.21%   +0.25%     
==========================================
  Files          47       48       +1     
  Lines        8472     8548      +76     
==========================================
+ Hits         5165     5233      +68     
- Misses       3307     3315       +8     


@burnout87
Collaborator Author

An additional boolean argument, return_progress, is extracted from the run_analysis endpoint and passed as an argument to the instrument's run_query and, in turn, to the run_query of the requested query object.

In this way, each plugin can behave differently when return_progress is passed as True.
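
A minimal, self-contained sketch of the flow described above (the class and function names are illustrative assumptions, not the actual dispatcher code in cdci_data_analysis):

```python
# Hypothetical sketch of how return_progress could travel from the
# run_analysis endpoint down to the query object. Names and signatures
# are illustrative only.

class DummyQuery:
    def run_query(self, params, return_progress=False):
        if return_progress:
            # A plugin can choose to return intermediate/progress products.
            return {"status": "progress", "products": None}
        return {"status": "done", "products": ["final-product"]}


class DummyInstrument:
    def __init__(self):
        self.query = DummyQuery()

    def run_query(self, params, return_progress=False):
        # Forward the flag unchanged to the query object.
        return self.query.run_query(params, return_progress=return_progress)


def run_analysis(request_params):
    # Extract the boolean flag from the request, defaulting to False so
    # that existing requests keep their current behavior.
    return_progress = request_params.get("return_progress", "False") == "True"
    instrument = DummyInstrument()
    return instrument.run_query(request_params, return_progress=return_progress)


if __name__ == "__main__":
    print(run_analysis({"return_progress": "True"}))  # progress branch
    print(run_analysis({}))                           # existing behavior
```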

Is this what you had in mind @volodymyrss ?

@volodymyrss
Member

Something like that. We need to try the interface with the test plugin, and of course preserve the existing behavior.

@burnout87
Collaborator Author

burnout87 commented Oct 17, 2023

Adding the return_progress argument to the run_query and get_dummy_products functions of a data_server_query breaks the existing behavior of all the plugins.

EDIT: I am using a different approach.
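
For context, one generic pattern that avoids changing signatures that plugins already override is to carry the flag on the query object itself rather than as a new argument (a sketch of that idea only, not necessarily the approach adopted in this PR):

```python
# Sketch: carry the flag as an attribute instead of changing run_query's
# signature, so existing plugin overrides keep working unchanged.
# Class and attribute names are illustrative assumptions.

class ProductQuery:
    def __init__(self):
        # Default keeps the existing behavior for plugins that never set it.
        self.return_progress = False

    def run_query(self, params):
        # Signature unchanged: plugins that care can read self.return_progress.
        if self.return_progress:
            return {"status": "progress"}
        return {"status": "done"}


query = ProductQuery()
query.return_progress = True  # set by the dispatcher before dispatching
print(query.run_query({}))
```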

cdci_data_analysis/analysis/queries.py (Fixed)
cdci_data_analysis/analysis/queries.py (Fixed)
cdci_data_analysis/analysis/queries.py (Fixed)
cdci_data_analysis/analysis/queries.py (Fixed)
@burnout87
Collaborator Author

I keep seeing this error from the oda_api run:

tests/test_live.py::test_instruments - oda_api.api.UnexpectedDispatcherStatusCode: UnexpectedDispatcherStatusCode (line 98): status: 403, raw: The token provided is not valid.

I investigated a lot, but it is still completely unclear to me why it happens.

@burnout87
Collaborator Author

> I keep seeing this error from the oda_api run:
>
> tests/test_live.py::test_instruments - oda_api.api.UnexpectedDispatcherStatusCode: UnexpectedDispatcherStatusCode (line 98): status: 403, raw: The token provided is not valid.
>
> I investigated a lot, but it is still completely unclear to me why it happens.

Checking out a specific branch of oda_api (oda-hub/oda_api#224) fixed the problem, and I don't really get why, since the failing test was not one dealing with the new dummy instrument (actually, supposedly, no dummy instrument was touched by it).

@volodymyrss
Member

> I keep seeing this error from the oda_api run:
>
> tests/test_live.py::test_instruments - oda_api.api.UnexpectedDispatcherStatusCode: UnexpectedDispatcherStatusCode (line 98): status: 403, raw: The token provided is not valid.
>
> I investigated a lot, but it is still completely unclear to me why it happens.

Could you maybe add more logging to help debug this?

@burnout87
Collaborator Author

burnout87 commented Nov 9, 2023

> I keep seeing this error from the oda_api run:
> tests/test_live.py::test_instruments - oda_api.api.UnexpectedDispatcherStatusCode: UnexpectedDispatcherStatusCode (line 98): status: 403, raw: The token provided is not valid.
> I investigated a lot, but it is still completely unclear to me why it happens.
>
> Checking out a specific branch of oda_api (oda-hub/oda_api#224) fixed the problem, and I don't really get why, since the failing test was not one dealing with the new dummy instrument (actually, supposedly, no dummy instrument was touched by it).

I think I have it figured out, finally.

When running the oda_api tests, the first test failing is test_cli.py::test_get, and that is expected.

The other test to fail is test_live.py::test_instruments, which sends a request to the production instance with an invalid token.

This invalid token is discovered within the environment by oda_api and then sent to the production dispatcher, which obviously fails, since the token is signed with the test secret key.

The tests that run before test_instruments and test_get (tests within test_cli, in particular test_token_modify and test_token_inspect) leave a token in the local environment (an env var and the current directory) that is then used within the context of test_instruments, making it fail with an invalid token.

A dedicated PR will address this.

@dsavchenko
Member

But why has this error appeared only in this PR?

@volodymyrss
Member

> But why has this error appeared only in this PR?

Maybe the order of tests is changing? It's not guaranteed to always be the same.

@volodymyrss
Member

> > I keep seeing this error from the oda_api run:
> > tests/test_live.py::test_instruments - oda_api.api.UnexpectedDispatcherStatusCode: UnexpectedDispatcherStatusCode (line 98): status: 403, raw: The token provided is not valid.
> > I investigated a lot, but it is still completely unclear to me why it happens.
> >
> > Checking out a specific branch of oda_api (oda-hub/oda_api#224) fixed the problem, and I don't really get why, since the failing test was not one dealing with the new dummy instrument (actually, supposedly, no dummy instrument was touched by it).
>
> I think I have it figured out, finally.
>
> When running the oda_api tests, the first test failing is test_cli.py::test_get, and that is expected.
>
> The other test to fail is test_live.py::test_instruments, which sends a request to the production instance with an invalid token.
>
> This invalid token is discovered within the environment by oda_api and then sent to the production dispatcher, which obviously fails, since the token is signed with the test secret key.
>
> The tests that run before test_instruments and test_get (tests within test_cli, in particular test_token_modify and test_token_inspect) leave a token in the local environment (an env var and the current directory) that is then used within the context of test_instruments, making it fail with an invalid token.
>
> A dedicated PR will address this.

Interesting! Indeed, each test's environment variables should be strictly controlled, all set before the test.
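
A minimal sketch of that kind of per-test isolation with standard pytest fixtures (the environment variable name and the idea of a token file in the working directory are assumptions for illustration; the actual names oda_api looks for may differ):

```python
# Per-test environment isolation using standard pytest fixtures.
# "ODA_TOKEN" is an assumed variable name for illustration only.
import pytest


@pytest.fixture(autouse=True)
def clean_token_env(monkeypatch, tmp_path):
    # Drop any token left in the environment by a previous test.
    monkeypatch.delenv("ODA_TOKEN", raising=False)
    # Run the test from an empty temporary directory so a token file
    # written by another test cannot be picked up from the old cwd.
    monkeypatch.chdir(tmp_path)
    yield
```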

@burnout87
Collaborator Author

> But why has this error appeared only in this PR?

test_token_modify and test_token_inspect run before test_get and do not clean up the tokens they generate.

The test test_get does some token clean-up, but in this instance it fails (without completing, of course) because of the missing new dummy instrument introduced by this PR (as expected).

So the clean-up does not happen, and therefore test_instruments fails.

@burnout87
Collaborator Author

> Interesting! Indeed, each test's environment variables should be strictly controlled, all set before the test.

It took quite some debugging... and a bit of imagination.

@burnout87
Collaborator Author

> > But why has this error appeared only in this PR?
>
> Maybe the order of tests is changing? It's not guaranteed to always be the same.

Exactly, I discovered it because I assumed that order was followed... but only in this instance. It's not always like that.

@volodymyrss
Member

> > But why has this error appeared only in this PR?
>
> test_token_modify and test_token_inspect run before test_get and do not clean up the tokens they generate.
>
> The test test_get does some token clean-up, but in this instance it fails (without completing, of course) because of the missing new dummy instrument introduced by this PR (as expected).
>
> So the clean-up does not happen, and therefore test_instruments fails.

It's best to never rely on tests to clean up anything. Instead, a clean environment should be created before the test.

@volodymyrss
Member

> > But why has this error appeared only in this PR?
> >
> > Maybe the order of tests is changing? It's not guaranteed to always be the same.
>
> Exactly, I discovered it because I assumed that order was followed... but only in this instance. It's not always like that.

Tests can even be run in parallel, so there is no way to guarantee the order.

@volodymyrss
Member

volodymyrss left a comment

test fails

@burnout87
Collaborator Author

> test fails

The test test_cli.py::test_get from oda_api fails because an adaptation is needed to add the dummy instrument introduced here; with oda-hub/oda_api#224 this will be fixed (I have verified that the tests pass by checking out that specific oda_api branch).

@volodymyrss
Member

> > test fails
>
> The test test_cli.py::test_get from oda_api fails because an adaptation is needed to add the dummy instrument introduced here; with oda-hub/oda_api#224 this will be fixed (I have verified that the tests pass by checking out that specific oda_api branch).

Right, well, this PR will wait for the fix then.

@burnout87
Collaborator Author

tests passing now

@burnout87 burnout87 merged commit 5361991 into master Nov 20, 2023
17 checks passed

Successfully merging this pull request may close these issues.

add a parameter to run_analysis to return progress details
3 participants