if one test fails everything after fails #99
I'm assuming you are running this test with sairedis RPC. If so, the syncd application is used as a SAI driver. Often, in case of an issue with a SAI call, syncd simply exits, which causes subsequent TCs to fail. If you check the logs (/var/log/syslog), you will find the syncd exit message at the very end.
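A minimal sketch of pulling those last syncd lines for inspection, assuming a Linux DUT where syslog lands in /var/log/syslog; the "syncd" filter pattern is an assumption about the log format, not part of SAI-Challenger:

```python
import subprocess

# Pull the last syncd-related syslog lines after a failed run.
# The log path and the "syncd" filter are assumptions about the DUT.
result = subprocess.run(
    ["grep", "syncd", "/var/log/syslog"],
    capture_output=True, text=True, check=False,
)
for line in result.stdout.splitlines()[-20:]:
    print(line)
```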
There are two issues with
With these changes I was able to pass all 3 TCs.
Obviously, SAI-C checks should be extended to catch misconfigurations like this. I will try to address this in subsequent PRs.
As per the latest updates, the TC should be changed accordingly.
This issue is not about the get test case; it is to show that once one test fails, everything after it fails.
It's not just about the re-connection. In fact, the syncd app restarts, so the next TC should start from the switch-init phase again. An approach for dealing with this is already used in some TCs, e.g. SAI-Challenger/tests/test_l2_basic.py, line 14 at ba5a4b1.
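The pattern there is roughly a fixture that brings the switch back to a clean state before a group of tests runs. A hedged sketch of that idea, where the `npu` fixture name and its `reset()` method are assumptions rather than a quote of the linked line:

```python
import pytest

# Sketch of a per-module re-init fixture. The `npu` fixture and its
# `reset()` method are assumed names, not the actual SAI-Challenger API.
@pytest.fixture(scope="module", autouse=True)
def reinit_switch(npu):
    # Start from the switch-init phase again, so a syncd restart
    # caused by a failure in a previous module does not cascade here.
    npu.reset()
    yield
```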
I'll have to look into that.
In your example, the get operation caused the failure because of redundant parameters in the command.
That's why I'm saying that SAI-C checks should be extended to catch misconfigurations like this. The test should not fail because of the extra parameters, even if they are wrong.
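For illustration, a hypothetical test showing the kind of misconfiguration being discussed; the `npu.get` signature, the `do_assert` flag, and the attribute names are assumptions about the SAI-Challenger API, not its actual interface:

```python
def test_get_with_redundant_attr(npu):
    # A get with an extra, bogus attribute appended. Ideally SAI-C
    # validation rejects it gracefully instead of letting the SAI
    # call bring syncd down. All names below are assumed.
    switch_oid = "oid:0x21000000000000"  # placeholder switch OID
    status, _ = npu.get(
        switch_oid,
        ["SAI_SWITCH_ATTR_PORT_NUMBER", "SAI_SWITCH_ATTR_BOGUS"],
        do_assert=False,  # assumed flag: return status, don't assert
    )
    assert status != "SAI_STATUS_SUCCESS"  # expect a graceful error
```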
I'm not concerned about that for now. Let me give the fixture a try, see how it goes, and I'll come back with more feedback.
That's just how syncd works. So, in the case of Redis RPC, we simply cannot handle all possible scenarios in SAI-C itself.
When running saivs, I noticed that sometimes after a test fails, syncd gets into a bad state, anything after that fails, and I need to restart the containers. What would be the best place to raise a syncd issue? This may not be a SAI-Challenger issue; it looks more like a syncd reliability issue.
Please check #154. It should address this issue.
Closing as per comment above. |
Steps to reproduce:
- Run create/remove only: both pass.
- Run create/get/remove: since get fails, remove fails as well (see the sketch below).
- Run #94 all at once and notice that halfway through, the tests start failing.
- Run #94 one file at a time and notice that all pass.

The same behavior was observed with a real switch and in the CI pipeline with saivs.
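To make the cascade concrete, a hypothetical pytest module shaped like the steps above; the helper names are invented for illustration and the real #94 tests differ:

```python
# Illustrative shape of the repro; all helper names are assumed.
# The point: test_remove is correct on its own, but fails after
# test_get because the failing get brings syncd down.
def test_create(npu):
    npu.create_vlan(100)     # assumed helper; passes on its own

def test_get(npu):
    npu.get_vlan_attrs(100)  # the failing get crashes syncd...

def test_remove(npu):
    npu.remove_vlan(100)     # ...so this fails too, even though
                             # create/remove alone both pass
```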