
CBL-6156: Support Inner Unnest Query in JSON #2131

Merged 1 commit from cbl-6156 into release/3.2 on Sep 9, 2024
Conversation

jianminzhao (Contributor)
Added tests for multiple indexes, multi-leveled index, index with N1QL expression.
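As a hypothetical illustration of the feature this PR tests, an inner UNNEST over an array property might look like the following in LiteCore's JSON query syntax. The collection name, alias names, and property paths here are invented for the example, not taken from the PR:

```json
{
  "WHAT": [[".interest"]],
  "FROM": [
    {"COLLECTION": "profiles", "AS": "profile"},
    {"AS": "interest", "UNNEST": [".profile.interests"]}
  ],
  "WHERE": ["=", [".interest"], "travel"]
}
```

With a matching array index on `interests`, a query of this shape can be answered from the index rather than by scanning each document's array.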

cbl-bot commented Sep 5, 2024

Code Coverage Results:

Type            Percentage
branches        67.81
functions       79.07
instantiations  35.12
lines           79.10
regions         75.03

@jianminzhao jianminzhao merged commit 65039a6 into release/3.2 Sep 9, 2024
9 checks passed
@jianminzhao jianminzhao deleted the cbl-6156 branch September 9, 2024 17:32
callumbirks added a commit that referenced this pull request Sep 11, 2024
commit 65039a6
Author: jianminzhao <[email protected]>
Date:   Mon Sep 9 10:32:29 2024 -0700

    CBL-6156: Support Inner Unnest Query in JSON (#2131)

    Added tests for multiple indexes, multi-leveled index, index with N1QL expression.

commit 163b832
Author: Pasin Suriyentrakorn <[email protected]>
Date:   Tue Aug 27 22:23:06 2024 -0700

    CBL-6193 : Fix address conversion for request when using proxy (#2127)

    * Fixed a crash when converting an address (Address) to a C4Address by using the cast operator instead of casting the pointer, which is no longer valid due to the change in private member variables.

    * When creating an address object, use the path rather than the full URL for the path field.
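The fix above can be sketched with simplified stand-in types (the real Address/C4Address have more fields; the members and constructor here are illustrative assumptions). A conversion operator builds a valid C4Address from the private members, whereas a pointer cast silently breaks when the member layout changes:

```cpp
#include <cstdint>
#include <string>

// Hypothetical simplified version of the C struct.
struct C4Address {
    std::string scheme, hostname, path;
    uint16_t    port;
};

// Hypothetical simplified version of the C++ wrapper class.
class Address {
public:
    Address(std::string scheme, std::string host, uint16_t port, std::string path)
        : _scheme(std::move(scheme)), _host(std::move(host)),
          _port(port), _path(std::move(path)) {}

    // Conversion operator: constructs a C4Address from the private members.
    // This stays correct even if the private layout changes, unlike
    // `reinterpret_cast<C4Address*>(this)`, which depends on the layout.
    operator C4Address() const {
        return C4Address{_scheme, _host, _path, _port};
    }

private:
    std::string _scheme, _host;
    uint16_t    _port;
    std::string _path;
};
```

The conversion operator copies field by field, so it is immune to reordering or adding private members, which is exactly the failure mode the commit describes.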

commit 0b30e0c
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 16:03:59 2024 -0700

    CBL-6120: Provide an option to enable full sync in the database (#2113)

    Added a flag to C4DatabaseFlags, kC4DB_DiskSyncFull. This flag is passed down to DataFile::Options, which is consulted when a connection to the SQLite database is requested.
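A plausible sketch of how such a flag could flow from the public API down to the SQLite connection. The flag name kC4DB_DiskSyncFull is from the commit, but its numeric value, the Options struct, and the mapping to SQLite's `PRAGMA synchronous` levels are assumptions for illustration:

```cpp
#include <cstdint>
#include <string>

// Placeholder flag values; only the kC4DB_DiskSyncFull *name* comes from the commit.
enum C4DatabaseFlags : uint32_t {
    kC4DB_Create       = 0x01,
    kC4DB_DiskSyncFull = 0x40,
};

// Hypothetical stand-in for DataFile::Options.
struct Options {
    bool diskSyncFull = false;
};

// Translate the public database flag into the DataFile option.
Options makeOptions(uint32_t flags) {
    Options opts;
    opts.diskSyncFull = (flags & kC4DB_DiskSyncFull) != 0;
    return opts;
}

// When opening the SQLite connection, the option would select the
// corresponding durability level (FULL vs. NORMAL are real SQLite settings).
std::string synchronousPragma(const Options &opts) {
    return std::string("PRAGMA synchronous=") + (opts.diskSyncFull ? "FULL" : "NORMAL");
}
```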

commit 2bdccb1
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 08:56:04 2024 -0700

    CBL-6100: Flaky test "REST root level" (#2110)

    Allow the HTTP request to time out several times before failing. For now, up to 4 timeouts are allowed, each 5 seconds long.
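The retry policy above can be sketched as follows; the function and constant names are illustrative, not LiteCore's actual identifiers:

```cpp
#include <functional>

// Retry a request on timeout, up to kMaxTimeouts attempts of kTimeoutSecs each.
constexpr int kMaxTimeouts = 4;
constexpr int kTimeoutSecs = 5;   // each attempt's deadline (applied inside sendOnce)

enum class Result { Success, Timeout, Error };

// `sendOnce` performs one request attempt with a kTimeoutSecs deadline.
Result sendWithRetries(const std::function<Result()> &sendOnce) {
    for (int attempt = 0; attempt < kMaxTimeouts; ++attempt) {
        Result r = sendOnce();
        if (r != Result::Timeout)
            return r;                 // success or a hard error: stop retrying
    }
    return Result::Timeout;           // timed out kMaxTimeouts times in a row
}
```

Only timeouts are retried; a definite success or error returns immediately, so the worst case is kMaxTimeouts × kTimeoutSecs = 20 seconds.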

commit 0e613ae
Author: jianminzhao <[email protected]>
Date:   Tue Jul 30 08:55:35 2024 -0700

    CBL-6104: Flaky test, "Multiple Collections Incremental Revisions" (#2112)

    The test occasionally failed because we assumed that every successive revision gets replicated to the destination, whereas the replicator may skip an obsolete revision, say rev-2 when rev-3 already exists, by the time the pusher looks for revisions to push. In our test case, we create successive revisions at intervals of 500 milliseconds. Most of the time, 500 ms is enough to set them apart when the pusher picks up revisions, but on the Jenkins machine the log shows that obsolete revisions were found when the test failed.

    We modified the test's success criteria: we now check only that the latest revisions are replicated to the destination, which is the designed behavior.

commit fb01661
Author: jianminzhao <[email protected]>
Date:   Mon Jul 29 16:43:14 2024 -0700

    CBL-6099: Test "Rapid Restarts" failing frequently on Linux (#2111)

    After the replicator is stopped, it may still take some time to wind down its objects. Our test currently allows 2 seconds for this, which turns out to be insufficient when errors like the following occur:

    Sync ERROR Obj=/Repl#21/revfinder#26/ Got LiteCore error: LiteCore NotOpen, "database not open"

    This error is artificial: it occurs because we close the database as soon as the replicator reaches the stopped state, rather than when the replicator is deleted. Each error propagates as an exception to the top frame, which takes substantial time when there are many of them.

    I increased the 2-second allowance to 20 seconds. We won't usually wait the full 20 seconds, because the waiter polls the condition every 50 ms; only an actual memory leak will wait the whole 20 seconds before failing.
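The polling waiter described above can be sketched like this; the function name and signature are illustrative, not the test harness's actual API:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll `condition` every `pollInterval` (50 ms in the test), giving up after
// `timeout` (now 20 s). Returns true as soon as the condition holds, so the
// full timeout is only consumed when the condition never becomes true,
// e.g. when objects actually leaked.
bool waitForCondition(const std::function<bool()> &condition,
                      std::chrono::milliseconds timeout,
                      std::chrono::milliseconds pollInterval
                          = std::chrono::milliseconds(50)) {
    auto deadline = std::chrono::steady_clock::now() + timeout;
    while (!condition()) {
        if (std::chrono::steady_clock::now() >= deadline)
            return false;             // waited the entire timeout; report failure
        std::this_thread::sleep_for(pollInterval);
    }
    return true;                      // condition became true before the deadline
}
```

Because the loop returns on the first successful poll, raising the timeout from 2 s to 20 s only slows down genuinely failing runs, not passing ones.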