Comprehensive Test Suite for Azure Cosmos DB Change Feed Processor: All Versions and Deletes
This test suite is designed to rigorously evaluate the behavior of the Azure Cosmos DB Change Feed Processor in All Versions and Deletes mode. This mode ensures that every document modification—including inserts, updates, and deletes—is captured and processed, providing a full record of changes for applications that require version tracking, audit logging, or compliance guarantees.
This section outlines the approach for testing the Change Feed Processor (CFP) All Versions and Deletes scenarios. It covers the possible relocation of the test repositories, the creation of Continuous Test Library (CTL) environments, and the strategy for choosing between manual and automated gated check-ins through pipeline runs.
-
Repository Setup:
- At the time of writing this document, all test scenarios are stored in separate dedicated repositories, which are intended to be relocated to a centralized location for easier access by multiple teams (e.g., Java, .NET, Compute Gateway, Backend, etc.).
- The repository should follow a standardized directory structure that separates the test cases for the .NET SDK, the Java SDK, and the edge-case scenarios.
- Detailed documentation of each test scenario and instructions for running them must be maintained in the repository.
-
Environment Creation:
- Dedicated Continuous Test Library (CTL) environments will be created for testing. These environments will replicate production-like conditions to ensure accurate results.
- Each test environment will consist of:
- A Cosmos DB instance configured for All Versions and Deletes mode.
- Containers configured with partition key setups required for split and non-split scenarios.
- Lease containers for Change Feed Processor.
-
Environment Isolation:
- To avoid conflicts, separate environments will be provisioned for .NET and Java SDK tests.
- Resources, including Cosmos DB databases and containers, will be cleaned up after each test run to ensure no residual data affects future runs.
-
Environment Scaling:
- Environments will be dynamically scaled based on the requirements of each test. For instance, edge cases that involve heavy throughput or hot partitions will require higher RU/s configurations.
- CTL environments should allow for on-demand scaling during tests to handle scenarios like failover or massive document deletions.
-
Automated Pipeline:
- Automated pipelines will be integrated into the Continuous Integration/Continuous Deployment (CI/CD) workflow. These pipelines will automatically trigger the execution of test scenarios as part of the gated check-in process.
- Key test scenarios will be executed automatically with every check-in to ensure no regression issues. These tests will cover:
- Basic CRUD operations.
- Partition splits.
- Critical edge cases like Cross-Region Failover or Large Document Updates.
-
Gated Check-In:
- A gated check-in pipeline will be enforced to prevent unstable code from being merged into the main branch. All automated tests must pass before code is allowed to proceed through the pipeline.
- The system will prevent code from being merged unless all automated tests complete successfully and no errors or data discrepancies are detected.
-
Manual Test Runs:
- Certain edge cases or performance-heavy scenarios, such as Hot Partition Handling or Massive Scale-up of Throughput, may be manually triggered to avoid unnecessarily running these in every pipeline.
- Manual runs will be invoked for scenarios that are resource-intensive and do not need to be executed with every pipeline trigger.
-
Test Reporting:
- Automated tests will generate reports that are reviewed post-run. Any failed tests will trigger notifications to the team, requiring immediate investigation and resolution.
- Reports from manual runs will be logged and documented, ensuring that edge cases are revisited for accuracy.
The test strategy aims to create a comprehensive, controlled environment where Cosmos DB Change Feed Processor test scenarios can be run reliably. By combining manual and automated testing approaches, we ensure that all critical functionality is covered in a scalable, efficient, and maintainable manner.
The suite includes a diverse set of test scenarios to verify the robustness and reliability of the Change Feed Processor under various conditions. These scenarios are split into two categories:
-
Non-Split Scenarios: Tests the behavior of the Change Feed Processor before any partition splits occur in the Cosmos DB container.
-
Split Scenarios: Focuses on validating how the Change Feed Processor behaves when containers undergo partition splits, ensuring no changes are lost or duplicated during and after splits.
-
Out of Scope:
-
Merge Scenarios: This test suite does not include scenarios involving partition merges or document merge operations; the focus is solely on the Change Feed Processor's behavior in All Versions and Deletes mode and the other defined test cases. Partition merges, and the behavior of Cosmos DB when merging data across partitions, may require a separate test suite.
-
Spark Connector: Scenarios involving the Spark connector for Cosmos DB are not covered in this test suite. Any tests related to integrating Cosmos DB with Apache Spark, or the behavior of the Spark connector in relation to the Change Feed, are outside the current scope.
-
The test suite also covers a range of edge cases designed to push the boundaries of the Change Feed Processor, including:
- Cross-Region Latency and Failover: Tests how the processor behaves during failovers in a multi-region setup, ensuring no data is lost.
- Massive Scale-up of Throughput: Validates the processor's ability to handle rapid scaling of throughput without missing changes.
- Document TTL Expiration: Ensures that documents deleted due to TTL expiration are properly captured in the change feed.
- Handling Malformed Documents: Tests the processor’s response to corrupt or malformed documents, ensuring valid changes are still processed.
- Heavy Read-Write Conflict Handling: Evaluates how the processor handles simultaneous read-write conflicts on the same document.
- Partial Partition Key Updates: Verifies the processor’s ability to handle partial updates to documents with a partition key.
- Massive Document Deletion Test: Tests how the processor handles large-scale deletion of documents in bulk.
- Schema Evolution Handling: Ensures that the processor can process documents when their schema evolves over time.
Each scenario and edge case is designed to ensure that the Change Feed Processor operates reliably, with no data loss, even in complex or high-stress conditions.
-
Simple Non-Split - .NET
- Completed.
-
Simple Non-Split - Java
- Assigned to the Java SDK Team for implementation.
-
Simple Split - .NET
- Completed.
-
Simple Split - Java
- Assigned to the Java SDK Team for implementation.
-
Large Document Updates with Incremental Throughput Increase - .NET
- Completed.
-
Large Document Updates with Incremental Throughput Increase - Java
- Assigned to the Java SDK Team for implementation.
-
Single Split Container with v1 and v2 Lease Documents - .NET
- Completed.
-
Single Split Container with v1 and v2 Lease Documents - Java
- Assigned to the Java SDK Team for implementation.
-
Change Feed Estimator Discrepancy - .NET
- Completed.
-
Change Feed Estimator Discrepancy - Java
- Assigned to the Java SDK Team for implementation.
-
Split Edge Cases
- Expired Lease Handling
- Hot Partition Handling
- Multiple Change Feed Processors Running Simultaneously
- Cross-Region Latency and Failover
- Massive Scale-up of Throughput
- Document TTL Expiration and Change Feed Impact
- Handling Malformed Documents or Corrupt Data
- Heavy Read-Write Conflict Handling
- Partial Partition Key Updates
- Massive Document Deletion Test
- Schema Evolution Handling
- All Documents Retrieved with Splits and No Empty Pages
- Read Feed and Change Feed Type Correctly Identified
- All Bulk Ingested Documents Can Be Retrieved
- All Transactional Batch Ingested Documents Can Be Retrieved
- Validate Full Fidelity Change Feed Query
Simple Non-Split - .NET
Description:
- This test scenario validates the behavior of the Change Feed Processor with All Versions and Deletes in the .NET SDK before any partition splits occur.
SDK:
- .NET
Preconditions:
- No partition splits should have occurred in the Azure Cosmos DB container.
- Change Feed Processor with All Versions and Deletes must be enabled.
Steps:
- Initialize the Change Feed Processor in .NET for a non-split container.
- Insert a document, update it multiple times, and delete it.
- Verify that all versions and the delete action are processed by the Change Feed Processor.
Expected Results:
- All document versions and the delete operation should be captured and processed.
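The expected feed contents for the steps above can be sketched with a small in-memory model (no SDK calls; `apply` and the record fields are hypothetical stand-ins for what the all versions and deletes feed surfaces):

```python
# Minimal in-memory model of the expected change feed output for the
# non-split scenario: every insert, update, and delete must appear as a
# distinct change record, in order. The real test runs against a live
# container via the Cosmos DB SDK.

changes = []  # simulated "all versions and deletes" feed

def apply(operation, doc_id, body=None):
    """Record one write operation the way the change feed would surface it."""
    changes.append({"operationType": operation, "id": doc_id, "current": body})

# Insert a document, update it twice, then delete it.
apply("create", "doc1", {"id": "doc1", "value": 0})
apply("replace", "doc1", {"id": "doc1", "value": 1})
apply("replace", "doc1", {"id": "doc1", "value": 2})
apply("delete", "doc1")

# Mirrors the scenario's expected result: all versions plus the delete.
ops = [c["operationType"] for c in changes]
assert ops == ["create", "replace", "replace", "delete"]
```

The same assertion shape applies to the Java variant of this scenario.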
Simple Non-Split - Java
Description:
- This test scenario validates the behavior of the Change Feed Processor with All Versions and Deletes in the Java SDK before any partition splits occur. Assigned to the Java SDK Team for implementation.
SDK:
- Java
Preconditions:
- No partition splits should have occurred in the Azure Cosmos DB container.
- Change Feed Processor with All Versions and Deletes must be enabled.
Steps:
- Initialize the Change Feed Processor in Java for a non-split container.
- Insert a document, update it multiple times, and delete it.
- Verify that all versions and the delete action are processed by the Change Feed Processor.
Expected Results:
- All document versions and the delete operation should be captured and processed.
Simple Split - .NET
Description:
- This test scenario validates the behavior of the Change Feed Processor with All Versions and Deletes in the .NET SDK after a partition split has occurred.
SDK:
- .NET
Preconditions:
- A partition split must have occurred in the Azure Cosmos DB container.
- Change Feed Processor with All Versions and Deletes must be enabled.
Steps:
- Initialize the Change Feed Processor in .NET for a split container.
- Insert a document, update it multiple times, and delete it.
- Verify that all versions and the delete action are processed by the Change Feed Processor.
Expected Results:
- All document versions and the delete operation should be captured and processed after the partition split.
Simple Split - Java
Description:
- This test scenario validates the behavior of the Change Feed Processor with All Versions and Deletes in the Java SDK after a partition split has occurred. Assigned to the Java SDK Team for implementation.
SDK:
- Java
Preconditions:
- A partition split must have occurred in the Azure Cosmos DB container.
- Change Feed Processor with All Versions and Deletes must be enabled.
Steps:
- Initialize the Change Feed Processor in Java for a split container.
- Insert a document, update it multiple times, and delete it.
- Verify that all versions and the delete action are processed by the Change Feed Processor.
Expected Results:
- All document versions and the delete operation should be captured and processed after the partition split.
Large Document Updates with Incremental Throughput Increase
Description:
- This edge case tests the behavior of the Change Feed Processor with large documents as throughput is increased incrementally by 10,000 Request Units per second (RU/s). It validates the Change Feed Processor’s ability to handle large documents during creates, updates, and deletes while the partitioned container undergoes increased load capacity.
Preconditions:
- Change Feed Processor with All Versions and Deletes must be enabled.
- The container must start with 10,000 RU/s and be incremented by 10,000 RU/s up to a maximum throughput (e.g., 50,000 RU/s).
- The test should be conducted in both non-split and split scenarios.
Steps:
-
Initialize the Container:
- Start the Azure Cosmos DB container with an initial throughput of 10,000 RU/s.
-
Insert Large Documents:
- Insert a large document (e.g., close to or exceeding 2 MB).
- After each insert, increase the throughput by 10,000 RU/s.
-
Update Large Documents:
- Perform incremental updates on the large document, ensuring that the document size grows with each update.
- After each update, increase the throughput by another 10,000 RU/s.
-
Delete Large Documents:
- Once the document has been updated to its final state, delete the document.
- Verify that the Change Feed Processor captures and processes the delete event.
-
Repeat for Higher Throughput:
- Continue the cycle of insert, update, and delete operations while increasing throughput up to 50,000 RU/s (or your chosen maximum).
-
Monitor Performance:
- Track the Change Feed Estimator’s response to large documents and throughput changes, ensuring that it accurately reflects pending changes.
Expected Results:
- The Change Feed Processor should process all versions of the large document and the delete action without errors, even as throughput increases incrementally.
- The Change Feed Estimator should accurately reflect the number of pending changes at each throughput increment.
- No data loss or delays should occur, even at maximum throughput.
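The throughput ramp in the steps above can be sketched as follows; the document sizes and increments are modelled in-memory (the real test would call the SDK's throughput APIs), and the 2 MB figure reflects the Cosmos DB item size limit:

```python
# Sketch of the throughput ramp: start at 10,000 RU/s, add 10,000 RU/s
# after each large-document operation, stop at the 50,000 RU/s ceiling.

MAX_DOC_BYTES = 2 * 1024 * 1024  # Cosmos DB item size limit (~2 MB)
throughput = 10_000
history = [throughput]

doc_size = 1_500_000  # start with a large document (~1.5 MB)
while throughput < 50_000:
    doc_size = min(doc_size + 100_000, MAX_DOC_BYTES)  # each update grows the doc
    throughput += 10_000                               # scale up after the write
    history.append(throughput)

assert history == [10_000, 20_000, 30_000, 40_000, 50_000]
assert doc_size <= MAX_DOC_BYTES
```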
Single Split Container with v1 and v2 Lease Documents
Description:
- This edge case tests the behavior of the Change Feed Processor using v1 and v2 lease documents in a single split container. The goal is to validate that both v1 and v2 leases work correctly with a basic partition split scenario.
Preconditions:
- The Azure Cosmos DB container must undergo a partition split.
- The test should use both v1 and v2 versions of lease documents for the Change Feed Processor.
Steps:
-
Initialize the Container:
- Set up the Azure Cosmos DB container, ensuring that it will undergo a partition split.
-
Use v1 Lease Documents:
- Set up the Change Feed Processor with v1 lease documents.
- Insert, update, and delete documents in the container.
- Verify that the Change Feed Processor with v1 leases captures all changes correctly.
-
Switch to v2 Lease Documents:
- Reconfigure the Change Feed Processor to use v2 lease documents.
- Insert, update, and delete more documents in the container.
- Verify that the Change Feed Processor with v2 leases captures all changes correctly.
-
Monitor Behavior During Split:
- Observe the Change Feed Processor’s behavior during the partition split, ensuring that both versions of the lease documents handle the split correctly.
Expected Results:
- Both v1 and v2 lease documents should work seamlessly, capturing all changes before, during, and after the partition split.
- No data should be lost during the transition between lease versions.
Change Feed Estimator Discrepancy
Description:
- This edge case tests how the Change Feed Estimator handles scenarios where there’s a mismatch between the estimated pending changes and the actual changes being processed. This could happen during high throughput, network latency, or system overload situations.
Preconditions:
- The Azure Cosmos DB container must be under heavy load or have artificial delays introduced in processing.
- The Change Feed Processor must be enabled with the Change Feed Estimator configured to monitor pending changes.
Steps:
-
Initialize the Container:
- Start the Azure Cosmos DB container and set up the Change Feed Processor with the Change Feed Estimator.
-
Introduce High Load or Delays:
- Insert, update, and delete a large volume of documents to simulate heavy load.
- Alternatively, introduce artificial delays (e.g., network latency) in processing to increase the gap between the Change Feed Processor and the actual pending changes.
-
Monitor the Change Feed Estimator:
- Observe the output of the Change Feed Estimator, noting any discrepancies between the number of pending changes reported and the actual changes being processed.
-
Correct the Discrepancy:
- Ensure that the Change Feed Estimator adjusts and corrects its estimates over time, especially after the system stabilizes or when load decreases.
-
Verify Data Integrity:
- Ensure that all changes (inserts, updates, deletes) are eventually processed without data loss, despite the initial discrepancy in the Change Feed Estimator’s output.
Expected Results:
- The Change Feed Estimator may initially show an incorrect estimate for pending changes due to system load or delays, but it should adjust and provide an accurate estimate once the system catches up.
- All changes should be processed without data loss or corruption, regardless of the discrepancy in pending change estimates.
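The convergence behaviour this scenario checks can be modelled with a toy backlog; the drain rate is arbitrary, and the real estimator is the SDK's change feed estimator reading lease-container checkpoints:

```python
# Toy model of the estimator discrepancy: under load the reported
# pending-change count lags reality, then converges to zero once
# ingestion stops and the processor catches up.

pending = 10_000   # actual unprocessed changes after the load spike
estimates = []
while pending > 0:
    estimates.append(pending)            # estimator reports the current backlog
    pending = max(0, pending - 2_500)    # processor drains 2,500 changes per tick
estimates.append(pending)

# The estimate may be stale mid-run, but must reach zero at the end
# and must be monotonically non-increasing once load stops.
assert estimates[-1] == 0
assert estimates == sorted(estimates, reverse=True)
```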
Expired Lease Handling
Description:
- This edge case tests how the Change Feed Processor handles expired lease documents. The goal is to ensure that the processor can recover and continue processing changes without data loss even if lease documents expire before they are renewed.
Preconditions:
- The Azure Cosmos DB container must be configured with Change Feed Processor.
- Lease expiration time should be set to a lower value than normal to force lease expiry.
- The test should introduce artificial delays in lease renewal to simulate expired leases.
Steps:
-
Initialize the Container and Processor:
- Set up the Azure Cosmos DB container and enable the Change Feed Processor with lease documents.
-
Set Lease Expiration Time:
- Configure the lease expiration time to a lower value (e.g., reduce the default lease time to a few seconds) to ensure that leases will expire quickly.
-
Introduce Delays in Lease Renewal:
- Introduce artificial delays or suspend the lease renewal process to allow leases to expire while the processor is still running.
-
Monitor Lease Expiry:
- Observe the Change Feed Processor's behavior as leases expire. Note how the system handles expired leases and whether it attempts to renew them or recover from expiration.
-
Verify Change Processing:
- Once leases have expired, resume the lease renewal process and ensure that the Change Feed Processor can pick up where it left off, processing any unprocessed changes.
- Verify that all document inserts, updates, and deletes are processed correctly after the system recovers.
Expected Results:
- The Change Feed Processor should handle expired leases gracefully, renewing them and continuing to process changes after lease renewal.
- No changes should be lost, even if the lease documents expire before being renewed.
- The processor should be able to recover and resume processing from the correct state after lease expiration.
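The recovery property these steps verify can be sketched with a hypothetical lease record (not the SDK's actual lease schema): the processor checkpoints its position, and whoever acquires the expired lease resumes from that checkpoint.

```python
# Sketch of lease-expiry recovery: processing checkpoints a continuation
# position into the lease; when the lease expires mid-run, a new owner
# resumes from the last checkpoint, so no change is lost.

changes = [f"change-{i}" for i in range(10)]
lease = {"continuation": 0, "owner": None}
processed = []

def run(owner, fail_after=None):
    """Process from the lease's checkpoint; optionally 'expire' after N items."""
    lease["owner"] = owner
    start = lease["continuation"]
    for i in range(start, len(changes)):
        if fail_after is not None and i - start >= fail_after:
            return  # lease expired before renewal; stop without further checkpoints
        processed.append(changes[i])
        lease["continuation"] = i + 1  # checkpoint after each change

run("worker-a", fail_after=4)  # worker A loses its lease partway through
run("worker-b")                # worker B acquires the expired lease and resumes

assert processed == changes    # nothing lost across the expiry
```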
Hot Partition Handling
Description:
- This edge case tests the behavior of the Change Feed Processor when one partition becomes a hot partition (receiving significantly more data or requests than other partitions). The goal is to validate that the Change Feed Processor can handle throttling on the hot partition without data loss or delays in processing.
Preconditions:
- The Azure Cosmos DB container must use a partition key that allows for uneven data distribution (e.g., a key such as locationId or userId whose values can be heavily skewed).
- A significant number of documents should be inserted into the same partition to create a hot partition scenario.
- The Change Feed Processor must be enabled, and the Change Feed Estimator should be configured to monitor pending changes.
Steps:
-
Initialize the Container:
- Set up the Azure Cosmos DB container and enable the Change Feed Processor.
-
Simulate a Hot Partition:
- Insert a large number of documents with the same partition key value (e.g., locationId = "hot-location") to overload a specific partition.
- Continue inserting documents until the partition begins to experience throttling or high RU consumption.
-
Monitor Throttling and RU Consumption:
- Use Azure Cosmos DB Metrics to monitor RU consumption and throttling events on the hot partition.
- Log RU charges and throttling events from the Change Feed Processor.
-
Track Pending Changes with Change Feed Estimator:
- Use the Change Feed Estimator to track the number of pending changes for the hot partition.
- Note if the processor falls behind in processing changes due to throttling.
-
Verify Change Processing:
- Once the hot partition’s load decreases, verify that the Change Feed Processor catches up and processes all changes correctly.
- Ensure that no data is lost or delayed, even during throttling.
Expected Results:
- The Change Feed Processor should handle throttling on the hot partition gracefully by retrying requests.
- The Change Feed Estimator should accurately report an increase in pending changes during throttling and a decrease once the processor catches up.
- All changes should be processed correctly, and no data should be lost, even when the partition is throttled.
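The retry behaviour expected on the throttled partition can be sketched as below; the real SDK performs this backoff internally, and `read_page` is a hypothetical stand-in for one change feed read that fails with HTTP 429 while the partition is over its RU budget:

```python
# A throttled (429) read is retried rather than dropped, so no change
# is lost while the hot partition is over budget.

def read_page(throttle_budget):
    """Fail with a simulated 429 while the partition is over its RU budget."""
    if throttle_budget[0] > 0:
        throttle_budget[0] -= 1
        raise RuntimeError("429 Too Many Requests")
    return ["change-a", "change-b"]

def read_with_retry(throttle_budget, max_retries=5):
    for attempt in range(max_retries + 1):
        try:
            return read_page(throttle_budget)
        except RuntimeError:
            continue  # a real client would sleep with exponential backoff here
    raise RuntimeError("retries exhausted")

budget = [3]  # the first three reads are throttled
page = read_with_retry(budget)
assert page == ["change-a", "change-b"]
```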
Multiple Change Feed Processors Running Simultaneously
Description:
- This edge case tests the behavior of Azure Cosmos DB when two Change Feed Processors are running simultaneously on the same container. The goal is to validate how lease distribution, change processing, and performance are affected when multiple processors compete for leases.
Preconditions:
- Two separate instances of Change Feed Processors must be running on the same Azure Cosmos DB container, each configured to process changes from the same lease container.
Steps:
-
Initialize First Change Feed Processor:
- Set up and start the first Change Feed Processor to monitor changes in the target container.
-
Initialize Second Change Feed Processor:
- Set up and start a second Change Feed Processor instance with a different instance name, but monitoring the same container and using the same lease container as the first processor.
-
Monitor Lease Distribution:
- Monitor how the leases are distributed between the two Change Feed Processors.
- Check if the lease ownership is split evenly or if one processor takes over more leases than the other.
-
Insert and Update Data:
- Insert and update a large volume of documents in the container to generate changes for the Change Feed Processors to handle.
- Track how both processors handle the changes simultaneously.
-
Check for Lease Stealing or Conflicts:
- Verify if either Change Feed Processor tries to "steal" leases from the other, especially during partition splits or when one processor slows down.
- Ensure that no changes are lost or skipped during lease redistribution.
-
Monitor Performance and Logs:
- Monitor the performance and logs of both Change Feed Processors to detect any conflicts, errors, or delays in processing.
- Use the Change Feed Estimator to track pending changes and ensure that the system keeps up with change processing.
-
Stop One Processor:
- Stop one of the Change Feed Processors and observe how the remaining processor handles acquiring all remaining leases and processing the changes.
Expected Results:
- Both Change Feed Processors should be able to process changes simultaneously without conflicts or data loss.
- Leases should be distributed evenly between the two processors, and lease stealing should only occur if one processor is unable to process changes effectively.
- When one processor is stopped, the remaining processor should acquire the leases from the stopped processor and continue processing without any data loss or delay.
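The lease-distribution property can be sketched with a round-robin balancer; this is a deliberate simplification of the SDK's lease-acquisition and lease-stealing heuristics, used only to state the expected invariants:

```python
# Leases are split between active processor instances; when one stops,
# the survivor acquires its orphaned leases.

leases = {f"lease-{i}": None for i in range(8)}

def balance(active_owners):
    """Round-robin unowned or orphaned leases across active instances."""
    for n, key in enumerate(sorted(leases)):
        if leases[key] not in active_owners:
            leases[key] = active_owners[n % len(active_owners)]

balance(["proc-1", "proc-2"])
counts = {o: sum(1 for v in leases.values() if v == o)
          for o in ("proc-1", "proc-2")}
assert counts == {"proc-1": 4, "proc-2": 4}   # even split while both run

balance(["proc-2"])                           # proc-1 stopped
assert all(v == "proc-2" for v in leases.values())
```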
Cross-Region Latency and Failover
Description:
- This edge case tests the behavior of the Change Feed Processor during cross-region replication and failover scenarios. The goal is to validate that the processor handles changes correctly and without data loss during a region failover.
Preconditions:
- Multi-region Azure Cosmos DB account with replication enabled between regions A and B.
- The Change Feed Processor must be running in region A.
Steps:
- Configure Azure Cosmos DB with multi-region replication between regions A and B.
- Run the Change Feed Processor in region A and insert, update, and delete documents.
- Trigger a failover to region B while the Change Feed Processor is processing changes.
- Monitor how the Change Feed Processor behaves during failover, checking for missed or duplicated changes after failover.
- Verify that the Change Feed Processor continues processing in region B without issues.
Expected Results:
- The Change Feed Processor should seamlessly continue processing changes after the failover to region B.
- No changes should be missed or duplicated during the failover process.
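The continuity invariant behind these expected results can be sketched as follows, assuming the continuation position in the lease container is available to the failover region (names are illustrative only):

```python
# Toy model of failover continuity: the processor checkpoints a
# continuation position; after failover, the new region resumes from
# that position, so no change is missed or re-emitted.

changes = [f"change-{i}" for i in range(8)]
checkpoint = {"continuation": 0}
emitted = []

def process(region, upto):
    """Process the feed in `region` up to index `upto`, checkpointing as we go."""
    for i in range(checkpoint["continuation"], upto):
        emitted.append((region, changes[i]))
        checkpoint["continuation"] = i + 1

process("region-A", 5)             # region A handles the first five changes
process("region-B", len(changes))  # failover: region B resumes from the checkpoint

assert [c for _, c in emitted] == changes   # nothing missed
assert len(emitted) == len(set(emitted))    # nothing duplicated
```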
Massive Scale-up of Throughput
Description:
- This edge case tests how the Change Feed Processor handles rapid scaling of throughput, both increasing and decreasing. It validates the processor’s ability to handle changes under dynamic throughput conditions.
Preconditions:
- Azure Cosmos DB container with manual or autoscale throughput enabled.
- The Change Feed Processor must be running during throughput scaling.
Steps:
- Start with an initial throughput of 10,000 RU/s on the container.
- Gradually scale the throughput up to 100,000 RU/s while inserting, updating, and deleting documents.
- Monitor how the Change Feed Processor handles these throughput changes.
- Check for any delays or missed changes during the scale-up phases.
Expected Results:
- The Change Feed Processor should continue processing changes effectively at both high and low throughput levels.
- No data should be lost or delayed, even during rapid scaling.
Document TTL Expiration and Change Feed Impact
Description:
- This edge case tests how the Change Feed Processor handles document expirations using the Time to Live (TTL) feature. It validates that TTL-triggered deletions are captured correctly by the processor.
Preconditions:
- The Azure Cosmos DB container must be configured with TTL enabled.
- The Change Feed Processor must be running and monitoring changes.
Steps:
- Enable TTL in the Cosmos DB container with a short expiration time (e.g., 1 minute).
- Insert documents with TTL enabled and allow them to expire.
- Monitor the Change Feed Processor to capture TTL-triggered deletions.
- Verify that the processor correctly handles TTL deletions.
Expected Results:
- The Change Feed Processor should capture TTL-triggered deletions as changes, with no data loss.
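The TTL expectation can be modelled in-memory as below; Cosmos DB's real mechanism uses the per-item `ttl` property and the server-side `_ts` timestamp, and the `expire` helper here is purely illustrative:

```python
# Minimal model of TTL-triggered deletes appearing in the feed: a
# document whose ttl has elapsed is surfaced as a delete change.

now = 1_000.0
docs = {
    "doc1": {"ttl": 60, "_ts": now - 120},  # expired (written 120 s ago)
    "doc2": {"ttl": 60, "_ts": now - 10},   # still alive
}

def expire(docs, now):
    """Emit a delete change for every document past its TTL."""
    feed = []
    for doc_id, doc in list(docs.items()):
        if now - doc["_ts"] >= doc["ttl"]:
            del docs[doc_id]
            feed.append({"operationType": "delete", "id": doc_id})
    return feed

feed = expire(docs, now)
assert [c["id"] for c in feed] == ["doc1"]  # TTL deletion captured
assert "doc2" in docs                       # unexpired document untouched
```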
Handling Malformed Documents or Corrupt Data
Description:
- This edge case tests how the Change Feed Processor handles malformed or corrupt documents. The goal is to ensure that malformed data does not cause the processor to crash or stop processing valid changes.
Preconditions:
- Insert a mix of valid and malformed documents into the Azure Cosmos DB container.
- The Change Feed Processor must be running and processing changes.
Steps:
- Insert valid documents and malformed documents (e.g., missing fields, invalid JSON) into the container.
- Monitor the Change Feed Processor’s logs to track how it handles malformed documents.
- Verify that the processor skips or logs malformed documents without stopping processing for valid documents.
Expected Results:
- The Change Feed Processor should log errors for malformed documents but continue processing valid documents without stopping.
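The skip-and-log behaviour can be sketched directly: a payload that fails to parse is recorded as an error and processing continues for the valid documents.

```python
import json

# A malformed payload is logged and skipped; valid documents are still
# processed and the feed does not stop.

payloads = [
    '{"id": "a", "value": 1}',
    '{"id": "b", "value": ',      # truncated / invalid JSON
    '{"id": "c", "value": 3}',
]

processed, errors = [], []
for raw in payloads:
    try:
        processed.append(json.loads(raw))
    except json.JSONDecodeError as exc:
        errors.append(str(exc))   # log and move on; do not stop the feed

assert [d["id"] for d in processed] == ["a", "c"]
assert len(errors) == 1
```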
Heavy Read-Write Conflict Handling
Description:
- This edge case tests how the Change Feed Processor handles heavy read-write conflicts during high-throughput workloads. It assesses how well the processor handles multiple simultaneous read and write operations on the same documents.
Preconditions:
- High-throughput Azure Cosmos DB container with multiple clients performing read and write operations on the same documents.
- Change Feed Processor must be running while read-write conflicts occur.
Steps:
- Use multiple clients to simultaneously read and write to the same documents, causing conflicts.
- Monitor the Change Feed Processor’s logs and performance to observe how it handles conflicting updates.
- Verify that the Change Feed Processor processes all conflicting versions of documents in All Versions and Deletes mode.
Expected Results:
- The Change Feed Processor should capture all conflicting versions of documents in All Versions and Deletes mode.
- In Incremental mode, it should only capture the latest change.
- No valid changes should be skipped or lost due to conflicts.
Partial Partition Key Updates
Description:
- This edge case tests how the Change Feed Processor handles partial updates to documents with a partition key. The goal is to ensure that the processor processes partial updates correctly without skipping changes.
Preconditions:
- The Azure Cosmos DB container must have a defined partition key.
- Change Feed Processor must be running and monitoring changes.
Steps:
- Insert documents with a partition key defined.
- Perform partial updates on documents, updating only specific fields without modifying the partition key.
- Monitor the Change Feed Processor to ensure it processes partial updates correctly.
Expected Results:
- The Change Feed Processor should process partial updates without skipping changes.
- Each change feed entry should reflect the full document state after the partial update, and the partition key must remain unchanged.
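This can be sketched with a hypothetical `patch` helper (standing in for the SDK's patch/replace operations): a partial update touches one field, the partition key stays immutable, and each patch yields a new document version in the feed.

```python
# Sketch of the partial-update expectation: a patch touches one field
# and the feed carries the post-update document version.

doc = {"id": "doc1", "pk": "tenant-1", "name": "widget", "qty": 1}
feed = []

def patch(doc, updates):
    """Apply a partial update without touching the partition key, then emit the new version."""
    assert "pk" not in updates   # partition key must stay immutable
    doc.update(updates)
    feed.append(dict(doc))       # the feed carries the post-update document

patch(doc, {"qty": 2})
patch(doc, {"name": "gadget"})

assert feed[0]["qty"] == 2 and feed[0]["name"] == "widget"
assert feed[1] == {"id": "doc1", "pk": "tenant-1", "name": "gadget", "qty": 2}
```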
Massive Document Deletion Test
Description:
- This edge case tests how the Change Feed Processor handles the mass deletion of documents, particularly when a large portion of the data is deleted in bulk.
Preconditions:
- A large volume of documents should be present in the Azure Cosmos DB container.
- The Change Feed Processor must be running in All Versions and Deletes mode.
Steps:
- Insert a large number of documents into the container.
- Simulate a bulk deletion of a significant portion of the documents.
- Monitor the Change Feed Processor to ensure it captures all deletions.
- Verify that the processor handles the mass deletion without delays or errors.
Expected Results:
- The Change Feed Processor should capture all deletions, even during a bulk deletion event.
- No deletions should be skipped or delayed.
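The bulk-deletion check reduces to a counting invariant, sketched below with illustrative page sizes and counts (the processor receives delete changes in pages):

```python
# Every bulk-deleted document must be captured exactly once across the
# paged change feed reads.

N = 50_000
deleted_ids = list(range(N))      # ids of the bulk-deleted documents
captured = []

page_size = 4_096                 # simulated change feed page
for start in range(0, N, page_size):
    captured.extend(deleted_ids[start:start + page_size])

assert captured == deleted_ids    # every deletion captured, none skipped or duplicated
```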
Schema Evolution Handling
Description:
- This edge case tests how the Change Feed Processor handles schema changes over time. It validates that the processor can process both old and new document schemas without issues.
Preconditions:
- Documents should be inserted with an initial schema in the Cosmos DB container.
- The Change Feed Processor must be running and processing changes.
Steps:
- Insert documents with a defined initial schema.
- Introduce a schema change by adding or removing fields from newly inserted documents.
- Monitor the Change Feed Processor to ensure that it processes both the old and new schema versions correctly.
Expected Results:
- The Change Feed Processor should handle both old and new schemas without errors.
- No schema changes should disrupt the processor’s ability to handle changes.
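A schema-tolerant handler is the usual way to satisfy this expectation; the sketch below reads fields with defaults so both schema versions parse (field names are illustrative):

```python
# The handler normalises a change into one shape regardless of schema
# version, so old-schema and new-schema documents are both processed.

v1_doc = {"id": "u1", "name": "Ada"}                       # initial schema
v2_doc = {"id": "u2", "name": "Alan", "email": "a@x.org"}  # evolved schema

def handle(doc):
    """Normalise a change regardless of schema version."""
    return {
        "id": doc["id"],
        "name": doc["name"],
        "email": doc.get("email"),   # absent in old-schema documents
    }

results = [handle(d) for d in (v1_doc, v2_doc)]
assert results[0]["email"] is None
assert results[1]["email"] == "a@x.org"
```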
-
All Documents Retrieved with Splits and No Empty Pages
- This test validates that all documents are retrieved correctly, even after partition splits, and ensures that there are no empty pages in the result.
-
Read Feed and Change Feed Type Correctly Identified
- This test ensures that the read feed and change feed types are correctly identified during operations, verifying that the system correctly distinguishes between the two feed types.
-
All Bulk Ingested Documents Can Be Retrieved
- This test verifies that all documents ingested in bulk operations can be retrieved successfully without any data loss.
-
All Transactional Batch Ingested Documents Can Be Retrieved
- This test validates that documents ingested using transactional batch operations are correctly retrieved and consistent.
-
Validate Full Fidelity Change Feed Query
- This test ensures that the full-fidelity change feed is correctly queried and validated over long periods, simulating a real-world, large-scale use case. It is driven by CTL Actions to test scalability and long-lived operations effectively.