Now that LexBox and Language Forge can be run side by side, it should be possible to set up end-to-end testing of Send/Receive scenarios with a real Language Depot deployment, rather than a simulated one. A typical test might go as follows:
1. Create a LexBox project and upload a .zip file containing the initial state of the project.
2. Do an initial clone into Language Forge; verify the correct number of entries and so on.
3. Use the MkFwData tool to get a .fwdata file for that project.
4. Use liblcm to load that .fwdata file, then add a new entry or edit an existing one.
5. Use the SplitFwData tool (which doesn't exist yet) to split the edited .fwdata file into its component parts.
6. Create a new Mercurial commit with those changes.
7. Push that commit to the LexBox repo. (Steps 3-7 simulate making an edit in FieldWorks followed by a Send/Receive.)
8. Use Mongo to make edits in the Language Forge project (faster), or use Playwright to drive the UI and make those changes (slower).
9. Have Language Forge trigger a Send/Receive, kicking off LfMerge.
10. Once LfMerge has finished the Send/Receive, verify that LF's Mongo database contains the changes from steps 3-7 (or the merge result of those changes).
11. In LexBox, verify that LfMerge created a new commit with the correct results.
12. Download an .fwdata file from LexBox and load it into liblcm.
13. Use liblcm to verify that the FieldWorks objects received the correct changes from LfMerge (updated field contents, comments, and so on).
That's a lot of steps that currently have to be done by hand, but with LexBox and Language Forge running side by side on a developer machine (plus the MkFwData and SplitFwData tools becoming available in the flexbridge repo so that the E2E S/R tests can use them), all of that can finally be automated.
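The whole flow above could be sketched as a single NUnit test. This is only a sketch of the shape such a test might take: every helper method (`CreateLexBoxProject`, `TriggerSendReceive`, etc.) is a hypothetical placeholder for whatever the real test harness ends up exposing, and the seed project name and entry count are made up.

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class SendReceiveE2ETests
{
    const int ExpectedEntryCount = 42; // hypothetical: depends on the seed .zip

    [Test]
    public void FlexEditThenLfEdit_BothSidesConverge()
    {
        // Steps 1-2: seed a LexBox project and clone it into Language Forge
        string project = CreateLexBoxProject("seed-project.zip");
        CloneIntoLanguageForge(project);
        Assert.That(CountLfEntries(project), Is.EqualTo(ExpectedEntryCount));

        // Steps 3-7: simulate a FieldWorks edit followed by a Send/Receive
        string fwdata = RunMkFwData(project);          // wrapper around MkFwData
        EditEntryWithLcm(fwdata, "flex-side edit");    // liblcm edit
        RunSplitFwData(fwdata);                        // SplitFwData doesn't exist yet
        CommitAndPush(project);                        // hg commit + push to LexBox

        // Steps 8-9: edit on the Language Forge side, then run LfMerge
        EditEntryViaMongo(project, "lf-side edit");
        TriggerSendReceive(project);

        // Steps 10-13: verify both sides converged
        Assert.That(LfMongoContains(project, "flex-side edit"));
        Assert.That(LexBoxTipAdvanced(project));
        string merged = DownloadFwData(project);
        Assert.That(FwDataContains(merged, "lf-side edit"));
    }

    // All helpers below are hypothetical placeholders for the real harness.
    static string CreateLexBoxProject(string zip) => throw new NotImplementedException();
    static void CloneIntoLanguageForge(string p) => throw new NotImplementedException();
    static int CountLfEntries(string p) => throw new NotImplementedException();
    static string RunMkFwData(string p) => throw new NotImplementedException();
    static void EditEntryWithLcm(string fwdata, string text) => throw new NotImplementedException();
    static void RunSplitFwData(string fwdata) => throw new NotImplementedException();
    static void CommitAndPush(string p) => throw new NotImplementedException();
    static void EditEntryViaMongo(string p, string text) => throw new NotImplementedException();
    static void TriggerSendReceive(string p) => throw new NotImplementedException();
    static bool LfMongoContains(string p, string text) => throw new NotImplementedException();
    static bool LexBoxTipAdvanced(string p) => throw new NotImplementedException();
    static string DownloadFwData(string p) => throw new NotImplementedException();
    static bool FwDataContains(string fwdata, string text) => throw new NotImplementedException();
}
```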
A thought about repeatability: I could use the NUnit `[Property]` attribute to tag each test with the project it uses. A test fixture could then read that value (available through `TestContext`) to record the project's current "tip" revision before the test starts, and in teardown reset the project to that revision, removing any commits the test pushed.
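A minimal sketch of that idea, using NUnit's real `[Property]` attribute and `TestContext.CurrentContext.Test.Properties` API; `GetTipRevision` and `StripToRevision` are hypothetical helpers (e.g. thin wrappers around `hg tip` and `hg strip` against the LexBox repo), and the project name is made up:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ProjectResetFixture
{
    string? _project;
    string? _tipBeforeTest;

    [SetUp]
    public void RecordTip()
    {
        // Read the project name the test declared with [Property("project", ...)]
        _project = TestContext.CurrentContext.Test.Properties.Get("project") as string;
        if (_project != null)
            _tipBeforeTest = GetTipRevision(_project); // hypothetical helper
    }

    [TearDown]
    public void ResetToTip()
    {
        // Remove any commits the test pushed so the next run starts clean
        if (_project != null && _tipBeforeTest != null)
            StripToRevision(_project, _tipBeforeTest); // hypothetical helper
    }

    [Test]
    [Property("project", "example-project")]
    public void ExampleTaggedTest()
    {
        // ... test body that pushes commits to the "example-project" repo ...
    }

    // Hypothetical placeholders for wrappers around Mercurial commands.
    static string GetTipRevision(string project) => throw new NotImplementedException();
    static void StripToRevision(string project, string rev) => throw new NotImplementedException();
}
```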