
feat: simulate and query multiple models #71

Merged
merged 2 commits into main from feat/simulate-multiple-models on Jan 27, 2025

Conversation

gurdeep330 (Member) commented on Jan 26, 2025

For authors

Description

T2Bdemo3.mp4

This PR introduces the capability to simulate and query multiple models within T2B. Key updates include enhancements to the state key dic_simulated_data, which now stores the results of all simulations conducted by the user. The structure of dic_simulated_data has been updated to hold a list of dictionaries, each containing simulation data under the following keys:

  • name: A unique identifier assigned to the simulation by the LLM.
  • source: Indicates whether the model was uploaded by the user or loaded from the BioModels database.
  • data: The simulation results associated with the model.
  • tool_call_id: A unique identifier automatically assigned to the tool simulate_tool when the simulation request was made.

With this update, the LLM can leverage the state key dic_simulated_data to retrieve results from previous simulations, enabling it to answer user queries about specific simulations and to generate custom plots.
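For illustration, here is a minimal sketch of what an entry in dic_simulated_data might look like, based on the key names described above (the example values and the find_simulation helper are hypothetical, not taken from the T2B code):

```python
# Hypothetical sketch of the dic_simulated_data state key described in this PR.
# The key names (name, source, data, tool_call_id) come from the PR description;
# the example values and helper function below are illustrative only.

dic_simulated_data = [
    {
        "name": "glucose_metabolism_run_1",  # unique identifier assigned by the LLM
        "source": "BioModels",               # user upload or the BioModels database
        "data": {"time": [0, 1, 2], "Glc": [5.0, 4.2, 3.6]},  # simulation results
        "tool_call_id": "call_abc123",       # id assigned to the simulate_tool call
    },
]

def find_simulation(state: dict, name: str):
    """Return the stored simulation entry matching `name`, or None."""
    for entry in state.get("dic_simulated_data", []):
        if entry["name"] == name:
            return entry
    return None
```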

The demo provided illustrates these updates in action.

Fixes # (issue) NA

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests you conducted to verify your changes. These may involve creating new test scripts or updating existing ones.

  • Added new test(s) in the tests folder
  • Added new function(s) to existing test(s) (e.g. tests/testX.py)
  • No new tests added (Please explain the rationale in this case)

Checklist

  • My code follows the style guidelines mentioned in the Code/DevOps guides
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation (e.g. MkDocs)
  • My changes generate no new warnings
  • I have added or updated tests (in the tests folder) that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

For reviewers

Checklist pre-approval

  • Is there enough documentation?
  • If a new feature has been added, or a bug fixed, has a test been added to confirm good behavior?
  • Does the test(s) successfully test edge/corner cases?
  • Does the PR pass the tests? (if the repository has continuous integration)

Checklist post-approval

  • Does this PR merge develop into main? If so, please make sure to add a prefix (feat/fix/chore) and/or a suffix BREAKING CHANGE (if it's a major release) to your commit message.
  • Does this PR close an issue? If so, please make sure to close the issue with a descriptive comment when the PR is merged.

Checklist post-merge

  • When you approve of the PR, merge and close it (Read this article to know about different merge methods on GitHub)
  • Did this PR merge develop into main, and is it supposed to run an automated release workflow (if applicable)? If so, please check under the "Actions" tab to see whether the workflow has been initiated, and return later to verify that it has completed successfully.

@gurdeep330 gurdeep330 added the enhancement New feature or request label Jan 26, 2025
@gurdeep330 gurdeep330 self-assigned this Jan 26, 2025
@gurdeep330 gurdeep330 requested a review from dmccloskey January 26, 2025 12:26
@gurdeep330 (Member Author) commented:

@dmccloskey
I will update the Jupyter notebooks once the parameter scanning, steady-state analysis, and get-annotations tools are implemented next week. I have opened an issue for this: #72 (comment)

@dmccloskey (Member) left a comment:


Nice implementation taking advantage of the shared state to record and recall simulation results. It also sets us up to move to a key-value store in the future to persist results across different user sessions.
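For context, a minimal sketch of what that future direction could look like, assuming a simple SQLite-backed key-value store keyed by session id (all names below are hypothetical; nothing here is part of this PR):

```python
# Hypothetical sketch of the reviewer's suggestion: persisting simulation
# entries in a key-value store keyed by user session. Not part of this PR.
import json
import sqlite3

def save_simulations(db_path: str, session_id: str, dic_simulated_data: list) -> None:
    """Store a session's simulation entries as JSON under its session id."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS simulations "
        "(session_id TEXT PRIMARY KEY, payload TEXT)"
    )
    con.execute(
        "INSERT OR REPLACE INTO simulations VALUES (?, ?)",
        (session_id, json.dumps(dic_simulated_data)),
    )
    con.commit()
    con.close()

def load_simulations(db_path: str, session_id: str) -> list:
    """Recall a session's simulation entries, or an empty list if none exist."""
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT payload FROM simulations WHERE session_id = ?", (session_id,)
    ).fetchone()
    con.close()
    return json.loads(row[0]) if row else []
```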

@dmccloskey dmccloskey merged commit 1da9c0a into main Jan 27, 2025
6 checks passed
@dmccloskey dmccloskey deleted the feat/simulate-multiple-models branch January 27, 2025 08:16
semantic-release bot (Contributor) commented:

🎉 This PR is included in version 1.9.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

Labels: enhancement (New feature or request), released