404 add methods to pmat for storing prediction into ipfs and adding that to contract mapping #446
Conversation
…at-for-storing-prediction-into-ipfs-and-adding-that-to-contract-mapping

# Conflicts:
#	prediction_market_agent_tooling/deploy/agent.py
#	prediction_market_agent_tooling/markets/omen/data_models.py
```diff
 def test_get_markets_id_in(omen_subgraph_handler: OmenSubgraphHandler) -> None:
     market_id = "0x934b9f379dd9d8850e468df707d58711da2966cd"
-    sgh = OmenSubgraphHandler()
-    markets = sgh.get_omen_binary_markets(
+    markets = omen_subgraph_handler.get_omen_binary_markets(
```
Using a pytest fixture here instead of instantiating a new instance every time. It's a small tweak, hence doing it here as well.
Regardless of whether it's a singleton or not, we want to instantiate it (or get a reference to the created instance) in just one place, and that place is the conftest.
> we want to instantiate (or get a reference to the created instance) in just one place

Why? Wasn't your point back then, when making it a singleton, that it gets initialized only once? And that no matter how many times we call `OmenSubgraphHandler()`, it always returns the already-initialised instance? So isn't avoiding initialising it "again" a micro-optimization at best (because it just returns the existing, already-initialised instance)?

(Just to be clear: I don't mind refactoring it to a fixture that's passed as an argument, as you are doing here. I'm just trying to understand the motivation, as it feels like we've been discussing "do not initialize OmenSubgraphHandler" multiple times recently, and I thought it was solved nicely by you when it became a singleton. If nothing else, it's nice not to write `OmenSubgraphHandler()` all over the tests, just for aesthetic reasons 😄)
The point here is not "is it a singleton?" (although, for the record, it is). The point is that fixtures (https://docs.pytest.org/en/stable/explanation/fixtures.html#about-fixtures) are the pytest way to abstract away the arrange phase of tests. Instantiating the `OmenSubgraphHandler` is necessary for every test and should be done in the same way (e.g. with the same args) in every test, hence it makes sense to abstract it away, and in pytest one does that with fixtures.
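For illustration, a minimal sketch of the conftest-based fixture being discussed (the module path is taken from this page; the actual conftest may differ):

```python
# conftest.py -- hypothetical sketch of the shared fixture
import pytest

from prediction_market_agent_tooling.markets.omen.omen_subgraph_handler import (
    OmenSubgraphHandler,
)


@pytest.fixture(scope="session")
def omen_subgraph_handler() -> OmenSubgraphHandler:
    # Instantiated (or fetched, if it's a singleton) in exactly one place;
    # every test receives it as an argument instead of calling the constructor.
    return OmenSubgraphHandler()
```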
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (17)
.env.example (1)

`1-6`: **Reminder: Ensure proper handling of sensitive information.**

While adding these new API keys to the `.env.example` file is correct, it's crucial to remind all contributors to:

- Never commit actual API keys or secrets to version control.
- Use the `.env.example` file as a template and create a local `.env` file with real values.
- Ensure that `.env` is listed in `.gitignore` to prevent accidental commits of sensitive data.

Additionally, consider implementing a secrets management solution for production environments to enhance security.
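As a concrete illustration of the template/local-file split, a minimal sketch (variable names taken from this PR's `.env.example`; the `python-dotenv` usage is an assumption, since the project may load environment variables differently):

```python
# Hypothetical sketch: reading the git-ignored local .env at startup
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads the local, git-ignored .env -- never .env.example

pinata_api_key = os.environ["PINATA_API_KEY"]
pinata_api_secret = os.environ["PINATA_API_SECRET"]
```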
tests_integration/tools/ipfs/test_ipfs_handler.py (3)

`1-3`: **Remove unused import `TemporaryFile`.**

The `TemporaryFile` import is not used in the code. To improve code cleanliness, please remove it.

Apply this diff to remove the unused import:

```diff
 import datetime
-from tempfile import TemporaryFile, NamedTemporaryFile
+from tempfile import NamedTemporaryFile
```

🧰 Tools
🪛 Ruff

2-2: `tempfile.TemporaryFile` imported but unused. Remove unused import: `tempfile.TemporaryFile` (F401)
`9-11`: **Remove unused import `DeployableTraderAgent`.**

The `DeployableTraderAgent` import is not used in the code. To improve code cleanliness, please remove it.

Apply this diff to remove the unused import:

```diff
 from prediction_market_agent_tooling.config import APIKeys
-from prediction_market_agent_tooling.deploy.agent import DeployableTraderAgent
 from prediction_market_agent_tooling.tools.ipfs.ipfs_handler import IPFSHandler
```

🧰 Tools
🪛 Ruff

10-10: `prediction_market_agent_tooling.deploy.agent.DeployableTraderAgent` imported but unused. Remove unused import: `prediction_market_agent_tooling.deploy.agent.DeployableTraderAgent` (F401)
`19-32`: **Comprehensive test implementation with a minor improvement suggestion.**

The test function effectively covers both upload and removal operations for IPFS. It ensures unique content for each test run and verifies the uploaded content. However, there's no explicit assertion for the removal operation.

Consider adding an assertion to verify that the file has been successfully removed from IPFS. You can do this by attempting to retrieve the file after calling `unpin_file` and expecting a 404 status code. Here's a suggested implementation:

```python
import requests

# ... (existing code)

# remove from IPFS
test_ipfs_handler.unpin_file(ipfs_hash)

# Verify removal
response = requests.get(f"https://ipfs.io/ipfs/{ipfs_hash}")
assert response.status_code == 404, f"Expected 404, but got {response.status_code}"
```

This addition will make the test more robust by ensuring that the file is actually removed from IPFS.
prediction_market_agent_tooling/tools/ipfs/ipfs_handler.py (4)

`11-16`: **LGTM: Class definition and initialization are well-structured.**

The `IPFSHandler` class is properly defined, and the constructor correctly initializes the Pinata API client using the provided API keys. The use of the `get_secret_value()` method suggests proper handling of sensitive data.

Consider adding error handling for potential exceptions that might occur during the initialization of the `PinataPy` client, such as network errors or invalid credentials.
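A minimal sketch of what such guarded initialization could look like (the `PinataPy` constructor arguments shown here are assumptions based on this review, not confirmed against the library's documentation):

```python
# Hypothetical sketch: wrapping Pinata client initialization in error handling
import typing as t

from pinatapy import PinataPy  # assumes the pinatapy package


class IPFSHandler:
    def __init__(self, api_keys: t.Any) -> None:  # api_keys: the project's APIKeys config
        try:
            self.pinata = PinataPy(
                api_keys.pinata_api_key.get_secret_value(),
                api_keys.pinata_api_secret.get_secret_value(),
            )
        except Exception as e:
            # Fail fast with a clear message instead of a cryptic client error later.
            raise RuntimeError(f"Failed to initialize Pinata client: {e}") from e
```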
`18-23`: **LGTM with suggestions: `upload_file` method is functional but could be improved.**

The method correctly uploads a file to IPFS using the Pinata API and returns the IPFS hash wrapped in an `IPFSCIDVersion0` object.

Consider the following improvements:

- Add error handling for potential exceptions (e.g., file not found, network errors).
- Use a context manager (`with` statement) when opening the file to ensure it's properly closed.
- Add type hinting for the return value of `pin_file_to_ipfs` to improve code readability.

Here's a suggested implementation:

```python
from typing import Dict, Any


def upload_file(self, file_path: str) -> IPFSCIDVersion0:
    try:
        with open(file_path, 'rb') as file:
            response: Dict[str, Any] = self.pinata.pin_file_to_ipfs(
                file, save_absolute_paths=False
            )
        return IPFSCIDVersion0(response["IpfsHash"])
    except FileNotFoundError:
        raise ValueError(f"File not found: {file_path}")
    except Exception as e:
        raise RuntimeError(f"Failed to upload file to IPFS: {str(e)}")
```
`25-30`: **LGTM with suggestions: `store_agent_result` method is well-implemented but could be improved.**

The method correctly serializes the `IPFSAgentResult` object to JSON and uploads it to IPFS using a temporary file.

Consider the following improvements:

- Add error handling for potential exceptions during JSON serialization or file upload.
- Use `json_file.seek(0)` before `upload_file` to ensure the file pointer is at the beginning.
- Consider using `NamedTemporaryFile(delete=False)` and manually removing the file after upload, for better control over the file lifecycle.

Here's a suggested implementation:

```python
import os


def store_agent_result(self, agent_result: IPFSAgentResult) -> IPFSCIDVersion0:
    with tempfile.NamedTemporaryFile(
        mode='w+', encoding='utf-8', delete=False
    ) as json_file:
        try:
            json.dump(agent_result.model_dump(), json_file, indent=4)
            json_file.flush()
            json_file.seek(0)
            ipfs_hash = self.upload_file(json_file.name)
            return ipfs_hash
        except Exception as e:
            raise RuntimeError(f"Failed to store agent result: {str(e)}")
        finally:
            os.unlink(json_file.name)
```
`32-33`: **LGTM with suggestions: `unpin_file` method is simple but could be improved.**

The method correctly calls the Pinata API to remove a pinned file from IPFS.

Consider adding error handling and a return value:

```python
def unpin_file(self, hash_to_remove: str) -> bool:
    try:
        response = self.pinata.remove_pin_from_ipfs(hash_to_remove=hash_to_remove)
        return response.get('success', False)
    except Exception as e:
        raise RuntimeError(f"Failed to unpin file from IPFS: {str(e)}")
```

This implementation adds error handling and returns a boolean indicating whether the unpinning was successful.
prediction_market_agent_tooling/config.py (2)

`48-49`: **LGTM! Consider adding a comment for clarity.**

The addition of `PINATA_API_KEY` and `PINATA_API_SECRET` as optional `SecretStr` properties is appropriate and consistent with the existing code structure.

Consider adding a brief comment above these new properties to explain their purpose, e.g.:

```python
# Pinata API credentials for IPFS interactions
PINATA_API_KEY: t.Optional[SecretStr] = None
PINATA_API_SECRET: t.Optional[SecretStr] = None
```
`154-164`: **LGTM! Consider standardizing error messages.**

The implementation of the `pinata_api_key` and `pinata_api_secret` properties is correct and consistent with the existing codebase. The use of `check_not_none` ensures that the required environment variables are set before use.

For consistency with other error messages in the class, consider standardizing the error message format. For example:

```python
@property
def pinata_api_key(self) -> SecretStr:
    return check_not_none(
        self.PINATA_API_KEY, "PINATA_API_KEY missing in the environment."
    )

@property
def pinata_api_secret(self) -> SecretStr:
    return check_not_none(
        self.PINATA_API_SECRET, "PINATA_API_SECRET missing in the environment."
    )
```

This matches the format used in other properties like `manifold_api_key` and `metaculus_api_key`.

tests/markets/omen/test_omen_subgraph_handler.py (1)

`349-363`: **LGTM: New test for prediction retrieval.**

This new test function effectively checks the `get_agent_results_for_market` method of `OmenSubgraphHandler`. It verifies both the number of predictions and the IPFS hash of the first prediction using dummy values.

Consider adding a brief comment explaining the purpose and origin of the dummy market ID and IPFS hash. This would enhance the test's clarity for future maintainers. For example:

```python
# Dummy values created upon contract deployment for testing purposes
dummy_market_id = Web3.to_checksum_address("0x134f193625bbc38f31aeeecf41f5f96c1ad6ea9a")
dummy_ipfs_hash = "0x3750ffa211dab39b4d0711eb27b02b56a17fa9d257ee549baa3110725fd1d41b"
```

prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (1)
`814-834`: **New method `get_agent_results_for_market` looks good, with a suggestion for improvement.**

The implementation of `get_agent_results_for_market` is well-structured and follows the established patterns in the class. It correctly queries the subgraph, processes the results, and returns a list of `ContractPrediction` objects.

Suggestion for improvement: consider adding error handling for potential exceptions that might occur during the subgraph query or JSON processing. This would make the method more robust.

Here's a suggested improvement with error handling:

```diff
 def get_agent_results_for_market(
     self, market_id: HexAddress
 ) -> list[ContractPrediction]:
-    prediction_added = (
-        self.omen_agent_result_mapping_subgraph.Query.predictionAddeds(
-            where={"marketAddress": market_id.lower()},
-            orderBy="blockNumber",
-            orderDirection="asc",
-        )
-    )
-    fields = [
-        prediction_added.publisherAddress,
-        prediction_added.ipfsHash,
-        prediction_added.txHash,
-        prediction_added.estimatedProbabilityBps,
-    ]
-    result = self.sg.query_json(fields)
-    items = self._parse_items_from_json(result)
-    if not items:
-        return []
-    return [ContractPrediction.model_validate(i) for i in items]
+    try:
+        prediction_added = (
+            self.omen_agent_result_mapping_subgraph.Query.predictionAddeds(
+                where={"marketAddress": market_id.lower()},
+                orderBy="blockNumber",
+                orderDirection="asc",
+            )
+        )
+        fields = [
+            prediction_added.publisherAddress,
+            prediction_added.ipfsHash,
+            prediction_added.txHash,
+            prediction_added.estimatedProbabilityBps,
+        ]
+        result = self.sg.query_json(fields)
+        items = self._parse_items_from_json(result)
+        if not items:
+            return []
+        return [ContractPrediction.model_validate(i) for i in items]
+    except Exception as e:
+        logger.error(f"Error fetching agent results for market {market_id}: {str(e)}")
+        return []
```

This change wraps the main logic in a try-except block, logging any errors that occur during the process. It also ensures that an empty list is returned in case of any exceptions, maintaining the method's contract.
prediction_market_agent_tooling/deploy/agent.py (4)

`515-516`: **Fix formatting in method signature.**

There's a missing space after the comma in the `store_prediction` method signature, which affects readability and doesn't conform to PEP 8 style guidelines.

Apply this diff to correct the formatting:

```diff
-def store_prediction(self, market_id: str,processed_market: ProcessedMarket,
+def store_prediction(self, market_id: str, processed_market: ProcessedMarket,
```
`525-526`: **Consider separating transaction hashes with a delimiter.**

Currently, transaction IDs are concatenated without any separator, which may make them hard to read or parse. Joining them with a delimiter like a comma improves readability.

Apply this diff to join transaction IDs with a comma:

```diff
-tx_hash="".join(
+tx_hash=",".join(
```
Line range hint `529-536`: **Add error handling for the `add_prediction` method.**

The call to `add_prediction` might fail, or `tx_receipt` might not contain the expected keys. Adding error handling ensures that any exceptions are managed gracefully.

Apply this diff to include exception handling:

```diff
+    try:
         tx_receipt = OmenAgentResultMappingContract().add_prediction(
             api_keys=keys,
             market_address=Web3.to_checksum_address(market_id),
             prediction=prediction,
         )
         logger.info(
             f"Added prediction to market {market_id}. - receipt {tx_receipt['transactionHash'].hex()}."
         )
+    except Exception as e:
+        logger.error(f"Failed to add prediction to market {market_id}: {e}")
```
`476-476`: **Consistent logging format.**

Ensure consistency in logging by using structured messages. For example, consider replacing f-strings with parameterized logging to prevent unnecessary string interpolation if the log level is not enabled.

Apply this diff to use parameterized logging:

```diff
-logger.info(f"Executing trade {trade} on market {market.id}")
+logger.info("Executing trade %s on market %s", trade, market.id)
```

prediction_market_agent_tooling/markets/omen/omen_contracts.py (1)
`694-694`: **Reminder: Address the TODO comment for writing tests.**

The TODO comment indicates that tests are needed for the `get_predictions` method. Writing unit tests will ensure that the method functions correctly and will help prevent future regressions.

Would you like me to help generate unit tests for this method or open a GitHub issue to track this task?
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

⛔ Files ignored due to path filters (3)

- `poetry.lock` is excluded by `!**/*.lock`
- `prediction_market_agent_tooling/abis/omen_agentresultmapping.abi.json` is excluded by `!**/*.json`
- `pyproject.toml` is excluded by `!**/*.toml`

📒 Files selected for processing (10)

- .env.example (1 hunks)
- prediction_market_agent_tooling/config.py (2 hunks)
- prediction_market_agent_tooling/deploy/agent.py (7 hunks)
- prediction_market_agent_tooling/markets/omen/data_models.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_contracts.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (4 hunks)
- prediction_market_agent_tooling/tools/ipfs/ipfs_handler.py (1 hunks)
- tests/markets/omen/test_omen_subgraph_handler.py (2 hunks)
- tests_integration/tools/ipfs/test_ipfs_handler.py (1 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (3 hunks)
🧰 Additional context used

🪛 Ruff

prediction_market_agent_tooling/deploy/agent.py

- 3-3: `json` imported but unused. Remove unused import: `json` (F401)
- 35-35: `prediction_market_agent_tooling.gtypes.ChecksumAddress` imported but unused. Remove unused import: `prediction_market_agent_tooling.gtypes.ChecksumAddress` (F401)

prediction_market_agent_tooling/markets/omen/data_models.py

- 3-3: `typing_extensions.Self` imported but unused. Remove unused import: `typing_extensions.Self` (F401)
- 5-5: `pydantic.model_validator` imported but unused. Remove unused import (F401)
- 5-5: `pydantic.computed_field` imported but unused. Remove unused import (F401)

tests_integration/tools/ipfs/test_ipfs_handler.py

- 2-2: `tempfile.TemporaryFile` imported but unused. Remove unused import: `tempfile.TemporaryFile` (F401)
- 10-10: `prediction_market_agent_tooling.deploy.agent.DeployableTraderAgent` imported but unused. Remove unused import: `prediction_market_agent_tooling.deploy.agent.DeployableTraderAgent` (F401)
🔇 Additional comments (16)
.env.example (1)

`5-6`: **LGTM! New environment variables added for Pinata integration.**

The addition of `PINATA_API_KEY` and `PINATA_API_SECRET` aligns with the PR objectives of storing predictions into IPFS. These variables will allow the application to authenticate with Pinata's IPFS pinning service.

Let's verify that the .env file is properly gitignored to prevent accidental commit of sensitive information:

✅ Verification successful

**Verification successful:** `.env` is properly gitignored and `.env.example` is tracked. The environment files are correctly configured to prevent accidental commits of sensitive information.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Check if .env is in .gitignore
if grep -q "^\.env$" .gitignore; then
    echo ".env is correctly listed in .gitignore"
else
    echo "WARNING: .env is not listed in .gitignore"
fi

# Check if .env.example is tracked by git
if git ls-files --error-unmatch .env.example > /dev/null 2>&1; then
    echo ".env.example is tracked by git (as expected)"
else
    echo "WARNING: .env.example is not tracked by git"
fi
```

Length of output: 260
tests_integration/tools/ipfs/test_ipfs_handler.py (1)

`14-17`: **LGTM: Well-implemented fixture for IPFSHandler.**

The fixture is correctly defined with module scope, which is appropriate if there are no side effects between tests. It properly initializes the IPFSHandler with APIKeys and uses the `yield` statement for setup and teardown.

prediction_market_agent_tooling/tools/ipfs/ipfs_handler.py (2)
`1-8`: **LGTM: Imports are well-organized and appropriate.**

The imports are correctly structured, following the standard practice of grouping standard library imports, third-party imports, and local imports. All imported modules and classes appear to be used in the implementation.

`1-33`: **Summary: Good implementation with room for improvement.**

The `IPFSHandler` class provides a solid foundation for interacting with IPFS using the Pinata service. The implementation is generally well-structured and follows good practices. However, there are several areas where the code could be improved:

- Error handling: Add comprehensive error handling to all methods to make the class more robust and easier to debug.
- Type hinting: Improve type hinting, especially for API responses, to enhance code readability and maintainability.
- Resource management: Ensure proper resource cleanup, particularly when dealing with temporary files.
- Return values: Consider adding meaningful return values to methods like `unpin_file` to provide more information about the operation's success.

Implementing these suggestions will result in a more robust and maintainable `IPFSHandler` class.

prediction_market_agent_tooling/config.py (1)
Line range hint `1-165`: **Verify usage of new Pinata API properties.**

The additions for Pinata API integration look good and are consistent with the existing code structure. To ensure completeness of the implementation, please run the following script to check for any usage of the new Pinata API properties in the codebase. This will help ensure that the new properties are being utilized appropriately in the relevant parts of the codebase related to IPFS integration.

✅ Verification successful

✅ **Verified: Pinata API properties are correctly integrated and utilized throughout the codebase.**

- `pinata_api_key` and `pinata_api_secret` are properly used in the configuration and IPFS handler modules.
- Relevant tests are in place to ensure their correct functionality.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Check for usage of Pinata API properties in the codebase

echo "Searching for usage of pinata_api_key:"
rg --type python 'pinata_api_key'

echo "\nSearching for usage of pinata_api_secret:"
rg --type python 'pinata_api_secret'

echo "\nSearching for mentions of Pinata or IPFS:"
rg --type python -i 'pinata|ipfs'
```

Length of output: 480

```bash
#!/bin/bash
# Description: Check for usage of Pinata API properties in the codebase
# (retry with the correct ripgrep type name "py")

echo "Searching for usage of pinata_api_key:"
rg --type py 'pinata_api_key'

echo "\nSearching for usage of pinata_api_secret:"
rg --type py 'pinata_api_secret'

echo "\nSearching for mentions of Pinata or IPFS:"
rg --type py -i 'pinata|ipfs'
```

Length of output: 8594
tests/markets/omen/test_omen_subgraph_handler.py (7)

`288-290`: **LGTM: Consistent with previous approval.**

The change to use a pytest fixture for `omen_subgraph_handler` is consistent with the previous approval and other functions in the file.

`298-302`: **LGTM: Consistent parameter update.**

The addition of the `omen_subgraph_handler` parameter is consistent with other functions in the file. The test logic remains unchanged, maintaining its original purpose.

`306-313`: **LGTM: Consistent parameter update.**

The addition of the `omen_subgraph_handler` parameter is consistent with other functions in the file. The test logic remains unchanged, maintaining its original purpose of testing existing image retrieval.

`317-321`: **LGTM: Consistent parameter update.**

The addition of the `omen_subgraph_handler` parameter is consistent with other functions in the file. The test logic remains unchanged, maintaining its original purpose of testing non-existing image retrieval.

Line range hint `325-333`: **LGTM: Consistent parameter update.**

The addition of the `omen_subgraph_handler` parameter is consistent with other functions in the file. The test logic remains unchanged, maintaining its original purpose of ensuring non-wxDai markets are not returned when not wanted.

`338-347`: **LGTM: Consistent parameter update.**

The addition of the `omen_subgraph_handler` parameter is consistent with other functions in the file. The test logic remains unchanged, maintaining its original purpose of ensuring non-wxDai markets are returned when specifically requested.

Line range hint `288-363`: **Summary: Consistent updates and new test addition.**

The changes in this file primarily involve updating existing test function signatures to include the `omen_subgraph_handler` parameter, which is consistent with the use of a pytest fixture as mentioned in a previous review. This change enhances the reusability of the `OmenSubgraphHandler` instance across multiple tests.

Additionally, a new test function `test_get_predictions_from_market` has been added, which increases test coverage by verifying the `get_agent_results_for_market` method of `OmenSubgraphHandler`.

These changes contribute to better code organization and increased test coverage, which are positive improvements to the codebase.
prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (2)

`30-30`: **Import addition looks good.**

The addition of `ContractPrediction` to the import statement is appropriate, as it will be used in the new method being added to the class.

`64-67`: **URL update and subgraph initialization look good.**

The new URL for `OMEN_AGENT_RESULT_MAPPING_GRAPH_URL` has been correctly updated, and the initialization of `self.omen_agent_result_mapping_subgraph` follows the pattern used for other subgraphs in the class.

Also applies to: 105-110
prediction_market_agent_tooling/markets/omen/data_models.py (1)

`552-557`: **LGTM.**

The `IPFSAgentResult` class is well-defined, with appropriate configuration to forbid extra fields.

prediction_market_agent_tooling/deploy/agent.py (1)
`524-524`: **Verify calculation of `estimated_probability_bps`.**

Ensure that the calculation of `estimated_probability_bps` correctly converts the probability to basis points. This is crucial for accurate contract interactions.

Run the following script to check the calculation:
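The script itself isn't captured above, but the conversion under verification is straightforward: one basis point is 1/10,000, so a probability in [0, 1] maps to basis points by multiplying by 10,000. An illustrative stand-in (hypothetical helper name, not the original script):

```python
# Illustrative check of the probability -> basis-points conversion
def probability_to_bps(probability: float) -> int:
    assert 0.0 <= probability <= 1.0, "probability must be in [0, 1]"
    return round(probability * 10_000)


assert probability_to_bps(0.5454) == 5454  # matches the 5454 bps used in this PR's tests
```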
prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (outdated comment, resolved)
Closes #363
```python
)

# We store only the first trade, since it's only here for tracking purposes - also,
# we want only 1 entry per prediction, not 1 entry per trade.
```
I think this should be fixed now (https://github.com/gnosis/prediction-market-agent-tooling/pull/446/files#diff-6aaf85d952cb99bbd129bef760f5e94cd4477d698103727006d150de8ff9780fR543), now that we store multiple `txHashes` per prediction.
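For context, a hedged sketch of the `ContractPrediction` data model implied by this thread (field names taken from the `test_add_predictions` test later on this page; the base class and exact types are assumptions):

```python
# Hypothetical sketch -- not the PR's actual definition
from pydantic import BaseModel


class ContractPrediction(BaseModel):
    publisher: str  # address of the agent that published the prediction
    ipfs_hash: bytes  # IPFS content identifier of the stored agent result
    tx_hashes: list[bytes]  # one prediction can now reference multiple trade txs
    estimated_probability_bps: int  # probability in basis points, 0..10000
```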
…ing-prediction-into-ipfs-and-adding-that-to-contract-mapping' into 404-add-methods-to-pmat-for-storing-prediction-into-ipfs-and-adding-that-to-contract-mapping
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (1)
scripts/store_prediction.py (1)
`16-28`: **Consider a more descriptive function name.**

The main function is well-defined, with a clear docstring explaining its usage. However, the function name `main` doesn't reflect its specific purpose of creating a market on Omen and storing a prediction.

Consider renaming the function to something more descriptive, such as `create_omen_market_and_store_prediction`. This would make the function's purpose immediately clear to readers.

```diff
-def main(
+def create_omen_market_and_store_prediction(
     from_private_key: str = typer.Option(),
 ) -> None:
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `prediction_market_agent_tooling/abis/omen_agentresultmapping.abi.json` is excluded by `!**/*.json`

📒 Files selected for processing (6)

- prediction_market_agent_tooling/deploy/agent.py (5 hunks)
- prediction_market_agent_tooling/markets/omen/data_models.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_contracts.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (4 hunks)
- scripts/store_prediction.py (1 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (4 hunks)

🚧 Files skipped from review as they are similar to previous changes (3)

- prediction_market_agent_tooling/markets/omen/data_models.py
- prediction_market_agent_tooling/markets/omen/omen_contracts.py
- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py
🔇 Additional comments (11)
scripts/store_prediction.py (3)

`1-13`: **Imports look good and are well-organized.**

The necessary modules and classes are imported, and they are logically organized with external libraries first, followed by internal modules. All imports appear to be relevant to the script's functionality.

`35-35`: **Verify the correctness of the market_address assignment.**

The market_address is currently set to the user's public key. This might not be the correct approach for assigning a market address. Please confirm whether this is the intended behavior. If not, consider using a different method to obtain or generate the market address.

`52-53`: **Script execution block looks good.**

The script execution block correctly uses `typer.run(main)` to execute the main function.

If you decide to rename the main function as suggested earlier, don't forget to update this line accordingly:

```diff
-typer.run(main)
+typer.run(create_omen_market_and_store_prediction)
```

tests_integration_with_local_chain/markets/omen/test_omen.py (2)
`32-32`: **LGTM: New imports added for the test function.**

The new imports `ContractPrediction` and `HexBytes` are correctly added to support the newly introduced `test_add_predictions` function.

Also applies to: 59-59

`433-433`: **LGTM: Proper spacing added.**

The blank line added after the `test_add_predictions` function improves readability and follows the PEP 8 style guide for Python code.

prediction_market_agent_tooling/deploy/agent.py (6)

`51-54`: **New imports added for IPFS and Omen-related functionality.**

The new imports (`ContractPrediction`, `IPFSAgentResult`, `OmenAgentResultMappingContract`) suggest the addition of IPFS storage and Omen-specific contract interaction capabilities. These additions align with the PR objectives of storing predictions in IPFS and adding them to contract mapping.

`66-67`: **New imports for HexBytes and IPFSHandler.**

The addition of the `HexBytes` and `IPFSHandler` imports further supports the new IPFS-related functionality. These are necessary for handling IPFS hashes and interacting with IPFS storage.

`305-305`: **IPFSHandler added to DeployableTraderAgent constructor.**

An `IPFSHandler` instance is now created in the constructor, enabling IPFS storage functionality for the agent. This aligns with the PR objective of storing predictions in IPFS.

`494-496`: **Updated after_process_market method signature.**

The `after_process_market` method now includes a `processed_market` parameter. This change allows for handling the processed market data, which is necessary for storing predictions.

`502-516`: **Enhanced after_process_market method with OMEN-specific logic.**

The method now checks whether the market type is OMEN before proceeding with storing the prediction. This ensures that the new functionality is only applied to OMEN markets, maintaining compatibility with other market types.
Line range hint `1-603`: **Summary: PR objectives successfully implemented, with minor suggestions for improvement.**

The changes in this file effectively implement the PR objectives of adding methods to PMAT for storing predictions into IPFS and adding them to contract mapping. The new functionality is well-integrated into the existing `DeployableTraderAgent` class, maintaining compatibility with other market types.

Key improvements:

- IPFS storage capability added through `IPFSHandler`.
- Omen-specific logic for storing predictions in the `after_process_market` method.
- New `store_prediction` method for handling IPFS storage and contract interaction.

Suggestions for further improvement:

- Add error handling for IPFS storage and contract interaction failures.
- Implement gas estimation before sending transactions to ensure sufficient gas.

Overall, the implementation is solid and achieves the stated goals of the PR. With the suggested minor improvements, it will be more robust and reliable.
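For readers, a minimal sketch of the OMEN-only guard described in these comments (the method signature and enum names are assumed from this review, not taken from the PR's code):

```python
# Hypothetical sketch of the guard in after_process_market
def after_process_market(
    self,
    market_type: "MarketType",
    market: "AgentMarket",
    processed_market: "ProcessedMarket",
) -> None:
    if market_type != MarketType.OMEN:
        # Prediction storage is Omen-specific; other market types are left untouched.
        return
    self.store_prediction(market.id, processed_market)
```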
…methods-to-pmat-for-storing-prediction-into-ipfs-and-adding-that-to-contract-mapping
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
tests_integration_with_local_chain/markets/omen/test_local_chain.py (1)
`117-117`: **Approved: Improved test reliability with sleep duration increase.**

The change to increase the sleep duration from 5 to 6 seconds (5 seconds plus a 1-second safety margin) enhances the test's reliability. This adjustment ensures that the local chain's timestamp is sufficiently out of sync with the UTC timestamp, reducing the likelihood of false negatives due to small timing inconsistencies.

For improved clarity, consider updating the comment to reflect the exact sleep duration:

```diff
-# Sleep a little to let the local chain go out of sync without updating the block
+# Sleep for 6 seconds (5 seconds allowed difference + 1 second safety margin) to ensure the local chain goes out of sync without updating the block
```

This change would make the intention behind the specific sleep duration more explicit to future readers of the code.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `.github/workflows/python_ci.yaml` is excluded by `!**/*.yaml`

📒 Files selected for processing (1)

- tests_integration_with_local_chain/markets/omen/test_local_chain.py (1 hunks)
Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)

tests_integration_with_local_chain/conftest.py (2)

Line range hint `50-52`: **Fix the initialization of `EthereumClient`.**

The new `local_ethereum_client` fixture is a good addition, but there's an issue with how the `EthereumClient` is being initialized. Currently, it's created without any arguments, which is likely not the intended behavior.

Please update the fixture to pass the `local_web3` instance to the `EthereumClient`:

```diff
 @pytest.fixture(scope="module")
 def local_ethereum_client(local_web3: Web3) -> EthereumClient:
-    return EthereumClient()
+    return EthereumClient(local_web3)
```

This ensures that the `EthereumClient` is properly configured with the local Web3 instance.

Line range hint `1-77`: **Summary of changes and recommendations.**

- The `load_env` fixture is now set to `autouse=True`, which is a good practice for ensuring consistent environment setup.
- The scope of the `local_web3` fixture has been changed from "session" to "module". While this may improve test isolation, it could impact performance. Please provide a rationale for this change and consider the trade-offs.
- A new `local_ethereum_client` fixture has been added, but the `EthereumClient` is not properly initialized. Update the initialization to use the `local_web3` instance.

Please address these points, particularly the `EthereumClient` initialization, to ensure the changes are fully effective and maintain the integrity of the test suite.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (2)

- tests_integration_with_local_chain/conftest.py (1 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (8 hunks)

🔇 Additional comments (4)
tests_integration_with_local_chain/conftest.py (2)

Line range hint `20-22`: **LGTM: Autouse for environment loading is a good practice.**

Setting the `load_env` fixture to `autouse=True` with session scope ensures that environment variables are loaded automatically for all tests. This is a good practice, as it guarantees a consistent environment setup across all tests.

`26-28`: **Please clarify the rationale for changing the `local_web3` fixture scope.**

The scope of the `local_web3` fixture has been changed from "session" to "module". While this change might provide better isolation between test modules, it could potentially impact test performance if many modules are using this fixture, as it will be recreated more frequently.

Could you please explain the reasoning behind this change? Are there specific isolation issues that this change addresses?

To assess the impact of this change, let's check how many test files are using this fixture:

✅ Verification successful

**Fixture scope change verified.**

The `local_web3` fixture is used in 5 test files. Changing its scope from "session" to "module" should have minimal impact on test performance while improving isolation between test modules.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Count the number of test files that import and use the local_web3 fixture
echo "Number of test files using local_web3 fixture:"
rg -l "def test_.*local_web3" tests_integration_with_local_chain | wc -l
```

Length of output: 179
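Circling back to the autouse `load_env` fixture approved above, a minimal sketch of what such a fixture typically looks like (the body is an assumption; the project's actual `load_env` may differ):

```python
# conftest.py -- hypothetical sketch of a session-scoped autouse env loader
import pytest
from dotenv import load_dotenv  # assumes python-dotenv is available


@pytest.fixture(autouse=True, scope="session")
def load_env() -> None:
    # Runs once per test session, before any test, without being requested explicitly.
    load_dotenv()
```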
tests_integration_with_local_chain/markets/omen/test_omen.py (2)

Line range hint `192-208`: **LGTM: Improved test flexibility.**

The changes to the `test_omen_create_market_sdai` function enhance its flexibility by using the `local_web3` parameter and allowing for different collateral token addresses. This improvement enables testing with various token configurations.

`392-394`: **LGTM: Consistent use of local_web3.**

The changes in the `test_place_bet_with_autodeposit` function ensure consistent use of the `local_web3` parameter throughout the function. This improvement guarantees that all operations use the same web3 instance, enhancing the reliability of the test.

Also applies to: 396-396, 424-424
Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)

tests_integration_with_local_chain/conftest.py (1)

`81-91`: **Function implementation looks good, with minor suggestions for improvement.**

The new `create_and_fund_random_account` function aligns well with the PR objectives and is implemented correctly. However, there are a couple of points to consider:

- The default argument `xDai(10)` should be moved inside the function to avoid potential issues with mutable default arguments.
- Consider adding error handling for the `send_xdai_to` call to manage potential exceptions.

Here's a suggested refactor addressing these points:

```python
def create_and_fund_random_account(
    web3: Web3, private_key: PrivateKey, deposit_amount: t.Optional[xDai] = None
) -> LocalAccount:
    fresh_account = Account.create()
    deposit = deposit_amount if deposit_amount is not None else xDai(10)
    try:
        send_xdai_to(
            web3=web3,
            from_private_key=private_key,
            to_address=fresh_account.address,
            value=xdai_to_wei(deposit),
        )
    except Exception as e:
        # Log the error or handle it as appropriate for your use case
        raise RuntimeError(f"Failed to fund new account: {e}") from e
    return fresh_account
```

This refactor addresses the static analysis hint and adds basic error handling. Adjust the error handling strategy as needed for your specific use case.
🧰 Tools
🪛 Ruff

82-82: Do not perform function call `xDai` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (2)

- tests_integration_with_local_chain/conftest.py (3 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (10 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)

- tests_integration_with_local_chain/markets/omen/test_omen.py

🧰 Additional context used
🪛 Ruff

tests_integration_with_local_chain/conftest.py

82-82: Do not perform function call `xDai` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
🔇 Additional comments (3)

tests_integration_with_local_chain/conftest.py (3)

`8-9`: **LGTM: New imports are appropriate and well-organized.**

The new imports (`Account`, `LocalAccount`, and `PrivateKey`) are correctly added and necessary for the new functionality. They are placed in the appropriate sections, maintaining good code organization.

Also applies to: 19-19

`79-91`: **No negative impact on existing code observed.**

The new function `create_and_fund_random_account` is well-placed at the end of the file and doesn't interfere with any existing fixtures or functions. This addition maintains the overall structure of the file and enhances its functionality without disrupting existing code.

Line range hint `1-91`: **Overall, the changes look good and align with the PR objectives.**

The addition of the `create_and_fund_random_account` function enhances the test suite's capabilities for creating and funding new accounts, which is valuable for integration testing. The changes are well-implemented and don't negatively impact existing code. With the suggested minor improvements, this addition will be a solid enhancement to the PMAT functionality.
Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)

tests_integration_with_local_chain/conftest.py (1)

`82-82`: **Consider refactoring the default argument.**

The default argument for `deposit_amount` uses a function call, `xDai(10)`, which is generally discouraged, as it's evaluated at function definition time, not at call time.

Consider refactoring to use a module-level constant:

```python
DEFAULT_DEPOSIT_AMOUNT = xDai(10)


def create_and_fund_random_account(
    web3: Web3, private_key: PrivateKey, deposit_amount: xDai = DEFAULT_DEPOSIT_AMOUNT
) -> LocalAccount:
    # ... rest of the function
```

Or move the default value assignment inside the function:

```python
def create_and_fund_random_account(
    web3: Web3, private_key: PrivateKey, deposit_amount: t.Optional[xDai] = None
) -> LocalAccount:
    if deposit_amount is None:
        deposit_amount = xDai(10)
    # ... rest of the function
```

🧰 Tools
🪛 Ruff

82-82: Do not perform function call `xDai` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (5)

- prediction_market_agent_tooling/markets/omen/data_models.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (4 hunks)
- tests/markets/omen/test_omen_subgraph_handler.py (2 hunks)
- tests_integration_with_local_chain/conftest.py (2 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (10 hunks)

🚧 Files skipped from review as they are similar to previous changes (4)

- prediction_market_agent_tooling/markets/omen/data_models.py
- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py
- tests/markets/omen/test_omen_subgraph_handler.py
- tests_integration_with_local_chain/markets/omen/test_omen.py

🧰 Additional context used
🪛 Ruff

tests_integration_with_local_chain/conftest.py

82-82: Do not perform function call `xDai` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
🔇 Additional comments (2)

tests_integration_with_local_chain/conftest.py (2)

`8-9`: **LGTM: New imports are correctly added.**

The new imports (`Account`, `LocalAccount`, and `PrivateKey`) are necessary for the newly introduced function and are correctly placed in the import section.

Also applies to: 16-16

`81-91`: **LGTM: New function `create_and_fund_random_account` is well-implemented.**

The function correctly creates a new Ethereum account and funds it with the specified amount of xDAI. It uses appropriate methods and follows good practices with type hinting.
…methods-to-pmat-for-storing-prediction-into-ipfs-and-adding-that-to-contract-mapping

# Conflicts:
#	poetry.lock
#	prediction_market_agent_tooling/markets/omen/data_models.py
#	prediction_market_agent_tooling/markets/omen/omen_contracts.py
#	tests_integration_with_local_chain/markets/omen/test_omen.py
Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (1)

tests_integration_with_local_chain/markets/omen/test_omen.py (1)

`478-497`: **Enhance the `test_add_predictions` function.**

The `test_add_predictions` function provides a good starting point for testing the functionality of adding predictions. However, there are a few areas where it can be improved:

- The `market_address` is set to `test_keys.public_key`, which may not be a realistic market address. Consider using a more appropriate value or creating a dummy market address specifically for testing purposes.
- The `dummy_transaction_hash` is used for both `tx_hashes` and `ipfs_hash` in the `ContractPrediction` instance. In a real-world scenario, these values would likely be different. Consider using distinct values for each field to better simulate real data.
- The test function only covers the happy-path scenario of adding a single prediction. Consider adding more test cases to cover different scenarios, such as adding multiple predictions, handling edge cases, or testing error conditions.
- The test function doesn't include any error handling or assertions for potential exceptions. Consider adding appropriate error handling and assertions to ensure the function behaves as expected in case of errors or invalid input.

Here's an example of how you can enhance the `test_add_predictions` function:

```python
def test_add_predictions(local_web3: Web3, test_keys: APIKeys) -> None:
    agent_result_mapping = OmenAgentResultMappingContract()
    dummy_market_address = Web3.to_checksum_address('0x1234567890123456789012345678901234567890')
    dummy_transaction_hash = HexBytes('0x3750ffa211dab39b4d0711eb27b02b56a17fa9d257ee549baa3110725fd1d41b')
    dummy_ipfs_hash = HexBytes('0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef')

    # Test adding a single prediction
    p1 = ContractPrediction(
        tx_hashes=[dummy_transaction_hash],
        estimated_probability_bps=5454,
        ipfs_hash=dummy_ipfs_hash,
        publisher=test_keys.public_key,
    )
    agent_result_mapping.add_prediction(test_keys, dummy_market_address, p1, web3=local_web3)
    stored_predictions = agent_result_mapping.get_predictions(dummy_market_address, web3=local_web3)
    assert len(stored_predictions) == 1
    assert stored_predictions[0] == p1

    # Test adding multiple predictions
    p2 = ContractPrediction(
        tx_hashes=[dummy_transaction_hash],
        estimated_probability_bps=7777,
        ipfs_hash=dummy_ipfs_hash,
        publisher=test_keys.public_key,
    )
    agent_result_mapping.add_prediction(test_keys, dummy_market_address, p2, web3=local_web3)
    stored_predictions = agent_result_mapping.get_predictions(dummy_market_address, web3=local_web3)
    assert len(stored_predictions) == 2
    assert stored_predictions[1] == p2

    # Test error handling for invalid input
    with pytest.raises(ValueError):
        invalid_prediction = ContractPrediction(
            tx_hashes=[],  # Empty tx_hashes
            estimated_probability_bps=5454,
            ipfs_hash=dummy_ipfs_hash,
            publisher=test_keys.public_key,
        )
        agent_result_mapping.add_prediction(test_keys, dummy_market_address, invalid_prediction, web3=local_web3)
```

This enhanced version of the test function:

- Uses a separate dummy market address instead of `test_keys.public_key`
- Uses distinct values for `tx_hashes` and `ipfs_hash` in the `ContractPrediction` instances
- Adds a test case for adding multiple predictions
- Includes error handling and an assertion for invalid input (empty `tx_hashes`)

Remember to import `pytest` if it's not already imported in the file.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `.github/workflows/python_ci.yaml` is excluded by `!**/*.yaml`

📒 Files selected for processing (4)

- prediction_market_agent_tooling/markets/omen/data_models.py (2 hunks)
- prediction_market_agent_tooling/markets/omen/omen_contracts.py (2 hunks)
- tests/markets/test_betting_strategies.py (1 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (10 hunks)

🚧 Files skipped from review as they are similar to previous changes (2)

- prediction_market_agent_tooling/markets/omen/data_models.py
- prediction_market_agent_tooling/markets/omen/omen_contracts.py

🧰 Additional context used
🪛 Ruff

tests_integration_with_local_chain/markets/omen/test_omen.py

- 446-446: Undefined name `keys` (F821)
- 452-452: Undefined name `keys` (F821)
- 462-462: Undefined name `keys` (F821)

🔇 Additional comments (4)
tests/markets/test_betting_strategies.py (1)

`284-284`: **Clarification needed: Reason for increasing assertion tolerance.**

The tolerance in the assertion `np.isclose(market_moving_bet.size, 0.0, atol=1e-3)` has been increased from 1e-4 to 1e-3. This makes the test slightly less strict, allowing for a larger deviation from zero.

Could you please clarify the reason for this adjustment? Was it to address false negatives in the test, or is there another specific reason?

It would be helpful to add a comment explaining the rationale behind this change. For example:

```python
# Tolerance increased to 1e-3 to account for [reason]
assert np.isclose(market_moving_bet.size, 0.0, atol=1e-3)
```

This will help future maintainers understand why this specific tolerance was chosen.
tests_integration_with_local_chain/markets/omen/test_omen.py (3)

`361-390`: **LGTM!**

The `test_deposit_and_withdraw_wxdai` test function is well-structured and covers the essential functionality of depositing and withdrawing WXDAI tokens. It creates a fresh account, deposits a specified amount of WXDAI, verifies the balance, and then withdraws the entire WXDAI balance, ensuring that the final balance is zero. The test utilizes the `WrappedxDaiContract` and `get_balances` functions appropriately.

`418-433`: **Verify the commented-out code block.**

This code block appears to be commented out. It seems to be creating a fresh account, funding it with a deposit amount, and setting up API keys. However, it's unclear whether this code is still needed or whether it has been replaced by the code above it.

Please review the commented-out code and determine if it's still required. If not, consider removing it to improve code clarity and maintainability.

`498-498`: **LGTM!**

The `test_place_bet_with_prev_existing_positions` function is well-structured and covers the scenario of placing a bet with previously existing positions. It fetches an open binary market, places a bet using a standard account, retrieves the position balance, mocks the `get_positions` function to return the existing position, and then attempts to liquidate the position. Finally, it asserts that the position balance after selling is less than 1% of the original balance, indicating successful liquidation.
Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

⛔ Files ignored due to path filters (1)

- `poetry.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (3)

- prediction_market_agent_tooling/deploy/agent.py (6 hunks)
- prediction_market_agent_tooling/markets/omen/omen_contracts.py (2 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (7 hunks)

🚧 Files skipped from review as they are similar to previous changes (2)

- prediction_market_agent_tooling/deploy/agent.py
- prediction_market_agent_tooling/markets/omen/omen_contracts.py
…methods-to-pmat-for-storing-prediction-into-ipfs-and-adding-that-to-contract-mapping

# Conflicts:
#	tests_integration_with_local_chain/markets/omen/test_local_chain.py
prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (outdated comment, resolved)
```diff
@@ -281,7 +281,7 @@ def test_zero_bets() -> None:
         target_p_yes=market.current_p_yes,
         fee=market.fee,
     )
-    assert np.isclose(market_moving_bet.size, 0.0, atol=1e-4)
+    assert np.isclose(market_moving_bet.size, 0.0, atol=1e-3)
```
eek! flaky test?
I had to increase the absolute tolerance - for some markets it fails, for others it works. It should still be a valuable test, though - the tolerance is 0.1 cent of xDAI (atol=1e-3 xDai = 0.001 xDai), so hopefully still good enough.
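To make the magnitude concrete, a quick illustrative check of what that tolerance means (not from the PR):

```python
import numpy as np

# atol=1e-3 xDai is 0.001 xDai, i.e. 0.1 cent: sizes within that band count as zero
assert np.isclose(0.0008, 0.0, atol=1e-3)  # passes: within 0.1 cent of zero
assert not np.isclose(0.002, 0.0, atol=1e-3)  # 0.2 cent away: no longer "close"
```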
```diff
@@ -73,3 +76,16 @@ def fund_account_on_tenderly(
     }
     response = requests.post(fork_rpc, json=payload)
     response.raise_for_status()
+
+
+def create_and_fund_random_account(
```
This looks unrelated to the rest of the PR. I see you've added a test for it, but it's not used anywhere in the code. No biggie, but it would make sense in a separate PR.
I agree - created a new PR and removed this from this PR - #460
Co-authored-by: Evan Griffiths <[email protected]>
Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)

tests_integration_with_local_chain/markets/omen/test_omen.py (2)

`360-387`: **LGTM! Consider adding an initial balance check.**

The `test_deposit_and_withdraw_wxdai` function effectively tests the deposit and withdrawal of Wrapped xDai. It creates a fresh account, performs the necessary operations, and verifies the balances.

To enhance the test, consider adding an assertion to check the initial xDai balance before depositing. This would ensure that the account has sufficient funds for the deposit operation.

```python
initial_balance = get_balances(address=fresh_account.address, web3=local_web3)
assert initial_balance.xdai >= deposit_amount, "Insufficient xDai for deposit"
```

Line range hint `478-556`: **LGTM! Consider adding a comment explaining the 1% threshold.**

The updates to `test_place_bet_with_prev_existing_positions` effectively test the liquidation of existing positions without relying on the subgraph. The use of mocking for position retrieval is a good approach for isolating the test.

To improve clarity, consider adding a comment explaining the 1% threshold used in the final assertion. This would help other developers understand the reasoning behind this specific value.

```python
# We assert that positions were liquidated if < 1% of the original outcome tokens bought remain
# in the position. This is because of implementation details in the ConditionalTokens contract,
# avoiding the position to be fully sold.
assert position_balance_after_sell < 0.01 * position_balance  # xDAI
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📒 Files selected for processing (2)

- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py (4 hunks)
- tests_integration_with_local_chain/markets/omen/test_omen.py (6 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)

- prediction_market_agent_tooling/markets/omen/omen_subgraph_handler.py
```python
def test_add_predictions(local_web3: Web3, test_keys: APIKeys) -> None:
    agent_result_mapping = OmenAgentResultMappingContract()
    market_address = test_keys.public_key
    dummy_transaction_hash = (
        "0x3750ffa211dab39b4d0711eb27b02b56a17fa9d257ee549baa3110725fd1d41b"
    )
    p = ContractPrediction(
        tx_hashes=[HexBytes(dummy_transaction_hash)],
        estimated_probability_bps=5454,
        ipfs_hash=HexBytes(dummy_transaction_hash),
        publisher=test_keys.public_key,
    )

    agent_result_mapping.add_prediction(test_keys, market_address, p, web3=local_web3)
    stored_predictions = agent_result_mapping.get_predictions(
        market_address, web3=local_web3
    )
    assert len(stored_predictions) == 1
    assert stored_predictions[0] == p
```
🛠️ Refactor suggestion
Enhance test coverage and realism in test_add_predictions
While the test covers basic functionality, consider the following improvements:

- Use a more realistic market address instead of `test_keys.public_key`.
- Use distinct values for `tx_hash` and `ipfs_hash` to better simulate real-world scenarios.
- Add negative test cases (e.g., adding duplicate predictions, invalid data).
- Include error handling to test exception scenarios.
- Add cleanup after the test to ensure a clean state for subsequent tests.
Here's a suggested improvement:
```python
def test_add_predictions(local_web3: Web3, test_keys: APIKeys) -> None:
    agent_result_mapping = OmenAgentResultMappingContract()
    market_address = Web3.to_checksum_address('0x1234567890123456789012345678901234567890')
    dummy_transaction_hash = HexBytes('0x3750ffa211dab39b4d0711eb27b02b56a17fa9d257ee549baa3110725fd1d41b')
    dummy_ipfs_hash = HexBytes('0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef')
    p = ContractPrediction(
        tx_hashes=[dummy_transaction_hash],
        estimated_probability_bps=5454,
        ipfs_hash=dummy_ipfs_hash,
        publisher=test_keys.public_key,
    )

    # Test adding a valid prediction
    agent_result_mapping.add_prediction(test_keys, market_address, p, web3=local_web3)
    stored_predictions = agent_result_mapping.get_predictions(market_address, web3=local_web3)
    assert len(stored_predictions) == 1
    assert stored_predictions[0] == p

    # Test adding a duplicate prediction
    with pytest.raises(Exception):  # Replace with specific exception if known
        agent_result_mapping.add_prediction(test_keys, market_address, p, web3=local_web3)

    # Test adding an invalid prediction (e.g., with probability > 10000 bps)
    invalid_p = ContractPrediction(
        tx_hashes=[dummy_transaction_hash],
        estimated_probability_bps=10001,
        ipfs_hash=dummy_ipfs_hash,
        publisher=test_keys.public_key,
    )
    with pytest.raises(ValueError):
        agent_result_mapping.add_prediction(test_keys, market_address, invalid_p, web3=local_web3)

    # Cleanup
    # Add code to remove added predictions or reset the contract state
```
This improved version:

- Uses a separate dummy market address
- Uses distinct values for `tx_hash` and `ipfs_hash`
- Adds a test case for adding duplicate predictions
- Includes a negative test case for adding an invalid prediction
- Suggests adding cleanup code

Remember to import `pytest` if it's not already imported in the file.
No description provided.