I cloned the repo, installed requirements.txt, and configured my API key. example.py then seems to run fine up to a certain point, as seen in the output of the last loop quoted below.
When tree_of_thoughts.solve is called, however, it produces a list of three None states, which crashes the script at the "\n".join(state) call in evaluate_states before the Monte Carlo search can complete.
The full traceback is appended at the end.
Have I missed any steps here?
The bug occurs with the default 24 game prompt provided. I have tried other GPT models and different initial prompts with no luck.
The exact same problem occurs with huggingface_example.py, so it should be easy to reproduce.
Has anyone seen this before?
Output
Initializing Autonomous Agent tree_of_thoughts...
Autonomous Agent Activated.
All systems operational. Executing task...
Loop 1 of 1
Observation: The goal is to obtain a result of 24 using each of the four numbers provided (2, 8, 8, 14) exactly once and using basic arithmetic operations (+, -, *, /).
Thoughts: To achieve 24, I must consider combinations that can multiply or add up to 24 or its multiples/divisors. The number 14 stands out as it is the largest and can be used with other smaller numbers to adjust the final result towards 24. Because division by 2 yields a whole number when applied to both 14 and 8, and multiplication with 8 can reach close to 24, these operations are particularly promising.
Let's try multiplying the largest numbers and then see if we can adjust the result with the remaining numbers:
14 * 2 = 28 (Now we need to subtract 4 to get to 24, and we have two eights left)
Option 1:
14 * 2 = 28
8 / 8 = 1
28 - 1 = 27 (This is too high and does not use all numbers effectively).
Option 2:
From here, we realize that if we divide 8 by 8, we get 1 which can be subtracted from 28 to get to 27. We need to think of a way to subtract just enough to reach 24.
Continuing to explore our options, let's consider using the two 8's together, as their product is close to 24.
8 / 8 = 1 (Now we have 1, 2, and 14 left to reach 24)
14 + 1 = 15
2 + 15 = 17 (This is still not 24)
However, if we multiply 8 by 3, we can get 24. We can obtain 3 by adding 2 and 1. So:
8 * (2 + 1) = 24
We get:
8 * 3 = 24
And 3 can be obtained from 8/8 + 2.
Finally, find the arrangement that meets the requirements of the task.
Solution:
(8 / 8) + 2 = 3
8 * 3 = 24
Final Equation: 8 * ((8 / 8) + 2) = 24
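As an aside, the arithmetic in the model's final equation does check out, although it uses 8 three times and never uses 14, so it would not actually satisfy the use-each-number-once rule; that is a separate issue from the crash below. A quick sanity check in plain Python:

```python
# The model's proposed solution: 8 * ((8 / 8) + 2)
# Arithmetically correct (evaluates to 24.0), but it draws on 8 three
# times and skips 14, so it does not use each of 2, 8, 8, 14 exactly once.
result = 8 * ((8 / 8) + 2)
assert result == 24
print(result)  # 24.0
```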
Error:
Traceback (most recent call last):
File "C:\Users\theo_\coding_workspace\gpt_tree_of_thoughts\tree-of-thoughts\example.py", line 62, in <module>
solution = tree_of_thoughts.solve(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\theo_\coding_workspace\gpt_tree_of_thoughts\tree-of-thoughts\tree_of_thoughts\treeofthoughts.py", line 559, in solve
return self.monte_carlo_search(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\theo_\coding_workspace\gpt_tree_of_thoughts\tree-of-thoughts\tree_of_thoughts\treeofthoughts.py", line 608, in monte_carlo_search
evaluated_thoughts = self.model.evaluate_states(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\theo_\coding_workspace\gpt_tree_of_thoughts\tree-of-thoughts\tree_of_thoughts\tot_agent.py", line 141, in evaluate_states
state_text = "\n".join(state)
^^^^^^^^^^^^^^^^
TypeError: can only join an iterable
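For what it's worth, the TypeError itself is trivial to reproduce outside the repo, and a defensive filter makes the symptom go away. The sketch below is only that: filter_states is my own hypothetical helper, and it assumes each valid state is an iterable of thought strings, which is what the "\n".join(state) call at tot_agent.py line 141 implies.

```python
# Reproduce the failure in isolation: str.join requires an iterable,
# and a None state is not one.
try:
    "\n".join(None)
except TypeError as e:
    print(e)  # can only join an iterable

# Hypothetical workaround sketch: drop None states before they reach
# evaluate_states. This only hides the symptom; the real question is why
# solve produces [None, None, None] in the first place.
def filter_states(states):
    valid = [s for s in states if s is not None]
    if not valid:
        raise ValueError("every state was None; thought generation failed upstream")
    return valid
```

Filtering only masks the crash, though; the interesting bug is whatever makes the generation step return None for every state.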