
  "finish_reason": "FinishReason.MAX_TOKENS" while llm output maxtoken set to 8192 #8550

Closed
5 tasks done
iray1991 opened this issue Sep 19, 2024 · 3 comments
Labels
🐞 bug Something isn't working

Comments

@iray1991

Self Checks

  • This is only for bug report, if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

Dify 0.8.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

[screenshot attached in the original issue]

✔️ Expected Behavior

No response

❌ Actual Behavior

No response


dosubot bot commented Sep 19, 2024

I found a similar issue that was previously reported and resolved. The issue titled "Output the finish reason value for LLM node" was closed by a pull request that added the finish_reason to the LLM node output [1].

Regarding your specific issue with "finish_reason": "FinishReason.MAX_TOKENS" even though max_tokens is set to 8192: the max_tokens setting limits the length of the generated output. Even if you set it to 8192 in the UI, the max_tokens parameter actually sent in the request may be lower, causing generation to stop as soon as that limit is reached. This is often done to conserve computational resources and improve model efficiency [2][3][4][5].

Additionally, the default value of the max_tokens parameter in Dify 0.8.0 is 512, which might be overriding your specified value [6]. There are also known limitations related to setting the max_tokens parameter too high, which can lead to errors if the total token count exceeds the model's limit [2][5][4].

You might want to check the parameter settings on the orchestration page to ensure that the max_tokens value is correctly set and not being overridden by any other configuration [5][4][2].
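
For illustration only, here is a minimal sketch of this behaviour using the google-generativeai Python SDK directly, outside Dify (the API key, prompt, and the deliberately low limit are placeholders, not your actual configuration):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

# A deliberately low output cap to reproduce the truncation behaviour.
response = model.generate_content(
    "Summarize this long document in detail: ...",
    generation_config=genai.types.GenerationConfig(max_output_tokens=64),
)

candidate = response.candidates[0]
print(candidate.finish_reason)    # FinishReason.MAX_TOKENS when the output cap is hit
print(response.usage_metadata)    # prompt/candidate/total token counts (recent SDK versions)

If Dify sends a smaller max_tokens than the value shown in the UI, the provider reports the same MAX_TOKENS finish reason even though the model itself supports a much larger context window.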

To continue talking to Dosu, mention @dosu.

@dosubot dosubot bot added the 🐞 bug label (Something isn't working) Sep 19, 2024
@iray1991
Author

I am using Gemini Flash, which has a 1M-token context window, but it shows this:

"usage": {
    "prompt_tokens": 38837,
    "prompt_unit_price": "0.00",
    "prompt_price_unit": "0.000001",
    "prompt_price": "0E-7",
    "completion_tokens": 143,
    "completion_unit_price": "0.00",
    "completion_price_unit": "0.000001",
    "completion_price": "0E-7",
    "total_tokens": 38980,
    "total_price": "0E-7",
    "currency": "USD",
    "latency": 5.037375543033704
  },
  "finish_reason": "FinishReason.MAX_TOKENS"

Then I switched to ChatGPT-4o mini and this time the response is complete, so I am guessing something is wrong with the Gemini model setup in Dify. Where can I set variables for the model?


dosubot bot commented Sep 19, 2024

To address the "FinishReason.MAX_TOKENS" issue with the Gemini model in Dify, you need to configure the model parameters, specifically the "maximum tokens" setting. This can be done in the LLM node configuration under the "Configure Model Parameters" section.

In the configuration file for the Gemini model (gemini-1.0-pro-vision.yaml), you can set the max_output_tokens parameter. The default value is 2048, but you can adjust it to a suitable value to prevent the model from hitting the token limit.

- name: max_output_tokens
  use_template: max_tokens
  required: true
  default: 2048
  min: 1
  max: 2048

Adjust the "max_output_tokens" parameter to a suitable value within the range of 1 to 2048 to prevent the model from hitting the token limit [1].
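
For a model with a larger output window, the same template pattern applies. As an illustrative sketch only (the file name gemini-1.5-flash.yaml and the 8192 ceiling are assumptions, not confirmed against your Dify version), a provider YAML entry could raise the cap like this:

- name: max_output_tokens
  use_template: max_tokens
  required: true
  default: 2048   # conservative default; raise it per request if responses get truncated
  min: 1
  max: 8192       # assumed output-token ceiling for Gemini 1.5 Flash; verify against the provider docs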

To continue talking to Dosu, mention @dosu.

@dosubot dosubot bot added the stale label (Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed) Oct 20, 2024
@dosubot dosubot bot closed this as not planned (Won't fix, can't repro, duplicate, stale) Nov 4, 2024
@dosubot dosubot bot removed the stale label (Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed) Nov 4, 2024