
About the 'logprobs' in the response object #24

Open
HamLaertes opened this issue Nov 2, 2023 · 2 comments

Comments

@HamLaertes

Hi,

I see that the simulator postprocesses the response from 'text-davinci-003' using the 'logprobs' field of the completion response.

However, as I read the OpenAI documentation, the 'logprobs' field is being deprecated, since the completion response object is being replaced by the chat completion object, and the model 'text-davinci-003' itself is also being deprecated.

I now have access to gpt-4 and gpt-3.5-turbo, which return chat completion response objects. Is there any way to run neuron-explainer with these two models, i.e. without the 'logprobs' field?

Or is it necessary to call 'text-davinci-003' or other models whose completion response objects return 'logprobs'?

Thanks a lot!
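[Editor's note: since this issue was filed, the chat completions endpoint has gained logprobs support, as the later comment in this thread shows. A minimal sketch, assuming the openai>=1.x Python SDK and its logprobs / top_logprobs parameters; the model name and prompt are illustrative. Only the request parameters are built here, so the snippet runs offline; uncomment the client call to send it.]

```python
# Sketch (assumption: openai>=1.x Python SDK, whose chat completions accept
# `logprobs` and `top_logprobs`). Building the parameters separately lets
# them be inspected without an API key.
request_params = dict(
    model="gpt-3.5-turbo",  # or "gpt-4"; illustrative choice
    messages=[{"role": "user", "content": "The capital of France is"}],
    max_tokens=5,
    logprobs=True,    # return a log probability for each sampled token
    top_logprobs=5,   # also return the 5 most likely alternatives per token
)

# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**request_params)
# for entry in resp.choices[0].logprobs.content:
#     print(entry.token, entry.logprob, entry.top_logprobs)
```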

@williamrs-openai
Contributor

williamrs-openai commented Nov 2, 2023 via email

@msra-jqxu

msra-jqxu commented Sep 13, 2024

Hi @williamrs-openai, could you explain how the variables in L209 to L212 correspond to the logprobs results returned by gpt4-1106? In particular, choice["logprobs"]["text_offset"] confuses me.

Here I set logprobs=True and top_logprobs=1. choice.logprobs.content includes:
[screenshot of choice.logprobs.content entries omitted]

By the way, what value should I set top_logprobs to?
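[Editor's note: one way to read the correspondence. A hedged sketch, not neuron-explainer's actual code: it rebuilds the legacy completion-style logprobs dict (tokens / token_logprobs / text_offset / top_logprobs) from chat-style logprobs.content entries. text_offset[i] is simply the character position where token i starts, so it can be reconstructed by accumulating token string lengths. The sample entries are illustrative, not real model output.]

```python
# Hypothetical helper: convert chat-completion-style `logprobs.content`
# entries into the legacy completion-style logprobs dict. Entry fields
# (`token`, `logprob`, `top_logprobs`) mirror the documented chat logprobs
# shape; sample values below are made up for illustration.

def chat_logprobs_to_legacy(content, prompt_len=0):
    """Rebuild tokens / token_logprobs / text_offset / top_logprobs.

    text_offset[i] is the character offset at which token i begins,
    counted from the start of the text (offset by prompt_len if the
    legacy API prepended the prompt).
    """
    tokens, token_logprobs, text_offset, top_logprobs = [], [], [], []
    offset = prompt_len
    for entry in content:
        tokens.append(entry["token"])
        token_logprobs.append(entry["logprob"])
        text_offset.append(offset)          # where this token starts
        offset += len(entry["token"])       # advance past this token
        top_logprobs.append(
            {alt["token"]: alt["logprob"] for alt in entry["top_logprobs"]}
        )
    return {
        "tokens": tokens,
        "token_logprobs": token_logprobs,
        "text_offset": text_offset,
        "top_logprobs": top_logprobs,
    }

# Illustrative chat-style entries (as returned with top_logprobs=1)
content = [
    {"token": "Hello", "logprob": -0.1,
     "top_logprobs": [{"token": "Hello", "logprob": -0.1}]},
    {"token": " world", "logprob": -0.5,
     "top_logprobs": [{"token": " world", "logprob": -0.5}]},
]
legacy = chat_logprobs_to_legacy(content)
print(legacy["text_offset"])  # [0, 5]
```

On the top_logprobs value: whatever number of alternatives the downstream simulator inspects is the minimum you need; requesting more than that only adds response size.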
