
[Bug]: Input text is included in both System and User context #1203

Open
scross01 opened this issue Dec 14, 2024 · 13 comments
Labels
bug Something isn't working

Comments

@scross01

What happened?

With the default settings I'm getting the original content that is piped to fabric included in both the system prompt and the user prompt, this can be seen when using the --dry-run option:

echo "This is a test" | fabric -p my_test --dry-run

Dry run: Would send the following request:
System:
You are highly skilled document summarization AI.

# OUTPUT:

Summarize the content in a single paragraph with no more than 100 words.

# INPUT:
this is a test

User:
this is a test

Options:
Model: llama3.2:latest
Temperature: 0.700000
TopP: 0.900000
PresencePenalty: 0.000000
FrequencyPenalty: 0.000000
empty response

The expected output should include the input this is a test only in the User content, not repeated in the System prompt.

I've found that the duplication of the user content in the system prompt leads to very poor prompt adherence, and also has the side effect of doubling the required model context window size.

The cause of the issue seems to be here in the pattern applyVariables, which forces the {{input}} to be included. Commenting out this block fixes the problem, but breaks the --raw option, which does need the pattern and input combined.

Version check

  • Yes I was.

Relevant log output

No response

Relevant screenshots (optional)

No response

@scross01 scross01 added the bug Something isn't working label Dec 14, 2024
@mattjoyce
Contributor

Can you share the my_test pattern, please?

@mattjoyce
Contributor

Oh, I can replicate.

@scross01 {{input}} is appended if it's not in the pattern, and then replaced with the actual user input.
This was implemented to allow user input to be moved to other locations in a pattern, such as at the front, or encapsulated in tags.

The previous behaviour was simply to append the user input to the pattern.
So I think the problem is elsewhere. I will have a look.

@scross01
Author

scross01 commented Dec 15, 2024

For reference, this is the prompt file system.md. There is no {{input}} variable in the prompt content:

You are highly skilled document summarization AI.

# OUTPUT:

Summarize the content in a single paragraph with no more than 100 words.

# INPUT:

@mattjoyce
Contributor

Ah, I think I have found it. I will get a patch in later today.

@mattjoyce
Contributor

@scross01 can you try the latest build and report back?

@scross01
Author

@mattjoyce I'm still getting the input content in both the system and user prompt.

I installed the latest from source:

$ go install github.com/danielmiessler/fabric@latest
$ fabric --version
v1.4.123
$ echo "This is a another test" | fabric -p my_summarize --dry-run
Dry run: Would send the following request:
System:
You are highly skilled document summarization AI.

# OUTPUT:

Summarize the content in a single paragraph with no more than 100 words.

# INPUT:
This is a another test

User:
This is a another test

Options:
Model: llama3.2:3b
Temperature: 0.700000
TopP: 0.900000
PresencePenalty: 0.000000
FrequencyPenalty: 0.000000
empty response

@mattjoyce
Contributor

@scross01
I cannot replicate this.

~ ./fabric --version
v1.4.124
~ echo "This is a another test" | fabric -p ai --dry-run
Dry run: Would send the following request:
System:
# IDENTITY and PURPOSE

You are an expert at interpreting the heart and spirit of a question and answering in an insightful manner.

# STEPS

- Deeply understand what's being asked.

- Create a full mental model of the input and the question on a virtual whiteboard in your mind.

- Answer the question in 3-5 Markdown bullets of 10 words each.

# OUTPUT INSTRUCTIONS

- Only output Markdown bullets.

- Do not output warnings or notes—just the requested sections.

# INPUT:

INPUT:

User:
This is a another test

Options:
Model: gpt-4o
Temperature: 0.700000
TopP: 0.900000
PresencePenalty: 0.000000
FrequencyPenalty: 0.000000
empty response

@scross01
Author

scross01 commented Dec 22, 2024

Nope, sorry to say it's still not working for me. I've tried two installation methods just to double-check, and get the same result with both.

$ go install github.com/danielmiessler/fabric@latest
go: downloading github.com/danielmiessler/fabric v1.4.124 

$ fabric --version
v1.4.124
$ git clone https://github.com/danielmiessler/fabric.git
$ cd fabric
$ go build .
$ ./fabric --version

At the time of testing:

git log
commit 6d00405eb66e7c232e8c558407587d190fa4ed13 (HEAD -> main, tag: v1.4.124, origin/main, origin/HEAD)
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Date:   Sat Dec 21 14:01:47 2024 +0000

    Update version to v1.4.124 and commit

Result from both installs:

$ echo "this is a test message with v1.4.124" | ./fabric --model llama3.2:3b -p ai --dry-run
Dry run: Would send the following request:
System:
# IDENTITY and PURPOSE

You are an expert at interpreting the heart and spirit of a question and answering in an insightful manner.

# STEPS

- Deeply understand what's being asked.

- Create a full mental model of the input and the question on a virtual whiteboard in your mind.

- Answer the question in 3-5 Markdown bullets of 10 words each.

# OUTPUT INSTRUCTIONS

- Only output Markdown bullets.

- Do not output warnings or notes—just the requested sections.

# INPUT:

INPUT:
this is a test message with v1.4.124

User:
this is a test message with v1.4.124

Options:
Model: llama3.2:3b
Temperature: 0.700000
TopP: 0.900000
PresencePenalty: 0.000000
FrequencyPenalty: 0.000000
empty response

@mattjoyce
Contributor

@scross01 Sorry, you were sort of right to begin with: it is caused by the implementation of the template system.
Previously the message structure was

system(context, pattern(vars))
user(input)
assistant(response)

What we have now is

system(context, pattern(vars, input(vars)))
user(input(vars))
assistant(response)
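The two shapes can be sketched as message lists (illustrative Go only; the `Message` type and the build functions are assumptions, not fabric's code):

```go
package main

import "fmt"

// Message mirrors the role/content pairs of a chat session (illustrative).
type Message struct{ Role, Content string }

// buildOld reflects the previous structure: pattern and input stay separate.
func buildOld(pattern, input string) []Message {
	return []Message{
		{"system", pattern},
		{"user", input},
	}
}

// buildNew reflects the current structure: the input is substituted into
// the pattern, so the system message carries a second copy of it.
func buildNew(pattern, input string) []Message {
	return []Message{
		{"system", pattern + "\n" + input}, // duplication happens here
		{"user", input},
	}
}

func main() {
	for _, m := range buildNew("# INPUT:", "this is a test") {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```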

@mattjoyce
Contributor

So my proposal is to structure the initial messages like this.

[image: proposed message structure]

@eugeis this is a departure from the original fabric message structure, but I think it will not have any detrimental effect.
Are you supportive?

My testing works fine.

@eugeis
Collaborator

eugeis commented Dec 23, 2024

Hi @mattjoyce, thank you for the great visualization!

I believe the system role plays a much more significant part in certain models. While it might work fine for a single message and response, in the context of session-based conversations, we risk losing the clarity and impact of the initial prompt.

What are your thoughts?

@mattjoyce
Contributor

I think we should not change it.
Keep what has been working, and put the user input in a user role message.

It means we cannot move the user input around, but folks can use vars and plugins to achieve the same result with a little effort.

I will need to adjust chatter.go and pattern.go to build the session like this.
[image: revised session structure]

@mattjoyce
Contributor

@eugeis , I can do this tomorrow, and then I will be away for a bit.
