
[Discussion?] Some thoughts/ideas for the future #88

Open

bafterfor opened this issue Jul 7, 2024 · 2 comments

@bafterfor

I love this project and think it could be really great one day with more improvements. Some directions I think would be interesting to consider:

Put the sessions box into its own panel on the left side, mirroring the panel on the right. I've got so many sessions now that it would be useful to have a larger list of them visible all the time.

The ability to give sessions tags, and to filter the session list by tag, would also be useful.

A session branching tree system. Not entirely unlike what's in chat frontends, but something made for generic completion, and more powerful. For instance, you could have a single session that then has branches, or children. They don't even need to share any of the same text. You could imagine it like the parent and child object system in 3D game engines (https://youtu.be/XtQMytORBmM?t=273). So, for instance, there might be a button that says "clone to child" or something to that effect, and instead of creating an adjacent copy of the session, it creates a child. Perhaps the user could set the Regenerate button to automatically spawn a new child (or not), thereby creating the equivalent of a branch in LLM chat frontends.
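To make the idea concrete, here's a rough sketch of how such a tree might be stored (all names are made up; this isn't mikupad's actual schema):

```js
// Hypothetical shape for a branching session store - not mikupad's actual schema.
// Each session carries a parentId, so the flat list doubles as a tree and
// "clone to child" is just a copy that points back at its parent.
const sessions = new Map(); // id -> session object

function cloneToChild(parentId) {
  const parent = sessions.get(parentId);
  const child = {
    id: crypto.randomUUID(),
    parentId,                        // link back to the parent session
    name: `${parent.name} (branch)`,
    prompt: parent.prompt,           // children are free to diverge later
  };
  sessions.set(child.id, child);
  return child;
}

// Rebuilding the tree for display:
const childrenOf = (id) =>
  [...sessions.values()].filter((s) => s.parentId === id);
```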

Generate from anywhere you place your cursor. This would be useful, say, when I just want the LLM to fill in something. Of course, most LLMs these days are not trained to do fill-in-the-middle tasks, but it would still be useful to be able to just start generating text somewhere back in the context, instead of copying the bottom half of the text to the clipboard, deleting it, generating, and then pasting it back in.

The ability to set how many token probabilities you want to see listed. Sometimes I just want to see more possible logits. Letting the user control how many they want to see would be useful.

The ability to run commands on your desktop by scanning the context. For instance, maybe I'm working on something in Mikupad and want to let it do something on my PC. I could explicitly write out which commands I'd like to whitelist, and it could scan the context and execute them when it encounters those commands. I assume this kind of functionality is not possible through an HTML file, and we'd need a different strategy to get it working. But, still, just an idea of something that I'd personally love to see happen somehow. Perhaps a script on your desktop could monitor files saved from the browser, not sure.
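Purely to illustrate that last point, a desktop-side watcher could look something like this (everything here is hypothetical - the file name, the polling, and the whitelist format are all made up):

```js
// Hypothetical Node.js watcher: polls a file the browser session is saved or
// exported to, and executes only exact, pre-whitelisted commands found in it.
const fs = require("fs");
const { execSync } = require("child_process");

const WHITELIST = new Set(["notify-send done", "make build"]); // user-defined
const seen = new Set();

setInterval(() => {
  if (!fs.existsSync("context.txt")) return;  // nothing exported yet
  const text = fs.readFileSync("context.txt", "utf8");
  for (const line of text.split("\n")) {
    const cmd = line.trim();
    if (WHITELIST.has(cmd) && !seen.has(cmd)) {
      seen.add(cmd);                          // run each command at most once
      execSync(cmd);
    }
  }
}, 1000);
```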

A sampler settings save system, so we can quickly switch to specific preset values.

Highlight the instruct formatting strings or color them. Sometimes the text just visually feels like a huge goopy block when doing chat mode, depending on the model. Having the instruct formatting strings stick out in some way (say by making the text green?) would make it a lot easier to look at.

Overall I think some of these changes could make this a really powerful tool, more than it already is. I don't expect anyone to do this of course though. This project is already a gift.

@jukofyork
Contributor

> The ability to set how many token probabilities you want to see listed. Sometimes I just want to see more possible logits. Letting the user control how many they want to see would be useful.

This would be very easy to change, but it would end up being another option box... :/

I can make a PR that does this quite easily if you want to give it a try - I already tracked down where the "10" is set while removing the zero probabilities in the PR that was just merged.
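A minimal sketch of what that could look like (the setting and variable names here are made up, not mikupad's actual code):

```js
// Sketch only: replace a hardcoded top-10 cutoff with a user-controlled setting.
// `tokenProbs` is assumed to be an array of { token, prob } sorted by prob.
const topProbsCount = Number(localStorage.getItem("topProbsCount") ?? 10);

function visibleProbs(tokenProbs) {
  return tokenProbs
    .filter((p) => p.prob > 0)  // keep dropping the zero-probability entries
    .slice(0, topProbsCount);   // but show as many as the user asked for
}
```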

> A sampler settings save system, so we can quickly switch to specific preset values.

This would be a good idea too. Are there any samplers people wish they had that aren't included? I actually did some experiments trying to change the colours based on entropy and normalized "surprisal", but didn't think it looked better than the existing colours... BUT, I could easily add other samplers now that I know how to access the probabilities.
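(For reference, both quantities are cheap to compute from the per-token probability list. A quick sketch - the normalization shown is just one plausible choice, not necessarily what was used in the experiments mentioned above:)

```js
// Sketch: Shannon entropy of the shown candidate distribution, and the chosen
// token's surprisal normalized by the maximum for a uniform distribution.
function entropy(probs) {
  // probs: array of probabilities for the candidate tokens
  return -probs.reduce((h, p) => (p > 0 ? h + p * Math.log2(p) : h), 0);
}

function normalizedSurprisal(chosenProb, probs) {
  const maxBits = Math.log2(probs.length); // entropy of a uniform distribution
  return -Math.log2(chosenProb) / maxBits; // can exceed 1 for very unlikely tokens
}
```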

> Highlight the instruct formatting strings or color them. Sometimes the text just visually feels like a huge goopy block when doing chat mode, depending on the model. Having the instruct formatting strings stick out in some way (say by making the text green?) would make it a lot easier to look at.

Yeah, I agree - I think just making them low-contrast or grey would probably work well... Anything that makes them merge into the background as you scan the page would work IMO.
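A rough sketch of that approach, assuming the template strings are known from the selected prompt format (the class name is made up, and HTML-escaping edge cases are ignored):

```js
// Sketch: wrap known instruct-template strings in a low-contrast span.
// `templateStrings` would come from the selected prompt format,
// e.g. ["<|im_start|>", "<|im_end|>"].
function dimTemplateStrings(html, templateStrings) {
  for (const s of templateStrings) {
    const escaped = s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // regex-escape
    html = html.replace(new RegExp(escaped, "g"),
      (m) => `<span class="template-dim">${m}</span>`);
  }
  return html;
}
// CSS to match: .template-dim { color: #888; }
```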

> Overall I think some of these changes could make this a really powerful tool, more than it already is. I don't expect anyone to do this of course though. This project is already a gift.

Yeah, I'm finding this project to be super-useful too! There is nothing like it for easily experimenting with prompt templates (things like Ollama are so opaque you have no idea what is actually getting sent to the backend...), and for creative writing tests it's the best tool I've found by far! 👍

@lmg-anon
Owner

lmg-anon commented Jul 8, 2024

Great ideas! Here are some thoughts about some of them:

> A session branching tree system. Not entirely unlike what's in chat frontends, but something made for generic completion, and more powerful. For instance, you could have a single session that then has branches, or children. [...] thereby creating the equivalent of a branch in LLM chat frontends.

This has crossed my mind before - I even tried to do it already, but failed to think of a good way to implement it. Maybe I should try again soon.

> Generate from anywhere you place your cursor. This would be useful, say, when I just want the LLM to fill in something.

Well, I have good news for you. This is already possible - all you have to do is type {predict} where you want the LLM to start generating. Each time you hit "Generate", one of the placeholders will be filled. You can also use a fill-in-the-middle prompt with the {fill} placeholder if your chosen prompt format supports it.
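For example, with a prompt like this, hitting "Generate" will generate text at the placeholder instead of at the end:

```
Once upon a time, {predict}

And they all lived happily ever after.
```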

> The ability to run commands on your desktop by scanning the context.

This feels way too random. And, honestly, I feel like such a feature would fit a chat interface better than something like Mikupad. However, maybe something like that could be a good fit for an "extension system".

> Highlight the instruct formatting strings or color them.

This is pretty much issue #45 btw.
