Update Request Enhancement #777
/start
Too many assigned issues, you have reached your max of 2
Or would some basic input validation be enough for now? Min character count: 60
That is 74 characters, and you could squeeze that many out no problem.
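For reference, a minimal sketch of the length check being floated here, assuming the 60-character minimum from the comment above; the function name and usage are hypothetical, not the bot's actual code:

```ts
// Basic length guard discussed above; 60 is the proposed minimum character count.
const MIN_UPDATE_LENGTH = 60;

function isUpdateLongEnough(comment: string): boolean {
  // Trim so leading/trailing whitespace doesn't pad the count.
  return comment.trim().length >= MIN_UPDATE_LENGTH;
}

// Filler text can still clear the bar easily, which is the weakness raised above.
isUpdateLongEnough("Hey, just checking in, everything is going well, more updates coming soon!"); // true
```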
This is a nice band-aid actually, but I would be more comfortable if we checked real-world examples of good and bad updates to define the constants. I'd appreciate it if somebody can help with the research, because I can't at the moment.
I will do some actual research into it today and get back to you |
So we do not want to shackle hunters by enforcing a template or structure; I get that. Given the observations in the table below, I want to make the following points:
I suggest the following:
Thanks a ton for the research! The pull request links are insufficient, because the update request itself implies that no commits were added in a few days, which means that they stopped working. With this update comment, we want to learn why they stopped working on the code. I agree on everything, but:
Word count no longer seems sufficient, because "Wait my boy, patience" is a funny four-word way to not say anything meaningful, whereas "@ubiquibot waiting for merge" is a four-word and meaningful way to say that it's the pull request reviewers' fault for not merging more promptly (although we have another spec for following up with reviewers instead of the assignees in this situation, which will eventually be implemented).
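To illustrate the point, a quick sketch (hypothetical helper, not from the codebase) showing that a naive word count scores both of those four-word examples identically:

```ts
// A naive word count can't separate a meaningful status from filler:
// both four-word examples from the comment above score the same.
function wordCount(comment: string): number {
  return comment.trim().split(/\s+/).length;
}

wordCount("Wait my boy, patience");        // 4 (says nothing actionable)
wordCount("@ubiquibot waiting for merge");  // 4 (a genuine, meaningful status)
```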
I feel, then, that we'd need some form of NLP, if not integrating AI in the first iteration, and since we are set against adding any more packages this could be very tricky. I think that the best move may be the AI approach from the get-go; it has the capacity to cover all edge cases without adding anything into the build that we don't already have.

What we know

What we need

My blockers

Alternative

Perhaps it may be easier to simply include in the bot comment something like
This leaves out anything that is already part of the commit data, which you request as part of the spec, and ensures that onlookers know roughly how long before things are complete. Below is a better-looking version, I think. What purpose does it serve to show when the last activity time was? Instead, replace that with a dig at how to build a good update comment.
Let's do it.
Let's change to: @user, we haven't seen recent activity on this task. Please update us on your progress. Otherwise you can release it back to the DevPool by commenting
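A rough sketch of how that reworded reminder could be assembled; the release command at the end isn't quoted in this thread, so it is left as a placeholder, and the function name is hypothetical:

```ts
// Sketch of assembling the reworded reminder; RELEASE_COMMAND is a placeholder because
// the exact slash command isn't quoted in this thread.
const RELEASE_COMMAND = "<release command>";

function buildUpdateRequest(username: string): string {
  return (
    `@${username}, we haven't seen recent activity on this task. ` +
    `Please update us on your progress. ` +
    `Otherwise you can release it back to the DevPool by commenting ${RELEASE_COMMAND}`
  );
}
```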
I'm expecting that we'll get "real" updates once the assignees realize that it's an AI, particularly after their first interaction with it and it replying that it needs a valid update or they will be unassigned.
Sweet, I'll take this if you can assign it to me.
Most definitely: the fear factor of it looking like it's actively being checked, as opposed to an almost fake sort of check-up. So I'm clear:
Should the consequences be listed as part of the AI response, or keep a lil mystery and just leave it at "or face the consequences"? 🤣
I updated the spec. Hopefully that offers all the clarity that you need?
Ah ideal!
So I think that we should use something like:

Why I think we should use this:
GPT will easily be able to cover scenario 3 given the conversation history, names, and timestamps. It could more than likely be done without adding timestamps as well, but for extra accuracy I think it would benefit from them.
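As a sketch of what that could look like (assuming the OpenAI chat completions API; the prompt wording, model choice, and helper names here are placeholders rather than the actual prompt referenced above):

```ts
import OpenAI from "openai";

interface TimelineEntry {
  author: string;
  timestamp: string; // ISO 8601
  body: string;
}

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical helper: ask the model whether the assignee's latest reply is a genuine
// progress update, giving it the conversation history with author names and timestamps.
async function isValidUpdate(history: TimelineEntry[], reply: TimelineEntry): Promise<boolean> {
  const transcript = [...history, reply]
    .map((entry) => `[${entry.timestamp}] ${entry.author}: ${entry.body}`)
    .join("\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are reviewing a GitHub task conversation. Answer only 'valid' or 'invalid': " +
          "is the assignee's latest reply a genuine progress update (work done, blockers, or an ETA)?",
      },
      { role: "user", content: transcript },
    ],
  });

  return completion.choices[0].message.content?.trim().toLowerCase().startsWith("valid") ?? false;
}
```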
Sounds like a plan @Keyrxng |
Finishing up Blame and this through the week, then I'll tackle Emma/Agents at the weekend.
Occasionally I see that assignees do not take the update request message seriously, and a simple fix would be to have the bot pass the reply into ChatGPT to understand if it's a valid update message.
If not, the bot should:
I just realized that the conversation state might be difficult to track.
Scenarios
Action Flows
📌 Originally posted by @seprintour in #743 (comment)