
connection tree view #616

Merged · 4 commits into main from envtreeview · Aug 13, 2024

Conversation

pelikhan
Member

@pelikhan pelikhan commented Aug 13, 2024

display connection information in a tree view

  • A secret masking mechanism has been added to the packages/cli/src/nodehost.ts file to ensure that tokens are not exposed in plain text. Secure move! 🔒

  • In packages/core/src/connection.ts, the document formatting has been adjusted by removing unnecessary indentation. Quite a neat cleanup! 🙂

  • The packages/sample/genaisrc/pr-review.genai.js file has been modified to include the instruction not to annotate individual lines of code, but to summarize changes. Better clarity! 👌

  • New updates have been introduced in the packages/vscode/package.json. This includes the addition of Connections under viewContainers and connection-configuration related settings under commands. New capabilities brewing! 🎉

  • A new file, packages/vscode/src/connectioninfotree.ts, has been brought in. It contains classes that handle connection information within a tree structure for a better hierarchical organization. Power of OOP! 🎯

  • The extension activation process in packages/vscode/src/extension.ts has been extended to incorporate the ConnectionInfoTree state. It also adds a command to configure connections. More flexibility and control here! 👏

  • The packages/vscode/src/lmaccess.ts file now counts tokens for each text chunk rather than for the whole accumulated text. This possibly enhances performance! 🚀

Note: changes in "packages/core/src/prompt_template.d.ts" and "packages/core/src/prompt_type.ts" would directly affect users, but no modifications to these files appear in this diff. 📚

generated by pr-describe
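The hierarchical organization the description mentions can be illustrated with a minimal sketch. This is not the actual code in packages/vscode/src/connectioninfotree.ts; the `ConnectionInfo` shape and function name here are illustrative assumptions showing how flat connection entries might be grouped into a provider → model hierarchy for a tree view.

```typescript
// Hypothetical shape of one connection entry; fields are assumptions.
interface ConnectionInfo {
    provider: string // e.g. "openai", "github"
    model: string
    base?: string
}

// One top-level tree node per provider, with its connections as children.
interface ProviderNode {
    provider: string
    children: ConnectionInfo[]
}

// Group flat entries by provider to form the tree's first level.
function buildConnectionTree(infos: ConnectionInfo[]): ProviderNode[] {
    const byProvider = new Map<string, ConnectionInfo[]>()
    for (const info of infos) {
        const group = byProvider.get(info.provider) ?? []
        group.push(info)
        byProvider.set(info.provider, group)
    }
    return [...byProvider.entries()].map(([provider, children]) => ({
        provider,
        children,
    }))
}
```

In the real extension, a structure like this would back a VS Code `TreeDataProvider`, whose `getChildren` returns provider nodes at the root and connection entries beneath them.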

@@ -152,6 +152,7 @@ export class NodeHost implements RuntimeHost {
const { signal, token: askToken } = options || {}
await this.parseDefaults()
const tok = await parseTokenFromEnv(process.env, modelId)
if (!askToken && tok?.token) tok.token = "***"


The promise returned by parseTokenFromEnv function is not handled. Consider using await or .then to handle the promise.

generated by pr-review-commit unhandled_promise
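The masking behavior shown in the diff above can be sketched as a small pure function. This is an assumption-laden illustration, not the NodeHost code itself: the real diff mutates `tok.token` in place, while this sketch returns a copy, and the `TokenInfo` type is hypothetical.

```typescript
// Hypothetical shape of a parsed token record.
interface TokenInfo {
    token?: string
    base?: string
}

// Unless the caller explicitly asks for the raw token, replace it with
// "***" so it cannot leak into logs or UI (mirrors the diff's intent).
function maskSecret(
    tok: TokenInfo | undefined,
    askToken: boolean
): TokenInfo | undefined {
    if (!askToken && tok?.token) return { ...tok, token: "***" }
    return tok
}
```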

await this.state.host.server.client.getLanguageModelConfiguration(
element.provider + ":*",
{ token: false }
)


The promise returned by getLanguageModelConfiguration function is not handled. Consider using await or .then to handle the promise.

generated by pr-review-commit unhandled_promise
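The snippet above queries the server with `element.provider + ":*"`, i.e. a provider-scoped wildcard over model ids. How the server interprets that pattern is not shown in this diff; the matcher below is a hedged guess at the semantics, not the project's actual implementation.

```typescript
// Hypothetical matcher for "provider:model" ids against a
// "provider:*"-style pattern, as suggested by the query above.
function matchesModelPattern(pattern: string, modelId: string): boolean {
    const [patternProvider, patternModel] = pattern.split(":")
    const [provider, model] = modelId.split(":")
    if (patternProvider !== provider) return false
    return patternModel === "*" || patternModel === model
}
```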

@@ -226,7 +225,7 @@ export function createChatModelRunner(
text += fragment
onChunk({
chunk: fragment,
tokens: await chatModel.countTokens(text),
tokens: await chatModel.countTokens(fragment),


The promise returned by countTokens function is not handled. Consider using await or .then to handle the promise.

generated by pr-review-commit unhandled_promise


The changes in the pull request mostly seem fine and should improve the functionality of the code. However, I do have a concern about the following change in packages/vscode/src/lmaccess.ts:

-                tokens: await chatModel.countTokens(text),
+                tokens: await chatModel.countTokens(fragment),

In the original version, the countTokens function is called with the text argument, which contains the accumulated text fragments. In the updated version, countTokens is called with the fragment argument, which contains only the most recently added fragment. This means token counting will now only account for the last fragment of text instead of the full text.

If this change was intentional and you're aware that the token count will no longer consider previous fragments, then the change is fine. But if you meant to keep counting tokens for the full text, you should revert to the original line.

Here's the suggested fix:

-                tokens: await chatModel.countTokens(fragment),
+                tokens: await chatModel.countTokens(text),

If everything else is in order, and the above change was intentional or is acceptable, then LGTM 🚀

generated by pr-review
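The difference the review describes can be made concrete with a toy example. The whitespace-splitting `countTokens` below is a stand-in assumption; the real `chatModel.countTokens` uses a model tokenizer, but the cumulative-versus-per-fragment distinction is the same.

```typescript
// Naive token counter: splits on whitespace. A stand-in for the real
// model tokenizer, used only to illustrate the two counting modes.
const countTokens = (s: string): number =>
    s.split(/\s+/).filter(Boolean).length

const fragments = ["Hello ", "world, this ", "is streaming"]
let text = ""
const cumulative: number[] = [] // original code: count the whole text so far
const perFragment: number[] = [] // updated code: count only the new chunk
for (const fragment of fragments) {
    text += fragment
    cumulative.push(countTokens(text))
    perFragment.push(countTokens(fragment))
}
// cumulative → [1, 3, 5]: grows with the accumulated text
// perFragment → [1, 2, 2]: each chunk counted in isolation
```

Per-fragment counting is cheaper (no re-tokenizing of the full text on every chunk), which matches the "possibly enhances performance" note in the description, at the cost of no longer reporting a running total.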

@pelikhan
Member Author

/genai-describe

# GITHUB_MODELS_TOKEN="${PLACEHOLDER_API_KEY}" # use a personal access token if not available
## GitHub Models ${DOCS_CONFIGURATION_GITHUB_URL}
# use "${MODEL_PROVIDER_GITHUB}:<model>" in script({ model: ... })
# GITHUB_MODELS_TOKEN="${PLACEHOLDER_API_KEY}" # use a personal access token if not available


The comments are not properly indented. Comments should be aligned with the code they are commenting on.

generated by pr-review-commit comment_format

@pelikhan pelikhan merged commit 01d89d2 into main Aug 13, 2024
9 checks passed
@pelikhan pelikhan deleted the envtreeview branch August 13, 2024 02:45