Announcing gptstudio v0.4.0: Enhanced AI Integration in Your RStudio IDE #210
calderonsamuel
started this conversation in Show and tell
We are excited to announce the release of
{gptstudio}
version 0.4.0, which brings a host of new features and improvements designed to enrich your AI-assisted coding experience in RStudio. Building on the robust foundation of our previous releases, this update enhances the functionality and user experience of the Chat app while introducing support for additional AI models and services.

Try it out with:
install.packages("gptstudio")
After that, go to
Addins > GPTSTUDIO > Chat
or open the RStudio command palette (Ctrl + Shift + P on Windows) and search for it.

The Chat app has always been a powerful tool for engaging with AI models directly within RStudio. It offers an interactive Shiny application that enables you to converse with various AI models, providing a seamless interface to type questions, code snippets, or text and receive AI-generated responses. This makes it an invaluable asset for querying coding advice, explanations of complex concepts, and generating code snippets.
Layout change
In version 0.4.0, we introduce several significant enhancements to the Chat app. The most notable is a more minimalistic starting view, designed to provide as much space as possible for your chat interactions.
This comes with the addition of a sidebar, powered by
{bslib}
, which allows users to easily access their conversation history, start new chats, and adjust settings.

This redesign provides more space for displaying messages and includes tooltips to assist with navigation. Chats are now automatically saved and updated after each AI response, and users can edit the titles of their conversations for better organization. This update makes the Chat app more user-friendly and efficient, enhancing your overall experience.
Support for local models
We are also thrilled to announce support for local models through Ollama. This allows users to run AI models locally, providing greater control and customization. The default host is set to http://localhost:11434, but it can be adjusted via the OLLAMA_HOST environment variable. Users are responsible for maintaining their own Ollama installations and models, giving them the flexibility to manage their local AI resources.
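For example, if your Ollama server listens somewhere other than the default, you can set the variable in your shell profile (or in .Renviron) before starting RStudio; the host value below is the default and is shown only for illustration:

```shell
# Point gptstudio at a custom Ollama host (default value shown);
# add this to your shell profile or .Renviron before starting RStudio
export OLLAMA_HOST="http://localhost:11434"
```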
You can run Ollama on any platform as a Docker container. The following command runs the CPU-only version:
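A typical invocation (the volume and container names below follow the Ollama Docker examples; adjust them as needed) is:

```shell
# Run the CPU-only Ollama image in the background, persisting models
# in a named volume and exposing the default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```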
You can see more Docker options in the official Ollama blog post.
Before using the service, you need to pull a model. Run the following code inside your container to pull llama2:
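Assuming the container is named `ollama` as in the run command above, you can pull the model with:

```shell
# Pull the llama2 model inside the running Ollama container
docker exec -it ollama ollama pull llama2
```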
Now you can use it with the chat addin by going to
Settings > API service > Select API Service > ollama
.

Check the ollama library to see more models. For more advanced install options, check the official documentation.
Reading R help pages
The chat addin can now provide the help page for R functions or objects when the user message contains the construct
package::object
. This is useful for getting help on the latest changes in the R ecosystem, especially those after the cut-off date of the different LLMs.

Look at how the chat assistant would normally respond when asked about
dplyr::join_by()
:

Now compare the response when this new option is activated:
The "R documentation" message was automatically added by the chat app; behind the scenes, it passes the help page of the function to the service's API. To activate it, go to
Settings > Assistant behavior > Read help pages
and save as default. Beware that this will increase the token count of your API requests and that some R packages document their exported objects with different naming conventions.

Support for even more services
Moreover, gptstudio now integrates with Perplexity and Cohere services, expanding the range of available AI models. Perplexity offers a variety of models, including llama-3-sonar and mixtral-8x7b, while Cohere provides models such as command and command-light. These integrations allow users to choose from a broader array of AI solutions, tailoring their setup to best meet their specific needs.
To further improve usability, we have introduced a new function,
gptstudio_sitrep()
, which helps with debugging and setup. Additionally, new vignettes have been added to guide users through the setup of each service, ensuring comprehensive documentation and support.

Constant maintainability
Internally, this release includes several improvements and bug fixes. We have reverted to using an R6 class for OpenAI streaming to enhance stability and avoid server errors. Various bugs related to OpenAI model retrieval, Azure OpenAI request formation, and Unix platform connectivity have been addressed. The codebase now utilizes
{lintr}
more extensively for maintaining consistency, and we have made several UI and performance tweaks, including improved scrollbar design and enhanced model integration.

With gptstudio v0.4.0, we continue to enhance the integration of AI capabilities within RStudio, making it easier and more efficient for data scientists and analysts to leverage advanced models in their workflows. We encourage you to explore the new features and improvements, and we look forward to your feedback. Stay tuned for more updates and happy coding!