
Chat with AI in RStudio


chattr is a package that enables interaction with Large Language Models (LLMs),
such as GitHub Copilot Chat and OpenAI's GPT-3.5 and GPT-4. The main vehicle is a
Shiny app that runs inside the RStudio IDE. Here is an example of what it looks
like running inside the Viewer pane:


Screenshot of the chattr Shiny app, showing a single interaction with the OpenAI GPT model: a request for a simple ggplot2 example, answered with code using geom_point()

Figure 1: chattr’s Shiny app

Even though this article highlights chattr's integration with the RStudio IDE,
it is worth mentioning that it also works outside RStudio, for example in the terminal.

Getting started

To get started, install the package from GitHub, and call the Shiny app
using the chattr_app() function:
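As a minimal sketch of that step, assuming the package lives in the mlverse/chattr GitHub repository and that you have the remotes package installed:

```r
# Install chattr from GitHub (assumes the mlverse/chattr repository)
remotes::install_github("mlverse/chattr")

# Launch the Shiny app; inside RStudio it opens in the Viewer pane
chattr::chattr_app()
```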

Modify prompt enhancements
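chattr can adjust the extra prompt text it sends along with your question. A hedged sketch of what that might look like, assuming chattr_defaults() exposes a prompt argument (check ?chattr_defaults for the exact interface):

```r
library(chattr)

# Replace the default prompt enhancement with a custom one.
# The `prompt` argument name is an assumption; consult ?chattr_defaults.
chattr_defaults(
  prompt = "Answer as a concise R expert, returning runnable code"
)
```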

Beyond the app

In addition to the Shiny app, chattr offers a couple of other ways to interact
with the LLM:

  • Use the chattr() function
  • Highlight a question in your script, and use it as your prompt
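For the first option, a one-line sketch, assuming chattr() accepts the prompt text directly as its argument:

```r
# Send a prompt straight from the R console, no app needed
chattr::chattr("show me a simple ggplot2 example")
```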


RStudio Add-ins

chattr comes with two RStudio add-ins:


Screenshot of the chattr add-ins in RStudio

Figure 4: chattr add-ins

You can bind these add-in calls to keyboard shortcuts, making it easy to open the app
without having to type the command every time. To learn how to do that, see the
Keyboard Shortcut section on chattr's official website.

Works with local LLMs

Open-source, pre-trained models that are able to run on your laptop are widely
available today. Instead of integrating with each model individually, chattr
works with LlamaGPTJ-chat, a lightweight application that communicates
with a variety of local models. At this time, LlamaGPTJ-chat integrates with the
following families of models:

  • GPT-J (ggml and gpt4all models)
  • LLaMA (ggml Vicuna models, based on Meta's LLaMA)
  • Mosaic Pretrained Transformers (MPT)

LlamaGPTJ-chat works right from the terminal. chattr integrates with the
application by starting a 'hidden' terminal session, where it initializes the
selected model and makes it available for chatting.

To get started, you need to install LlamaGPTJ-chat, and download a compatible
model. More detailed instructions are found
here.
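As a rough sketch, building LlamaGPTJ-chat typically follows the usual CMake flow; treat these exact commands as an assumption and defer to the project's own instructions:

```shell
# Clone LlamaGPTJ-chat with its submodules and build it with CMake
git clone --recurse-submodules https://github.com/kuvaus/LlamaGPTJ-chat
cd LlamaGPTJ-chat
mkdir build && cd build
cmake ..
cmake --build . --parallel
```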

chattr looks for the LlamaGPTJ-chat installation, and the installed model,
in specific folder locations on your machine. If your installation paths do
not match the locations expected by chattr, then LlamaGPT will not show
up in the menu. But that is OK; you can still access it with chattr_use():
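A sketch of that manual route, with hypothetical installation paths you would replace with your own (the chattr_defaults() argument names are an assumption; see the package help):

```r
library(chattr)

# Tell chattr to use the LlamaGPTJ-chat back end
chattr_use("llamagpt")

# Point chattr at the compiled program and the downloaded model.
# Both paths below are placeholders for your own locations.
chattr_defaults(
  path  = "~/LlamaGPTJ-chat/build/bin/chat",
  model = "~/models/ggml-gpt4all-j-v1.3-groovy.bin"
)
```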


Feedback welcome

After trying it out, feel free to submit your thoughts or issues in
chattr's GitHub repository.
