A Package to Interact with Large Language Models

This package only interacts with OpenAI's language models.

The Command Line Interface (cli)

There is a library that exposes the various endpoints, and a command line binary (cli) that uses it.

To use: `cargo run --bin cli -- --help`

Command line argument definitions:

```
Usage: cli [OPTIONS]

Options:
  -m, --model          The model to use [default: text-davinci-003]
  -t, --max-tokens     Maximum tokens to return [default: 2000]
  -T, --temperature    Temperature for the model [default: 0.9]
      --api-key        The secret key [default: environment variable OPENAI_API_KEY]
  -d, --mode           The initial mode (API endpoint) [default: completions]
  -r, --record-file    The file name that prompts and replies are recorded in [default: reply.txt]
  -p, --system-prompt  The system prompt sent to the chat model
  -h, --help           Print help
  -V, --version        Print version
```
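
For reference, the options above could be declared with clap's derive API roughly as follows. This is a sketch with assumed struct and field names, not the crate's actual source:

```rust
// Sketch only: a clap-derive definition mirroring the options listed above.
// Struct and field names are assumptions, not the project's real code.
use clap::Parser;

#[derive(Parser, Debug)]
#[command(name = "cli", version, about = "Interact with OpenAI's language models")]
struct Cli {
    /// The model to use
    #[arg(short = 'm', long, default_value = "text-davinci-003")]
    model: String,

    /// Maximum tokens to return
    #[arg(short = 't', long, default_value_t = 2000)]
    max_tokens: u32,

    /// Temperature for the model
    #[arg(short = 'T', long, default_value_t = 0.9)]
    temperature: f64,

    /// The secret key; if omitted, the OPENAI_API_KEY environment variable is used
    #[arg(long)]
    api_key: Option<String>,

    /// The initial mode (API endpoint)
    #[arg(short = 'd', long, default_value = "completions")]
    mode: String,

    /// The file name that prompts and replies are recorded in
    #[arg(short = 'r', long, default_value = "reply.txt")]
    record_file: String,

    /// The system prompt sent to the chat model
    #[arg(short = 'p', long)]
    system_prompt: Option<String>,
}

fn main() {
    let cli = Cli::parse();
    println!("{cli:?}");
}
```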

When the program is running, enter prompts at the ">".

Generally, text entered is sent to the LLM.

Text that starts with "! " is a command to the system.
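
As a rough illustration of that dispatch (hypothetical code, not the project's own):

```rust
// Illustrative sketch of the input handling described above; function names
// are hypothetical and not taken from the project.
fn handle_line(line: &str) {
    let line = line.trim();
    if let Some(meta) = line.strip_prefix("! ") {
        // Lines such as "! p" or "! m image" are handled locally as meta commands.
        println!("meta command: {meta}");
    } else if !line.is_empty() {
        // Anything else is sent to the LLM as a prompt.
        println!("send to model: {line}");
    }
}

fn main() {
    handle_line("! p");
    handle_line("Summarise this README");
}
```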

Command Line Interface

There is a cli to exercise the API.

List of Meta Commands

Meta commands that affect the behaviour of the program are prefixed with a ! character, and are:

| Command | Result |
|:---|:---|
| ! p  | Display settings |
| ! md | Display all models available |
| ! ms | Change the current model |
| ! ml | List modes |
| ! m  | Change mode (API endpoint) |
| ! v  | Set verbosity |
| ! k  | Set max tokens for completions |
| ! t  | Set temperature for completions |
| ! sp | Set system prompt (after ! cc) |
| ! ci | Clear image mask |
| ! mask | Set the mask to use in image edit mode: a 1024x1024 PNG with a transparent mask |
| ! a  | Audio file for transcription |
| ! ci | Clear the image stored for editing |
| ! f  | List the files stored on the server |
| ! fu | Upload a file of fine-tuning data |
| ! fd | Delete a file |
| ! fi | Get information about a file |
| ! fc | [destination_file] Get contents of a file |
| ! fl | Associate the contents of the path with a name for use in prompts like: {name} |
| ! dx | Display context (for chat) |
| ! cx | Clear context |
| ! sx | Save the context to a file at the specified path |
| ! rx | Restore the context from a file at the specified path |
| ! ?  | This text |

C-q or C-c to quit.

Features

Modes

The LLMs can be used in different modes. Each mode corresponds to an API endpoint.

The meaning of the prompts changes with the mode.
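
As a rough sketch, the modes can be pictured as an enum mapping each one to the OpenAI endpoint it talks to. The variant names below are illustrative; the endpoint paths are OpenAI's:

```rust
// Sketch: hypothetical enum mapping each mode to the OpenAI endpoint it uses.
#[derive(Debug, Clone, Copy)]
enum Mode {
    Completions,
    Chat,
    Image,
    ImageEdit,
    AudioTranscription,
}

impl Mode {
    fn endpoint(self) -> &'static str {
        match self {
            Mode::Completions => "https://api.openai.com/v1/completions",
            Mode::Chat => "https://api.openai.com/v1/chat/completions",
            Mode::Image => "https://api.openai.com/v1/images/generations",
            Mode::ImageEdit => "https://api.openai.com/v1/images/edits",
            Mode::AudioTranscription => "https://api.openai.com/v1/audio/transcriptions",
        }
    }
}

fn main() {
    println!("{}", Mode::Chat.endpoint());
}
```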

Completions

Chat

Image and Image Edit

Generate or edit images based on a prompt.

Example image: "Lollipop clown"

Enter Image mode with the meta command: `! m image [image to edit]`. If you provide an image to edit, "ImageEdit" mode is entered instead, and the supplied image is edited.

If an image is not supplied (at the `! m image` prompt), the user enters a prompt and an image is generated by OpenAI based on that prompt. It is stored for image editing. Generating a new image overwrites the old one.
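
Under the hood, this corresponds to OpenAI's image generation endpoint. A minimal sketch of such a request, using the reqwest and serde_json crates rather than the project's own code, might look like this:

```rust
// Sketch only (not the project's code): a minimal image-generation request.
// Requires reqwest (with the "blocking" and "json" features) and serde_json.
use serde_json::{json, Value};

fn generate_image(api_key: &str, prompt: &str) -> Result<String, reqwest::Error> {
    let body = json!({ "prompt": prompt, "n": 1, "size": "1024x1024" });
    let resp: Value = reqwest::blocking::Client::new()
        .post("https://api.openai.com/v1/images/generations")
        .bearer_auth(api_key)
        .json(&body)
        .send()?
        .json()?;
    // The response carries a URL (or base64 data) for the generated image,
    // which the cli can download and keep for later editing.
    Ok(resp["data"][0]["url"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), reqwest::Error> {
    let key = std::env::var("OPENAI_API_KEY").expect("set OPENAI_API_KEY");
    println!("{}", generate_image(&key, "Lollipop clown")?);
    Ok(())
}
```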

Mask

To edit an image, the process works best if a mask is supplied. This is a 1024x1024 PNG image with a transparent region; the editing will happen in the transparent region. There are two ways to supply a mask: when entering Image Edit, or with a meta command:

  1. Entering Image Edit: supply the path to the meta command that switches to Image Edit: `! m image_edit path_to/mask.png`
  2. Using the `! mask` meta command: the mask can be set or changed at any time with `! mask path/to_mask.png`

If no mask is supplied, a 1024x1024 transparent PNG file is created and used.
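
For reference, a blank transparent mask of that size can be produced with the `image` crate. This is a sketch under that assumption, not necessarily how the program does it:

```rust
// Sketch: create a fully transparent 1024x1024 PNG to act as the default mask.
// Uses the `image` crate; the project itself may do this differently.
use image::RgbaImage;

fn write_default_mask(path: &str) -> Result<(), image::ImageError> {
    // RgbaImage::new zero-fills the buffer, so every pixel is transparent black.
    let mask = RgbaImage::new(1024, 1024);
    mask.save(path) // output format is inferred from the .png extension
}

fn main() -> Result<(), image::ImageError> {
    write_default_mask("mask.png")
}
```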