llmvm-outsource

An llmvm backend which sends text and chat generation requests to known hosted language model providers.

Supported providers:

- OpenAI
- Hugging Face

Example of an llmvm model ID for this backend: outsource/openai-chat/gpt-3.5-turbo

Installation

Install this backend using cargo:

cargo install llmvm-outsource

Usage

The backend can be invoked directly, via llmvm-core, or via a frontend that utilizes llmvm-core.

To invoke directly, execute llmvm-outsource -h for details.

Run llmvm-outsource http to start an HTTP server for remote clients.

Configuration

Run the backend executable to generate a configuration file at:

|Key|Required?|Description|
|--|--|--|
|openai_api_key|If using OpenAI|API key for OpenAI requests.|
|huggingface_api_key|If using Hugging Face|API key for Hugging Face requests.|
|tracing_directive|No|Logging directive/level for tracing.|
|stdio_server|No|Stdio server settings. See llmvm-protocol for details.|
|http_server|No|HTTP server settings. See llmvm-protocol for details.|
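As a sketch, the keys above might appear in the configuration file as follows. This assumes a TOML file format, and the values are placeholders, not real credentials; the stdio_server and http_server sub-keys are defined by llmvm-protocol and omitted here.

```toml
# Placeholder values for illustration only.
openai_api_key = "sk-..."
huggingface_api_key = "hf_..."
tracing_directive = "info"
```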

Hugging Face custom endpoints

Custom hosted endpoints may be used by supplying the prefix endpoint=, followed by the endpoint URL in the model name component of the model ID.

For example, the model ID could be outsource/huggingface-text/endpoint=https://yourendpointhere.
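To illustrate how such an ID decomposes, the sketch below splits a model ID into its backend, provider, and model-name components and detects the endpoint= prefix. split_model_id is a hypothetical helper written for this example, not part of the llmvm-outsource API.

```rust
// Hypothetical helper: split an llmvm model ID of the form
// <backend>/<provider>/<model-name> into its three components.
fn split_model_id(id: &str) -> (&str, &str, &str) {
    let mut parts = id.splitn(3, '/');
    (
        parts.next().unwrap_or(""),
        parts.next().unwrap_or(""),
        parts.next().unwrap_or(""),
    )
}

fn main() {
    let (backend, provider, model_name) =
        split_model_id("outsource/huggingface-text/endpoint=https://yourendpointhere");
    assert_eq!(backend, "outsource");
    assert_eq!(provider, "huggingface-text");
    // A model name starting with `endpoint=` carries the custom URL.
    if let Some(url) = model_name.strip_prefix("endpoint=") {
        println!("custom endpoint: {url}");
    }
}
```

Note that splitting into at most three parts keeps the URL intact even though it contains further slashes.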

License

Mozilla Public License, version 2.0