An llmvm backend which sends text and chat generation requests to known hosted language model providers.
Supported providers:

- OpenAI
- Hugging Face
Example of an llmvm model ID for this backend: `outsource/openai-chat/gpt-3.5-turbo`
Install this backend using cargo:

```
cargo install llmvm-outsource
```
The backend can be invoked directly, via llmvm-core, or via a frontend that utilizes llmvm-core. To invoke directly, execute `llmvm-outsource -h` for usage details.
`llmvm-outsource http` can be invoked to start an HTTP server for remote clients.
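For example, the two invocation modes look like this from a shell:

```
# Show usage details for direct invocation
llmvm-outsource -h

# Start an HTTP server for remote clients
llmvm-outsource http
```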
Run the backend executable to generate a configuration file at:

- Linux: `~/.config/llmvm/outsource.toml`
- macOS: `~/Library/Application Support/com.djandries.llmvm/outsource.toml`
- Windows: `AppData\Roaming\djandries\llmvm\config\outsource.toml`
|Key|Required?|Description|
|--|--|--|
|`openai_api_key`|If using OpenAI|API key for OpenAI requests.|
|`huggingface_api_key`|If using Hugging Face|API key for Hugging Face requests.|
|`tracing_directive`|No|Logging directive/level for tracing.|
|`stdio_server`|No|Stdio server settings. See llmvm-protocol for details.|
|`http_server`|No|HTTP server settings. See llmvm-protocol for details.|
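For illustration, a minimal `outsource.toml` might look like the sketch below. The key names come from the table above; the values are placeholders, and the optional `stdio_server` and `http_server` tables are omitted since their fields are defined by llmvm-protocol:

```toml
# Placeholder values; substitute your own API keys.
openai_api_key = "sk-..."
huggingface_api_key = "hf_..."

# Optional: logging directive/level for tracing
# (the "info" value assumes a tracing-style directive syntax)
tracing_directive = "info"
```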
Custom hosted endpoints may be used by supplying the prefix `endpoint=`, followed by the endpoint URL, in the model name component of the model ID. For example, the model ID could be `outsource/huggingface-text/endpoint=https://yourendpointhere`.