a simple image hosting service
```
$ pict-rs -h
A simple image hosting service

Usage: pict-rs [OPTIONS]

Commands:
  run            Runs the pict-rs web server
  migrate-store  Migrates from one provided media store to another
  help           Print this message or the help of the given subcommand(s)

Options:
  -c, --config-file
```

```
$ pict-rs run -h
Runs the pict-rs web server

Usage: pict-rs run [OPTIONS] [COMMAND]

Commands:
  filesystem      Run pict-rs with filesystem storage
  object-storage  Run pict-rs with object storage
  help            Print this message or the help of the given subcommand(s)

Options:
  -a, --address
          The address and port to bind the pict-rs web server
      --api-key
          ...
      --media-format
          ... (run with --help for more detail)
```
Try running `help` commands for more runtime configuration options:

```bash
$ pict-rs run filesystem -h
$ pict-rs run object-storage -h
$ pict-rs run filesystem sled -h
$ pict-rs run object-storage sled -h
```

See `pict-rs.toml` for more configuration options.
Run with the default configuration:

```bash
$ ./pict-rs run
```
Running on all interfaces, port 8080, storing data in /opt/data:

```bash
$ ./pict-rs \
    run -a 0.0.0.0:8080 \
    filesystem -p /opt/data/files \
    sled -p /opt/data/sled-repo
```
Running locally, port 9000, storing data in data/, and converting all uploads to PNG:

```bash
$ ./pict-rs \
    run \
    -a 127.0.0.1:9000 \
    --media-format png \
    filesystem -p data/files \
    sled -p data/sled-repo
```
Running locally, port 8080, storing data in data/, and only allowing the thumbnail and identity filters:

```bash
$ ./pict-rs \
    run \
    -a 127.0.0.1:8080 \
    --media-filters thumbnail \
    --media-filters identity \
    filesystem -p data/files \
    sled -p data/sled-repo
```
Running from a configuration file:

```bash
$ ./pict-rs -c ./pict-rs.toml run
```
Migrating to object storage from filesystem storage (for more detailed info, see Filesystem to Object Storage Migration):

```bash
$ ./pict-rs \
    migrate-store \
    filesystem -p data/files \
    object-storage \
    -a ACCESS_KEY \
    -b BUCKET_NAME \
    -r REGION \
    -s SECRET_KEY
```
Dumping configuration overrides to a toml file:

```bash
$ ./pict-rs --save-to pict-rs.toml \
    run \
    object-storage \
    -a ACCESS_KEY \
    -b pict-rs \
    -r us-east-1 \
    -s SECRET_KEY \
    sled -p data/sled-repo
```
Run the following commands:

```bash
$ mkdir ./pict-rs
$ cd ./pict-rs
$ mkdir -p volumes/pictrs
$ sudo chown -R 991:991 volumes/pictrs
$ wget https://git.asonix.dog/asonix/pict-rs/raw/branch/main/docker/prod/docker-compose.yml
$ sudo docker-compose up -d
```
Note that pict-rs writes temporary files to `/tmp` on linux, and the provided docker images include a custom ImageMagick security policy installed at `/usr/lib/ImageMagick-$VERSION/config-Q16HDRI/policy.xml`.
There are a few options for acquiring pict-rs to run outside of docker:
1. Packaged via your distro of choice
2. Binary download from the releases page
3. Compiled from source
If running outside of docker, the recommended configuration method is via the `pict-rs.toml` file. When running pict-rs, the file can be passed to the binary as a commandline argument.

```bash
$ pict-rs -c /path/to/pict-rs.toml run
```
If getting pict-rs from your distro, please make sure it's a recent version (meaning 0.3.x stable, or 0.4.0-rc.x). If it is older, consider using an alternative option for installing pict-rs. I am currently aware of pict-rs packaged in the AUR and nixpkgs, but there may be other distros that package it as well.
pict-rs provides precompiled binaries that should work on any linux system for x86_64, aarch64, and armv7h on the releases page. If downloading a binary, make sure that you have the following dependencies installed:
- imagemagick 7
- ffmpeg 5 or 6
- exiftool 12 (sometimes called perl-image-exiftool)
These binaries are called by pict-rs to process uploaded media, so they must be in the `$PATH` available to pict-rs.
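As a rough sketch, on an Arch-based system these dependencies could be installed with a command like the following (package names may differ on other distros):

```bash
$ sudo pacman -Syu imagemagick ffmpeg perl-image-exiftool
```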
pict-rs can be compiled from source using a recent version of the rust compiler. I do development on 1.69 and produce releases on 1.70. pict-rs also requires the `protoc` protobuf compiler to be present at build-time in order to enable use of `tokio-console`.
Like the Binary Download option, `imagemagick`, `ffmpeg`, and `exiftool` must be installed for pict-rs to run properly.
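A minimal from-source build sketch, assuming a recent rust toolchain and `protoc` are installed; the repository URL is inferred from the docker-compose link above:

```bash
# clone the repository and build a release binary
$ git clone https://git.asonix.dog/asonix/pict-rs
$ cd pict-rs
$ cargo build --release
```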
pict-rs offers the following endpoints:
- `POST /image` for uploading an image. Uploaded content must be valid multipart/form-data with an image array located within the `images[]` key.
This endpoint returns the following JSON structure on success with a 201 Created status
```json
{
"files": [
{
"delete_token": "JFvFhqJA98",
"file": "lkWZDRvugm.jpg",
"details": {
"width": 800,
"height": 800,
"content_type": "image/jpeg",
"created_at": "2022-04-08T18:33:42.957791698Z"
}
},
{
"delete_token": "kAYy9nk2WK",
"file": "8qFS0QooAn.jpg",
"details": {
"width": 400,
"height": 400,
"content_type": "image/jpeg",
"created_at": "2022-04-08T18:33:42.957791698Z"
}
},
{
"delete_token": "OxRpM3sf0Y",
"file": "1hJaYfGE01.jpg",
"details": {
"width": 400,
"height": 400,
"content_type": "image/jpeg",
"created_at": "2022-04-08T18:33:42.957791698Z"
}
}
],
"msg": "ok"
}
```
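For illustration, an upload with curl might look like this, assuming pict-rs is listening on localhost:8080 and `first.jpg`/`second.png` are placeholder local files:

```bash
# upload two files in the images[] multipart key
$ curl -X POST \
    -F 'images[]=@first.jpg' \
    -F 'images[]=@second.png' \
    'http://localhost:8080/image'
```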
- `POST /image/backgrounded` Upload an image, like the `/image` endpoint, but don't wait to validate and process it.
This endpoint returns the following JSON structure on success with a 202 Accepted status
```json
{
  "uploads": [
    {
      "upload_id": "c61422e1-9294-4f1f-977f-c696b7939467"
    },
    {
      "upload_id": "62cc707f-725c-44b6-908f-2bd8946c3c29"
    }
  ],
  "msg": "ok"
}
```
- `GET /image/download?url={url}&backgrounded=(true|false)` Download an image from a remote server, returning the same JSON payload as the `POST /image` endpoint by default.
  If `backgrounded` is set to `true`, then the ingest processing will be queued for later and the response JSON will be the same as the `POST /image/backgrounded` endpoint.
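As a sketch, a download request could look like this with curl (the host and source URL are placeholders):

```bash
# --data-urlencode takes care of escaping the remote URL
$ curl --get --data-urlencode 'url=https://example.com/some-image.png' \
    'http://localhost:8080/image/download?backgrounded=false'
```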
- `GET /image/backgrounded/claim?upload_id={uuid}` Wait for a backgrounded upload to complete, claiming its result.
Possible results:
- 200 Ok (validation and ingest complete):
```json
{
  "files": [
    {
      "delete_token": "OxRpM3sf0Y",
      "file": "1hJaYfGE01.jpg",
      "details": {
        "width": 400,
        "height": 400,
        "content_type": "image/jpeg",
        "created_at": "2022-04-08T18:33:42.957791698Z"
      }
    }
  ],
  "msg": "ok"
}
```
- 422 Unprocessable Entity (validation or other failure):
```json
{
  "msg": "Error message about what went wrong with upload"
}
```
- 204 No Content (Upload validation and ingest is not complete, and waiting timed out)
In this case, trying again is fine
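A hypothetical backgrounded flow with curl, reusing the example `upload_id` from above (the host and file name are placeholders):

```bash
# queue the upload without waiting for processing
$ curl -X POST -F 'images[]=@first.jpg' 'http://localhost:8080/image/backgrounded'
# later, claim the result using the returned upload_id
$ curl 'http://localhost:8080/image/backgrounded/claim?upload_id=c61422e1-9294-4f1f-977f-c696b7939467'
```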
- `GET /image/original/{file}` for getting a full-resolution image. `file` here is the `file` key from the `/image` endpoint's JSON.
- `GET /image/details/original/{file}` for getting the details of a full-resolution image.
  The returned JSON is structured like so:
```json
{
  "width": 800,
  "height": 537,
  "content_type": "image/webp",
  "created_at": "2022-04-08T18:33:42.957791698Z"
}
```
- `GET /image/process.{ext}?src={file}&...` get a file with transformations applied.
  Existing transformations include:
  - `identity=true`: apply no changes
  - `blur={float}`: apply a gaussian blur to the file
  - `thumbnail={int}`: produce a thumbnail of the image fitting inside an `{int}` by `{int}` square using raw pixel sampling
  - `resize={int}`: produce a thumbnail of the image fitting inside an `{int}` by `{int}` square using a Lanczos2 filter. This is slower than sampling but looks a bit better in some cases
  - `resize={filter}.(a){int}`: produce a thumbnail of the image fitting inside an `{int}` by `{int}` square, or when `(a)` is present, produce a thumbnail whose area is smaller than `{int}`. `{filter}` is optional, and indicates what filter to use when resizing the image. Available filters are `Lanczos`, `Lanczos2`, `LanczosSharp`, `Lanczos2Sharp`, `Mitchell`, and `RobidouxSharp`.
Examples:
- `resize=300`: Produce an image fitting inside a 300x300 px square
- `resize=.a10000`: Produce an image whose area is at most 10000 px
- `resize=Mitchell.200`: Produce an image fitting inside a 200x200 px square using the
Mitchell filter
- `resize=RobidouxSharp.a40000`: Produce an image whose area is at most 40000 px using the
RobidouxSharp filter
- `crop={int-w}x{int-h}`: produce a cropped version of the image with an `{int-w}` by `{int-h}`
aspect ratio. The resulting crop will be centered on the image. Either the width or height
of the image will remain full-size, depending on the image's aspect ratio and the requested
aspect ratio. For example, a 1600x900 image cropped with a 1x1 aspect ratio will become 900x900. A
1600x1100 image cropped with a 16x9 aspect ratio will become 1600x900.
Supported `ext` file extensions include `png`, `jpg`, and `webp`
An example of usage could be
```
GET /image/process.jpg?src=asdf.png&thumbnail=256&blur=3.0
```
which would create a 256x256px JPEG thumbnail and blur it
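The same request issued with curl, writing the result to a local file (the host is a placeholder):

```bash
$ curl -o thumbnail.jpg 'http://localhost:8080/image/process.jpg?src=asdf.png&thumbnail=256&blur=3.0'
```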
- `GET /image/process_backgrounded.{ext}?src={file}&...` queue transformations to be applied to a given file. This accepts the same arguments as the `process.{ext}` endpoint, but does not wait for the processing to complete.
- `GET /image/details/process.{ext}?src={file}&...` for getting the details of a processed image.
  The returned JSON is the same format as listed for the full-resolution details endpoint.
- `DELETE /image/delete/{delete_token}/{file}` or `GET /image/delete/{delete_token}/{file}` to delete a file, where `delete_token` and `file` are from the `/image` endpoint's JSON.
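For example, deleting a file using the `delete_token` and `file` values from the upload response above (the host is a placeholder):

```bash
$ curl -X DELETE 'http://localhost:8080/image/delete/JFvFhqJA98/lkWZDRvugm.jpg'
```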
The following endpoints are protected by an API key via the `X-Api-Token` header, and are disabled unless the `--api-key` option is passed to the binary or the `PICTRS__SERVER__API_KEY` environment variable is set.
A secure API key can be generated by any password generator.
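For example, a random key could be produced with a one-liner like this (any password generator works just as well):

```bash
$ openssl rand -base64 32
```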
- `POST /internal/import` for uploading an image while preserving the filename as the first alias.
  The upload format and response format are the same as the `POST /image` endpoint.
- `POST /internal/purge?alias={alias}` Purge a file by its alias. This removes all aliases and files associated with the query.
  This endpoint returns the following JSON:
```json
{
"msg": "ok",
"aliases": ["asdf.png"]
}
```
- `GET /internal/aliases?alias={alias}` Get the aliases for a file by its alias.
  This endpoint returns the same JSON as the purge endpoint.
- `DELETE /internal/variants` Queue a cleanup for generated variants of uploaded images.
  If any of the cleaned variants are fetched again, they will be re-generated.
- `GET /internal/identifier` Get the image identifier (file path or object path) for a given alias.
  On success, the returned JSON should look like this:
```json
{
"msg": "ok",
"identifier": "/path/to/object"
}
```
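As a sketch, calling one of these protected endpoints with curl might look like this, assuming the server was started with `--api-key somesecret` (the host, alias, and key are placeholders):

```bash
# the X-Api-Token header must match the configured API key
$ curl -X POST \
    -H 'X-Api-Token: somesecret' \
    'http://localhost:8080/internal/purge?alias=asdf.png'
```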
Additionally, all endpoints support setting deadlines, after which the request will cease processing. To enable deadlines for your requests, you can set the `X-Request-Deadline` header to an i128 value representing the number of nanoseconds since the UNIX Epoch. A simple way to calculate this value is to use the `time` crate's `OffsetDateTime::unix_timestamp_nanos` method. For example,
```rust
// set a deadline of 1ms from now
let deadline = time::OffsetDateTime::now_utc() + time::Duration::milliseconds(1);
let request = client
    .get("http://pict-rs:8080/image/details/original/asdfghjkla.png")
    .insert_header(("X-Request-Deadline", deadline.unix_timestamp_nanos().to_string()))
    .send()
    .await;
```
pict-rs will automatically migrate from the 0.3 db format to the 0.4 db format on the first launch of 0.4. If you are running the provided docker container without any custom configuration, there are no additional steps.
If you have any custom configuration for file paths, or you are running outside of docker, then there is some extra configuration that needs to be done.
If your previous `PICTRS__PATH` variable or `path` config was set, it needs to be translated to the new configuration format. `PICTRS__PATH` has split into three separate config options:
- `PICTRS__OLD_DB__PATH`: This should be set to the same value that `PICTRS__PATH` was. It is used during the migration from 0.3 to 0.4
- `PICTRS__REPO__PATH`: This is the location of the 0.4 database. It should be set to a subdirectory of the previous `PICTRS__PATH` directory. I would recommend `/previous/path/sled-repo`
- `PICTRS__STORE__PATH`: This is the location of the files. It should be the `files` subdirectory of the previous `PICTRS__PATH` directory.
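For example, configured via environment variables this might look like the following (the paths are the same placeholders used below):

```bash
export PICTRS__OLD_DB__PATH=/previous/path
export PICTRS__REPO__PATH=/previous/path/sled-repo
export PICTRS__STORE__PATH=/previous/path/files
```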
If you configured via the configuration file, these would be:

```toml
[old_db]
path = "/previous/path"

[repo]
path = "/previous/path/sled-repo"

[store]
path = "/previous/path/files"
```
If the migration doesn't work due to a configuration error, the new sled-repo directory can be deleted and a new migration will be automatically triggered on the next launch.
After migrating from 0.3 to 0.4, it is possible to migrate to object storage. This can be useful if hosting in a cloud environment, since object storage is generally far cheaper than block storage.
The command will look something like this:
```bash
$ pict-rs \
    migrate-store \
    filesystem \
    -p /path/to/files \
    object-storage \
    -e https://object-storage-endpoint \
    -b bucket-name \
    -r region \
    -a access-key \
    -s secret-key \
    sled \
    -p /path/to/sled-repo
```
If you are running the docker container with default paths, it can be simplified to the following:
```bash
$ pict-rs \
    migrate-store \
    filesystem \
    object-storage \
    -e https://object-storage-endpoint \
    -b bucket-name \
    -r region \
    -a access-key \
    -s secret-key
```
This command must be run while pict-rs is offline.
After you've completed the migration, update your pict-rs configuration to use object storage. If
you configure using environment variables, make sure the following are set:
- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=https://object-storage-endpoint
- PICTRS__STORE__BUCKET_NAME=bucket-name
- PICTRS__STORE__REGION=region
- PICTRS__STORE__USE_PATH_STYLE=false (set to true if your object storage requires path style access)
- PICTRS__STORE__ACCESS_KEY=access-key
- PICTRS__STORE__SECRET_KEY=secret-key
If you use the configuration file, this would be:

```toml
[store]
type = "object_storage"
endpoint = "https://object-storage-endpoint"
bucket_name = "bucket-name"
region = "region"
use_path_style = false # Set to true if your object storage requires path style access
access_key = "access-key"
secret_key = "secret-key"
```
If you have enabled object storage without first migrating your existing files to object storage, these migrate commands may end up retrying file migrations indefinitely. In order to successfully resolve this multi-store problem, the `--skip-missing-files` flag has been added to the `migrate-store` subcommand. This tells pict-rs not to retry migrating a file if that file returns some form of "not found" error.
```bash
$ pict-rs \
    migrate-store --skip-missing-files \
    filesystem -p /path/to/files \
    object-storage \
    -e https://object-storage-endpoint \
    -b bucket-name \
    -r region \
    -a access-key \
    -s secret-key \
    sled \
    -p /path/to/sled-repo
```
pict-rs has a few native dependencies that need to be installed in order for it to run properly. Currently these are imagemagick, ffmpeg, and exiftool, as listed in the dependencies section above.
Additionally, pict-rs requires a protobuf compiler during the compilation step to support tokio-console, a runtime debug tool.
Installing these from your favorite package manager should be sufficient. Below are some fun ways to develop and test a pict-rs binary.
I personally use nix for development. The provided `flake.nix` file should be sufficient to create a development environment for pict-rs on any linux distribution, provided nix is installed.
With nix and direnv, the pict-rs development environment can be automatically loaded when entering the pict-rs directory.

Setup (only once):
```bash
$ echo 'use flake' > .envrc
$ direnv allow
```

Running:
```bash
$ cargo run -- -c dev.toml run
```

Alternatively, with just nix:
```bash
$ nix develop
$ cargo run -- -c dev.toml run
```
Previously, I have run pict-rs from inside containers that include the correct dependencies. The two options listed below are ones I have personally tried.
This option doesn't take much configuration: just compile the binary and run it from inside the container.
```bash
$ cargo build
$ sudo docker run --rm -it -p 8080:8080 -v "$(pwd):/mnt" archlinux:latest
# then, inside the container:
pacman -Syu imagemagick ffmpeg perl-image-exiftool
cp /mnt/docker/prod/root/usr/lib/ImageMagick-7.1.1/config-Q16HDRI/policy.xml /usr/lib/ImageMagick-7.1.1/config-Q16HDRI/
PATH=$PATH:/usr/bin/vendor_perl /mnt/target/debug/pict-rs --log-targets debug run
```
This option requires `cargo-zigbuild` to be installed. Cargo Zigbuild is a tool that links rust binaries with Zig's linker, enabling easy cross-compiles to many targets. Zig has put a lot of effort into seamless cross-compiling, and it is nice to be able to take advantage of that work from rust.
```bash
$ cargo zigbuild --target=x86_64-unknown-linux-musl
$ sudo docker run --rm -it -p 8080:8080 -v "$(pwd):/mnt" alpine:3.18
# then, inside the container:
apk add imagemagick ffmpeg exiftool
cp /mnt/docker/prod/root/usr/lib/ImageMagick-7.1.1/config-Q16HDRI/policy.xml /usr/lib/ImageMagick-7.1.1/config-Q16HDRI/
/mnt/target/x86_64-unknown-linux-musl/debug/pict-rs --log-targets debug run
```
Feel free to open issues for anything you find an issue with. Please note that any contributed code will be licensed under the AGPLv3.
Answer: No. pict-rs relies on an embedded key-value store called `sled` to store metadata about uploaded media. This database maintains a set of files on the local disk and cannot be configured to use a network.
Answer: No. Currently pict-rs only supports the embedded key-value store called `sled`. In the future, I would like to support Postgres and BonsaiDB, but I am currently not offering a timeline on support. If you care about this and are a rust developer, I would accept changes.
Answer: If you would like to contribute to pict-rs, you can push your code to a public git host of your choice and let me know you did so via matrix or email. I can pull and merge your changes into this repository from there.
Alternatively, you are welcome to email me a patch that I can apply.
I will not be creating additional accounts on my forgejo server, sorry not sorry.
Answer: That's not a question, but you can configure pict-rs with json, hjson, yaml, ini, or toml. Writing configs in other formats is left as an exercise to the reader.
Answer: You don't. I get paid by having a job where I do other stuff. Don't give me money that I don't need.
Copyright © 2022 Riley Trautman
pict-rs is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
pict-rs is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. This file is part of pict-rs.
You should have received a copy of the GNU General Public License along with pict-rs. If not, see http://www.gnu.org/licenses/.