MOSEC

Model Serving made Efficient in the Cloud.

Introduction

Mosec is a high-performance and flexible model serving framework for building ML model-enabled backend and microservices. It bridges the gap between any machine learning models you just trained and the efficient online service API.

Installation

Mosec requires Python 3.7 or above. Install the latest PyPI package with:

```shell
pip install -U mosec
```

Usage

Write the server

Import the libraries and set up a basic logger to better observe what happens.

```python
from io import BytesIO
from typing import List

import torch  # type: ignore
from diffusers import StableDiffusionPipeline  # type: ignore

from mosec import Server, Worker, get_logger
from mosec.mixin import MsgpackMixin

logger = get_logger()
```

Then, we build an API to generate images for a given prompt. To achieve that, we simply inherit the `MsgpackMixin` and `Worker` classes and override the `forward` method. Note that the input data is a JSON-decoded object by default, but `MsgpackMixin` switches the request and response data to msgpack, so `forward` receives data like `[b'a cute cat playing with a red ball']`. The returned objects will also be encoded by `MsgpackMixin`.

```python
class StableDiffusion(MsgpackMixin, Worker):
    def __init__(self):
        """Init the model for inference."""
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        )
        self.pipe = self.pipe.to("cuda")

def forward(self, data: List[str]) -> List[memoryview]:
    """Override the forward process."""
    logger.debug("generate images for %s", data)
    res = self.pipe(data)
    logger.debug("NSFW: %s", res[1])
    images = []
    for img in res[0]:
        dummy_file = BytesIO()
        img.save(dummy_file, format="JPEG")
        images.append(dummy_file.getbuffer())
    # need to return the same number of images in the same request order
    # `len(data) == len(images)`
    return images

```
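To make the byte-level data flow concrete, here is a rough sketch of what `MsgpackMixin` does around `forward`: the request body is msgpack-decoded before it reaches the worker, and the returned JPEG buffers are msgpack-encoded on the way back. The exact decoding options are up to the mixin, so treat this as an illustrative assumption rather than its implementation:

```python
import msgpack  # type: ignore

# what a client would put in the raw request body
body = msgpack.packb(b"a cute cat playing with a red ball")

# roughly what one batched item looks like after the mixin decodes it
item = msgpack.unpackb(body)  # -> b'a cute cat playing with a red ball'

# the JPEG buffer returned by `forward` is packed the same way for the response
response_body = msgpack.packb(b"<jpeg bytes>")
```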

Finally, we append the worker to the server to construct a single-stage workflow and specify the number of processes we want it to run in parallel. Then we run the server.

```python
if __name__ == "__main__":
    server = Server()
    # by configuring the `max_batch_size` with a value >= 1, the input data in your
    # `forward` function will be a batch; otherwise, it's a single item
    server.append_worker(StableDiffusion, num=1, max_batch_size=16)
    server.run()
```
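A single `append_worker` call gives one stage; calling it several times chains the stages into a pipeline, where the output of one stage becomes the input of the next. The `Preprocess` worker below is a hypothetical example, included only to sketch the idea:

```python
# a hypothetical multi-stage pipeline: `Preprocess` is an imaginary worker whose
# output is fed into `StableDiffusion` as the next stage
if __name__ == "__main__":
    server = Server()
    server.append_worker(Preprocess, num=2)  # CPU-bound stage with more processes
    server.append_worker(StableDiffusion, num=1, max_batch_size=16)  # GPU stage
    server.run()
```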

Run the server

After merging the snippets above into a file named `server.py`, we can first have a look at the command-line arguments:

```shell
python examples/stable_diffusion/server.py --help
```

Then let's start the server with debug logs:

```shell
python examples/stable_diffusion/server.py --debug
```

And in another terminal, test it:

```shell
python examples/stable_diffusion/client.py --prompt "a cute cat playing with a red ball" --output cat.jpg --port 8000
```

You will get an image named "cat.jpg" in the current directory.
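If you prefer to write your own client instead of using the example script, a minimal sketch could look like the following. It assumes mosec's default `/inference` route on port 8000 and uses `requests` and `msgpack`, which are not part of mosec and would need to be installed separately:

```python
# a minimal client sketch; assumes the default `/inference` route on port 8000
import msgpack  # type: ignore
import requests

prompt = "a cute cat playing with a red ball"
resp = requests.post(
    "http://127.0.0.1:8000/inference",
    data=msgpack.packb(prompt),  # request body is msgpack-encoded for MsgpackMixin
)
resp.raise_for_status()

# the response body is a msgpack-encoded JPEG buffer
with open("cat.jpg", "wb") as f:
    f.write(msgpack.unpackb(resp.content))
```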

You can check the metrics:

```shell
curl http://127.0.0.1:8000/metrics
```

That's it! You have just hosted your stable-diffusion model as a server! 😉

Examples

More ready-to-use examples can be found in the Example section.

Configuration

Deployment

Contributing

We welcome all kinds of contributions. Please give us feedback by raising issues or discussing on Discord. You can also contribute code directly by opening a pull request!

To start developing, you can use envd to create an isolated and clean Python & Rust environment. Check the envd-docs or build.envd for more information.

Qualitative Comparison*

|            | Batcher | Pipeline | Parallel | I/O Format(1) | Framework(2) | Backend | Activity |
| ---------- | :-----: | :------: | :------: | ------------- | ------------ | ------- | -------- |
| TF Serving | ✅      | ✅       | ✅       | Limited(a)    | Heavily TF   | C++     |          |
| Triton     | ✅      | ✅       | ✅       | Limited       | Multiple     | C++     |          |
| MMS        | ✅      | ❌       | ✅       | Limited       | Heavily MX   | Java    |          |
| BentoML    | ✅      | ❌       | ❌       | Limited(b)    | Multiple     | Python  |          |
| Streamer   | ✅      | ❌       | ✅       | Customizable  | Agnostic     | Python  |          |
| Flask(3)   | ❌      | ❌       | ❌       | Customizable  | Agnostic     | Python  |          |
| Mosec      | ✅      | ✅       | ✅       | Customizable  | Agnostic     | Rust    |          |

*As accessed on 08 Oct 2021. This comparison is by no means intended to show that other frameworks are inferior; rather, it illustrates the trade-offs involved. The information is not guaranteed to be absolutely accurate. Please let us know if you find anything that may be incorrect.

(1): Data format of the service's request and response. "Limited" in the sense that the framework has pre-defined requirements on the format.

(2): Supported machine learning frameworks. "Heavily" means the serving framework is designed towards a specific ML framework. Thus it is hard, if not impossible, to adapt to others. "Multiple" means the serving framework provides adaptation to several existing ML frameworks. "Agnostic" means the serving framework does not necessarily care about the ML framework. Hence it supports all ML frameworks (in Python).

(3): Flask is a representative of general purpose web frameworks to host ML models.