Commit: [Docs] Minor edits
peterschmidt85 committed Sep 13, 2023
1 parent 6d57524 commit daa7263
Showing 5 changed files with 28 additions and 35 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -9,7 +9,7 @@
</h1>

<h3 align="center">
-Run LLM workloads across any clouds
+Orchestrate GPU workloads across clouds
</h3>

<p align="center">
@@ -23,12 +23,12 @@ Run LLM workloads across any clouds
[![PyPI - License](https://img.shields.io/pypi/l/dstack?style=flat-square&color=blue)](https://github.com/dstackai/dstack/blob/master/LICENSE.md)
</div>

-`dstack` is an open-source toolkit for orchestrating LLM workloads in any cloud. It provides a cloud-agnostic interface
-for training, fine-tuning, inference, and development of LLMs.
+`dstack` is an open-source toolkit for training, fine-tuning, inference, and development
+across multiple cloud GPU providers.

## Latest news ✨

-- [2023/09] [Deploying LLMs with Python API](https://dstack.ai/examples/python-api) (Example)
+- [2023/09] [Deploying LLMs with API](https://dstack.ai/examples/python-api) (Example)
- [2023/09] [Managed gateways](https://dstack.ai/blog/2023/09/01/managed-gateways) (Release)
- [2023/08] [Fine-tuning Llama 2](https://dstack.ai/examples/finetuning-llama-2) (Example)
- [2023/08] [Serving SDXL with FastAPI](https://dstack.ai/examples/stable-diffusion-xl) (Example)
@@ -48,14 +48,14 @@ dstack start
Upon startup, the server sets up the default project called `main`.
Prior to using `dstack`, make sure to [configure clouds](https://dstack.ai/docs/guides/clouds#configuring-backends).

-Once the server is up, you can orchestrate LLM workloads using
+Once the server is up, you can orchestrate GPU workloads using
either the CLI or Python API.

## Using CLI

### Define a configuration

-The CLI allows you to define what you want to run as a YAMl file and
+The CLI allows you to define what you want to run as a YAML file and
run it via the `dstack run` CLI command.

Configurations can be of three types: `dev-environment`, `task`, and `service`.
@@ -128,7 +128,7 @@ Privisioning...
Serving on https://tasty-zebra-1.mydomain.com
```

-## Using Python API
+## Using API

As an alternative to the CLI, you can run tasks and services programmatically
via [Python API](https://dstack.ai/docs/reference/api/python/).
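For reference, the "Define a configuration" passage in this README diff refers to dstack's YAML configurations. A minimal `task` configuration (conventionally a `.dstack.yml` file) might look like the sketch below — the fields follow dstack's docs from this period, and `train.py` is a hypothetical script:

```yaml
type: task
# the Python version to provision on the instance (optional)
python: "3.11"
# commands executed on the cloud instance that dstack provisions
commands:
  - pip install -r requirements.txt
  - python train.py
```

It would then be launched from the repo directory with something like `dstack run . -f .dstack.yml`, leaving instance selection to the server.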
6 changes: 3 additions & 3 deletions docs/docs/index.md
@@ -1,7 +1,7 @@
# Quickstart

-`dstack` is an open-source toolkit for orchestrating LLM workloads in any cloud. It provides a cloud-agnostic interface
-for training, fine-tuning, inference, and development of LLMs.
+`dstack` is an open-source toolkit for training, fine-tuning, inference, and development
+across multiple cloud GPU providers.

## Installation

@@ -180,7 +180,7 @@ more.
`dstack` will automatically select the suitable instance type from a cloud provider and region with the best
price and availability.

-## Using Python API
+## Using API

As an alternative to the CLI, you can run tasks and services programmatically
via [Python API](../docs/reference/api/python/index.md).
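To make the "Using API" rename above concrete, here is a minimal sketch of submitting a task programmatically. The class and method names (`Client.from_config`, `Task`, `runs.submit`, `attach`) are assumptions based on dstack's Python API docs of this period, not verified against this commit:

```python
from dstack.api import Client, Task

# Connect to the running dstack server via the local config;
# the default project is "main", as noted in the README above.
client = Client.from_config()

# A Task mirrors the YAML `task` configuration: commands plus optional ports.
task = Task(commands=["python app.py"], ports=["8080"])

# Submit the run; the server picks a suitable cloud instance.
run = client.runs.submit(configuration=task)
run.attach()  # stream logs and forward ports until the run finishes
```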
2 changes: 1 addition & 1 deletion docs/index.md
@@ -1,6 +1,6 @@
---
template: home.html
-title: Orchestrate LLM workloads in any clouds
+title: Orchestrate GPU workloads across clouds
hide:
- navigation
- toc
38 changes: 15 additions & 23 deletions docs/overrides/home.html
@@ -125,16 +125,14 @@
<div class="md-grid md-typeset">
<div class="tx-landing__hero">
<div class="tx-landing__hero_text">
-<h1>Orchestrate LLM workloads in any clouds</h1>
+<h1>Orchestrate GPU workloads across clouds</h1>
<p>
-<strong>dstack</strong> is an open-source toolkit for orchestrating LLM workloads in any cloud.
-It provides a cloud-agnostic interface for training, fine-tuning, inference, and development of
-LLMs.
+<strong>dstack</strong> is an open-source engine for orchestrating GPU workloads across
+various cloud providers.
</p>

-<!--<p>Deploy run <strong>tasks</strong>, <strong>services</strong>, and provision
-<strong>dev environments</strong> in a cost-effective manner across multiple cloud GPU providers.
-</p>-->
+<p>It offers a cloud-agnostic developer toolkit for training, fine-tuning, inference, and development of
+generative AI models.</p>

<a href="/docs" class="md-button md-button--primary">
Get started</a>
@@ -151,7 +149,7 @@ <h1>Orchestrate LLM workloads in any clouds</h1>

<div class="tx-landing__integrations">
<div class="tx-landing__integrations_text">
-<h2>Bring your own cloud accounts</h2>
+<h2>Use multiple cloud GPU providers</h2>
</div>
<div class="tx-landing__integrations_logos">
<img class="logo-xlarge" src="assets/images/aws-logo.svg" title="Amazon Web Services">
@@ -166,11 +164,9 @@ <h2>Bring your own cloud accounts</h2>
<div class="block margin">
<h2>Training</h2>

-<p>Using <strong>dstack</strong>, you can define <strong>tasks</strong> and execute them across
-multiple cloud providers, ensuring the best GPU price and availability.</p>
-
-<p><strong>Tasks</strong> facilitate cost-effective on-demand execution of batch jobs and web apps.
-</p>
+<p>Pre-train or fine-tune LLMs or other state-of-the-art generative AI models
+across multiple cloud GPU providers, ensuring data privacy, GPU availability,
+and cost-efficiency.</p>

<p>
<a href="/docs/guides/tasks" target="_blank"
@@ -194,10 +190,8 @@ <h2>Training</h2>
<div class="block">
<h2>Inference</h2>

-<p>Using <strong>dstack</strong>, you can define and deploy <strong>services</strong> using multiple
-cloud providers, ensuring the best GPU price and availability.</p>
-
-<p><strong>Services</strong> enable cost-effective deployment of models and web apps.</p>
+<p>Deploy LLMs and other state-of-the-art generative AI models across multiple
+cloud GPU providers, ensuring data privacy, GPU availability, and cost-efficiency.</p>

<p>
<a href="/docs/guides/services" target="_blank"
@@ -211,10 +205,8 @@ <h2>Inference</h2>
<div class="section">
<div class="block margin">
<h2>Dev environments</h2>
-<p>Using <strong>dstack</strong>, provisioning dev environments over multiple cloud
-providers becomes effortless, ensuring the best GPU price and availability.</p>
-
-<p>Dev environments are easily accessible through your local desktop IDE.</p>
+<p>Provision development environments over multiple cloud GPU providers, ensuring data privacy, GPU
+availability, and cost-efficiency.</p>

<p>
<a href="/docs/guides/dev-environments" target="_blank"
@@ -351,14 +343,14 @@ <h3>
<h2>Get started in less than a minute</h2>
<div class="termy">
<pre class="highlight">
-$ pip install "dstack[aws,gcp,azure,lambda]"
+$ pip install "dstack[all]" -U
$ dstack start

The server is available at http://127.0.0.1:3000?token=b934d226-e24a-4eab-eb92b353b10f
</pre>
</div>
<p class="tx-landing__bottom_cta_text">
-<strong>Done!</strong> Configure clouds and begin using the CLI or Python API to run LLM workloads.
+<strong>Done!</strong> Configure clouds and use the CLI or API to orchestrate GPU workloads.
</p>

<a href="/docs" class="md-button md-button--primary">
3 changes: 2 additions & 1 deletion mkdocs.yml
@@ -3,7 +3,8 @@ site_name: dstack
site_url: https://dstack.ai
site_author: dstack GmbH
site_description: >-
-  dstack is an open-source toolkit for orchestrating LLM workloads in any clouds.
+  dstack is an open-source toolkit for training, fine-tuning, inference, and development
+  across multiple cloud GPU providers.
# Repository
repo_url: https://github.com/dstackai/dstack
