Commit 15eecb3

Merge pull request #7196 from wildmanonline/rc-v1.363.0
[Release Candidate] v1.363.0
2 parents: 1b94d52 + e6bfa65

29 files changed: +125 −78 lines changed

docs/guides/kubernetes/ai-chatbot-and-rag-pipeline-for-inference-on-lke/index.md

Lines changed: 27 additions & 18 deletions
@@ -5,6 +5,7 @@ description: "Utilize the Retrieval-Augmented Generation technique to supplement
 authors: ["Linode"]
 contributors: ["Linode"]
 published: 2025-02-11
+modified: 2025-02-13
 keywords: ['kubernetes','lke','ai','inferencing','rag','chatbot','architecture']
 tags: ["kubernetes","lke"]
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
@@ -14,7 +15,9 @@ license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 
 LLMs (Large Language Models) are increasingly used to power chatbots or other knowledge assistants. While these models are pre-trained on vast swaths of information, they are not trained on your own private data or knowledge base. To overcome this, you need to provide this data to the LLM (a process called context augmentation). This tutorial showcases a particular method of context augmentation called Retrieval-Augmented Generation (RAG), which indexes your data and attaches relevant data as context when users send the LLM queries.
 
-Follow this tutorial to deploy a RAG pipeline on Akamai’s LKE service using our latest GPU instances. Once deployed, you will have a web chatbot that can respond to queries using data from your own custom data source.
+Follow this tutorial to deploy a RAG pipeline on Akamai’s LKE service using our latest GPU instances. Once deployed, you will have a web chatbot that can respond to queries using data from your own custom data source, as shown in the screenshot below.
+
+![Screenshot of the Open WebUI query interface](open-webui-interface.jpg)
 
 ## Diagram
 
@@ -31,8 +34,8 @@ Follow this tutorial to deploy a RAG pipeline on Akamai’s LKE service using ou
 
 - **Kubeflow:** This open-source software platform includes a suite of applications that are used for machine learning tasks. It is designed to be run on Kubernetes. While each application can be installed individually, this tutorial installs all default applications and makes specific use of the following:
     - **KServe:** Serves machine learning models. This tutorial installs the Llama 3 LLM to KServe, which then serves it to other applications, such as the chatbot UI.
-    - **Kubeflow Pipeline:** Used to deploy pipelines, reusable machine learning workflows built using the Kubeflow Pipelines SDK. In this tutorial, a pipeline is used to run LlamaIndex to train the LLM with additional data.
-    - **Meta’s Llama 3 LLM:** We use Llama 3 as the LLM, along with the LlamaIndex tool to capture data from an external source and send embeddings to the Milvus database.
+    - **Kubeflow Pipeline:** Used to deploy pipelines, reusable machine learning workflows built using the Kubeflow Pipelines SDK. In this tutorial, a pipeline is used to run LlamaIndex to process the dataset and store embeddings.
+    - **Meta’s Llama 3 LLM:** The [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model is used as the LLM. You should review and agree to the licensing agreement before deploying.
 - **Milvus:** Milvus is an open-source vector database and is used for generative AI workloads. This tutorial uses Milvus to store embeddings generated by LlamaIndex and make them available to queries sent to the Llama 3 LLM.
 - **Open WebUI:** This is a self-hosted AI chatbot application that’s compatible with LLMs like Llama 3 and includes a built-in inference engine for RAG solutions. Users interact with this interface to query the LLM. This can be configured to send queries straight to Llama 3 or to first load data from Milvus and send that context along with the query.
 
@@ -54,11 +57,11 @@ The configuration instructions in this document are expected to not expose any s
 It’s outside the scope of this document to cover the setup required to secure this configuration for a production deployment.
 {{< /note >}}
 
-# Set up infrastructure
+## Set up infrastructure
 
 The first step is to provision the infrastructure needed for this tutorial and configure it with kubectl, so that you can manage it locally and install software through helm. As part of this process, we’ll also need to install the NVIDIA GPU operator at this step so that the NVIDIA cards within the GPU worker nodes can be used on Kubernetes.
 
-1. **Provision an LKE cluster.** We recommend using at least two **RTX4000 Ada x2 Medium** GPU plans (plan ID: `g2-gpu-rtx4000a2-m`), though you can adjust this as needed. For reference, Kubeflow recommends 32 GB of RAM and 16 CPU cores. This tutorial has been tested using Kubernetes v1.31, though other versions should also work. To learn more about provisioning a cluster, see the [Create a cluster](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) guide.
+1. **Provision an LKE cluster.** We recommend using at least 3 **RTX4000 Ada x1 Medium** GPU plans (plan ID: `g2-gpu-rtx4000a1-m`), though you can adjust this as needed. For reference, Kubeflow recommends 32 GB of RAM and 16 CPU cores for its own applications alone. This tutorial has been tested using Kubernetes v1.31, though other versions should also work. To learn more about provisioning a cluster, see the [Create a cluster](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) guide.
 
     {{< note noTitle=true >}}
     GPU plans are available in a limited number of data centers. Review the [GPU product documentation](https://techdocs.akamai.com/cloud-computing/docs/gpu-compute-instances#availability) to learn more about availability.
@@ -77,7 +80,7 @@ The first step is to provision the infrastructure needed for this tutorial and c
 You can confirm that the operator has been installed on your cluster by reviewing your pods. You should see a number of pods in the `gpu-operator` namespace.
 
 ```command
-kubectl get pods -A
+kubectl get pods -n gpu-operator
 ```
 
 ### Deploy Kubeflow
@@ -114,6 +117,8 @@ Next, let’s deploy Kubeflow on the LKE cluster. These instructions deploy all
     kubectl get pods -A
     ```
 
+    You may notice a status of `CrashLoopBackOff` on one or more pods. This can be caused by a temporary issue with a persistent volume attaching to a worker node and should resolve within a minute or so.
+
 ### Install Llama3 LLM on KServe
 
 After Kubeflow has been installed, we can now deploy the Llama 3 LLM to KServe. This tutorial uses HuggingFace (a platform that provides pre-trained AI models) to deploy Llama 3 to the LKE cluster. Specifically, these instructions use the [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model.
@@ -152,7 +157,7 @@ After Kubeflow has been installed, we can now deploy the Llama 3 LLM to KServe.
           name: huggingface
           args:
             - --model_name=llama3
-            - --model_id=NousResearch/Meta-Llama-3-8B-Instruct
+            - --model_id=meta-llama/meta-llama-3-8b-instruct
             - --max-model-len=4096
           env:
             - name: HF_TOKEN
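Editor's note: this change swaps the NousResearch mirror for Meta's official Hugging Face repository, which is gated behind a license agreement, so the `HF_TOKEN` referenced above must belong to an account that has accepted it. As a hedged sketch (assuming `huggingface_hub` >= 0.24, which provides `auth_check`, and that `HF_TOKEN` is exported locally), you can verify access before deploying:

```python
# Sketch: confirm the HF_TOKEN can reach the gated Llama 3 repo before
# KServe tries to pull it. Assumes huggingface_hub >= 0.24 and that
# HF_TOKEN is set in the local environment.
import os

from huggingface_hub import auth_check
from huggingface_hub.utils import GatedRepoError

try:
    auth_check("meta-llama/Meta-Llama-3-8B-Instruct", token=os.environ["HF_TOKEN"])
    print("Token has access; the InferenceService should be able to pull the model.")
except GatedRepoError:
    print("Access not granted; accept the license on the model page first.")
```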
@@ -218,7 +223,7 @@ Kubeflow Pipeline pulls together the entire workflow for ingesting data from our
 
 1. Download a zip archive from the specified URL.
 1. Use LlamaIndex to read the Markdown files within the archive.
-1. Generate embeddings from the content of those files using the [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) model.
+1. Generate embeddings from the content of those files.
 1. Store the embeddings within the Milvus database collection.
 
 Keep this workflow in mind when going through the Kubeflow Pipeline setup steps in this section. If you require a different pipeline workflow, you will need to adjust the Python file and Kubeflow Pipeline configuration discussed in this section.
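Editor's note: for orientation, the four steps above can be expressed as a single Kubeflow Pipelines (kfp v2) component. The sketch below is not the guide's exact `doc-ingest-pipeline.py`; the archive URL is a placeholder and the `packages_to_install` list is an assumption:

```python
# Minimal sketch of the ingestion workflow as a kfp v2 component.
from kfp import compiler, dsl


@dsl.component(
    base_image="python:3.11",
    packages_to_install=[
        "requests",
        "llama-index-core",
        "llama-index-embeddings-huggingface",
        "llama-index-vector-stores-milvus",
    ],
)
def ingest_docs(url: str, collection: str):
    import io
    import tempfile
    import zipfile

    import requests

    # 1. Download the zip archive of Markdown files.
    workdir = tempfile.mkdtemp()
    zipfile.ZipFile(io.BytesIO(requests.get(url).content)).extractall(workdir)

    # 2. Read the Markdown files with LlamaIndex.
    from llama_index.core import (Settings, SimpleDirectoryReader,
                                  StorageContext, VectorStoreIndex)
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.vector_stores.milvus import MilvusVectorStore

    documents = SimpleDirectoryReader(
        workdir, recursive=True, required_exts=[".md"]
    ).load_data()

    # 3. Generate 384-dimensional embeddings with all-MiniLM-L6-v2.
    Settings.embed_model = HuggingFaceEmbedding(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )

    # 4. Store the embeddings in the Milvus collection.
    store = MilvusVectorStore(
        uri="http://my-release-milvus.default.svc.cluster.local:19530",
        collection_name=collection,
        dim=384,
        overwrite=True,
    )
    VectorStoreIndex.from_documents(
        documents, storage_context=StorageContext.from_defaults(vector_store=store)
    )


@dsl.pipeline(name="doc-ingest")
def doc_ingest_pipeline(url: str, collection: str = "linode_docs"):
    ingest_docs(url=url, collection=collection)


# Compiling emits the YAML file that gets uploaded to Kubeflow Pipelines.
compiler.Compiler().compile(doc_ingest_pipeline, "doc-ingest-pipeline.yaml")
```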
@@ -240,7 +245,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
     pip install kfp
     ```
 
-1. Use the following python script to generate a YAML file to use for the Kubeflow Pipeline. This script configures the pipeline to download the Markdown data you wish to ingest, read the content using LlamaIndex, generate embeddings of the content using BAAI general embedding model, and store the embeddings in the Milvus database. Replace values as needed before proceeding.
+1. Use the following Python script to generate a YAML file to use for the Kubeflow Pipeline. This script configures the pipeline to download the Markdown data you wish to ingest, read the content using LlamaIndex, generate embeddings of the content, and store the embeddings in the Milvus database. Replace values as needed before proceeding.
 
     ```file {title="doc-ingest-pipeline.py" lang="python"}
     from kfp import dsl
@@ -269,7 +274,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
         from llama_index.core import Settings
 
         Settings.embed_model = HuggingFaceEmbedding(
-            model_name="BAAI/bge-large-en-v1.5"
+            model_name="sentence-transformers/all-MiniLM-L6-v2"
         )
 
         from llama_index.llms.openai_like import OpenAILike
@@ -285,7 +290,7 @@ This tutorial employs a Python script to create the YAML file used within Kubefl
         from llama_index.core import VectorStoreIndex, StorageContext
         from llama_index.vector_stores.milvus import MilvusVectorStore
 
-        vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection=collection, dim=1024, overwrite=True)
+        vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection=collection, dim=384, overwrite=True)
         storage_context = StorageContext.from_defaults(vector_store=vector_store)
         index = VectorStoreIndex.from_documents(
             documents, storage_context=storage_context
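Editor's note: the `dim` change from 1024 to 384 tracks the embedding model swap above. `BAAI/bge-large-en-v1.5` produces 1024-dimensional vectors while `sentence-transformers/all-MiniLM-L6-v2` produces 384-dimensional ones, and the Milvus collection's dimension must match. A small sketch (assuming `llama-index-embeddings-huggingface` is installed locally) to confirm the dimension for whichever model you substitute:

```python
# Sketch: probe an embedding model's output size so the Milvus `dim`
# argument can be set to match it.
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

for name in ("sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-large-en-v1.5"):
    model = HuggingFaceEmbedding(model_name=name)
    # Expected: 384 for all-MiniLM-L6-v2, 1024 for bge-large-en-v1.5.
    print(name, len(model.get_text_embedding("dimension probe")))
```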
@@ -350,7 +355,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
 
 1. Create a new directory on your local machine and navigate to that directory.
 
-1. Create a pipeline-requirements.txt file with the following contents:
+1. Create a `pipeline-requirements.txt` file with the following contents:
 
     ```file {title="pipeline-requirements.txt"}
     requests
@@ -361,7 +366,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
     llama-index-llms-openai-like
     ```
 
-1. Create a rag-pipeline.py file with the following contents:
+1. Create a file called `rag_pipeline.py` with the following contents. Do not change the filenames of `pipeline-requirements.txt` or `rag_pipeline.py`, as they are referenced within the Open WebUI Pipeline configuration file.
 
     ```file {title="rag-pipeline.py"}
     """
@@ -388,7 +393,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
         print(f"on_startup:{__name__}")
 
         Settings.embed_model = HuggingFaceEmbedding(
-            model_name="BAAI/bge-large-en-v1.5"
+            model_name="sentence-transformers/all-MiniLM-L6-v2"
         )
 
         llm = OpenAILike(
@@ -399,7 +404,7 @@ Despite the naming, these RAG pipeline files are not related to the Kubeflow pip
 
         Settings.llm = llm
 
-        vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection="linode_docs", dim=1024, overwrite=False)
+        vector_store = MilvusVectorStore(uri="http://my-release-milvus.default.svc.cluster.local:19530", collection="linode_docs", dim=384, overwrite=False)
         self.index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
 
     async def on_shutdown(self):
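Editor's note: because this file opens the collection with `overwrite=False`, it reuses the embeddings written by the Kubeflow pipeline rather than recreating them. Outside of Open WebUI, the same retrieval path can be exercised with a few lines; the `api_base` below is an assumed KServe endpoint, not a value taken from this diff:

```python
# Sketch: query the Milvus-backed index with the KServe-served llama3 model
# through an OpenAI-compatible endpoint. The api_base URL is an assumption;
# substitute the address of your own InferenceService.
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai_like import OpenAILike
from llama_index.vector_stores.milvus import MilvusVectorStore

llm = OpenAILike(
    model="llama3",
    api_base="http://llama3.default.svc.cluster.local/openai/v1",  # assumed
    api_key="unused",  # placeholder; the endpoint may not validate keys
)
vector_store = MilvusVectorStore(
    uri="http://my-release-milvus.default.svc.cluster.local:19530",
    collection_name="linode_docs",
    dim=384,
    overwrite=False,
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
print(index.as_query_engine(llm=llm).query("What does my dataset cover?"))
```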
@@ -581,8 +586,12 @@ Now that the chatbot has been configured, the final step is to access the chatbo
 
 1. The first time you access this interface you are prompted to create an admin account. Do this now and then continue once you are successfully logged in using that account.
 
-1. You are now presented with the chatbot interface. Within the dropdown menu, you should be able to select from several models. Select one and ask it a question.
+1. You should now be presented with the chatbot interface. Within the dropdown menu, you should be able to select from several models. Select one and ask it a question.
+
+    - The **llama3** model uses information that was trained by other data sources (not your own custom data). If you ask this model a question, the data from your own dataset is not used.
+
+    ![Screenshot of a Llama 3 query in Open WebUI](open-webui-llama3.jpg)
 
-    - The **llama3** model will just use information that was trained by other data sources (not your own custom data). If you ask this model a question, the data from your own dataset will not be used.
+    - The **RAG Pipeline** model that you defined in a previous section does use data from your custom dataset. Ask it a question relevant to your data and the chatbot should respond with an answer that is informed by the custom dataset you configured.
 
-    - The **RAG Pipeline** model that you defined in a previous section does indeed use data from your custom dataset. Ask it a question relevant to your data and the chatbot should respond with an answer that is informed by the custom dataset you configured.
+    ![Screenshot of a RAG Pipeline query in Open WebUI](open-webui-rag-pipeline.jpg)

docs/marketplace-docs/guides/_index.md

Lines changed: 0 additions & 9 deletions
@@ -31,8 +31,6 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [AzuraCast](/docs/marketplace-docs/guides/azuracast/)
 - [Backstage](/docs/marketplace-docs/guides/backstage/)
 - [BeEF](/docs/marketplace-docs/guides/beef/)
-- [Budibase](/docs/marketplace-docs/guides/budibase/)
-- [Chevereto](/docs/marketplace-docs/guides/chevereto/)
 - [Cloudron](/docs/marketplace-docs/guides/cloudron/)
 - [ClusterControl](/docs/marketplace-docs/guides/clustercontrol/)
 - [Couchbase Cluster](/docs/marketplace-docs/guides/couchbase-cluster/)
@@ -51,14 +49,12 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [Gitea](/docs/marketplace-docs/guides/gitea/)
 - [Gitlab](/docs/marketplace-docs/guides/gitlab/)
 - [GlusterFS Cluster](/docs/marketplace-docs/guides/glusterfs-cluster/)
-- [gopaddle](/docs/marketplace-docs/guides/gopaddle/)
 - [Grav](/docs/marketplace-docs/guides/grav/)
 - [Guacamole](/docs/marketplace-docs/guides/guacamole/)
 - [Haltdos Community WAF](/docs/marketplace-docs/guides/haltdos-community-waf/)
 - [Harbor](/docs/marketplace-docs/guides/harbor/)
 - [HashiCorp Nomad](/docs/marketplace-docs/guides/hashicorp-nomad/)
 - [HashiCorp Vault](/docs/marketplace-docs/guides/hashicorp-vault/)
-- [ILLA Builder](/docs/marketplace-docs/guides/illa-builder/)
 - [InfluxDB](/docs/marketplace-docs/guides/influxdb/)
 - [Jenkins](/docs/marketplace-docs/guides/jenkins/)
 - [JetBackup](/docs/marketplace-docs/guides/jetbackup/)
@@ -68,7 +64,6 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [Joplin](/docs/marketplace-docs/guides/joplin/)
 - [JupyterLab](/docs/marketplace-docs/guides/jupyterlab/)
 - [Kali Linux](/docs/marketplace-docs/guides/kali-linux/)
-- [Kepler](/docs/marketplace-docs/guides/kepler/)
 - [LAMP Stack](/docs/marketplace-docs/guides/lamp-stack/)
 - [LEMP Stack](/docs/marketplace-docs/guides/lemp-stack/)
 - [LinuxGSM](/docs/marketplace-docs/guides/linuxgsm/)
@@ -83,7 +78,6 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [MySQL/MariaDB](/docs/marketplace-docs/guides/mysql/)
 - [NATS Single Node](/docs/marketplace-docs/guides/nats-single-node/)
 - [Nextcloud](/docs/marketplace-docs/guides/nextcloud/)
-- [NirvaShare](/docs/marketplace-docs/guides/nirvashare/)
 - [Node.js](/docs/marketplace-docs/guides/nodejs/)
 - [Odoo](/docs/marketplace-docs/guides/odoo/)
 - [ONLYOFFICE](/docs/marketplace-docs/guides/onlyoffice/)
@@ -109,13 +103,11 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [RabbitMQ](/docs/marketplace-docs/guides/rabbitmq/)
 - [Redis](/docs/marketplace-docs/guides/redis/)
 - [Redis Sentinel](/docs/marketplace-docs/guides/redis-cluster/)
-- [Restyaboard](/docs/marketplace-docs/guides/restyaboard/)
 - [Rocket.Chat](/docs/marketplace-docs/guides/rocketchat/)
 - [Ruby on Rails](/docs/marketplace-docs/guides/ruby-on-rails/)
 - [Saltcorn](/docs/marketplace-docs/guides/saltcorn/)
 - [SeaTable](/docs/marketplace-docs/guides/seatable/)
 - [Secure Your Server](/docs/marketplace-docs/guides/secure-your-server/)
-- [ServerWand](/docs/marketplace-docs/guides/serverwand/)
 - [Shadowsocks](/docs/marketplace-docs/guides/shadowsocks/)
 - [Splunk](/docs/marketplace-docs/guides/splunk/)
 - [Superinsight](/docs/marketplace-docs/guides/superinsight/)
@@ -126,7 +118,6 @@ See the [Marketplace](/docs/marketplace/) listing page for a full list of all Ma
 - [VS Code](/docs/marketplace-docs/guides/vscode/)
 - [WarpSpeed VPN](/docs/marketplace-docs/guides/warpspeed/)
 - [Wazuh](/docs/marketplace-docs/guides/wazuh/)
-- [Webuzo](/docs/marketplace-docs/guides/webuzo/)
 - [WireGuard](/docs/marketplace-docs/guides/wireguard/)
 - [WooCommerce](/docs/marketplace-docs/guides/woocommerce/)
 - [WordPress](/docs/marketplace-docs/guides/wordpress/)

docs/marketplace-docs/guides/ark-survival-evolved/index.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ contributors: ["Akamai"]
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 ---
 {{< note type="warning" title="This app is no longer available for deployment" >}}
-ARK: Survival Evolved has been removed from the App Marketplace and can no longer be deployed. This guide has been retained for reference only. For information on how to deploy and set up ARK: Survival Evolved manually on a Compute Instance, see our [Creating a Dedicated ARK Server on Ubuntu](/docs/guides/create-an-ark-server-on-ubuntu) guide.
+ARK: Survival Evolved has been removed from the App Marketplace and can no longer be deployed. This guide is retained for reference only. For information on how to deploy and set up ARK: Survival Evolved manually on a Compute Instance, see our [Creating a Dedicated ARK Server on Ubuntu](/docs/guides/create-an-ark-server-on-ubuntu) guide.
 {{< /note >}}
 
 [ARK: Survival Evolved](http://playark.com/ark-survival-evolved/) is a multiplayer action-survival game released in 2017. The game places you on a series of fictional islands inhabited by dinosaurs and other prehistoric animals. In ARK, the main objective is to survive. ARK is an ongoing battle where animals and other players have the ability to destroy you. To survive, you must build structures, farm resources, breed dinosaurs, and even set up trading hubs with neighboring tribes.

docs/marketplace-docs/guides/bitninja/index.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ contributors: ["Akamai"]
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 ---
 {{< note type="warning" title="This app is no longer available for deployment" >}}
-BitNinja has been removed from the App Marketplace and can no longer be deployed. This guide has been retained for reference only.
+BitNinja has been removed from the App Marketplace and can no longer be deployed. This guide is retained for reference only.
 {{< /note >}}
 
docs/marketplace-docs/guides/budibase/index.md

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@ authors: ["Akamai"]
 contributors: ["Akamai"]
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 ---
+{{< note type="warning" title="This app is no longer available for deployment" >}}
+Budibase has been removed from the App Marketplace and can no longer be deployed. This guide is retained for reference only.
+{{< /note >}}
 
 [Budibase](https://github.com/Budibase/budibase) is an open-source, low-code platform for building modern business applications. Build, design, and automate different types of applications, including admin panels, forms, internal tools, and client portals. Using Budibase helps developers avoid spending weeks building simple CRUD applications and, instead, allows them to complete many projects in significantly less time.
1417
