Commit 2fa3c29: Merge pull request #13 from oracle/ODSC-42744/restore_demo_folder

Adds a demo folder with examples.

2 parents: 9a2c859 + 404c624

27 files changed: +1086 / -0 lines
Lines changed: 75 additions & 0 deletions
# Conda Environment based Deployment

This example demonstrates how to deploy a model using a conda pack built from the `conda.yaml` in the MLflow model. An MLflow model includes a `conda.yaml` that captures the dependencies required to run the model.

## Create and register a model

1. Build Model

Run the [sklearn_elasticnet_wine](https://mflow.org) example in the project demos.

2. Deploy Model

There are two example specifications in the folder:

* `elastic-net-deployment_build_conda.yaml`: builds a conda environment, exports it as a conda pack, uploads it to object storage, and deploys.
* `elastic-net-deployment_prebuilt_conda.yaml`: uses a conda pack that is already saved in object storage.

Update the yaml file to reflect the correct values for:

* logId
* logGroupId
* projectId
* compartmentId
* uri with the right bucket name and namespace

```
MLFLOW_TRACKING_URI=<tracking uri> \
OCIFS_IAM_TYPE=api_key \
mlflow deployments \
  create --name elasticnet_test_deploy -m models:/ElasticnetWineModel/1 \
  -t oci-datascience \
  --config deploy-config-file=elastic-net-deployment_build_conda.yaml
```

3. Invoke Prediction Endpoint

```
import requests
import oci
from oci.signer import Signer

body = {
    "columns": [
        "fixed acidity",
        "volatile acidity",
        "citric acid",
        "residual sugar",
        "chlorides",
        "free sulfur dioxide",
        "total sulfur dioxide",
        "density",
        "pH",
        "sulphates",
        "alcohol",
    ],
    "data": [[7, 0.27, 0.36, 20.7, 0.045, 45, 170, 1.001, 3, 0.45, 8.8]],
    "index": [0],
}

config = oci.config.from_file()
auth = Signer(
    tenancy=config['tenancy'],
    user=config['user'],
    fingerprint=config['fingerprint'],
    private_key_file_location=config['key_file'],
)

endpoint = 'https://modeldeployment.us-ashburn-1.oci.customer-oci.com/ocid1.datasciencemodeldeployment.oc1.iad.<unique_ID>/predict'

requests.post(endpoint, json=body, auth=auth, headers={}).json()
```
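The CLI call above can also be assembled programmatically before shelling out. A minimal sketch using only the standard library; `build_deploy_command` is a hypothetical helper for illustration, not part of the plugin:

```python
# Sketch: compose the `mlflow deployments create` invocation shown above
# as an argument list, ready for subprocess.run. The name, model URI, and
# config file below are the placeholder values from this README.
import shlex

def build_deploy_command(name, model_uri, config_file,
                         target="oci-datascience"):
    """Compose the CLI invocation as an argument list."""
    return [
        "mlflow", "deployments", "create",
        "--name", name,
        "-m", model_uri,
        "-t", target,
        "--config", f"deploy-config-file={config_file}",
    ]

cmd = build_deploy_command(
    "elasticnet_test_deploy",
    "models:/ElasticnetWineModel/1",
    "elastic-net-deployment_build_conda.yaml",
)
print(shlex.join(cmd))
# Pass `cmd` to subprocess.run(...) with MLFLOW_TRACKING_URI and
# OCIFS_IAM_TYPE set in the environment, as in the shell example above.
```
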
Lines changed: 23 additions & 0 deletions
kind: deployment
spec:
  infrastructure:
    kind: infrastructure
    type: modelDeployment
    spec:
      logGroupId: ocid1.loggroup.oc1.iad..<unique_ID>
      logId: ocid1.log.oc1.iad..<unique_ID>
      projectId: ocid1.datascienceproject.oc1.iad..<unique_ID>
      compartmentId: ocid1.compartment.oc1..<unique_ID>
      shapeName: VM.Standard.E3.Flex
      shapeConfigDetails:
        memoryInGBs: 32
        ocpus: 4
      blockStorageSize: 50
      replica: 1
  runtime:
    kind: runtime
    type: conda
    spec:
      uri: oci://bucketname@namespace/path/to/conda
      pythonVersion: <python version>
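The OCIDs and `<python version>` above are placeholders that must be replaced before deploying. A small sketch, standard library only, that flags any angle-bracket placeholders still left in a spec; `find_placeholders` is a hypothetical helper, not part of the plugin:

```python
# Sketch: scan a deployment spec's text for unreplaced placeholders such as
# <unique_ID> or <python version> before running `mlflow deployments create`.
import re

def find_placeholders(spec_text):
    """Return the angle-bracket placeholders still present in the YAML text."""
    return sorted(set(re.findall(r"<[^<>\n]+>", spec_text)))

spec = """\
logGroupId: ocid1.loggroup.oc1.iad..<unique_ID>
pythonVersion: <python version>
replica: 1
"""
print(find_placeholders(spec))  # → ['<python version>', '<unique_ID>']
```
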
Lines changed: 27 additions & 0 deletions
kind: deployment
spec:
  infrastructure:
    kind: infrastructure
    type: modelDeployment
    spec:
      logGroupId: ocid1.loggroup.oc1.iad..<unique_ID>
      logId: ocid1.log.oc1.iad..<unique_ID>
      projectId: ocid1.datascienceproject.oc1.iad..<unique_ID>
      compartmentId: ocid1.compartment.oc1..<unique_ID>
      shapeName: VM.Standard.E3.Flex
      shapeConfigDetails:
        memoryInGBs: 32
        ocpus: 4
      blockStorageSize: 50
      replica: 1
  runtime:
    kind: runtime
    type: conda
    spec:
      uri:
        name: elasticnet_v1
        destination: oci://bucket@namespace/mlflow-conda-envs/
        gpu: false
        overwrite: false
        keepLocal: true
        localCondaDir: ./conda
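The two conda runtime specs differ only in the shape of `spec.uri`: a plain string pointing at a prebuilt pack, or a mapping describing a pack to build and publish. A hedged sketch of how a loader could branch on that; this is illustrative, not the plugin's actual code:

```python
# Sketch: classify a conda runtime spec by the type of its `uri` field,
# mirroring the two example YAML files above.
def conda_runtime_mode(runtime_spec):
    """Return 'prebuilt' or 'build' for a runtime spec dict (illustrative)."""
    uri = runtime_spec["spec"]["uri"]
    if isinstance(uri, str):
        return "prebuilt"   # uri points at an existing conda pack
    return "build"          # name/destination describe a pack to create

prebuilt = {"spec": {"uri": "oci://bucketname@namespace/path/to/conda"}}
build = {"spec": {"uri": {"name": "elasticnet_v1",
                          "destination": "oci://bucket@namespace/mlflow-conda-envs/"}}}
print(conda_runtime_mode(prebuilt), conda_runtime_mode(build))  # → prebuilt build
```
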
Lines changed: 119 additions & 0 deletions
# Container based deployment

## Overview

This demo shows how to use containers to deploy models stored in the MLflow registry.

1. Build Model

Run the [sklearn_elasticnet_wine](https://mflow.org) example in the project demos.

2. Build Container image

To install the conda dependencies in the container image, copy `conda.yaml` from the MLflow artifact and save it in the same folder as `Dockerfile.pyfunc`. The artifacts to build a container image are available in the `../container` folder.

```
docker build -t {region}.ocir.io/<namespace>/mlflow-model-runtime/sklearn:v1 -f Dockerfile.pyfunc .
```

### Push the container to OCIR

```
docker push {region}.ocir.io/<namespace>/mlflow-model-runtime/sklearn:v1
```

### Create Endpoint

Update `elastic-net-deployment_container.yaml` to reflect the correct values for:

* logId
* logGroupId
* projectId
* compartmentId
* image

```
MLFLOW_TRACKING_URI=<tracking uri> \
OCIFS_IAM_TYPE=api_key \
mlflow deployments \
  create --name elasticnet_test_deploy_container -m models:/ElasticnetWineModel/1 \
  -t oci-datascience \
  --config deploy-config-file=elastic-net-deployment_container.yaml
```

3. Invoke Prediction Endpoint

3.1 Using Python SDK

```
import requests
import oci
from oci.signer import Signer

body = {
    "dataframe_split": {
        "columns": [
            "fixed acidity",
            "volatile acidity",
            "citric acid",
            "residual sugar",
            "chlorides",
            "free sulfur dioxide",
            "total sulfur dioxide",
            "density",
            "pH",
            "sulphates",
            "alcohol",
        ],
        "data": [[7, 0.27, 0.36, 20.7, 0.045, 45, 170, 1.001, 3, 0.45, 8.8]],
        "index": [0],
    }
}

config = oci.config.from_file()
auth = Signer(
    tenancy=config['tenancy'],
    user=config['user'],
    fingerprint=config['fingerprint'],
    private_key_file_location=config['key_file'],
)

endpoint = 'https://modeldeployment.us-ashburn-1.oci.customer-oci.com/ocid1.datasciencemodeldeployment.oc1.iad..<unique_ID>/predict'

requests.post(endpoint, json=body, auth=auth, headers={}).json()
```

3.2 Using MLflow CLI

```
cat <<EOF> input.json
{
  "dataframe_split": {
    "columns": [
      "fixed acidity",
      "volatile acidity",
      "citric acid",
      "residual sugar",
      "chlorides",
      "free sulfur dioxide",
      "total sulfur dioxide",
      "density",
      "pH",
      "sulphates",
      "alcohol"
    ],
    "data": [[7, 0.27, 0.36, 20.7, 0.045, 45, 170, 1.001, 3, 0.45, 8.8]],
    "index": [0]
  }
}
EOF

mlflow deployments predict --name ocid1.datasciencemodeldeployment.oc1.iad..<unique_ID> -t oci-datascience -I ./input.json
```
Lines changed: 39 additions & 0 deletions
# Copyright (c) 2023 Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/

FROM iad.ocir.io/namespace/image:tag

RUN yum install -y --setopt=skip_missing_names_on_install=False maven java-11-openjdk wget curl nginx sudo

# Data Science service extracts the model to /opt/ds/model/deployed_model
RUN mkdir -p /opt/ds/model/deployed_model && \
    mkdir -p /opt/ml && \
    ln -s /opt/ml/model /opt/ds/model/deployed_model

RUN export JAVA_HOME=/usr/lib/jvm/$(ls /usr/lib/jvm/| grep java-11-openjdk*)
ENV GUNICORN_CMD_ARGS="--timeout 60 -k gevent"

# Set up the program in the image
WORKDIR /opt/mlflow

# Fetch the mlflow-scoring artifact and its transitive dependencies
RUN mvn --batch-mode dependency:copy -Dartifact=org.mlflow:mlflow-scoring:2.1.1:pom -DoutputDirectory=/opt/java
RUN mvn --batch-mode dependency:copy -Dartifact=org.mlflow:mlflow-scoring:2.1.1:jar -DoutputDirectory=/opt/java/jars
RUN cp /opt/java/mlflow-scoring-2.1.1.pom /opt/java/pom.xml
RUN cd /opt/java && mvn --batch-mode dependency:copy-dependencies -DoutputDirectory=/opt/java/jars

ENV MLFLOW_DISABLE_ENV_CREATION="true"
ENV DISABLE_NGINX=true

COPY conda.yaml /opt/conda.yaml
RUN mamba env update --name oci-mlflow -f /opt/conda.yaml && pip install gevent

ENV NGINX_ROOT=/etc/nginx
ENV NGINX_PID=/var/run/nginx.pid
ENV NGINX_BIN=/usr/sbin/nginx
ENV NGINX_USER=root

EXPOSE 5001

COPY nginx.conf /etc/nginx/nginx.conf
ENTRYPOINT [ "/bin/bash", "--login", "-c" ]
CMD ["nginx -p $PWD && mlflow models serve -p 8080 -h 0.0.0.0 -m /opt/ds/model/deployed_model --env-manager local"]
Lines changed: 40 additions & 0 deletions
user root;
worker_processes auto;
error_log /dev/stdout info;
pid /var/run/nginx.pid;

events {
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 5001;
        client_body_temp_path /tmp/client_body_temp;
        proxy_temp_path /tmp/proxy_temp;

        location /predict {
            proxy_pass http://127.0.0.1:8080/invocations;
        }
        location /health {
            proxy_pass http://127.0.0.1:8080/health;
        }
    }
}
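The two `location` blocks above form a simple route table: nginx listens on port 5001 and forwards the Data Science service's `/predict` and `/health` paths to the local MLflow scoring server on port 8080. A toy sketch of that mapping, illustrative only and not parsed from the config:

```python
# Sketch: mirror the nginx location blocks as a lookup table to make the
# 5001 -> 8080 path rewriting explicit.
ROUTES = {
    "/predict": "http://127.0.0.1:8080/invocations",
    "/health": "http://127.0.0.1:8080/health",
}

def proxy_target(path):
    """Return the upstream URL for a request path, or None if unrouted."""
    return ROUTES.get(path)

print(proxy_target("/predict"))  # → http://127.0.0.1:8080/invocations
```
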
Lines changed: 23 additions & 0 deletions
kind: deployment
spec:
  infrastructure:
    kind: infrastructure
    type: modelDeployment
    spec:
      logGroupId: ocid1.loggroup.oc1.iad..<unique_ID>
      logId: ocid1.log.oc1.iad..<unique_ID>
      projectId: ocid1.datascienceproject.oc1.iad..<unique_ID>
      compartmentId: ocid1.compartment.oc1..<unique_ID>
      shapeName: VM.Standard.E3.Flex
      shapeConfigDetails:
        memoryInGBs: 32
        ocpus: 4
      blockStorageSize: 50
      replica: 1
  runtime:
    kind: runtime
    type: container
    spec:
      image: iad.ocir.io/<namespace>/mlflow-model-runtime/sklearn:v1
      serverPort: 5001
      healthCheckPort: 5001
Lines changed: 29 additions & 0 deletions
{
  "dataframe_split": {
    "columns": [
      "mean radius",
      "mean texture",
      "mean perimeter",
      "mean area",
      "mean smoothness",
      "mean compactness",
      "mean concavity",
      "mean concave points"
    ],
    "data": [
      [
        17.99,
        10.38,
        122.8,
        1001.0,
        0.1184,
        0.2776,
        0.3001,
        0.1471
      ]
    ],
    "index": [
      0
    ]
  }
}
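`dataframe_split` is pandas' "split" orientation, so payloads like the one above can be generated rather than hand-written; `pandas.DataFrame.to_dict(orient="split")` produces the same inner structure. A minimal standard-library sketch, where `dataframe_split_payload` is a hypothetical helper:

```python
# Sketch: wrap column names and row data in the dataframe_split envelope
# expected by the prediction endpoint, with a basic shape check.
import json

def dataframe_split_payload(columns, rows):
    """Build a dataframe_split request body from columns and row data."""
    for row in rows:
        assert len(row) == len(columns), "row width must match column count"
    return {
        "dataframe_split": {
            "columns": list(columns),
            "data": [list(r) for r in rows],
            "index": list(range(len(rows))),
        }
    }

payload = dataframe_split_payload(
    ["mean radius", "mean texture"], [[17.99, 10.38]]
)
print(json.dumps(payload))
```
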
Lines changed: 7 additions & 0 deletions
name: pyspark_logistic_regression_dataflow_job

entry_points:
  main:
    parameters:
      seed: { type: float, default: 24 }
    command: "logistic_regression.py --seed {seed}"
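MLflow renders an entry point's `command` template by substituting parameter values, falling back to the declared defaults when a run does not override them. A small sketch that mimics that substitution; it is not MLflow's own implementation:

```python
# Sketch: fill the MLproject command template above with parameter values,
# applying defaults when no override is supplied.
def render_command(template, defaults, overrides=None):
    """Render an MLproject command template with merged parameters."""
    params = {**defaults, **(overrides or {})}
    return template.format(**params)

template = "logistic_regression.py --seed {seed}"
print(render_command(template, {"seed": 24}))                # → logistic_regression.py --seed 24
print(render_command(template, {"seed": 24}, {"seed": 42}))  # → logistic_regression.py --seed 42
```
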
