Merge pull request #8 from kameshsampath/updates
Updates
kameshsampath authored Dec 8, 2024
2 parents 1e542eb + 5e746d4 commit 3f51530
Showing 7 changed files with 57 additions and 20 deletions.
3 changes: 2 additions & 1 deletion .vscode/settings.json
@@ -20,6 +20,7 @@
"proba",
"scikit",
"selectbox",
"sklearn"
"sklearn",
"Undeploying"
]
}
4 changes: 2 additions & 2 deletions docs/explore_dataset.md
@@ -121,8 +121,8 @@ with st.expander("Data"):
df
# define and display
st.write("**X**")
x = df.drop("species", axis=1)
x
X = df.drop("species", axis=1)
X

st.write("**y**")
y = df.species
4 changes: 2 additions & 2 deletions docs/interactive_features.md
@@ -136,8 +136,8 @@ with st.expander("Data"):
df
# define and display
st.write("**X**")
x = df.drop("species", axis=1)
x
X = df.drop("species", axis=1)
X

st.write("**y**")
y = df.species
2 changes: 1 addition & 1 deletion docs/setup.md
@@ -44,7 +44,7 @@ cd <directory where you want to create your project>
Create the application folder. For the rest of the demo we will refer to it as `st-ml-app`, and for easy reference we will export its path to an environment variable named `$TUTORIAL_HOME`:

```shell
export TUTORIAL_HOME='st-ml-app'
export TUTORIAL_HOME="$PWD/st-ml-app"
mkdir -p $TUTORIAL_HOME
cd $TUTORIAL_HOME
```
27 changes: 24 additions & 3 deletions docs/snowflake_deploy.md
@@ -58,7 +58,7 @@ snow object create schema \
--database='st_ml_app'
```

Download and import the [notebook](https://github.com/Snowflake-Labs/streamlit-oss-to-sis-bootstrap/blob/main/notebooks/sis_setup.ipynb){:target=_blank} and follow the instructions on the notebook to prepare the environment for deployment.
## Using Notebook

We will need a few more Snowflake objects, and we will need to ingest the penguins data into our Snowflake database `st_ml_app`, specifically into a table named `penguins` in the `data` schema. We will use [Snowflake Notebooks](https://www.snowflake.com/en/data-cloud/notebooks/){:target=_blank} for this purpose.

Download and import this [notebook](https://github.com/Snowflake-Labs/streamlit-oss-to-sis-bootstrap/blob/main/notebooks/sis_setup.ipynb){:target=_blank} and follow its instructions to prepare the environment for deployment.

## Deploying the App

@@ -76,8 +82,6 @@ snow init sis --template example_streamlit

### Update the App

__TODO__: Note on Copy

Edit and update the `$TUTORIAL_HOME/sis/streamlit_app.py` with,

```py title="streamlit_app.py" linenums="1" hl_lines="4-7 10-24 24 34-45"
@@ -375,6 +379,23 @@ snow streamlit deploy --replace \

There you go, we have seamlessly deployed the application to SiS with very little effort.

## Undeploying the Application

To drop the application, run:

```shell
snow streamlit drop streamlit_penguin \
--database='st_ml_app' --schema='apps'
```

## Cleanup

To clean up all the resources created in this tutorial, including the notebook, run:

```shell
snow object drop database st_ml_app
```

## Summary
This chapter guided you through the process of transforming a locally running Streamlit application into a production-ready deployment within Snowflake. You learned the essential modifications needed for Snowflake compatibility, understood the configuration requirements, and mastered the deployment process. You now have a fully functional Streamlit application running in Snowflake's secure environment, accessible to your organization's users through Snowflake's interface.

15 changes: 14 additions & 1 deletion docs/train_and_predict.md
@@ -14,9 +14,22 @@ In this chapter, we will:
As part of displaying the predictions, we will use the following Streamlit components to make the output aesthetically appealing:

- [Progress Column Config](https://docs.streamlit.io/develop/api-reference/data/st.column_config/st.column_config.progresscolumn)
- [Sucess Message](https://docs.streamlit.io/develop/api-reference/status/st.success)
- [Success Message](https://docs.streamlit.io/develop/api-reference/status/st.success)
- [Container](https://docs.streamlit.io/develop/api-reference/layout/st.container)

Update `$TUTORIAL_HOME/requirements.txt` to look like:

```
streamlit>=1.26.0
scikit-learn
```

Update the local Python virtual environment to install the `scikit-learn` package:

```shell
pip install -r requirements.txt
```
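To sanity-check the new dependency, here is a small, self-contained scikit-learn sketch of the train-and-predict flow this chapter builds. The data rows are invented stand-ins for the penguins dataset, not the real values:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented stand-in rows mimicking the penguins dataset's shape
df = pd.DataFrame({
    "island": [0, 1, 2, 0, 1, 2],
    "bill_length_mm": [39.1, 46.5, 49.3, 38.8, 45.2, 50.0],
    "bill_depth_mm": [18.7, 17.9, 19.9, 17.2, 16.4, 15.2],
    "flipper_length_mm": [181, 195, 203, 184, 198, 220],
    "body_mass_g": [3750, 4400, 4650, 3600, 4300, 5400],
    "species": ["Adelie", "Chinstrap", "Chinstrap",
                "Adelie", "Gentoo", "Gentoo"],
})

X = df.drop("species", axis=1)  # features
y = df.species                  # target

clf = RandomForestClassifier(random_state=42)
clf.fit(X, y)

# predict_proba returns one probability per class -- the values the
# app later renders with a progress column
proba = clf.predict_proba(X)
print(proba.shape)  # one row per sample, one column per class
```

The chapter's actual app follows the same shape: split features from the target, fit the classifier, then display `predict_proba` output.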

Edit and update the `$TUTORIAL_HOME/streamlit_app.py` with the following code,

```py title="streamlit_app.py" linenums="1" hl_lines="5 147-198"
22 changes: 12 additions & 10 deletions notebooks/sis_setup.ipynb
@@ -10,7 +10,7 @@
{
"cell_type": "markdown",
"metadata": {
"name": "cell1",
"name": "md_setup_intro",
"resultHeight": 282,
"collapsed": false
},
@@ -21,7 +21,7 @@
"cell_type": "markdown",
"id": "2568d714-4541-4058-8bf1-7b0cbbdc7ab5",
"metadata": {
"name": "cell3",
"name": "md_setup_schemas",
"collapsed": false,
"resultHeight": 342
},
@@ -32,7 +32,7 @@
"id": "b28d9a98-b375-4db9-8219-0df97eb39f14",
"metadata": {
"language": "sql",
"name": "cell2",
"name": "sql_create_schemas",
"collapsed": false,
"resultHeight": 111
},
@@ -44,7 +44,7 @@
"cell_type": "markdown",
"id": "fc1af8cf-344c-46c5-9007-4b2dd0c28761",
"metadata": {
"name": "cell7",
"name": "md_external_stage",
"collapsed": false,
"resultHeight": 140
},
@@ -55,7 +55,9 @@
"id": "579e0531-4a76-493b-9089-22e0b4a59e6f",
"metadata": {
"language": "sql",
"name": "cell8"
"name": "sql_create_stage",
"collapsed": false,
"resultHeight": 111
},
"outputs": [],
"source": "-- add an external stage to a s3 bucket\nCREATE STAGE IF NOT EXISTS STAGES.ST_ML_APP_PENGUINS\n URL='s3://sfquickstarts/misc';\n\n-- default CSV file format and allow values to quoted by \"\nCREATE FILE FORMAT IF NOT EXISTS FILE_FORMATS.CSV\n TYPE='CSV'\n SKIP_HEADER=1\n FIELD_OPTIONALLY_ENCLOSED_BY = '\"';",
@@ -65,7 +67,7 @@
"cell_type": "markdown",
"id": "d7c9a1b2-01eb-4430-80dd-ad895d96a530",
"metadata": {
"name": "cell4",
"name": "md_load_data",
"collapsed": false,
"resultHeight": 128
},
@@ -82,14 +84,14 @@
"codeCollapsed": false
},
"outputs": [],
"source": "-- Create table to hold penguins data\nCREATE OR ALTER TABLE DATA.PENGUINS(\n SPECIES STRING NOT NULL,\n ISLAND STRING NOT NULL,\n BILL_LENGTH_MM NUMBER NOT NULL,\n BILL_DEPTH_MM NUMBER NOT NULL,\n FLIPPER_LENGTH_MM NUMBER NOT NULL,\n BODY_MASS_G NUMBER NOT NULL,\n SEX STRING NOT NULL\n);\n\n-- Load the data from penguins_cleaned.csv\nCOPY INTO DATA.PENGUINS\nFROM @STAGES.ST_ML_APP_PENGUINS/PENGUINS_CLEANED.CSV\nFILE_FORMAT=(FORMAT_NAME='FILE_FORMATS.CSV');",
"source": "-- Create table to hold penguins data\nCREATE OR ALTER TABLE DATA.PENGUINS(\n SPECIES STRING NOT NULL,\n ISLAND STRING NOT NULL,\n BILL_LENGTH_MM NUMBER NOT NULL,\n BILL_DEPTH_MM NUMBER NOT NULL,\n FLIPPER_LENGTH_MM NUMBER NOT NULL,\n BODY_MASS_G NUMBER NOT NULL,\n SEX STRING NOT NULL\n);\n\n-- Load the data from penguins_cleaned.csv\nCOPY INTO DATA.PENGUINS\nFROM @stages.st_ml_app_penguins/penguins_cleaned.csv\nFILE_FORMAT=(FORMAT_NAME='FILE_FORMATS.CSV');",
"execution_count": null
},
{
"cell_type": "markdown",
"id": "cd53f3db-b787-4343-8726-2c5e8322d106",
"metadata": {
"name": "cell6",
"name": "md_query_data",
"collapsed": false,
"resultHeight": 41
},
@@ -100,12 +102,12 @@
"id": "13ce4fc6-d28b-463f-85cf-b8dc42ae2fc0",
"metadata": {
"language": "python",
"name": "cell5",
"name": "py_query_data",
"collapsed": false,
"resultHeight": 335
},
"outputs": [],
"source": "from snowflake.snowpark.context import get_active_session\n\nsession = get_active_session()\ndf = session.table('st_ml_app.data.penguins')\ndf.show()",
"source": "from snowflake.snowpark.context import get_active_session\n\nsession = get_active_session()\ndf = session.table('st_ml_app.data.penguins')\ndf.show(10)",
"execution_count": null
}
]
