diff --git a/doc/tutorials/cloud/01_import_data.ipynb b/doc/tutorials/cloud/01_import_data.ipynb
index 55680775..9e318576 100644
--- a/doc/tutorials/cloud/01_import_data.ipynb
+++ b/doc/tutorials/cloud/01_import_data.ipynb
@@ -69,7 +69,7 @@
 "\n",
 "As a first step, we need to obtain the schema of the data (set of columns stored in parquet files with their types). You might have this information in advance (if this is your dataset), but if not, you need to analyze parquet files to figure out their schema.\n",
 "\n",
-"One of the options of doing this are parquet-tools library wrapped into [docker container](https://hub.docker.com/r/nathanhowell/parquet-tools). To use it, you need to download one of parquet files locally, then run this docker container against this file. Using the same container, you can also peek into parquet files and looks its actual data.\n",
+"One of the options for doing this is the parquet-tools library wrapped into a [docker container](https://hub.docker.com/r/nathanhowell/parquet-tools). To use it, download one of the parquet files locally, then run this docker container against that file. Using the same container, you can also peek into parquet files and look at their actual data.\n",
 "\n",
 "For the file above, I got the following schema information:\n",
 "\n",
@@ -87,7 +87,7 @@
 "}\n",
 "``` \n",
 "\n",
-"From this schema we see that all the columns in parquet file has string type and optional (nullable).\n",
+"From this schema we see that all the columns in the parquet file have the string type and are optional (nullable).\n",
 "Let's create the table in our database for this data. The names of columns are not important, just the order and their types have to match with parquet file schema."
 ],
 "metadata": {
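
As a side note, the same schema check can also be done locally with pyarrow. The sketch below is an alternative to the parquet-tools container described in the patch, not the tutorial's own method; it assumes a parquet file has already been downloaded, and the file name used is hypothetical.

```python
# Minimal sketch: inspect a parquet file's schema with pyarrow instead of the
# parquet-tools docker image. Assumes the file was downloaded beforehand;
# "sample.parquet" is a hypothetical name, not one from the tutorial.
import pyarrow.parquet as pq

schema = pq.read_schema("sample.parquet")
for field in schema:
    # For the file discussed in the patch, every column is an optional (nullable) string.
    print(field.name, field.type, "nullable" if field.nullable else "required")

# Peek at a few rows of the actual data, similar to parquet-tools' head/cat commands.
table = pq.read_table("sample.parquet")
print(table.slice(0, 5).to_pylist())
```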