Commit

chore: bump version number
mhamilton723 committed Aug 19, 2019
1 parent b0797b3 commit 3bb48b8
Showing 4 changed files with 17 additions and 37 deletions.
35 changes: 8 additions & 27 deletions README.md
@@ -4,7 +4,7 @@

[![Build Status](https://msazure.visualstudio.com/Cognitive%20Services/_apis/build/status/Azure.mmlspark?branchName=master)](https://msazure.visualstudio.com/Cognitive%20Services/_build/latest?definitionId=83120&branchName=master) [![codecov](https://codecov.io/gh/Azure/mmlspark/branch/master/graph/badge.svg)](https://codecov.io/gh/Azure/mmlspark) [![Gitter](https://badges.gitter.im/Microsoft/MMLSpark.svg)](https://gitter.im/Microsoft/MMLSpark?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

-[![Release Notes](https://img.shields.io/badge/release-notes-blue)](https://github.com/Azure/mmlspark/releases) [![Release Notes](https://img.shields.io/badge/version-0.17-blue)](https://github.com/Azure/mmlspark/releases) [![version](https://mmlspark.blob.core.windows.net/icons/badges/master_version3.svg)](#sbt)
+[![Release Notes](https://img.shields.io/badge/release-notes-blue)](https://github.com/Azure/mmlspark/releases) [![Release Notes](https://img.shields.io/badge/version-0.18.0-blue)](https://github.com/Azure/mmlspark/releases) [![version](https://mmlspark.blob.core.windows.net/icons/badges/master_version3.svg)](#sbt)


MMLSpark is an ecosystem of tools aimed towards expanding the distributed computing framework
@@ -129,9 +129,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
`--packages` option, examples:

```bash
-spark-shell --packages Azure:mmlspark:0.17
-pyspark --packages Azure:mmlspark:0.17
-spark-submit --packages Azure:mmlspark:0.17 MyApp.jar
+spark-shell --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0
+pyspark --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0
+spark-submit --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0 MyApp.jar
```

This can be used in other Spark contexts too. For example, you can use MMLSpark
@@ -146,14 +146,14 @@ cloud](http://community.cloud.databricks.com), create a new [library from Maven
coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
in your workspace.

-For the coordinates use: `Azure:mmlspark:0.17`. Ensure this library is
+For the coordinates use: `com.microsoft.ml.spark:mmlspark_2.11:0.18.0`. Ensure this library is
attached to all clusters you create.

Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.

You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks import the following databricks archive:

-`https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.17.dbc`
+`https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.18.0.dbc`

### Docker

@@ -185,39 +185,20 @@ the above example, or from python:
```python
import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
-.config("spark.jars.packages", "Azure:mmlspark:0.17") \
+.config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:0.18.0") \
.getOrCreate()
import mmlspark
```

-<img title="Script action submission" src="http://i.imgur.com/oQcS0R2.png" align="right" />
-
-### HDInsight
-
-To install MMLSpark on an existing [HDInsight Spark
-Cluster](https://docs.microsoft.com/en-us/azure/hdinsight/), you can execute a
-script action on the cluster head and worker nodes. For instructions on
-running script actions, see [this
-guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).
-
-The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.17/install-mmlspark.sh>.
-
-If you're using the Azure Portal to run the script action, go to `Script
-actions` → `Submit new` in the `Overview` section of your cluster blade. In
-the `Bash script URI` field, input the script action URL provided above. Mark
-the rest of the options as shown on the screenshot to the right.
-
-Submit, and the cluster should finish configuring within 10 minutes or so.

### SBT

If you are building a Spark application in Scala, add the following lines to
your `build.sbt`:

```scala
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
-libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.17"
+libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.18.0"
```

### Building from source
6 changes: 3 additions & 3 deletions docs/R-setup.md
@@ -10,7 +10,7 @@ To install the current MMLSpark package for R use:

```R
...
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.18.0.zip")
...
```

@@ -23,7 +23,7 @@ It will take some time to install all dependencies. Then, run:
library(sparklyr)
library(dplyr)
config <- spark_config()
-config$sparklyr.defaultPackages <- "Azure:mmlspark:0.17"
+config$sparklyr.defaultPackages <- "com.microsoft.ml.spark:mmlspark_2.11:0.18.0"
sc <- spark_connect(master = "local", config = config)
...
```
@@ -83,7 +83,7 @@ and then use spark_connect with method = "databricks":

```R
install.packages("devtools")
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.18.0.zip")
library(sparklyr)
library(dplyr)
sc <- spark_connect(method = "databricks")
10 changes: 4 additions & 6 deletions docs/docker.md
@@ -28,7 +28,7 @@ You can now select one of the sample notebooks and run it, or create your own.
In the above, `mcr.microsoft.com/mmlspark/release` specifies the project and image name that you
want to run. There is another component implicit here which is the _tag_ (=
version) that you want to use — specifying it explicitly looks like
-`mcr.microsoft.com/mmlspark/release:0.17` for the `0.17` tag.
+`mcr.microsoft.com/mmlspark/release:0.18.0` for the `0.18.0` tag.

Leaving `mcr.microsoft.com/mmlspark/release` by itself has an implicit `latest` tag, so it is
equivalent to `mcr.microsoft.com/mmlspark/release:latest`. The `latest` tag is identical to the
@@ -42,21 +42,19 @@ that you will probably want to use can look as follows:

```bash
 docker run -it --rm \
--e ACCEPT_EULA=y \
 -p 127.0.0.1:80:8888 \
 -v ~/myfiles:/notebooks/myfiles \
-mcr.microsoft.com/mmlspark/release:0.17
+mcr.microsoft.com/mmlspark/release:0.18.0
```

In this example, backslashes are used to break things up for readability; you
can enter it as one long line. Note that in PowerShell, the `myfiles` local
path and line breaks look a little different:

 docker run -it --rm `
--e ACCEPT_EULA=y `
 -p 127.0.0.1:80:8888 `
 -v C:\myfiles:/notebooks/myfiles `
-mcr.microsoft.com/mmlspark/release:0.17
+mcr.microsoft.com/mmlspark/release:0.18.0

Let's break this command and go over the meaning of each part:

@@ -139,7 +137,7 @@ Let's break this command and go over the meaning of each part:
model.write().overwrite().save('myfiles/myTrainedModel.mml')
```

-- **`mcr.microsoft.com/mmlspark/release:0.17`**
+- **`mcr.microsoft.com/mmlspark/release:0.18.0`**

Finally, this specifies an explicit version tag for the image that we want to
run.
@@ -5,6 +5,7 @@ package com.microsoft.ml.spark.cognitive

import java.net.URI

+import com.microsoft.ml.spark.build.BuildInfo
import com.microsoft.ml.spark.core.contracts.HasOutputCol
import com.microsoft.ml.spark.core.schema.DatasetExtensions
import com.microsoft.ml.spark.io.http._
@@ -179,7 +180,7 @@ object URLEncodingUtils {
object CognitiveServiceUtils {

def setUA(req: HttpRequestBase): Unit = {
-req.setHeader("User-Agent", "mmlspark/0.17")
+req.setHeader("User-Agent", s"mmlspark/${BuildInfo.version}")
}
}

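The last hunk replaces the hardcoded `"mmlspark/0.17"` User-Agent string with a generated `BuildInfo.version`, so the header tracks the build rather than needing a manual edit on every release. Such an object is commonly produced by the sbt-buildinfo plugin; the sketch below shows roughly what the generated code looks like (the exact keys, values, and plugin wiring are illustrative assumptions, not taken from this commit):

```scala
// build.sbt (sketch, assuming the sbt-buildinfo plugin is on the classpath):
//   enablePlugins(BuildInfoPlugin)
//   buildInfoKeys    := Seq[BuildInfoKey](name, version, scalaVersion)
//   buildInfoPackage := "com.microsoft.ml.spark.build"

// The plugin then emits, at compile time, an object roughly equivalent to:
package com.microsoft.ml.spark.build

object BuildInfo {
  // Values are filled in from the sbt build; those shown are illustrative.
  val name: String = "mmlspark"
  val version: String = "0.18.0"
  val scalaVersion: String = "2.11.12"
}
```

With this in place, call sites such as `setUA` interpolate `BuildInfo.version` instead of a literal, which is exactly what the diff above does.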
