
# Scala for Data Science

The enclosed notebooks and other materials are for the Scala Days 2016 and Strata London 2016 talks by Andy Petrella and Dean Wampler on why Scala is a great language for Data Science.

The talk includes a notebook for Spark Notebook, which provides a notebook metaphor for interactive Spark development using Scala. If you aren't familiar with the idea of a notebook interface, think of it as an enhanced REPL that makes it easy to edit and run (or rerun) code, plot results, mix in markdown-based documentation, etc.
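For example, a typical cell might run a small Spark job over an in-memory collection and print the result inline. The sketch below is hypothetical and assumes the `sparkContext` value that Spark Notebook predefines for each notebook:

```scala
// Hypothetical notebook cell: word count over a small in-memory collection.
// Assumes Spark Notebook's predefined `sparkContext`.
val lines = sparkContext.parallelize(Seq(
  "scala is a great language for data science",
  "spark itself is written in scala"))

val counts = lines
  .flatMap(_.split("\\s+"))   // split each line into words
  .map(word => (word, 1))     // pair each word with a count of 1
  .reduceByKey(_ + _)         // sum the counts per word

counts.collect().foreach(println)
```

Editing and rerunning cells like this, with the output appearing directly below them, is the workflow the notebooks in this repo rely on.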

However, if you don't want to go to the trouble of installing and using Spark Notebook, there are Markdown and PDF versions of the same content in the notebooks directory.

## Use the Existing Docker Image

A Docker image is available that bundles Spark Notebook together with the current notebooks.

Pull it from Docker Hub:

```bash
docker pull datafellas/scala-for-data-science:1.0-spark2
```

Run it:

```bash
docker run --rm -it --net=host -m 8g datafellas/scala-for-data-science:1.0-spark2 bash
```

Start the services:

```bash
source start.sh
```

Use it:

On Linux, go to http://localhost:9000.

On Mac or Windows, you'll probably need to use the Docker VM's IP address or host name instead of `localhost`.

## Install Manually

Otherwise, install Spark Notebook, version 0.6.3 or later. You can use either Scala 2.10 or 2.11. In the commands below, we'll assume the root directory of this installation is `/path/to/spark-notebook`; substitute your real path. Due to a bug in library path handling, you must start Spark Notebook from this directory.

We'll also use `/path/to/scala-for-data-science` as the path to your local clone of this Git repo. Again, substitute the real path.

There is one environment variable that you must define, `NOTEBOOKS_DIR`. Run the following commands to define this variable and start Spark Notebook.

For Linux or OSX, use the following:

```bash
export NOTEBOOKS_DIR=/path/to/scala-for-data-science/notebooks
cd /path/to/spark-notebook
bin/spark-notebook
```

For Windows, use the following:

```bat
set NOTEBOOKS_DIR=c:\path\to\scala-for-data-science\notebooks
cd \path\to\spark-notebook
bin\spark-notebook
```

Open a browser window to http://localhost:9000, then click the link to open the WhyScala notebook.

To evaluate all the cells in a notebook, use the Cell > Run All menu item. To evaluate one cell at a time, use the ▶︎ button on the toolbar or press Shift+Return; both run the currently-selected cell and advance to the next one. Note that the notebook copy in the repo includes the output from a previous run.

Grab the slides for the rest of the presentation here.