Commit

Merge branch 'master' into train-test-split-updates

briangrahamww authored Dec 13, 2019
2 parents 934c480 + 9e223e4 commit 43be57c
Showing 8 changed files with 135 additions and 36 deletions.
38 changes: 34 additions & 4 deletions .travis.yml
@@ -1,6 +1,14 @@
language: python
dist: xenial
sudo: required

env:
matrix:
- PYPI_BUILD="complete"
- PYPI_BUILD="only_plotting"
- PYPI_BUILD="only_postgres"
- PYPI_BUILD="basic"

matrix:
include:
- os: linux
@@ -27,8 +35,8 @@ install:
# os dependencies and set up matplotlib backend
# travis does not support python on osx - install via miniconda
- |
if [ $TRAVIS_OS_NAME = 'osx' ]; then
echo "Installing brew packages..."
brew install graphviz
echo "Setting up conda environment..."
wget http://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh || exit 1
@@ -51,9 +59,31 @@ install:
fi
echo "Installing requirements"
pip install -r requirements.txt
if [ $PYPI_BUILD = 'complete' ]; then
echo "Installing postgres and plotting requirements"
pip install "psycopg2>=2.8.3"
pip install "psycopg2_binary>=2.8.2"
pip install "pygraphviz>=1.5"
elif [ $PYPI_BUILD = 'only_postgres' ]; then
echo "Installing postgres requirements"
pip install "psycopg2>=2.8.3"
pip install "psycopg2_binary>=2.8.2"
elif [ $PYPI_BUILD = 'only_plotting' ]; then
echo "Installing plotting requirements"
pip install "pygraphviz>=1.5"
fi
script:
- python -m pytest
- |
if [ $PYPI_BUILD = 'complete' ]; then
python -m pytest
elif [ $PYPI_BUILD = 'only_plotting' ]; then
python -m pytest -m "not postgres"
elif [ $PYPI_BUILD = 'only_postgres' ]; then
python -m pytest -m "not plotting"
else
python -m pytest -m "not optional"
fi
branches:
only:
84 changes: 67 additions & 17 deletions README.md
@@ -12,7 +12,7 @@

## Primrose at a glance

`Primrose` is a simple **Python** framework for executing **in-memory** workflows defined by directed acyclic graphs (**DAGs**) via configuration files. Data in `primrose` flows from one node to another while **avoiding serialization**, except for when explicitly specified by the user. `Primrose` nodes are designed for **simple batch-based machine learning workflows**, which have datasets small enough to fit into a single machine's memory.

## Table of Contents
We suggest reading the documentation in the following order:
@@ -27,43 +27,49 @@ We suggest reading the documentation in the following order:

## Introduction

`Primrose` is a Python framework for quick, simple machine learning and recommender deployments developed by the data science team at [WW](https://www.weightwatchers.com/us/). It is essentially a workflow management tool which is specialized for the needs of machine learning tasks with small to medium sized datasets (≤ 100GB). Like many orchestration tools, `Primrose` *nodes* are defined in a [directed-acyclic-graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph) which defines dependencies and control flow.

Here's an example DAG showing data cleaning, model training, and model serialization:

<p align="center">
<img src="img/hello_world_tennis.png" width="500">
</p>

It exists within an ecosystem of other great open source workflow management tools (like [Airflow](https://airflow.apache.org/), [Luigi](https://luigi.readthedocs.io/en/stable/), [Kubeflow](https://www.kubeflow.org/docs/about/kubeflow/) or [Prefect](https://docs.prefect.io/guide/)) while carving its own niche based on the following design goals:

1. **Avoid unnecessary serialization:** `Primrose` keeps data in-memory between task steps, and only performs (de)serialization operations when explicitly requested by the user. Data is transported between nodes through use of a `DataObject` abstraction, which contextually delivers the correct data to each `Primrose` node at runtime. As a consequence of this design choice, `Primrose` runs on a single machine and can be deployed as a job within a single container, like any other Python script or cron job. In addition to operating on persistent data passed between nodes, `Primrose` can also be used to call external services in a manner similar to a [Luigi](https://luigi.readthedocs.io/en/stable/) job. In this way, Spark jobs or Hadoop scripts can be called and the framework simply dictates dependencies.

* *As a comparison...* many solutions in this space are focused on long-running jobs which may be distributed across several computing nodes. Furthermore, to facilitate parallelization, save states for redundancy, and process datasets which are too large for memory, orchestrators often require that data is serialized between each workflow task. For smaller datasets, the IO time associated with these steps can be much longer than the time spent in computation.

* *Primrose is not...* a solution which scales across clusters or a complex dependency management solution with dynamic DAGs (yet).
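The in-memory data flow described above can be sketched as follows. This is an illustrative toy, not the actual `primrose` API — `DataObject`, `run_dag`, and the node callables here are simplified stand-ins for the real abstractions:

```python
# Toy sketch of in-memory data flow between DAG nodes (design goal 1):
# nodes hand results to each other through a shared in-memory store
# instead of serializing to disk between steps.

class DataObject:
    """Minimal in-memory store keyed by the node that produced each result."""
    def __init__(self):
        self._data = {}

    def add(self, node_name, value):
        self._data[node_name] = value

    def get(self, node_name):
        return self._data[node_name]


def run_dag(nodes, data_object):
    """Run nodes in (already topologically sorted) order, sharing one store."""
    for name, fn in nodes:
        data_object.add(name, fn(data_object))


data = DataObject()
run_dag(
    [
        ("reader", lambda d: [1, 2, 3]),                        # e.g. read a csv
        ("cleaner", lambda d: [x * 2 for x in d.get("reader")]),  # transform it
    ],
    data,
)
print(data.get("cleaner"))  # [2, 4, 6]
```

Because nothing is written to disk between `reader` and `cleaner`, there is no (de)serialization cost — the tradeoff being that the whole workflow must fit on one machine.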

2. **Batch processing for ML:** `Primrose` was built to facilitate frequent batches of model training or predictions that must read and write from/to multiple sources. Rather than requiring users to define their DAG structure in Python code, `Primrose` adopts a `configuration-as-code` approach. `Primrose` users create implementations of node objects once, then any DAG structural modifications or parameterization changes are processed through configuration json files. This way, deployment changes to DAG operations (such as modifying a DAG to serve model predictions instead of training) can be handled purely through configuration files. This avoids the need to build new Python scripts for production modifications. Furthermore, `Primrose` nodes are based on common machine learning tasks to make data scientists' lives easier. This cuts down on development time for building new models and maximizes code re-use among projects and teams. See the modeling examples in the source and documentation for more info!
* *As a comparison...* in `Primrose`, users simply need to specify in their configuration file that they want common ML operations to act on the `DataObject`. These ML operations can certainly be implemented by users in Luigi or Airflow, but we found operations such as test-train splits or classifier cross-validation to be so common that they warranted nodes pre-dedicated to these operations. [Prefect](https://docs.prefect.io/guide/) has made some great strides in this area, and we encourage users to check their solution out.

* *Primrose is not...* an auto-ml tool or machine-learning toolkit which implements its own algorithms. Any Python machine learning library can be used with `Primrose`, simply by building model or pipeline nodes that implement the user's choice of library.
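A configuration-as-code DAG of this kind might look roughly like the following JSON. The section keys, class names, and `destinations` wiring here are illustrative stand-ins, not the exact `primrose` schema:

```json
{
  "implementation_config": {
    "reader_config": {
      "csv_reader": {
        "class": "CsvReader",
        "filename": "data.csv",
        "destinations": ["train_test_split"]
      }
    },
    "pipeline_config": {
      "train_test_split": {
        "class": "TrainTestSplit",
        "destinations": ["model"]
      }
    },
    "model_config": {
      "model": {
        "class": "SklearnClassifierModel",
        "mode": "train"
      }
    }
  }
}
```

Switching such a job from training to serving predictions would then be a configuration change (e.g. the model node's `mode`), not a code change.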
3. **Simplicity:**

**Standardization of deployments:** `Primrose` is meant to help make deployment and model building as simple as possible. From a developer operations perspective, it requires no external scheduler or cluster to run deployments. `Primrose` code can simply be containerized with a `primrose` Python entrypoint, and deployed as a job on k8s or any other container management service.

**Standardization of development:** From a software engineering perspective, another advantage of `Primrose` stems from the standardization of model and recommender code. Modifying feature engineering pipelines or adding recommender features is simplified by writing additions to self-contained `Primrose` nodes and making additions to a configuration file.

* *As a comparison...* `Primrose` can be leveraged as a piece of a larger ETL job (a `Primrose` job could be a job within an Airflow DAG), or run on its own as a self-contained, single-node ETL job. Some orchestration solutions (Airflow, for example) require running persistent clusters and services for managing jobs.

* *Primrose is not...* able to manage its own job scheduling or timing. This is left to the user, using k8s job scheduling or manual cron job assignments on a virtual machine.


There are many solutions in this space, and we encourage users to explore other options that may be most appropriate for their workflows. We view `Primrose` as a simple solution for managing production ML jobs.


## Getting Started

`Primrose` has a couple of optional tools:
* a postgres database reader
* a plotting tool

These require a few external dependencies to be installed prior to the package installation. If you are interested in their functionality, follow the appropriate instructions for your OS below. Otherwise, you can proceed with the basic package installation.

### Installation

You can install the latest `Primrose` release via PyPI:
@@ -77,7 +83,51 @@
```
cd primrose
python setup.py install
```

`Primrose` has a few external dependencies which need to be installed in addition to the python requirements. Follow the appropriate instructions for your OS below.
To install the complete `Primrose` package (after dependencies have been installed):
```
pip install primrose[postgres,plotting]
```

To install `Primrose` with just the postgres option:

```
pip install primrose[postgres]
```

To install `primrose` with just the plotting option:

```
pip install primrose[plotting]
```

### External dependencies

**Postgres**

#### MacOSX

We recommend using homebrew to manage OS level external packages. If you do not already have homebrew installed, please visit [their website](https://brew.sh/).

Instructions:
1. Use homebrew to install `postgresql` library.
```
brew install postgresql
```

2. Use `pip` to install `psycopg2`
```
pip install psycopg2
```

#### Debian/Ubuntu

Instructions:
1. Install the `libpq-dev` library
```
apt-get install libpq-dev
```

**Plotting**

#### MacOSX

14 changes: 9 additions & 5 deletions primrose/configuration/configuration_dag.py
@@ -13,6 +13,7 @@
from primrose.node_factory import NodeFactory
from primrose.base.conditional_path_node import AbstractConditionalPath


class ConfigurationDag():

def __init__(self, config):
@@ -113,7 +114,7 @@ def descendents(self, source):

def paths(self, source, target):
"""return the paths, if any, from a given source node to a given target node
Args:
source (str): name of node which is starting point of path
target (str): name of node which is end point of path
@@ -156,7 +157,7 @@ def check_connected_components(self):
ConfigurationError if multiple connected components
"""
connected_components = nx.connected_components(self.G)
n = sum([1 for c in connected_components])
if n > 1:
raise ConfigurationError("Found multiple connected components: %s" % str(list( nx.connected_components(self.G) )))
@@ -279,7 +280,7 @@ def check_dag(self):

self.check_connected_components()

self.check_for_cycles()

def plot_dag(self, filename, traverser, node_size=500, label_font_size=12, text_angle=0, image_width=16, image_height=12):
"""plot the DAG to image file
@@ -289,7 +290,7 @@ def plot_dag(self, filename, traverser, node_size=500, label_font_size=12, text_
title (str): title to add to chart
node_size (int): node size
label_font_size (int): font size
text_angle (int): angle to rotate. This is angle in degrees counter clockwise from east
image_width (int): width of image in inches
image_height (int): height of image in inches
@@ -321,7 +322,7 @@ def plot_dag(self, filename, traverser, node_size=500, label_font_size=12, text_
import pydot
from networkx.drawing.nx_pydot import graphviz_layout
except ImportError: # pragma: no cover
raise ImportError("This example needs Graphviz and pydot")
raise ImportError(
"This example needs Graphviz and pydot."
"Please refer to the Plotting requirements in the README"
)

# pos = nx.spring_layout(G)
# pos = nx.circular_layout(G)
14 changes: 12 additions & 2 deletions primrose/readers/postgres_helper.py
@@ -1,15 +1,22 @@
"""Postgres helper
Author(s):
Wassym Kalouache ([email protected])
Carl Anderson ([email protected])
"""
import os
import psycopg2
from primrose.readers.database_helper import get_env_val

try:
import psycopg2
HAS_PSYCOPG2 = True

except ImportError:
HAS_PSYCOPG2 = False


class PostgresHelper():
'''
some utility methods for connecting to postgres
@@ -43,6 +50,9 @@ def create_db_connection():
db (connection): postgres db object
"""
if not HAS_PSYCOPG2:
raise ImportError("psycopg2 is necessary to establish connection")

host, port, username, password, database = PostgresHelper.extract_postgres_credentials()
conn = psycopg2.connect(dbname=database, user=username, password=password, host=host, port=port, sslmode='require')
return conn
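The guarded-import pattern added in this file can be exercised in isolation. The sketch below mirrors the idea with simplified, explicit credentials — the function signature here is illustrative, not the actual `PostgresHelper` API:

```python
# Optional-dependency pattern: import once at module load, record availability,
# and fail with a clear message only when the feature is actually used.
try:
    import psycopg2  # optional extra: pip install primrose[postgres]
    HAS_PSYCOPG2 = True
except ImportError:
    HAS_PSYCOPG2 = False


def create_db_connection(dbname, user, password, host, port):
    """Open a postgres connection, or raise a clear error if the extra is missing."""
    if not HAS_PSYCOPG2:
        raise ImportError("psycopg2 is necessary to establish connection")
    return psycopg2.connect(dbname=dbname, user=user, password=password,
                            host=host, port=port, sslmode="require")
```

This keeps the module importable even when the extra is absent, so the basic install never pays for dependencies it does not use.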
3 changes: 0 additions & 3 deletions requirements.txt
@@ -10,15 +10,12 @@ pytest~=3.10.1
testfixtures>=6.8.2
boto3>=1.9.0
matplotlib>=2.1.2
psycopg2_binary>=2.8.2
protobuf>=3.8.0
mysql-connector-python>=8.0.16
psycopg2>=2.8.3
pytest-cov>=2.6.0
scikit-learn>=0.21.2
atlas>=0.27.0
google-cloud-storage>=1.16.0
pygraphviz>=1.5
jstyleson==0.0.2
Jinja2>=2.10
PyYAML>=5.1
4 changes: 4 additions & 0 deletions setup.py
@@ -35,4 +35,8 @@
'Source': 'https://github.com/ww-tech/primrose',
},
entry_points={"console_scripts": ["primrose = primrose.__init__:cli"]},
extras_require={
'postgres': ["psycopg2>=2.8.3", "psycopg2_binary>=2.8.2"],
'plotting': ["pygraphviz>=1.5"]
}
),
9 changes: 5 additions & 4 deletions test/test_configuration_dag.py
@@ -151,7 +151,8 @@ def test_type_bad2():
dag.create_dag()
assert "Did not find doesnotexist destination in" in str(e)


@pytest.mark.optional
@pytest.mark.plotting
def test_plot_dag():

class TestPostprocess(AbstractNode):
@@ -348,7 +349,7 @@ def test_starting_nodes():
"filename": "hello_world_predictions.csv"
}
}
}
}
configuration = Configuration(config_location=None, is_dict_config=True, dict_config=config)

@@ -394,7 +395,7 @@ def test_nodes_of_type():
"filename": "hello_world_predictions.csv"
}
}
}
}
configuration = Configuration(config_location=None, is_dict_config=True, dict_config=config)

@@ -447,7 +448,7 @@ def test_paths():
"filename": "hello_world_predictions.csv"
}
}
}
}
configuration = Configuration(config_location=None, is_dict_config=True, dict_config=config)

5 changes: 4 additions & 1 deletion test/test_postgres_reader.py
@@ -11,6 +11,9 @@
def test_necessary_config():
assert len(PostgresReader.necessary_config({})) == 1


@pytest.mark.optional
@pytest.mark.postgres
def test_run(monkeypatch):
config = {
"implementation_config": {
@@ -52,4 +55,4 @@ def fake_df(query, con):
dd = data_object.get('mynode', rtype=DataObjectResponseType.KEY_VALUE.value)
assert 'query_0' in dd
df = dd['query_0']
assert list(df.T.to_dict().values())[0] == {'Name': "Tom", "Age": 20}
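The new `optional`, `postgres`, and `plotting` markers used above (and selected with `pytest -m` in the Travis script) would typically be registered so pytest does not warn on unknown marks. A hypothetical `pytest.ini` fragment — the exact file and wording may differ in the repo:

```ini
[pytest]
markers =
    optional: tests that exercise any optional extra
    postgres: tests that require psycopg2
    plotting: tests that require pygraphviz
```

With this in place, `pytest -m "not postgres"` deselects the postgres-only tests on builds where that extra is not installed.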
