Merge pull request #13 from alteryx/quickstart-installation_update
Documentation Updates and Relevant Links added
NabilFayak authored Aug 2, 2023
2 parents bd7993b + 2b81b77 commit 75fccc3
Showing 4 changed files with 94 additions and 201 deletions.
26 changes: 24 additions & 2 deletions README.md
@@ -3,8 +3,30 @@
CheckMates is an Alteryx Open Source library which catches and warns of problems with your data and problem setup before modeling.

## Installation

```bash
python -m pip install checkmates
```

## Quickstart

#### Load and validate example data
```python
import pandas as pd

from checkmates import IDColumnsDataCheck

id_data_check_name = IDColumnsDataCheck.name
X_dict = {
"col_1": [1, 1, 2, 3],
"col_2": [2, 3, 4, 5],
"col_3_id": [0, 1, 2, 3],
"Id": [3, 1, 2, 0],
"col_5": [0, 0, 1, 2],
"col_6": [0.1, 0.2, 0.3, 0.4],
}
X = pd.DataFrame.from_dict(X_dict)
id_cols_check = IDColumnsDataCheck(id_threshold=0.95)
print(id_cols_check.validate(X))
```
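The quickstart above prints the raw messages from the check. As a follow-up, a caller might drop the flagged columns. The sketch below is illustrative only: the `messages` list is hand-written, and the assumption that each message carries a `details` dict with a `columns` list should be verified against the actual output of `validate` in your installed version.

```python
import pandas as pd

# Hand-written stand-in for the output of IDColumnsDataCheck.validate;
# inspect print(id_cols_check.validate(X)) to see the real structure.
messages = [
    {
        "message": "Columns 'Id', 'col_3_id' are likely ID columns",
        "details": {"columns": ["Id", "col_3_id"]},
    }
]

X = pd.DataFrame({"col_1": [1, 2], "col_3_id": [3, 4], "Id": [0, 1]})

# Collect every column named in any message's details and drop them.
flagged = {col for m in messages for col in m["details"]["columns"]}
X_clean = X.drop(columns=list(flagged))
print(list(X_clean.columns))  # only the non-ID columns remain
```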

## Next Steps

@@ -14,7 +36,7 @@ Read more about CheckMates on our [documentation page](#):

The CheckMates community is happy to provide support to users of CheckMates. Project support can be found in four places depending on the type of question:
1. For usage questions, use [Stack Overflow](#) with the `CheckMates` tag.
-2. For bugs, issues, or feature requests start a [Github issue](#).
+2. For bugs, issues, or feature requests start a [Github issue](https://github.com/alteryx/CheckMates/issues).
3. For discussion regarding development on the core library, use [Slack](#).
4. For everything else, the core developers can be reached by email at [email protected]

105 changes: 9 additions & 96 deletions contributing.md
@@ -1,8 +1,8 @@
## Contributing to the Codebase

-#### 0. Look at Open Issues
-We currently utilize GitHub Issues as our project management tool for datachecks. Please do the following:
-* Look at our [open issues](#)
+#### 0. Look at Open Issues
+We currently utilize GitHub Issues as our project management tool for checkmates. Please do the following:
+* Look at our [open issues](https://github.com/alteryx/CheckMates/issues)
* Find an unclaimed issue by looking for an empty `Assignees` field.
* If this is your first time contributing, issues labeled ``good first issue`` are a good place to start.
* If your issue is labeled `needs design` or `spike` it is recommended you provide a design document for your feature
@@ -11,19 +11,18 @@ We currently utilize GitHub Issues as our project management tool for datachecks


#### 1. Clone repo
-The code is hosted on GitHub, so you will need to use Git to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. Additionally, you must make sure that the version of Python you use is at least 3.8. Using `conda` you can use `conda create -n datachecks python=3.8` and `conda activate datachecks` before the following steps.
+The code is hosted on GitHub, so you will need to use Git to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. Additionally, you must make sure that the version of Python you use is at least 3.8.
* clone with `git clone https://github.com/alteryx/CheckMates.git`
* install in edit mode with:
```bash
# move into the repo
-cd datachecks
+cd checkmates
# installs the repo in edit mode, meaning changes to any files will be picked up in python. also installs all dependencies.
make installdeps-dev
```

<!--- Note that if you're on Mac, there are a few extra steps you'll want to keep track of.
* In order to run on Mac, [LightGBM requires the OpenMP library to be installed](https://datachecks.alteryx.com/en/stable/install.html#Mac), which can be done with HomeBrew by running `brew install libomp`
-* We've seen some installs get the following warning when importing datachecks: "UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError". [A known workaround](https://stackoverflow.com/a/61531555/841003) is to run `brew reinstall readline xz` before installing the python version you're using via pyenv. If you've already installed a python version in pyenv, consider deleting it and reinstalling. v3.8.2 is known to work. --->
+* We've seen some installs get the following warning when importing checkmates: "UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError". [A known workaround](https://stackoverflow.com/a/61531555/841003) is to run `brew reinstall readline xz` before installing the python version you're using via pyenv. If you've already installed a python version in pyenv, consider deleting it and reinstalling. v3.9.7 is known to work. --->
#### 2. Implement your Pull Request
@@ -78,12 +77,12 @@ Note that if you're building docs locally, the warning suppression code at `docs

* We use GitHub Actions to run our PR checkin tests. On creation of the PR and for every change you make to your PR, you'll need a maintainer to click "Approve and run" on your PR. This is a change [GitHub made in April 2021](https://github.blog/2021-04-22-github-actions-update-helping-maintainers-combat-bad-actors/).

-* We ask that all contributors sign our contributor license agreement (CLA) the first time they contribute to datachecks. The CLA assistant will place a message on your PR; follow the instructions there to sign the CLA.
+* We ask that all contributors sign our contributor license agreement (CLA) the first time they contribute to checkmates. The CLA assistant will place a message on your PR; follow the instructions there to sign the CLA.

Add a description of your PR to the subsection that most closely matches your contribution:
-* Enhancements: new features or additions to DataChecks.
+* Enhancements: new features or additions to CheckMates.
* Fixes: things like bugfixes or adding more descriptive error messages.
-* Changes: modifications to an existing part of DataChecks.
+* Changes: modifications to an existing part of CheckMates.
* Documentation Changes
* Testing Changes

@@ -96,92 +95,6 @@ If your work includes a [breaking change](https://en.wiktionary.org/wiki/breakin
* Description of your breaking change
```

### 4. Updating our conda package

We maintain a [conda package](#) to give users more options for installing datachecks.
Conda packages are created from recipes, which are YAML config files that list a package's dependencies and tests. Here is
datachecks's latest published [recipe](#).
GitHub repositories containing conda recipes are called `feedstocks`.

If you opened a PR to datachecks that modifies the packages in `dependencies` within `pyproject.toml`, or if the latest dependency bot
updates the version of one of our packages, you will see a CI job called `build_conda_pkg`. This section describes
what `build_conda_pkg` does and what to do if you see it fail in your PR.

#### What is build_conda_pkg?
`build_conda_pkg` clones the PR branch and builds the conda package from that branch. Since the conda build process runs our
entire suite of unit tests, `build_conda_pkg` checks that our conda package actually supports the proposed change of the PR.
We added this check to eliminate surprises. Since the conda package is released after we release to PyPI, it's possible that
we could release a dependency version that is not compatible with our conda recipe. It would be a pain to debug this at
release time, since the PyPI release includes many PRs, any of which could have introduced the change.

#### How does `build_conda_pkg` work?
`build_conda_pkg` will clone the `master` branch of the feedstock as well as your datachecks PR branch. It will
then replace the recipe in the `master` branch of the feedstock with the current
latest [recipe](#) in datachecks.
It will also modify the [source](#)
field of the local copy of the recipe and point it at the local datachecks clone of your PR branch.
This has the effect of building our conda package against your PR branch!

#### Why does `build_conda_pkg` use a recipe in datachecks as opposed to the recipe in the feedstock `master` branch?
One important fact to know about conda is that any change to the `master` branch of a feedstock will
result in a new version of the conda package being published to the world!
With this in mind, let's say your PR requires modifying our dependencies.
If we made a change to `master`, an updated version of datachecks's latest conda package would
be released. This means people who installed the latest version of datachecks prior to this PR would get different dependency versions
than those who installed datachecks after the PR got merged on GitHub. This is not desirable, especially because the PR would not get shipped
to PyPi until the next release happens. So there would also be a discrepancy between the PyPi and conda versions.
By using a recipe stored in the datachecks repo, we can keep track of the changes that need to be made for the next release without
having to publish a new conda package. Since the recipe is also "unique" to your PR, you are free to make whatever changes you
need to make without disturbing other PRs. This would not be the case if `build_conda_pkg` ran from the `master` branch of the
feedstock.

#### What to do if `build_conda_pkg` is red on your PR?

It depends on the kind of PR.

**Case 1: You're adding a completely new dependency**

In this case, `build_conda_pkg` is failing simply because a dependency is missing. Adding the dependency to the recipe should
make the check green. To add the dependency, modify the recipe located at `.github/meta.yaml`.

If you see that adding the dependency causes the build to fail, possibly because of conflicting versions, then iterate until
the build passes. The team will verify if your changes make sense during PR review.
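For illustration, a hypothetical excerpt of such a recipe edit (package names and version pins are invented; the real `.github/meta.yaml` will differ):

```yaml
# .github/meta.yaml (excerpt) -- add the new dependency under run requirements
requirements:
  run:
    - python >=3.8
    - pandas >=1.5.0          # existing dependency
    - some-new-package >=2.0  # dependency introduced by your PR
```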

**Case 2: The latest dependency bot created a PR**

If the latest dependency bot PR fails `build_conda_pkg`, it means our code doesn't support the latest version
of one of our dependencies. We either have to cap the maximum allowed version in our requirements file
or update our code to support that version. If we opt for the former, then just like in Case 1, make the corresponding change
to the recipe located at `.github/meta.yaml`.

#### What about the `check_versions` CI check?
This check verifies that the allowed versions listed in `pyproject.toml` match those listed in
the conda recipe so that the PyPi requirements and conda requirements don't get out of sync.

## Code Style Guide

* Keep things simple. Any complexity must be justified in order to pass code review.
* Be aware that while we love fancy python magic, there's usually a simpler solution which is easier to understand!
* Make PRs as small as possible! Consider breaking your large changes into separate PRs. This will make code review easier, quicker, less bug-prone and more effective.
* In the name of every branch you create, include the associated issue number if applicable.
* If new changes are added to the branch you're basing your changes off of, consider using `git rebase -i base_branch` rather than merging the base branch, to keep history clean.
* Always include a docstring for public methods and classes. Consider including docstrings for private methods too. We use the [Google docstring convention](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings), and use the [`sphinx.ext.napoleon`](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) extension to parse our docstrings.
* Although not explicitly enforced by the Google convention, keep the following stylistic conventions for docstrings in mind:
- First letter of each argument description should be capitalized.
- Docstring sentences should end in periods. This includes descriptions for each argument.
- Types should be written in lower-case. For example, use "bool" instead of "Bool".
- Always add the default value in the description of the argument, if applicable. For example, "Defaults to 1."
* Use [PascalCase (upper camel case)](https://en.wikipedia.org/wiki/Camel_case#Variations_and_synonyms) for class names, and [snake_case](https://en.wikipedia.org/wiki/Snake_case) for method and class member names.
* To distinguish private methods and class attributes from public ones, those which are private should be prefixed with an underscore.
* Any code which doesn't need to be public should be private. Use `@staticmethod` and `@classmethod` where applicable, to indicate no side effects.
* Only call public methods in unit tests.
* All code must have unit test coverage. Use mocking and monkey-patching when necessary.
* Keep unit tests as fast as possible. In particular, avoid calling `fit`. Mocking can help with this.
* When you're working with code which uses a random number generator, make sure your unit tests set a random seed.
* Use `np.testing.assert_almost_equal` when comparing floating-point numbers, to avoid numerical precision issues, particularly cross-platform.
* Use `os.path` tools to keep file paths cross-platform.
* Our rule of thumb is to favor traditional inheritance over a mixin pattern.
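A tiny sketch (with hypothetical names) of several of the conventions above in one place: a Google-style docstring with capitalized, period-terminated argument descriptions and a stated default, snake_case naming, a seeded random generator, and `np.testing.assert_almost_equal` for float comparisons:

```python
import numpy as np


def scale_values(values, factor=1.0):
    """Scale an array of numeric values by a constant factor.

    Args:
        values (np.ndarray): Numeric values to scale.
        factor (float): Multiplier applied to each value. Defaults to 1.0.

    Returns:
        np.ndarray: The scaled values.
    """
    return np.asarray(values) * factor


def test_scale_values():
    rng = np.random.default_rng(seed=42)  # fixed seed keeps the test deterministic
    values = rng.random(4)
    scaled = scale_values(values, factor=0.5)
    # compare floats with a tolerance rather than ==
    np.testing.assert_almost_equal(scaled * 2.0, values)


test_scale_values()
```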

## GitHub Issue Guide

* Make the title as short and descriptive as possible.
97 changes: 48 additions & 49 deletions docs/source/release_notes.rst
@@ -1,39 +1,40 @@
Release Notes
-------------
-.. **Future Releases**
-.. * Enhancements
-.. * Fixes
-.. * Changes
-.. * Documentation Changes
-.. * Testing Changes
+**Future Releases**
+* Enhancements
+* Fixes
+* Changes
+* Documentation Changes
+* Updated readme.md, contributing.md, and releases.md to reflect CheckMates package installation, quickstart, and useful links :pr:`13`
+* Testing Changes

**v0.1.0 July 28, 2023**
* Enhancements
* updated pyproject to v0.1.0 for first release and added project urls :pr:`8`
* added pdm.lock and .python-version to .gitignore :pr:`8`
* Added repo specific token for workflows :pr:`2`
* PDM Packaging ready for deployment :pr:`2`
* Added testing workflow for pytest :pr:`2`
* Transfer over base `Data Checks` and `IDColumnData Checks` from the `EvalML` repo :pr:`1`
* Added in github workflows that are relevant to `DataChecks`, from `EvalML` repository, and modified to fit `DataChecks` wherever possible :pr:`1`
* Implemented linters and have them successfully running :pr:`1`
* Fixes
* Cleanup files and add release workflow :pr:`6`
* Fixed pytest failures :pr:`1`
* Workflows are now up and running properly :pr:`1`
* Changes
* Irrelevant workflows removed (`minimum_dependency_checker`) :pr:`2`
* Removed all `EvalML` dependencies and unnecessary functions/comments from `utils`, `tests`, `exceptions`, and `datachecks` :pr:`1`
* Updated comments to reflect `DataChecks` repository :pr:`1`
* Restructured file directory to categorize data checks between `datacheck_meta` and `checks` :pr:`1`
* Restructured pdm packaging to only be relevant to `DataChecks`, now to be renamed to `CheckMate` :pr:`1`
* Documentation Changes
* Documentation refactored to now fit `CheckMates` :pr:`11`
* Documentation refactored to now fit `Checkers` :pr:`4`
* Documentation refactored to now fit `CheckMate` :pr:`2`
* Testing Changes
* Automated testing within github actions :pr:`2`
* Removed integration testing due to irrelevance with `datacheck_meta` and `checks` :pr:`1`

**v0.0.2 July 26, 2023**
* Enhancements
@@ -49,22 +50,20 @@ Release Notes
* Automated testing within github actions :pr:`2`

**v0.0.1 July 18, 2023**

* Enhancements
* Transfer over base `Data Checks` and `IDColumnData Checks` from the `EvalML` repo :pr:`1`
* Added in github workflows that are relevant to `DataChecks`, from `EvalML` repository, and modified to fit `DataChecks` wherever possible :pr:`1`
* Implemented linters and have them successfully running :pr:`1`
* Fixes
* Fixed pytest failures :pr:`1`
* Workflows are now up and running properly :pr:`1`
* Changes
* Removed all `EvalML` dependencies and unnecessary functions/comments from `utils`, `tests`, `exceptions`, and `datachecks` :pr:`1`
* Updated comments to reflect `DataChecks` repository :pr:`1`
* Restructured file directory to categorize data checks between `datacheck_meta` and `checks` :pr:`1`
* Restructured pdm packaging to only be relevant to `DataChecks`, now to be renamed to `CheckMate` :pr:`1`
* Testing Changes
* Removed integration testing due to irrelevance with `datacheck_meta` and `checks` :pr:`1`

**v0.0.0 July 3, 2023**

* *GitHub Repo Created*