Development Diary #002: Something about automation
Hi there!
This third dev diary came around a lot quicker than I first anticipated. That's because setting up the build system has driven me crazy and I needed somewhere to put down my thoughts about it.
Now with that out of the way I can begin my ranting. If anyone has any suggestions or hints to improve any of what I am about to describe, then feel free to share them. I am mostly a novice when it comes to this, and I feel like I've been scouring the internet for days on end without stumbling upon that golden "fits my use case entirely" solution that I hoped for.
Local Testing
While developing the exporter and the GE-Core module, I would like to be able to run the unit tests locally to confirm that I don't break things as I work on them. I can run pytest easily enough by just firing the command
pytest
on my local machine, or even just rely on my IDE to run it (I use PyCharm). While that is all good, it actually only tests that my development version is able to pass the tests. Unless I package the build and install it in a clean environment, I won't know whether a released version of the build will also run flawlessly outside of my dev environment. Most of the errors that can arise between my dev version and a release are path errors, import errors or dependency errors. So to fix this I should really package it and install it in a clean environment (such as a virtualenv) before running pytest, every single time. If I wish to support multiple Python versions such as 3.7, 3.8 and 3.9, then I should also test against these versions separately (at least when getting ready to push a new release), and soon this whole testing environment setup would get quite big and require me to write some code to automate it, since there is no way in hell that I will do all of that manually. Luckily there exist tools for this, so I don't have to write boilerplate code of my own and drag it along to future projects as well.
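In other words, each "proper" local test run boils down to something like the following. The paths and file names here are just placeholders, and the commands assume a Linux/macOS shell with the build package installed:

```bash
# build an sdist and a wheel from the current source
python -m build

# create a throwaway virtual environment and install the freshly built wheel into it
python -m venv .venv-test
.venv-test/bin/pip install dist/*.whl pytest

# run the test suite against the installed package instead of the source tree
.venv-test/bin/pytest tests/
```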
Introducing Tox
Now what exactly is Tox? Its official description boils down to a tool that aims to automate and standardize testing in Python, which sounds exactly like what I need. Tox runs its configuration from either a tox.ini file or integrated directly into files that I am already using for the project, such as setup.cfg or pyproject.toml. I chose to set up tox through the latter, saving myself an extra file to keep track of.
Tox is very powerful, and its configuration format allows for writing a lot of tests (or even just automated commands) with a small amount of code thanks to its factoring system, which I won't go into in too much detail. Basically, you can define a set of Python environments that it should run tests for, such as 3.7, 3.8 and 3.9, and then several factors, such as a "test" or "docs" factor, and ask tox to combine these into a test matrix. It would then run the tests and build the documentation on all three Python versions, running six commands in total.
I am no expert on it yet, but I've implemented a few commands in tox that I use locally alongside my IDE to run tests and generate documentation. I've also separated the project requirements into different requirements files that I then feed into tox for the different factors I run, since the packages needed for running the unit tests are different from the packages needed for building the documentation. I also have a command for doing a test deploy to the test.pypi.org repository from my local machine, which I can then fetch into a virtual environment from the test repository to confirm things are working exactly as they should.
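As a rough sketch, a configuration along these lines embedded in pyproject.toml captures the idea. The requirements file names, the docs command and the exact factor names are illustrative assumptions, not the repository's actual setup:

```toml
[tool.tox]
legacy_tox_ini = """
[tox]
envlist = py39-{test,docs}
isolated_build = true

[testenv]
# deps are picked per factor: the test factor pulls in the test requirements,
# the docs factor pulls in the documentation requirements
deps =
    test: -r requirements-test.txt
    docs: -r requirements-docs.txt
commands =
    test: pytest {posargs}
    docs: sphinx-build -b html docs docs/_build

[testenv:deploy]
# only the credentials Twine needs for test.pypi.org are passed through;
# every other environment variable from the host is blocked by default
passenv = TWINE_USERNAME TWINE_PASSWORD
deps =
    build
    twine
commands =
    python -m build
    twine upload --repository testpypi dist/*

[gh-actions]
# used by the tox-gh-actions plugin to map the Python version on a
# GitHub runner onto the matching tox environments
python =
    3.9: py39
"""
```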
I won't go into too much detail about the configuration itself, but basically the "factors" are what is defined before the colons, and several factors can share the same commands, which lets you reuse whatever setup they have in common. You can also see that I have a little tidbit defining something to do with a GitHub Action, but I will get into that later. By default, Tox also disallows any environment variables that you don't explicitly allow, so you can be sure your setup doesn't rely on some undocumented thing unique to your system. For example, I am passing in some environment variables with login information for test.pypi.org, used by Twine in the "deploy" factor, and in the deps section I am passing in the different requirements files. Currently I am only testing on Python 3.9, but thanks to Tox this can easily change.
Continuous Integration
Ideally, at some point I won't be the only person writing code and fixing bugs on this project, and it isn't feasible for any single developer to be responsible for building and uploading to the deployment channels (most likely just PyPI, but who knows). So I would very much like to make use of the CI that GitHub makes available to repositories hosted on their platform (GitHub Actions). That way it can function as the centralized point where code is tested, packaged and uploaded.
Testing
Ideally I want to keep things as simple as possible. So what better way than to utilize Tox on the CI server side as well, running the same tests that developers run locally, with the only change being that they run across several different platforms, Python versions and maybe even Blender versions (for the exporter later on), to guarantee maximum compatibility. This leads me back to the little bit I had in the tox configuration about "gh-actions": it turns out there is a plugin for tox, tox-gh-actions, which provides integration with GitHub Actions.
In essence, GitHub gives you the option of running your CI on one of three runner types: a Windows, Ubuntu or macOS machine. The plugin lets this become part of your tox configuration, so you could run your testing suite on all platforms, on different Python versions, in parallel. To begin with I will probably just use the Linux runner and only run a few Python versions. GitHub provides a certain number of runner minutes per month, which I believe is 3000 for public repositories, and running a full-blown test suite across many different OSes and Python versions would probably chew through those minutes in no time at this early development stage 😉
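A workflow in the style the plugin's documentation suggests could look roughly like this; the file name, trigger branches and version list are assumptions about how it might be set up for this project rather than the actual configuration:

```yaml
# .github/workflows/tests.yml (illustrative name)
name: Tests

on:
  push:
    branches: [master, develop]
  pull_request:
    branches: [master]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # more versions, e.g. "3.7" and "3.8", can be added here later
        python-version: ["3.9"]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install tox and the GitHub Actions plugin
        run: python -m pip install tox tox-gh-actions
      - name: Run the test suite
        # tox-gh-actions selects the tox environments that match the Python
        # version installed by setup-python on this runner
        run: tox
```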
Building
For now the GE-Core library won't contain any specially compiled code, as it is all Python, so the installation is very straightforward and I will be able to upload both an sdist and a generic wheel that works on all platforms. But I've thought about implementing some specific parts of the library in C/C++ (such as maybe the interaction with binary files, since the underlying code would need to be kept secret for legal reasons), and that would need to be compiled per platform and require specific build tools. In that case the same runners that are used for running the test suite could be used for compiling platform-specific versions.
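For the pure-Python case, producing those artifacts boils down to a single command, assuming the standard build package is used:

```bash
# produces both an sdist (.tar.gz) and a generic "py3-none-any" wheel in dist/
python -m build --sdist --wheel
```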
Pipeline
I could make a fancy diagram showing the pipeline I wish for, and maybe I will at some point. To begin with I will just write a few words about it, but first a bit about how development of the exporter itself has worked so far. I went with two persistent branches, master and develop.
The master branch has the release workflow attached, which automatically bumps version numbers on the basis of the commit history (which follows a commit style convention), generates build artefacts (in the exporter's case, zips everything up) and finally publishes it to the releases on GitHub with a changelog. It is only supposed to receive merges from the develop branch, and from any hotfix branches created in case a breaking bug is found somewhere.
The develop branch is where all feature branches get merged and where the next release is prepared. When it is deemed ready, it gets merged into master for the release. It also generates releases of its own, with a dev suffix to signify that they are experimental. This means that develop also functions as a sort of staging area, and any testing had to be run locally on the develop branch before I merged it into master.
The pipeline I will try to go with from now on relies on the unit testing being in place. I will still have the master branch contain the current release code and develop act as a staging area for a bigger release, but master will now run the full test suite before accepting a merge. With the testing in place you could technically just merge everything straight into master, but as I said, that might burn a lot of Action minutes on setup and teardown every time. The tools for running the tests are also available to anyone with the repository, so any surprises should really only come from platform- or Python-version-specific things.
Unit testing should still run on the develop branch, but in a narrower scope, just enough to ensure that the individual features being developed simultaneously aren't interfering with each other. So maybe only spinning up one runner with one Python version.
Semantic Release
I am personally very fond of using this for my repositories. While figuring out the Python packaging stuff, I've tried to find something that would let me run automated linting, syntax checking, building and uploading while still automatically managing version numbering through properly formatted commit messages. That is pretty much what the Semantic Release guys promise: fully automated version management and package publishing, which is ideally what I want. It is also tedious to manage version numbering yourself, and it is easier to let the code speak for itself, as long as people write proper commits (which can be enforced).
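With the commit conventions Semantic Release understands out of the box, the mapping from commit message to version bump looks roughly like this (the messages themselves are made-up examples):

```text
fix: correct a path error in the packaged module        ->  patch release (x.y.Z)
feat: add a command for generating documentation        ->  minor release (x.Y.0)
feat: rework the configuration file format
(with a "BREAKING CHANGE:" footer in the commit body)   ->  major release (X.0.0)
```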
I've ended up using this again for this repository as well, since I couldn't find any Python-specific workflow that was flexible enough to provide the versioning features I want.
Although Semantic Release is originally meant for JavaScript projects, it is flexible enough and has enough plugins that it can easily be adapted to other cases. There is even a plugin for doing exactly what I want and uploading to PyPI. I could also roll my own tox configuration to prepare a build and just run it through Semantic Release's exec plugin, which allows you to run arbitrary shell commands at any of its steps.
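As a sketch of the exec-plugin route, a configuration could look something like the following; the file name, tox environment name and upload command are assumptions for illustration, not the actual setup:

```yaml
# .releaserc.yml (illustrative)
branches:
  - master
plugins:
  # derive the next version number from the conventional commit messages
  - "@semantic-release/commit-analyzer"
  # turn the same commit history into release notes / a changelog
  - "@semantic-release/release-notes-generator"
  # hand the actual Python build and upload off to tox and twine
  - ["@semantic-release/exec", {prepareCmd: "tox -e build", publishCmd: "twine upload dist/*"}]
  # publish the release and the notes on GitHub
  - "@semantic-release/github"
```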
End Result
Hopefully the end result will be an easily maintainable, automated build pipeline on the repository, which can "guarantee" that the end product is compatible with the Python versions I've stated, instead of me just saying that it didn't crash anything when I did some half-assed testing. Extrapolating to the exporter itself, it will hopefully also be possible to give the same guarantee with respect to Blender versions.
Next Time
Last time I might have mentioned opening up the GE-Core repo, but the build stuff took longer than I expected. Hopefully next time will have more actual implementation details and less about all of the boring stuff that goes into setting up the development tools 😉 And sorry for the wall of text, I haven't spent a lot of time on graphics for this one 😄