Automated server tests #669
base: master
Conversation
Force-pushed from f4a8027 to 4ed4add
This is wonderful, thank you! Could we please add a test that checks for a multiline query in the request body? There was a bug we had to fix to make it work with the Rumble magic in Jupyter notebooks. After this, we could merge. This is a perfect bootstrap of the server tests. I think installing Spark in this way is good and I am happy with it. I suspect the Maven test command builds a jar that already bundles Spark, which would explain why a separate installation is not needed there.
.github/workflows/maven.yml
    - name: Spotless check
      run: mvn spotless:check
    - name: Install Spark and run server tests
      run: mkdir -p /opt; wget -q -O /opt/spark.tgz https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz ; tar -xzf /opt/spark.tgz -C /opt/ ; rm /opt/spark.tgz ; export SPARK_HOME=/opt/spark-2.4.5-bin-hadoop2.7 ; export PATH=$PATH:/opt/spark-2.4.5-bin-hadoop2.7/bin; bash server-test-script.sh
Do you think we could split this into several intermediate steps for readability?
For readability, I initially aimed to use multi-line commands, which are supposedly supported in workflow .yml files. However, when tested, only the first command ran and the rest were skipped, so I opted for chaining them with semicolons.
The reason I put the entire Spark installation and the tests into a single step is to make it clear that this is the only step that uses the Spark installation.
I will nonetheless go over the alternatives once more to make this more readable; see the sketch below. I would be glad to hear any preferences you may have (however weak or strong) on how to structure this.
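One possible restructuring, as a sketch only (not tested in this PR): GitHub Actions runs a multi-line run: value in a single shell when it is written as a YAML block scalar with |, which may be what the earlier multi-line attempt was missing. Splitting the installation and the tests would then look roughly like this, with SPARK_HOME carried into the test step via env:

    - name: Install Spark
      run: |
        mkdir -p /opt
        wget -q -O /opt/spark.tgz https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz
        tar -xzf /opt/spark.tgz -C /opt/
        rm /opt/spark.tgz
    - name: Run server tests
      env:
        SPARK_HOME: /opt/spark-2.4.5-bin-hadoop2.7
      run: |
        export PATH="$PATH:$SPARK_HOME/bin"
        bash server-test-script.sh

This keeps the Spark installation visible as its own step while still making clear that only the server tests depend on it.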
We'll have to keep in mind to bump the version here as well (the minor update to 1.6.1 broke the tests, but it should be an easy fix).
If you could share the test that failed earlier, I could add it as our test. Otherwise, I am assuming any test containing a newline character "\n" in the request body should do. Is this correct? A query without the newline works; the parser fails once the newline character is added.
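As a sketch of what such a test could look like, assuming the server listens on localhost:8001 and exposes a /jsoniq endpoint (the setup used with the Rumble notebook magic; the exact port and path may differ here):

    # A query spanning two lines, separated by a real "\n".
    query=$'let $x := 1\nreturn $x + 1'

    # --data-binary sends the body byte-for-byte, preserving the newline.
    # (Plain -d also keeps newlines in a string argument, but strips them
    # when reading from a file with @, so --data-binary is the safer choice.)
    response=$(curl -s --data-binary "$query" http://localhost:8001/jsoniq)

    # The query should evaluate to 2; fail loudly otherwise.
    echo "$response" | grep -q '2' || { echo "multiline query test failed"; exit 1; }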
The version changes should not be a problem anymore with my recent commits.
@ghislainfourny the newline added with "\n" breaks the query for some reason. Could you provide any insights, or share the query that you intended to run?
@CanBerker this PR was somehow forgotten; it would be nice to merge it if all tests pass.
Firstly, I have moved the Spotless checks before all other tests, so formatting issues can be detected first without waiting for the full test run.
Secondly, I have implemented a bash script with automated server tests. This works on my local machine.
However, since the workflow did not install spark-submit on the test machine, the server initialization failed. I wanted to get @ghislainfourny's feedback on how to proceed. Edit: I have updated the script to install Spark during testing; a sketch of the script's shape follows below.
Fixes #664
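For illustration, a minimal sketch of the shape such a script could take. The jar name rumble.jar, the --server yes --port 8001 flags, and the /jsoniq endpoint are assumptions based on the Rumble server documentation; the actual server-test-script.sh in this PR may differ:

    #!/usr/bin/env bash
    set -e

    # Start the Rumble HTTP server in the background via spark-submit.
    spark-submit rumble.jar --server yes --port 8001 &
    SERVER_PID=$!

    # Give the server time to come up before sending requests.
    sleep 30

    # Send a trivial query and check the response.
    result=$(curl -s --data-binary '1 + 1' http://localhost:8001/jsoniq)
    echo "$result" | grep -q '2' || { echo "server test failed"; kill $SERVER_PID; exit 1; }

    # Shut the server down cleanly.
    kill $SERVER_PID
    echo "All server tests passed."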