
Baremetal e2e scripts #248

Merged: 26 commits merged into cloud-bulldozer:master on Oct 28, 2021

Conversation

@mukrishn (Collaborator)

Description

  • Modifications to run multiple benchmark resources against a baremetal cluster; conditional checks for the operators are included.
  • A new workload for baremetal upgrades that brings up multiple application pods and services and keeps active HTTP traffic flowing between them while the upgrade runs (see the sketch below).
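
A minimal sketch of the traffic-generation idea, assuming the mb load generator referenced in the commit history below; the namespace, port, request fields, and flags are illustrative assumptions rather than the PR's exact code:

# Hypothetical sketch: build an mb request file covering every sample-app
# service, then drive HTTP traffic against them for the upgrade window.
requests=/tmp/requests.json
echo "[" > "${requests}"
for svc in $(oc get svc -n sample-app -o name | cut -d/ -f2); do
  echo "  {\"scheme\": \"http\", \"host\": \"${svc}.sample-app.svc\", \"port\": 8080, \"method\": \"GET\", \"path\": \"/\", \"clients\": 10, \"keep-alive-requests\": 100}," >> "${requests}"
done
sed -i '$ s/,$//' "${requests}"   # drop the trailing comma to keep the JSON valid
echo "]" >> "${requests}"
mb -i "${requests}" -d 3600       # drive traffic for the expected upgrade window (seconds)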

Fixes

#208

Developed by @jdowni000; I just fixed a few things and resolved conflicts since he is on PTO. PR #225 can be marked as obsolete.

jdowni000 and others added 16 commits on August 10, 2021 at 16:21
* Updated workloads to run on baremetal

* first commit: skipping cleanup on baremetal

* fixed spacing

* Checking if cloud is on bareMetal. If it is, we're no longer deleting the benchmark-operator namespace in between runs

Co-authored-by: jdowni000 <[email protected]>
* first commit: adding calculations for allocatable CPU per MCP

* Changing approach to creating n new MCPs depending on node count

* removing old machineConfig_pool func

* removing bash script testing code left in common.sh

* fixed a couple naming convention issues

* small fixes from testing in actual cluster

* fixed applying mcp.yaml to use envsubst

* more issues resolved from testing

* more fixes in while loops while testing

* completed deployment fix

* changing function name, adding check for mcps and ns if exists

* adding json creation and use of mb

* adding logic to allow MCP_SIZE and MCP_NODE_COUNT to be set as a variable (sketched after this commit list)

* changing env var MCP_SIZE to TOTAL_MCPS

* fixed spacing in all of new function

* adding mb_pod.yml and sending mb operations to pod instead of cli

* removing response.csv file as it may be too large depending on how many mcps are generated

* added var to set to choose whether MCPs are created or not

* just a fix... it's Friday... it's fixed

* fixed if statement with elif

* creating logic to check env var inputs

* adding if condition for sleep depending on size of node_count

* adding resources to mb-pod

* adding logic to check for sample app pods to be ready before mb-pod deployment

* Update common.sh

mb-pod and sample app
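
For context on the MCP commits above, a rough sketch of the flow they describe: TOTAL_MCPS and MCP_NODE_COUNT follow the commit messages, while the label selector and template contents are illustrative assumptions.

# Rough sketch, not the PR's exact code: derive how many custom MCPs to
# create from the worker-node count, then render mcp.yaml with envsubst.
node_count=$(oc get nodes -l node-role.kubernetes.io/worker= --no-headers | wc -l)
export MCP_NODE_COUNT=${MCP_NODE_COUNT:-2}
export TOTAL_MCPS=${TOTAL_MCPS:-$(( node_count / MCP_NODE_COUNT ))}
for i in $(seq 1 "${TOTAL_MCPS}"); do
  export MCP_NAME="mcp-${i}"
  # mcp.yaml is assumed to carry ${MCP_NAME} placeholders, rendered with
  # envsubst as in the "fixed applying mcp.yaml to use envsubst" commit.
  envsubst < mcp.yaml | oc apply -f -
done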
@mohit-sheth mohit-sheth added the "ok to test" label (Kick off our CI framework) on Sep 15, 2021
@mohit-sheth (Collaborator)

rerun all

@mohit-sheth (Collaborator)

/rerun all

@comet-perf-ci (Collaborator)

Results for e2e-benchmarking CI Tests

Workload Test Result Runtime
router-perf-v2 ingress-performance.sh FAIL 00:04:17
etcd-perf run_etcd_tests_fromgit.sh PASS 00:02:42
scale-perf run_scale_fromgit.sh FAIL 00:17:23
network-perf smoke_test.sh PASS 00:07:23
kube-burner run_clusterdensity_test_fromgit.sh PASS 00:07:32
kube-burner run_maxnamespaces_test_fromgit.sh PASS 00:03:54
kube-burner run_maxservices_test_fromgit.sh PASS 00:03:00
kube-burner run_nodedensity_test_fromgit.sh PASS 00:03:28
kube-burner run_nodedensity-heavy_test_fromgit.sh PASS 00:04:12
storage-perf run_storage_tests_fromgit.sh PASS 00:02:35
upgrade-perf run_upgrade_fromgit.sh PASS 00:00:56


# If using baremetal we use a different query to find worker nodes
if [[ "${isBareMetal}" == "true" ]]; then
log "Colocating uperf pods for baremetal"

Collaborator

We are not really colocating are we?

@mukrishn (Collaborator, Author)

Right, we are not; we randomly pick 2 worker nodes. I will delete this log.
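
Illustrative only, with an assumed label selector and variable names; one way to pick two random worker nodes as described:

# Pick two random worker nodes for the uperf server/client pair.
workers=$(oc get nodes -l node-role.kubernetes.io/worker= -o name | shuf -n 2)
server_node=$(echo "${workers}" | sed -n 1p | cut -d/ -f2)
client_node=$(echo "${workers}" | sed -n 2p | cut -d/ -f2)
export server_node client_node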

kubectl apply -f benchmark-operator/resources/kube-burner-role.yml
log "Waiting for benchmark-operator to be running"
oc wait --for=condition=available "deployment/benchmark-controller-manager" -n benchmark-operator --timeout=300s
if [[ "${isBareMetal}" == "false" ]]; then

TBH I think we should just leave the operator around each time and change this to apply to all cluster types. I don't see a reason why we need to delete/recreate the operator.

@mukrishn (Collaborator, Author) on Sep 16, 2021

+1

@rsevilla87 (Member) on Sep 16, 2021

Once the operator is deployed, we shouldn't delete it.
I'd modify the current code to always try to deploy the operator. With the current make deploy implementation, if the operator is already running it won't be redeployed.

@mukrishn (Collaborator, Author)

Removed delete command
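
The resulting flow, sketched under the assumption @rsevilla87 states above (make deploy skips redeployment when the operator is already running), so the script can simply always apply it:

# Sketch: never delete the operator between runs; always (re)apply it.
(cd benchmark-operator && make deploy)
kubectl apply -f benchmark-operator/resources/kube-burner-role.yml
log "Waiting for benchmark-operator to be running"
oc wait --for=condition=available "deployment/benchmark-controller-manager" -n benchmark-operator --timeout=300s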

workloads/network-perf/common.sh (outdated; resolved)
@rsevilla87 rsevilla87 mentioned this pull request Sep 16, 2021
@mukrishn (Collaborator, Author)

@mohit-sheth @rsevilla87 @whitleykeith any more suggestions/comments?

@whitleykeith left a comment

lgtm

@mohit-sheth (Collaborator)

/rerun all

@comet-perf-ci (Collaborator)

Results for e2e-benchmarking CI Tests

Workload Test Result Runtime
router-perf-v2 ingress-performance.sh FAIL 00:06:05
etcd-perf run_etcd_tests_fromgit.sh PASS 00:03:26
scale-perf run_scale_fromgit.sh PASS 00:16:01
network-perf smoke_test.sh PASS 00:05:12
kube-burner run_clusterdensity_test_fromgit.sh PASS 00:06:05
kube-burner run_maxnamespaces_test_fromgit.sh PASS 00:04:00
kube-burner run_maxservices_test_fromgit.sh PASS 00:02:58
kube-burner run_poddensity-heavy_test_fromgit.sh PASS 00:04:01
kube-burner run_nodedensity_test_fromgit.sh PASS 00:04:35
kube-burner run_nodedensity-heavy_test_fromgit.sh PASS 00:04:52
upgrade-perf run_upgrade_fromgit.sh PASS 00:00:56

@mohit-sheth (Collaborator) left a comment

LGTM

@mukrishn (Collaborator, Author)

@rsevilla87 fixed conflicts and tested; PTAL.

@rsevilla87 (Member)

/rerun all

@comet-perf-ci (Collaborator)

Results for e2e-benchmarking CI Tests

Workload Test Result Runtime
router-perf-v2 ingress-performance.sh FAIL 00:01:07
etcd-perf run_etcd_tests_fromgit.sh PASS 00:02:44
scale-perf run_scale_fromgit.sh PASS 00:12:54
network-perf smoke_test.sh PASS 00:05:53
kube-burner run_clusterdensity_test_fromgit.sh PASS 00:03:12
kube-burner run_maxnamespaces_test_fromgit.sh PASS 00:03:02
kube-burner run_maxservices_test_fromgit.sh PASS 00:02:14
kube-burner run_poddensity-heavy_test_fromgit.sh PASS 00:03:34
kube-burner run_nodedensity_test_fromgit.sh PASS 00:03:04
kube-burner run_nodedensity-heavy_test_fromgit.sh PASS 00:03:47
upgrade-perf run_upgrade_fromgit.sh PASS 00:00:55

@rsevilla87 (Member)

/rerun all router-perf-v2:ingress-performance.sh

@comet-perf-ci (Collaborator)

Results for e2e-benchmarking CI Tests

Workload Test Result Runtime
router-perf-v2 ingress-performance.sh PASS 00:09:42

@rsevilla87 rsevilla87 merged commit dd11ead into cloud-bulldozer:master Oct 28, 2021
@rsevilla87 rsevilla87 self-requested a review October 28, 2021 11:29
vishnuchalla pushed a commit that referenced this pull request Sep 6, 2023
Changes for baremetal

Co-authored-by: jdowni000 <[email protected]>
Co-authored-by: Marko Karg <[email protected]>
Co-authored-by: jdowni000 <[email protected]>
Co-authored-by: Raul Sevilla <[email protected]>
Labels: ok to test (Kick off our CI framework)
Projects: None yet
Development: Successfully merging this pull request may close these issues.

8 participants