Merge pull request #126 from practo/1.19.0-support
Kubernetes 1.19.0 support
alok87 authored Dec 21, 2020
2 parents 288febf + 4645c13 commit 2e2e547
Showing 1,940 changed files with 280,813 additions and 245,221 deletions.
3 changes: 3 additions & 0 deletions Makefile
@@ -194,6 +194,9 @@ manifest-list: push
version:
@echo $(VERSION)

generate:
./hack/update-codegen.sh

test: $(BUILD_DIRS)
@docker run \
-i \
12 changes: 9 additions & 3 deletions README.md
@@ -223,12 +223,12 @@ git pull origin master
- Build and push the image to `hub.docker.com/practodev`. Note: practodev push access is required.
```
git fetch --tags
git tag v1.2.0
git tag v1.3.0
make push
```
Note: For every tag, the major and major.minor version tags are also published. For example: `v1` and `v1.2`
Note: For every tag, the major and major.minor version tags are also published. For example: `v1` and `v1.3`
- Create a Release in GitHub. Refer to [this](https://github.com/practo/k8s-worker-pod-autoscaler/releases/tag/v1.2.0) release as an example. The release should contain the changelog of all issues and pull requests merged since the last release.
- Create a Release in GitHub. Refer to [this](https://github.com/practo/k8s-worker-pod-autoscaler/releases/tag/v1.3.0) release as an example. The release should contain the changelog of all issues and pull requests merged since the last release.
- Publish the release in GitHub 🎉
@@ -248,6 +248,12 @@ $ make build
making bin/darwin_amd64/workerpodautoscaler

$ bin/darwin_amd64/workerpodautoscaler run --kube-config /home/user/.kube/config

```
- Regenerate the CRD code under `pkg/apis` and `pkg/generated` using the command below (a sketch of the API types the generator consumes follows this list):
```
make generate
```
- To add a new dependency, update `go.mod` (for example with `go get`) and run `go mod vendor` to refresh the vendored sources.
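The new `generate` target (see the Makefile diff above) runs `./hack/update-codegen.sh`, which drives k8s.io/code-generator over the API types in `pkg/apis`. As a minimal sketch, assuming the conventional code-generator layout and taking the spec fields from the CRD schema in this commit (the real package path, file layout and status fields in the repository may differ), the kind of marked-up type the generator consumes looks like this:

```
// Illustrative sketch only: package path and exact field set are assumptions
// based on artifacts/crd.yaml in this commit, not the repository source.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// WorkerPodAutoScaler scales a Deployment or ReplicaSet based on queue backlog.
type WorkerPodAutoScaler struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkerPodAutoScalerSpec   `json:"spec"`
	Status WorkerPodAutoScalerStatus `json:"status,omitempty"`
}

// WorkerPodAutoScalerSpec mirrors the properties validated by the CRD schema.
type WorkerPodAutoScalerSpec struct {
	DeploymentName          string   `json:"deploymentName,omitempty"`
	ReplicaSetName          string   `json:"replicaSetName,omitempty"`
	MinReplicas             *int32   `json:"minReplicas"`
	MaxReplicas             *int32   `json:"maxReplicas"`
	QueueURI                string   `json:"queueURI"`
	TargetMessagesPerWorker *int32   `json:"targetMessagesPerWorker"`
	MaxDisruption           *string  `json:"maxDisruption,omitempty"`
	SecondsToProcessOneJob  *float64 `json:"secondsToProcessOneJob,omitempty"`
}

// WorkerPodAutoScalerStatus is a placeholder here; the real status fields are
// defined in the repository and exposed via the status subresource.
type WorkerPodAutoScalerStatus struct {
	CurrentReplicas int32 `json:"currentReplicas,omitempty"` // hypothetical field
}
```

Running `make generate` after editing these types refreshes the deepcopy functions, clientset, informers and listers that the controller imports.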
13 changes: 13 additions & 0 deletions UPGRADE.md
@@ -1,5 +1,18 @@
# Upgrade Worker Pod Autoscaler

## Upgrade from v1.2 to v1.3

### Breaking changes
Updates all the Kubernetes dependencies to `v1.19`. It should still work on clusters running older versions, but note that Kubernetes only provides patches and fixes for the [last 3 minor releases](https://kubernetes.io/docs/setup/release/version-skew-policy/). It also updates the CRD definitions.

### Recommended Actions
```
kubectl apply -f ./artifacts/crd.yaml
```
### Changes
- [v1.3.0](https://github.com/practo/k8s-worker-pod-autoscaler/releases/tag/v1.3.0)


## Upgrade from v1.1 to v1.2

### Breaking changes
5 changes: 3 additions & 2 deletions artifacts/clusterrole.yaml
@@ -1,7 +1,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: workerpodautoscaler
name: workerpodautoscaler
rules:
- apiGroups:
- apiextensions.k8s.io
@@ -12,7 +12,8 @@ rules:
- apiGroups:
- k8s.practo.dev
resources:
- workerpodautoscalers
- workerpodautoscalers
- workerpodautoscalers/status
verbs:
- list
- update
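The added `workerpodautoscalers/status` resource lets the controller update the WPA status subresource. A minimal sketch of the call this rule permits, assuming the standard code-generator clientset layout under `pkg/generated` and a `K8sV1()` group accessor inferred from the `customInformerFactory.K8s().V1()` call shown later in `cmd/workerpodautoscaler/run.go` (the real package paths and accessor name may differ):

```
// Illustrative sketch only. The clientset and API package paths below follow
// the usual code-generator layout mentioned in the README (pkg/apis and
// pkg/generated); they are assumptions, not verified imports.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	wpav1 "github.com/practo/k8s-worker-pod-autoscaler/pkg/apis/workerpodautoscaler/v1"
	clientset "github.com/practo/k8s-worker-pod-autoscaler/pkg/generated/clientset/versioned"
)

// updateWPAStatus shows the call shape that the new RBAC rule on
// workerpodautoscalers/status permits.
func updateWPAStatus(ctx context.Context, c clientset.Interface, wpa *wpav1.WorkerPodAutoScaler) error {
	_, err := c.K8sV1().
		WorkerPodAutoScalers(wpa.Namespace).
		UpdateStatus(ctx, wpa, metav1.UpdateOptions{})
	return err
}

func main() {} // placeholder so the sketch stands alone
```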
123 changes: 65 additions & 58 deletions artifacts/crd.yaml
@@ -1,10 +1,8 @@
apiVersion: apiextensions.k8s.io/v1
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: workerpodautoscalers.k8s.practo.dev
spec:
conversion:
strategy: None
group: k8s.practo.dev
names:
kind: WorkerPodAutoScaler
@@ -15,62 +13,71 @@ spec:
- wpas
singular: workerpodautoscaler
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
required:
- spec
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
required:
- minReplicas
- maxReplicas
- queueURI
- targetMessagesPerWorker
oneOf:
- required:
- deploymentName
- required:
- replicaSetName
properties:
deploymentName:
type: string
description: 'Name of the Kubernetes Deployment in the same namespace as WPA object'
replicaSetName:
type: string
description: 'Name of the Kubernetes ReplicaSet in the same namespace as WPA object'
maxDisruption:
type: string
nullable: true
description: 'Amount of disruption that can be tolerated in a single scale down activity. Number of pods or percentage of pods that can scale down in a single scale down activity'
maxReplicas:
type: integer
format: int32
description: 'Maximum number of workers you want to run'
minReplicas:
type: integer
format: int32
description: 'Minimum number of workers you want to run'
queueURI:
type: string
description: 'Full URL of the queue'
targetMessagesPerWorker:
type: integer
format: int32
description: 'Target ratio between the number of queued jobs (both available and reserved) and the number of workers required to process them. For long running workers with a visible backlog, this value may be set to 1 so that each job spawns a new worker (up to maxReplicas)'
secondsToProcessOneJob:
type: number
format: float
nullable: true
description: 'For fast running workers doing high RPM, the backlog stays very close to zero, so scale up cannot be driven by the backlog alone; this setting keeps the minimum number of workers running based on the queue RPM (highly recommended, default=0.0 i.e. disabled).'
version: v1
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required:
- spec
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
required:
- minReplicas
- maxReplicas
- queueURI
- targetMessagesPerWorker
oneOf:
- required:
- deploymentName
- required:
- replicaSetName
properties:
deploymentName:
type: string
description: 'Name of the Kubernetes Deployment in the same namespace as WPA object'
replicaSetName:
type: string
description: 'Name of the Kubernetes ReplicaSet in the same namespace as WPA object'
maxDisruption:
type: string
nullable: true
description: 'Amount of disruption that can be tolerated in a single scale down activity. Number of pods or percentage of pods that can scale down in a single scale down activity'
maxReplicas:
type: integer
format: int32
description: 'Maximum number of workers you want to run'
minReplicas:
type: integer
format: int32
description: 'Minimum number of workers you want to run'
queueURI:
type: string
description: 'Full URL of the queue'
targetMessagesPerWorker:
type: integer
format: int32
description: 'Target ratio between the number of queued jobs (both available and reserved) and the number of workers required to process them. For long running workers with a visible backlog, this value may be set to 1 so that each job spawns a new worker (up to maxReplicas)'
secondsToProcessOneJob:
type: number
format: float
nullable: true
description: 'For fast running workers doing high RPM, the backlog stays very close to zero, so scale up cannot be driven by the backlog alone; this setting keeps the minimum number of workers running based on the queue RPM (highly recommended, default=0.0 i.e. disabled).'
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
6 changes: 5 additions & 1 deletion cmd/workerpodautoscaler/run.go
@@ -1,6 +1,7 @@
package main

import (
"context"
"fmt"
"net/http"
"os"
@@ -118,6 +119,9 @@ func (v *runCmd) run(cmd *cobra.Command, args []string) {
hook := promlog.MustNewPrometheusHook("wpa_", klog.WarningSeverityLevel)
klog.AddHook(hook)

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// set up signals so we handle the first shutdown signal gracefully
stopCh := signals.SetupSignalHandler()

@@ -183,7 +187,7 @@ func (v *runCmd) run(cmd *cobra.Command, args []string) {
customClient, resyncPeriod, informers.WithNamespace(namespace))

controller := workerpodautoscalercontroller.NewController(
kubeClient, customClient,
ctx, kubeClient, customClient,
kubeInformerFactory.Apps().V1().Deployments(),
kubeInformerFactory.Apps().V1().ReplicaSets(),
customInformerFactory.K8s().V1().WorkerPodAutoScalers(),
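The diff above creates a cancellable context and passes it as the first argument to `NewController`, next to the existing stop channel from `signals.SetupSignalHandler()`. A minimal sketch of how the two are typically tied together so that context-aware client-go calls unwind on shutdown; the package path and the goroutine wiring are assumptions, not a copy of run.go:

```
// Minimal sketch of the shutdown wiring, assuming the signals package lives
// at pkg/signals and SetupSignalHandler() returns a stop channel as used in
// run.go above.
package main

import (
	"context"

	"github.com/practo/k8s-worker-pod-autoscaler/pkg/signals"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	stopCh := signals.SetupSignalHandler()
	go func() {
		<-stopCh // first SIGINT/SIGTERM
		cancel() // aborts in-flight, context-aware client-go calls
	}()

	// ... build kubeClient, customClient and the informer factories, then pass
	// ctx as the first argument to workerpodautoscalercontroller.NewController
	// and run the controller until stopCh closes.
	<-ctx.Done()
}
```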
32 changes: 9 additions & 23 deletions go.mod
@@ -3,33 +3,19 @@ module github.com/practo/k8s-worker-pod-autoscaler
go 1.15

require (
github.com/aws/aws-sdk-go v1.34.3
github.com/beanstalkd/go-beanstalk v0.0.0-20200526060843-1cc502ecaf3c
github.com/aws/aws-sdk-go v1.36.12
github.com/beanstalkd/go-beanstalk v0.1.0
github.com/golang/mock v1.4.4
github.com/imdario/mergo v0.3.11 // indirect
github.com/mitchellh/go-homedir v1.1.0
github.com/practo/klog/v2 v2.2.1
github.com/practo/promlog v1.0.0
github.com/prometheus/client_golang v1.7.0
github.com/spf13/cobra v1.0.0
github.com/prometheus/client_golang v1.9.0
github.com/spf13/cobra v1.1.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.7.1
k8s.io/api v0.0.0
k8s.io/apimachinery v0.0.0
k8s.io/client-go v0.0.0
k8s.io/code-generator v0.0.0-20190612205613-18da4a14b22b
)

replace (
github.com/gogo/protobuf => github.com/gogo/protobuf v1.3.0
golang.org/x/crypto => golang.org/x/crypto v0.0.0-20181025213731-e84da0312774
golang.org/x/net => golang.org/x/net v0.0.0-20190206173232-65e2d4e15006
golang.org/x/sync => golang.org/x/sync v0.0.0-20181108010431-42b317875d0f
golang.org/x/sys => golang.org/x/sys v0.0.0-20190209173611-3b5209105503
golang.org/x/text => golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db
golang.org/x/tools => golang.org/x/tools v0.0.0-20190313210603-aa82965741a9
k8s.io/api => k8s.io/api v0.0.0-20190819141258-3544db3b9e44
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.0.0-20190819143637-0dbe462fe92d
k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20190817020851-f2f3a405f61d
k8s.io/client-go => k8s.io/client-go v0.0.0-20190819141724-e14f31a72a77
k8s.io/code-generator => k8s.io/code-generator v0.0.0-20190612205613-18da4a14b22b
k8s.io/api v0.19.0
k8s.io/apimachinery v0.19.0
k8s.io/client-go v0.19.0
k8s.io/code-generator v0.19.0
)
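With the old `replace` block of pinned pseudo-versions removed, the module now depends directly on the `v0.19.0` releases of `k8s.io/api`, `k8s.io/apimachinery`, `k8s.io/client-go` and `k8s.io/code-generator`. The most visible API consequence, and the reason `run.go` now builds a `context.Context`, is that client-go calls in this series take a context plus an explicit options struct. A small self-contained example of that call shape; all names here are illustrative and not from the repository:

```
// Example of the client-go v0.19 call signatures: every request now takes a
// context.Context plus an options struct. Function and variable names are
// illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listDeployments(ctx context.Context, client kubernetes.Interface, namespace string) error {
	deployments, err := client.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, d := range deployments.Items {
		fmt.Printf("%s/%s has %d ready replicas\n", d.Namespace, d.Name, d.Status.ReadyReplicas)
	}
	return nil
}

func main() {} // placeholder; a real caller would build a kubernetes.Clientset first
```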
