This repository has been archived by the owner on Mar 10, 2021. It is now read-only.

Issue on verifying kubernetes componentstatuses #42

Open
hrishin opened this issue Sep 1, 2019 · 3 comments

Comments

@hrishin

hrishin commented Sep 1, 2019

Context

After the etcd cluster provisioning step, the guide moves on to bringing kube-apiserver, the scheduler, and the controller-manager up and running with the following commands.

./scripts/setup-controller-services
[...]
for c in controller-0 controller-1 controller-2; do vagrant ssh $c -- kubectl get componentstatuses; done

Executing the verification step may show the following error:

A Vagrant environment or target machine is required to run this
command. Run `vagrant init` to create a new Vagrant environment. Or,
get an ID of a target machine from `vagrant global-status` to run
this command on. A final option is to change to a directory with a
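This Vagrant error typically means the command was run from a directory that does not contain a Vagrantfile (or have one in a parent directory). A minimal sketch of a pre-check before the guide's verification loop; the helper name and messages are assumptions, only the loop itself is from the guide:

```shell
# Vagrant looks for a Vagrantfile in the current directory or a parent.
# This helper (hypothetical, not part of the guide) reports whether a
# Vagrantfile exists in the given directory.
check_vagrantfile() {
  if [ -f "$1/Vagrantfile" ]; then
    echo "ok"
  else
    echo "missing"
  fi
}

# Run the guide's verification loop only when a Vagrantfile is present:
if [ "$(check_vagrantfile .)" = "ok" ]; then
  for c in controller-0 controller-1 controller-2; do
    vagrant ssh "$c" -- kubectl get componentstatuses
  done
else
  echo "No Vagrantfile here; cd to the repo root first" >&2
fi
```

Running the loop from the repository root (where `vagrant up` was originally issued) avoids the error.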

Is anyone else facing the same issue? If so, it may be worth updating the guide.

@Slach
Contributor

Slach commented Sep 1, 2019

Which host OS are you using?

@hrishin
Author

hrishin commented Sep 1, 2019

It's Fedora 29 Workstation:

uname --all
Linux **** 5.2.7-100.fc29.x86_64 ************ x86_64 x86_64 x86_64 GNU/Linux

@Slach
Contributor

Slach commented Sep 2, 2019

vagrant status | grep controller

What does that show?
