External access to cluster #14
Hi @khauser, because of the way Kafka works, it can't be proxied: each node can only advertise a single IP to Zookeeper. You could advertise the Kafka public IP by setting the "broker public ip" catalog field to true, but then your scale is limited by the number of hosts you have.
Okay, with an added port mapping that works as expected. Maybe the label 'io.rancher.scheduler.global: 'true'' should be set automatically when "broker public ip" is true; this option can't be changed after the catalog stack is created. Wouldn't it also be possible to advertise one IP plus a port to Zookeeper, as described in "Running a Multi-Broker Apache Kafka 0.8 Cluster on a Single Node" and also here with Docker? There is also a nice "new" feature in docker-compose/Docker where scaling selects ports from a defined port range. Rancher doesn't support that at the moment, I think :(.
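The single-host multi-broker approach mentioned above can be sketched as a docker-compose fragment. This is a hedged sketch, not the catalog's actual config: the `wurstmeister/kafka` image and its `KAFKA_*` environment variables are assumptions for illustration, and `203.0.113.10` is a placeholder for the host's public IP.

```yaml
# Sketch: two brokers on one host, distinguished by host port and broker id.
# Image and env var names are assumptions; adjust for the image you use.
version: "2"
services:
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 203.0.113.10   # host public IP (placeholder)
      KAFKA_ADVERTISED_PORT: 9092
  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 203.0.113.10
      KAFKA_ADVERTISED_PORT: 9093   # matches the host-side port mapping
```

Each broker advertises the same IP but a different port, so Zookeeper and clients see two distinct endpoints on one host.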
Just as a thought and possible workaround: what I also tried now is adding a second Kafka stack with its port mapped to 9093, to get two additional nodes on each host. Naturally this fails because of clashing brokerIds.
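For the second-stack workaround above to get past the brokerId clash, each stack would need a non-overlapping id range. A hedged sketch (the image and env var name are assumptions, mirroring common Kafka images rather than this catalog's actual config):

```yaml
# Sketch: give the second stack broker ids offset away from the first
# stack's range so they never collide in Zookeeper. Placeholder values.
services:
  kafka-extra:
    image: wurstmeister/kafka
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 101          # first stack uses low ids; offset avoids clashes
      KAFKA_ADVERTISED_PORT: 9093
```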
Unfortunately, the docker-compose feature you mention is not supported by Rancher. I think a simpler workaround would be to have smaller but more hosts...
More hosts will increase your system maintenance, and when it's possible to reduce that, it should be done. Docker port range: rancher/rancher#1673
Excuse me @khauser, maybe I misunderstood you. I can see that more hosts could increase maintenance, but Kafka works how it works. IMHO, a quick-and-dirty approach to the Kafka deployment is not the answer.
@rawmind0 I used your Kafka cluster setup, but on Kubernetes rather than Rancher, and I am not able to find the server.properties and zoo.cfg files.
@abhi-dwivedi, what do you mean? How are you deploying the Kafka cluster, with k8s-kafka as a sidecar?
@rawmind0 Actually I am not using Rancher; I am running a Kubernetes cluster on AWS EC2. I used your rc and svc files but am not able to find those files in my containers. If possible, can you ping me at my email, [email protected], so we can discuss it directly?
Actually I am getting this error:
It seems that your Kubernetes etcd service is not resolving as
Hey, I am a little new to Kubernetes. Could you let me know how I can get the value for conf_node_ip?
By default it should be available at your host's IP; see https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/manifests/etcd.manifest
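Once the etcd/master IP is known, it can be wired into the pod spec. A minimal sketch, assuming the k8s-kafka container reads its node IP from the `CONF_NODE_IP` environment variable mentioned above; `10.0.0.10` is a placeholder for the master IP where etcd listens:

```yaml
# Sketch: passing the etcd/master host IP into the kafka pod.
# Container name and IP are placeholders for illustration.
spec:
  containers:
    - name: k8s-kafka
      env:
        - name: CONF_NODE_IP
          value: "10.0.0.10"   # master IP where etcd is reachable (placeholder)
```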
Sorry to bother you so much, but
If etcd is running on the host network, as it does by default, it should be any of the IPs of the master, wherever etcd is running... Anyway, you have an alternative here: Helm charts for Kafka and Zookeeper, https://github.com/kubernetes/charts/tree/master/incubator/kafka https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
Hi, I am trying to use your catalog entries to set up a Kafka cluster (template 1.0.0-rancher1) together with Zookeeper (template 3.4.10-rancher1).
As I read "Kafka can now be accessed over the Rancher network.", is it also possible to access the Kafka cluster from outside? We would like to use it from different Rancher environments or entirely from outside.
I already tried with two hosts, three scaled Kafka containers, and a load balancer on each host mapping port 9092. The initial connection works, but Kafka doesn't react when I send a message.
Do you have any idea?
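A common reason an initial connection succeeds while producing stalls is that the brokers advertise their internal Rancher-network addresses in the metadata returned to clients, so external clients then try to reach unroutable IPs. A hedged sketch of the relevant broker-side setting (the env var names mirror common Kafka images and are assumptions here, as is the IP):

```yaml
# Sketch: make the broker advertise the externally reachable address
# instead of its internal Rancher-network IP. Placeholder values.
environment:
  KAFKA_ADVERTISED_HOST_NAME: 198.51.100.20   # host/LB public IP (placeholder)
  KAFKA_ADVERTISED_PORT: 9092
```

This is essentially what the "broker public ip" catalog field discussed earlier in this thread toggles.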