
External access to cluster #14

Open
khauser opened this issue Jan 11, 2018 · 15 comments
@khauser

khauser commented Jan 11, 2018

Hi, I'm trying to use your catalog entries to set up a Kafka cluster (template 1.0.0-rancher1) together with ZooKeeper (template 3.4.10-rancher1).
Since the docs say "Kafka can now be accessed over the Rancher network.", is it also possible to access the Kafka cluster from outside? We'd like to use it from different Rancher environments or entirely from outside.

I already tried with two hosts, three scaled Kafka containers, and a load balancer on each host mapping port 9092, but Kafka doesn't respond when I send a message.

```
telnet kafka01.mydomain.de 9092
Trying 123.12.12.123...
Connected to kafka01.mydomain.de.
```

So the initial connection works.
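
To test beyond the raw TCP connection, I produce with the console producer that ships with Kafka (a sketch; the topic name is a placeholder, and I'm assuming the image's /opt/kafka prefix):

```sh
# try to produce a test message through the load balancer
/opt/kafka/bin/kafka-console-producer.sh \
  --broker-list kafka01.mydomain.de:9092 \
  --topic test
```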

Do you have any idea?

@rawmind0
Owner

Hi @khauser,

Due to how Kafka works, it can't be proxied: each broker can only advertise one IP to ZooKeeper. You can advertise Kafka's public IP by setting the "broker public ip" catalog field to true, but then your scale is limited by the number of hosts you have.
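
For context, the per-broker setting behind that catalog field looks roughly like this (a sketch of the relevant server.properties keys in current Kafka syntax; the IP is a placeholder, and whether the template sets exactly these keys is an assumption):

```properties
# each broker registers exactly one advertised endpoint in ZooKeeper;
# clients use this address for all traffic after the initial bootstrap
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://203.0.113.10:9092
```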

@khauser
Author

khauser commented Jan 11, 2018

Okay, after adding a port mapping it works as expected. Maybe the label `io.rancher.scheduler.global: 'true'` should be set automatically when "broker public ip" is true; this option can't be changed after the stack has been created from the catalog. A sketch of what I mean is below.
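
Roughly (a sketch of the stack's docker-compose.yml; the image name is illustrative, while `io.rancher.scheduler.global` is the standard Rancher scheduler label):

```yml
version: '2'
services:
    kafka:
        image: rawmind/alpine-kafka   # illustrative image name
        labels:
            # run one container on every (matching) host
            # instead of a fixed scale
            io.rancher.scheduler.global: 'true'
        ports:
            - "9092:9092"
```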

Wouldn't it also be possible to advertise one IP plus a distinct port per broker to ZooKeeper, as described in Running a Multi-Broker Apache Kafka 0.8 Cluster on a Single Node and also here with Docker? See the sketch below.
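
Per broker, that would look something like this (a sketch in current Kafka config syntax; host name and broker IDs are from my example above):

```properties
# broker 1 on the shared host
broker.id=1
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka01.mydomain.de:9092

# broker 2 on the same host and IP, different advertised port
broker.id=2
listeners=PLAINTEXT://0.0.0.0:9093
advertised.listeners=PLAINTEXT://kafka01.mydomain.de:9093
```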

There is also a nice "new" feature in docker-compose/Docker where scaling picks host ports from a defined port range. Rancher doesn't support it at the moment, I think :(.

```yml
version: '2'
services:
    nginx:
        image: nginx:latest
        ports:
            - "8091-8095:80"
```

```
D:\Temp>docker-compose up -d
Creating network "temp_default" with the default driver
Creating temp_nginx_1 ... done

D:\Temp>docker-compose scale nginx=5
WARNING: The scale command is deprecated. Use the up command with the --scale flag instead.
WARNING: The "nginx" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Starting temp_nginx_1 ... done
Creating temp_nginx_2 ... done
Creating temp_nginx_3 ... done
Creating temp_nginx_4 ... done
Creating temp_nginx_5 ... done

D:\Temp>docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
792560f5d9e4        nginx:latest        "nginx -g 'daemon of…"   14 seconds ago      Up 12 seconds       0.0.0.0:8093->80/tcp   temp_nginx_4
febc12d11a19        nginx:latest        "nginx -g 'daemon of…"   14 seconds ago      Up 12 seconds       0.0.0.0:8092->80/tcp   temp_nginx_5
f303c44f0914        nginx:latest        "nginx -g 'daemon of…"   14 seconds ago      Up 12 seconds       0.0.0.0:8091->80/tcp   temp_nginx_2
30c32db42d44        nginx:latest        "nginx -g 'daemon of…"   15 seconds ago      Up 13 seconds       0.0.0.0:8095->80/tcp   temp_nginx_3
a311ac4f9dc3        nginx:latest        "nginx -g 'daemon of…"   31 seconds ago      Up 30 seconds       0.0.0.0:8094->80/tcp   temp_nginx_1
```

@khauser
Author

khauser commented Jan 11, 2018

Just a thought as a possible workaround: I also tried adding a second Kafka stack and mapping its port to 9093, to get two additional nodes on each host. Naturally this fails because the broker IDs already exist.
But what if the line `broker.id={{getv "/self/container/service_index"}}` in rancher-kafka were adjusted to also incorporate the stack name or another stack-unique identifier? A sketch of the idea:
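
(Hypothetical: `STACK_OFFSET` would have to come from a new catalog question or be derived from the stack name; `SERVICE_INDEX` stands in for the `/self/container/service_index` metadata value.)

```sh
# derive a cluster-unique broker.id by combining a per-stack offset
# with Rancher's per-service container index
STACK_OFFSET=${STACK_OFFSET:-0}   # e.g. 0 for the first stack, 100 for the second
BROKER_ID=$((STACK_OFFSET + SERVICE_INDEX))
echo "broker.id=${BROKER_ID}" >> /opt/kafka/config/server.properties
```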

@rawmind0
Owner

`io.rancher.scheduler.global: 'true'` is a bad idea, because you may not want to scale up your Kafka every time you scale up your hosts.

Unfortunately, the docker-compose feature you mention is not supported by Rancher.

I think a simpler workaround would be to have more, smaller hosts...

@khauser
Author

khauser commented Jan 12, 2018

`io.rancher.scheduler.global: 'true'` should be used together with scheduling labels; it's already implemented that way for your zk_cluster.

More hosts will increase system maintenance, and when it's possible to reduce that, it should be done.

Docker port range: rancher/rancher#1673

@rawmind0
Owner

Excuse me @khauser, maybe I misunderstood you. `io.rancher.scheduler.global: 'true'` with a host label, as in zookeeper, could be implemented without any problem. You could still control when the service scales by adding the label to hosts or not.

I understand that more hosts may increase maintenance, but Kafka works how it works. IMHO, a quick-and-dirty approach to Kafka deployment is not the answer.

@abhi-dwivedi

@rawmind0 I used your Kafka cluster setup, but in Kubernetes rather than Rancher. I am not able to find the server.properties and zoo.cfg files?

@rawmind0
Owner

rawmind0 commented May 2, 2018

@abhi-dwivedi, what do you mean? How are you deploying the Kafka cluster, with k8s-kafka as a sidecar?
server.properties should be at /opt/kafka/config/server.properties, and zoo.cfg should be in the zookeeper container at /opt/zk/conf/zoo.cfg.
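
A quick way to check (a sketch; the pod and container names are placeholders for whatever your deployment created):

```sh
# verify the generated Kafka config inside the kafka container
kubectl exec -it <kafka-pod> -c k8s-kafka -- cat /opt/kafka/config/server.properties

# and the zookeeper config in the zookeeper pod
kubectl exec -it <zookeeper-pod> -- cat /opt/zk/conf/zoo.cfg
```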

@abhi-dwivedi

@rawmind0 Actually, I am not using Rancher; I am using a Kubernetes cluster on my AWS EC2.

I used your rc and svc files, but I'm not able to find these files in my containers.

If possible, can you ping me at my email, [email protected], so we can discuss it directly?

@abhi-dwivedi

Actually, I am getting this error:

```
'kafka-7fttr' Monit 5.20.0 started
'confd' process is not running
'confd' trying to restart
'confd' start: '/opt/tools/confd/bin/service-conf.sh start'
'confd' failed to start (exit status -1) -- '/opt/tools/confd/bin/service-conf.sh start': Program timed out -- etcd.kubernetes.: Name does not resolve
```

@rawmind0
Owner

rawmind0 commented May 2, 2018

It seems that your Kubernetes etcd service is not resolving as etcd.kubernetes. confd needs to connect to etcd to generate the dynamic configuration. By default it uses etcd.kubernetes as the etcd service name, but you can override it with the CONF_NODE_IP env variable in the k8s-kafka container.
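
Roughly like this in the kafka rc/pod spec (a fragment as a sketch; the IP is a placeholder for wherever your etcd answers):

```yml
containers:
  - name: k8s-kafka
    env:
      # point confd at etcd explicitly instead of relying on
      # the etcd.kubernetes DNS name
      - name: CONF_NODE_IP
        value: "10.0.0.10"   # placeholder etcd address
```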

@abhi-dwivedi

Hey, I am a little new to Kubernetes. Could you let me know how I can get the value for CONF_NODE_IP?
It would be a great help.

@rawmind0
Owner

rawmind0 commented May 2, 2018

By default it should be available at your hosts' IP: https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/manifests/etcd.manifest

@abhi-dwivedi

Sorry to bother you so much, but I have 4 nodes and 2 masters working as a cluster.
How can I configure CONF_NODE_IP in Kafka and ZooKeeper?

@rawmind0
Owner

rawmind0 commented May 2, 2018

If etcd is running on the host network, as it is by default, it should be any of the master IPs... wherever etcd is running...

Anyway, you have an alternative here: Helm charts for Kafka and ZooKeeper, https://github.com/kubernetes/charts/tree/master/incubator/kafka and https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
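
For example (Helm 2 syntax; the release names are placeholders):

```sh
# add the incubator charts repo and install both charts
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm install --name my-kafka incubator/kafka
helm install --name my-zookeeper incubator/zookeeper
```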
