
feat: Implementation of enable_icc option #69

Merged: 6 commits merged into runfinch:main from enable-icc-option on Nov 12, 2024

Conversation

@swagatbora90 (Contributor) commented Oct 9, 2024

Issue #, if available:

Description of changes:

Docker currently supports configuring inter-container connectivity between containers attached to the same network bridge; by default, inter-container connectivity is enabled. To disable this behavior, Docker supports a special network option during network create called com.docker.network.bridge.enable_icc. CNI also allows intra-bridge connectivity, but offers no option to configure that behavior, i.e., to disable it.

This PR adds support for the com.docker.network.bridge.enable_icc Docker network option. Specifically, it handles the case where enable_icc is set to false by adding iptables rules that drop traffic entering and leaving the same bridge.
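
For illustration only, here is a minimal, hypothetical sketch (not this PR's actual code) of how such a rule could be installed from Go, assuming the github.com/coreos/go-iptables library; the chain and bridge interface names are taken from the test output further down.

```go
// Hypothetical sketch only: install a DROP rule for traffic that both enters
// and leaves the same bridge, which disables inter-container connectivity.
// Assumes github.com/coreos/go-iptables; names are illustrative.
package main

import (
	"log"

	"github.com/coreos/go-iptables/iptables"
)

const isolateChain = "FINCH-ISOLATE-CHAIN" // chain shown in the test output below

func disableICC(bridgeIface string) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}
	// Create the isolation chain in the filter table if it does not exist yet.
	exists, err := ipt.ChainExists("filter", isolateChain)
	if err != nil {
		return err
	}
	if !exists {
		if err := ipt.NewChain("filter", isolateChain); err != nil {
			return err
		}
	}
	// Jump from FORWARD into the isolation chain.
	if err := ipt.Insert("filter", "FORWARD", 1, "-j", isolateChain); err != nil {
		return err
	}
	// Drop packets whose ingress and egress interface is the same bridge.
	if err := ipt.Insert("filter", isolateChain, 1,
		"-i", bridgeIface, "-o", bridgeIface, "-j", "DROP"); err != nil {
		return err
	}
	// All other traffic returns to the FORWARD chain.
	return ipt.AppendUnique("filter", isolateChain, "-j", "RETURN")
}

func main() {
	if err := disableICC("br-dd61a7c86eba"); err != nil {
		log.Fatal(err)
	}
}
```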

This PR also:

  1. Removes filtering of the "com.docker.network.bridge.enable_ip_masquerade" option, since it is now handled in nerdctl

  2. Refactors the network create code to separate out driver-specific logic and make it easier to test (see the hypothetical sketch after this list)
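
Purely as a hypothetical illustration of what that separation might look like (the interface and method names below are invented, not the PR's actual API):

```go
// Hypothetical illustration only; names are invented and not from the PR.
// The idea: network create delegates driver-specific handling (such as the
// bridge enable_icc option) to a small interface that can be unit tested.
package driver

// NetworkDriver is an invented interface for illustration.
type NetworkDriver interface {
	// HandleCreateOptions validates driver-specific options such as
	// com.docker.network.bridge.enable_icc before the network is created.
	HandleCreateOptions(opts map[string]string) error

	// HandlePostCreate applies side effects after creation, e.g. installing
	// the FINCH-ISOLATE-CHAIN iptables rules when ICC is disabled.
	HandlePostCreate(networkName, bridgeIface string) error
}
```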

Testing done:

  1. Unit test
  2. E2E test to be added

Steps to test locally:

  • Create bridge with enable_icc=false and run containers
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker network create --driver bridge --opt com.docker.network.bridge.enable_icc=false test_icc

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container1 --network test_icc busybox sleep 3600

% sudo DOCKER_HOST=unix:///var/run/finch.sock docker run --name container2 --network test_icc busybox sleep 3600
  • Try to ping between containers
% sudo DOCKER_HOST=unix:///var/run/finch.sock docker exec container1 ping -c 4 container2
PING container2 (10.4.2.5): 56 data bytes

--- container2 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
exec failed with exit code 1
  • Check iptables, specifically the FINCH-ISOLATE-CHAIN chain
% sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 389 packets, 44527 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    4   336 FINCH-ISOLATE-CHAIN  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   24  2016 CNI-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI firewall plugin rules (ingressPolicy: same-bridge) */
   24  2016 CNI-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI firewall plugin rules */
...
...
...
Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    4   336 DROP       all  --  br-dd61a7c86eba br-dd61a7c86eba  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
  • Stop the containers, remove the network bridge, and check iptables again
Chain FINCH-ISOLATE-CHAIN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
  • I've reviewed the guidance in CONTRIBUTING.md

License Acceptance

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@swagatbora90 swagatbora90 force-pushed the enable-icc-option branch 2 times, most recently from f84d537 to 556f836 on October 9, 2024 18:50
@swagatbora90 swagatbora90 changed the title from "[DRAFT] Initial implementation of enable_icc option" to "Implementation of enable_icc option" Oct 17, 2024
@swagatbora90 swagatbora90 changed the title from "Implementation of enable_icc option" to "feat: Implementation of enable_icc option" Oct 17, 2024
@swagatbora90 swagatbora90 marked this pull request as ready for review October 18, 2024 00:11
Shubhranshu153 previously approved these changes Oct 29, 2024

@Shubhranshu153 (Contributor):

Approved with some comments; LGTM, just a couple of nits and clarifying questions.

@henry118 (Member) left a comment:

Left a few comments. Overall I am still concerned about potential race conditions, such as:

  • concurrent network creates; or
  • concurrent network create and remove.

Prior to this change, nerdctl seems able to handle these races with locked dirs. However, I don't think that protection can be extended to our iptables setup/teardown.

@swagatbora90 (Contributor, Author):

> Left a few comments. Overall I am still concerned about potential race conditions, such as:
>
> * concurrent network creates; or
> * concurrent network create and remove.
>
> Prior to this change, nerdctl seems able to handle these races with locked dirs. However, I don't think that protection can be extended to our iptables setup/teardown.

Added a per-network mutex map to address some of the race concerns: 9fb2cf3. The idea is that concurrent Create/Remove requests for the same network name are serialized, since we keep a mutex per network name; operations within a single Create/Remove call are also atomic, because the lock is held until the operation completes. A sketch of the idea follows.
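
A minimal sketch of the idea, assuming a helper type along these lines (this is illustrative, not the actual code in commit 9fb2cf3):

```go
// Sketch of a per-network-name mutex map; illustrative only, not the
// actual implementation in commit 9fb2cf3.
package network

import "sync"

type networkLocks struct {
	mu    sync.Mutex             // guards the map itself
	locks map[string]*sync.Mutex // one mutex per network name
}

func newNetworkLocks() *networkLocks {
	return &networkLocks{locks: make(map[string]*sync.Mutex)}
}

// acquire returns the mutex for the given network name (creating it on first
// use) and locks it, so concurrent Create/Remove calls for the same network
// name are serialized for the full duration of the operation.
func (n *networkLocks) acquire(name string) *sync.Mutex {
	n.mu.Lock()
	l, ok := n.locks[name]
	if !ok {
		l = &sync.Mutex{}
		n.locks[name] = l
	}
	n.mu.Unlock()
	l.Lock()
	return l
}
```

A caller would then wrap the create/remove logic in something like `l := locks.acquire(name); defer l.Unlock()`, so the network creation and the iptables setup/teardown happen atomically with respect to other requests for the same network name.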

henry118 previously approved these changes Nov 8, 2024

@henry118 (Member) left a comment:

LGTM. Thanks

Shubhranshu153 previously approved these changes Nov 8, 2024
@swagatbora90 swagatbora90 dismissed stale reviews from Shubhranshu153 and henry118 via 53137ab November 12, 2024 21:50
@swagatbora90 swagatbora90 merged commit 5fd2e3e into runfinch:main Nov 12, 2024
7 checks passed