Automatic Interconnection of Underlay and Overlay Subnets

If a cluster has both Underlay and Overlay subnets, by default, Pods in the Overlay subnet can access Pod IPs in the Underlay subnet through the gateway using NAT. However, Pods in the Underlay subnet must rely on routes configured on the nodes to reach Pods in the Overlay subnet.

To achieve automatic interconnection between Underlay and Overlay subnets, you can manually modify the YAML file of the Underlay subnet. Once configured, Kube-OVN uses an additional Underlay IP to connect the Underlay subnet to the ovn-cluster logical router and sets the corresponding routing rules to enable interconnection.

Procedure

  1. Go to Administrator.

  2. In the left navigation bar, click on Cluster Management > Resource Management.

  3. Enter Subnet in the search box to filter resource objects.

  4. Click on ⋮ > Update next to the Underlay subnet to be modified.

  5. Modify the YAML file, adding the field u2oInterconnection: true in the Spec.

  6. Click Update.

Note: Existing compute components in the Underlay subnet need to be recreated for the changes to take effect.
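
For reference, the result of step 5 can be sketched as a minimal Subnet manifest. The subnet name, VLAN, CIDR, and gateway below are illustrative assumptions, not values from your cluster; only the u2oInterconnection field is the setting this procedure adds:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: underlay-subnet        # illustrative name
spec:
  protocol: IPv4
  vlan: vlan1                  # illustrative VLAN reference for an Underlay subnet
  cidrBlock: 10.0.1.0/24       # illustrative CIDR
  gateway: 10.0.1.1            # illustrative physical gateway
  u2oInterconnection: true     # enables Underlay-Overlay interconnection
```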

Isolation Between Underlay Subnets with u2oInterconnection Enabled

When multiple Underlay subnets have u2oInterconnection: true enabled, traffic between them no longer goes through the physical gateway but is routed directly via the internal OVN network.

If you need to isolate two Underlay subnets while both have u2oInterconnection enabled, you must first configure the kube-ovn-controller parameter, then configure the subnet isolation.

Step 1: Configure kube-ovn-controller

Modify the kube-ovn-controller Deployment to disable connection tracking skip for destination logical port IPs:

kubectl edit deployment kube-ovn-controller -n kube-system

Add or modify the following argument:

spec:
  template:
    spec:
      containers:
      - name: kube-ovn-controller
        args:
        - --ls-ct-skip-dst-lport-ips=false
CAUTION

--ls-ct-skip-dst-lport-ips controls whether to skip connection tracking (conntrack) for traffic destined to logical port IPs. The default value is true, which skips conntrack to improve performance. Setting it to false does not affect functionality but may slightly impact performance.

However, for Underlay subnets with ACL-based isolation, you must set it to false. Otherwise, gateway-to-Pod traffic will fail (e.g., ping requests reach the Pod but replies are dropped), because ACL isolation uses the allow-related action, which requires conntrack state; without it, replies cannot be identified as "related" and get dropped.

Step 2: Configure Subnet Isolation

Configure the subnet with the following parameters:

spec:
  u2oInterconnection: true
  private: true
  allowSubnets:
  - 10.0.0.0/24    # CIDR of the subnet allowed for inbound access
  - 172.16.0.0/16  # Node network CIDR (REQUIRED)

Parameters:

  • private: true: Enables subnet isolation. This restricts inbound traffic to only the subnets specified in allowSubnets.
  • allowSubnets: An array of CIDR strings specifying which subnets are allowed for inbound access.
CAUTION

You must include the node network CIDR in allowSubnets. Otherwise, nodes will not be able to communicate with Pods in this subnet, which may cause health checks, log collection, and other node-to-pod traffic to fail.

NOTE

Setting private: true only restricts inbound traffic to the subnet. It does not affect outbound traffic from Pods within the subnet.
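
Putting Step 2 together with the interconnection setting, a complete isolated Underlay subnet might look like the following sketch. All names, CIDRs, and the VLAN reference are illustrative assumptions; substitute the values from your environment:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: underlay-isolated      # illustrative name
spec:
  protocol: IPv4
  vlan: vlan1                  # illustrative VLAN reference
  cidrBlock: 10.0.1.0/24       # illustrative CIDR of this subnet
  gateway: 10.0.1.1            # illustrative physical gateway
  u2oInterconnection: true     # route via the internal OVN network instead of the physical gateway
  private: true                # restrict inbound traffic to allowSubnets only
  allowSubnets:
  - 10.0.0.0/24                # illustrative peer subnet allowed inbound access
  - 172.16.0.0/16              # node network CIDR (required; illustrative value)
```

Remember that this manifest only takes full effect together with Step 1 (setting --ls-ct-skip-dst-lport-ips=false on kube-ovn-controller); without it, gateway-to-Pod replies in the isolated subnet will be dropped.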