Terraform for AKS Application Gateway Ingress Controller 502 and its subnet peering issue


It works fine when:

  1. everything is built in the Azure portal
  2. AGIC is enabled in the Kubernetes service networking
  3. a self-signed SSL cert is imported
  4. the hosts table mapping is added

When deployed by Terraform:

  1. deploy a web page, using the Application Gateway Ingress Controller
  2. deploy AKS, VNet, and subnet by Terraform
  3. enable AGIC via the ingress_application_gateway block

So now there is an auto-generated VNet, Application Gateway, and public IP in the auto-generated resource group, plus a resource group A that contains the AKS cluster and the VNet with the Application Gateway subnet.

I am thinking about peering the AKS VNet and the VNet that contains the Application Gateway subnet; however, they are in the same address space. Any idea or good way to fix this?

resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = var.aks_name
  location            = azurerm_resource_group.resource_group.location
  resource_group_name = azurerm_resource_group.resource_group.name
  dns_prefix          = "aksxxx"

  kubernetes_version        = var.aks_version
  automatic_channel_upgrade = "stable"
  private_cluster_enabled   = false
  node_resource_group       = "${var.rg_name}-node-group"
  
  sku_tier = "Free"

  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  network_profile {
    network_plugin = "azure"
    dns_service_ip = "10.20.0.10"
    service_cidr   = "10.20.0.0/16"
  }

  ingress_application_gateway {
    subnet_id = data.azurerm_subnet.appgwsubnet.id
    #subnet_cidr = "10.225.0.0/16"
    gateway_name = "appgw-ingress"
  }

  default_node_pool {
    name                 = "defaultnp"
    vm_size              = "Standard_B2ms"
    orchestrator_version = var.aks_version
    # vnet_subnet_id       = data.azurerm_subnet.aks_node_subnet.id
    type                 = "VirtualMachineScaleSets"
    enable_auto_scaling  = true
    node_count           = 1 
    min_count            = 1 
    max_count            = 2

    node_labels = {
      role = "general"
    }
  }

  identity {
    type         = "SystemAssigned"
    #identity_ids = [azurerm_user_assigned_identity.aks_service_pricipal.id]
  }

  lifecycle {
    ignore_changes = [default_node_pool[0].node_count]
  }

  depends_on = [
    azurerm_role_assignment.aks_role_assignment
  ]

  tags = {
    "managed_by" = "terraform"
  }
}

# # # =================== Node pool NSG ===========
resource "azurerm_network_security_group" "aks_nodepool_nsg" {
  name                = "nsg-${var.aks_nodepool_name}"
  location            = var.azure_region_map["az1"]
  resource_group_name = azurerm_resource_group.resource_group.name
  tags = {
    "managed_by" = "terraform"
  }
}

# # # =================== Node pool Vnet =====================
resource "azurerm_virtual_network" "aks_node_vnet" {
  name                = "vnet-${var.aks_nodepool_name}"
  location            = var.azure_region_map["az1"]
  resource_group_name = azurerm_resource_group.resource_group.name
  address_space       = ["10.224.0.0/12"]

  subnet {
    name           = "aks-subnet"
    address_prefix = "10.224.0.0/16"
    security_group = azurerm_network_security_group.aks_nodepool_nsg.id
  }

  subnet {
    name = "ingress-appgateway-subnet"
    address_prefix = "10.225.0.0/16"
  }

  tags = {
    "managed_by" = "terraform"
  }
}

data "azurerm_subnet" "appgwsubnet" {
  name                 = "ingress-appgateway-subnet"
  resource_group_name  = azurerm_resource_group.resource_group.name
  virtual_network_name = azurerm_virtual_network.aks_node_vnet.name
  #address_prefixes     = ["10.225.0.0/24"]
}

[Screenshot: auto-generated resource group for the VNet, containing the AGIC] [Screenshot: the same resource group for AKS and the VNet]

1 answer

Answered by Vinay B (best answer):

I tried provisioning the AKS cluster and Application Gateway with a Terraform configuration, and I was able to achieve the requirement successfully.

Azure does not allow VNet peering between virtual networks that have overlapping address spaces; each VNet should have a distinct address space. If there is an overlap, you have to reconfigure your network so that the VNets use unique, non-overlapping CIDR blocks.
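As a sketch of the fix once the address spaces are distinct (the resource names here are illustrative, not taken from your configuration), peering then needs one `azurerm_virtual_network_peering` in each direction:

```hcl
# Sketch only: assumes two existing VNets with non-overlapping address
# spaces, e.g. aks_vnet = 10.224.0.0/12 and appgw_vnet = 10.240.0.0/16.
resource "azurerm_virtual_network_peering" "aks_to_appgw" {
  name                         = "peer-aks-to-appgw"
  resource_group_name          = azurerm_resource_group.resource_group.name
  virtual_network_name         = azurerm_virtual_network.aks_vnet.name
  remote_virtual_network_id    = azurerm_virtual_network.appgw_vnet.id
  allow_virtual_network_access = true
}

resource "azurerm_virtual_network_peering" "appgw_to_aks" {
  name                         = "peer-appgw-to-aks"
  resource_group_name          = azurerm_resource_group.resource_group.name
  virtual_network_name         = azurerm_virtual_network.appgw_vnet.name
  remote_virtual_network_id    = azurerm_virtual_network.aks_vnet.id
  allow_virtual_network_access = true
}
```

Note that peering is not transitive and only becomes usable once both sides are created, so both resources are required.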

It looks like you are using Terraform to set up a Kubernetes cluster with a default node pool and a virtual network that contains an Application Gateway subnet. To fix the 502 error, you have to verify that your Application Gateway can actually route traffic to the pods in your AKS cluster.
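In an AGIC setup, that routing is derived from the Ingress resource in the cluster. A minimal sketch using the Terraform `kubernetes_ingress_v1` resource, assuming a hypothetical Service named `my-web-service` listening on port 80:

```hcl
# Sketch only: "my-web-service" and port 80 are placeholders for your
# actual Service. The annotation tells AGIC to program this Ingress
# into the Application Gateway.
resource "kubernetes_ingress_v1" "web" {
  metadata {
    name = "web-ingress"
    annotations = {
      "kubernetes.io/ingress.class" = "azure/application-gateway"
    }
  }

  spec {
    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "my-web-service"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
```

If the Service name, port, or pod readiness probes don't line up, AGIC programs an empty or unhealthy backend pool, which is a common source of 502 responses.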

My demo Terraform configuration:

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "demorg-vk"
  location = "West Europe"
}

resource "azurerm_virtual_network" "example" {
  name                = "demovk-network"
  address_space       = ["10.0.0.0/8"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example_aks" {
  name                 = "demovk-aks-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.1.0.0/16"]
}

resource "azurerm_subnet" "example_appgw" {
  name                 = "demovk-appgw-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.2.0.0/16"]
}

resource "azurerm_kubernetes_cluster" "example_aks" {
  name                = "demovk-aks-cluster"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  default_node_pool {
    name             = "default"
    node_count       = 1
    vm_size          = "Standard_DS2_v2"
    vnet_subnet_id   = azurerm_subnet.example_aks.id
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "azure"
    service_cidr   = "10.3.0.0/16"
    dns_service_ip = "10.3.0.10"
  }
}

resource "azurerm_public_ip" "example_appgw_ip" {
  name                = "demovk-appgw-ip"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_application_gateway" "example_appgw" {
  name                = "demovk-appgw"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 2
  }

  gateway_ip_configuration {
    name      = "gateway-ip-configuration"
    subnet_id = azurerm_subnet.example_appgw.id
  }

  frontend_ip_configuration {
    name                 = "frontend-ip-configuration"
    public_ip_address_id = azurerm_public_ip.example_appgw_ip.id
  }

  frontend_port {
    name = "http-port"
    port = 80
  }

  backend_address_pool {
    name = "backend-address-pool"
  }

  backend_http_settings {
    name                  = "http-settings"
    cookie_based_affinity = "Disabled"
    path                  = "/"
    port                  = 80
    protocol              = "Http"
  }

  http_listener {
    name                           = "http-listener"
    frontend_ip_configuration_name = "frontend-ip-configuration"
    frontend_port_name             = "http-port"
    protocol                       = "Http"
  }

  request_routing_rule {
    name                       = "http-routing-rule"
    rule_type                  = "Basic"
    http_listener_name         = "http-listener"
    backend_address_pool_name  = "backend-address-pool"
    backend_http_settings_name = "http-settings"
    priority                   = 100  # Define a priority for the rule
  }
}

I have given this for demo purposes with just the AKS cluster and Application Gateway, and have not included the ingress controller, since the issue described was the overlapping VNet peering address spaces.
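If you do want AGIC on top of this demo, one option (a sketch, not something tested here) is to point the add-on at the gateway created above via `gateway_id` inside the `azurerm_kubernetes_cluster` resource, instead of letting AKS auto-generate a gateway and VNet in its own node resource group:

```hcl
# Sketch only: add this block inside azurerm_kubernetes_cluster.example_aks
# to enable the AGIC add-on against the gateway defined above, instead of
# letting AKS auto-generate a gateway in a separate resource group.
ingress_application_gateway {
  gateway_id = azurerm_application_gateway.example_appgw.id
}
```

When attaching AGIC to an existing gateway like this, the add-on's managed identity also needs permissions on the gateway (e.g. Contributor on the gateway's resource group) before it can program routing rules.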

Output:

[Screenshots: the deployed resource group, AKS cluster, and Application Gateway in the Azure portal]