Prep Lab 2.3: Use the GitLab K8s Agent to Integrate the Cluster with GitLab

Target Outcomes
  1. Integrate a Kubernetes Cluster with GitLab using the GitLab Agent.
  2. Create the integration at a group level so it works for all subgroups and projects.
  3. [Dev Environments Only] Ensure that applications deployed to the EKS cluster using Auto DevOps have full dynamic review environment support, which requires:
    1. Resolvable external, dynamic DNS names (via the dynamic DNS service nip.io combined with the Ingress).
    2. SSL URLs (via cert-manager).

Configure and Install the GitLab Agent for Kubernetes for CI Push

Keyboard Time: 20 mins, Automation Wait Time: 20 mins

Scenarios: Instructor-Led, Self-Paced

Guides Through: Installing the agent for Kubernetes

Tip
This specific agent configuration, combined with the values and placement of the CI/CD variables starting with “KUBE_”, accomplishes CI Push integration for the entire hierarchy of projects under ‘classgroup’. This mimics the way that GitLab’s legacy certificate connection method was able to integrate an entire hierarchy of projects - so this specific configuration may be useful in your actual development processes. A cluster integrated in this way will not appear in the group list of Kubernetes Clusters in the GitLab UI.

IMPORTANT: This Guide assumes you have a runner available to configure the agent.

Target Outcomes
  1. Agent registration and installation.
  2. Configuration of the class group and all subgroups to use Traditional Runner CD Push to a Kubernetes cluster.

This guide uses the GitLab CI/CD workflow and the Single project approach for agent install and cluster management.

  1. While in ‘classgroup’:

    Guides Through: Create a new project based on the Cluster Management Template

    1. In ‘classgroup’, near the top, to the left of Search GitLab, Click + (a small button) and then Click New project/repository
    2. Click Create from template
    3. On the ‘Create from template’ page, Locate GitLab Cluster Management and to the right of this name, Click Use template (button)
    4. On the next page, for ‘Project name’ Type Cluster Management
    5. Near the bottom of the page Click Create project (button)

    IMPORTANT: From here on these instructions will refer to this as ‘classgroup/cluster-management’.

  2. Register the agent by:

    Guides Through: Create an agent configuration file and Register the agent with GitLab

    1. Open ‘classgroup/cluster-management’ (this should be the default state from the previous step above)

    2. In the top of the left navigation bar, Click Cluster Management (The title of the project in the nav bar)

    3. Near the top right of the page Click Web IDE (button)

    4. After opening in the Web IDE, Locate the word Edit

      In the next step, you do NOT have to create the directories first - all missing subdirectories will be automatically created.

  3. Next to ‘Edit’, Click the file-with-a-plus-sign icon, in the path specify .gitlab/agents/spot2az-agent1/config.yaml, and Click Create file.

    1. In the Web IDE file selector, Click the new file ‘config.yaml’

    2. Place these contents in the file, substituting your actual class group path for the text _replace/with/path/to/classgroup_. For instance, for the group https://gitlab.com/awesomeclassgroup you would specify only awesomeclassgroup.

    Despite the id: YAML parameter name, this is the text group path, not the numeric group ID.

    #id: = Full group path without instance url 
    # and without leading or trailing slashes.
    # for https://gitlab.com/this/is/an/example, id would be:
    # - id: this/is/an/example 
       
    ci_access:
      groups:
      - id: _replace/with/path/to/classgroup_
         
    observability:
      logging:
        level: debug
    

    Important: The above configuration grants Traditional Runner CD Push access to every project below the root-level group classgroup. The scope can be tightened by specifying more levels of the GitLab group hierarchy, like classgroup/mysubgroup. Also use the projects: keyword instead of groups: if scoping directly to a single project, as in the sketch below. Documentation: Authorize the agent.
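
    For example, a minimal sketch of scoping CI access to one project rather than a group (the project path here is hypothetical):

    ci_access:
      projects:
      - id: classgroup/mysubgroup/myproject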

    1. Click Create commit…

    2. IMPORTANT: Select Commit to master branch (non-default selection)

    3. Click Commit

      Ignore pipeline failures.

    4. To exit the editor, in the left navigation bar at the top, Click Cluster Management (the project name)

    5. From the left sidebar, Select Infrastructure > Kubernetes clusters.

    6. Click Connect a cluster (button). (DO NOT Click the down arrow next to the button)

    7. In the field Select an agent or enter a new name to create new Click the dropdown list arrow, select spot2az-agent1 and Click Register (button)

    8. GitLab generates a registration token for this agent. Securely store this secret token. You need it to install the agent in your cluster and to update the agent to another version.

    9. IMPORTANT: Copy the command under Recommended installation method. You need it when you use the one-liner installation method to install the agent in your cluster. (or you can leave this popup open to copy the command in the next section)

    10. Do not close the dialog nor browser window until you have successfully run the command in the next steps.

  4. Install the agent by:

    Guides Through: Install the agent in the cluster

    1. Click here to open the EC2 Instances Console in us-east-2

    2. In the EC2 Instances console, locate the instance named EKSBastion

    3. Right click the instance, select => Connect => Session Manager => Connect (button)

      Remember The Above Sequence
      kubectl and helm are now available on your path and the bastion instance already has administrative permissions to the cluster. Remember the above sequence for gaining access to CLI-based cluster administration.
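
      For example, a quick sanity check once connected (a minimal sketch; it assumes the bastion’s kubeconfig already targets the EKS cluster, which this lab’s bastion provides):

        # Confirm tool availability and cluster admin access
        kubectl get nodes
        helm version
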
    4. After the command prompt appears, Paste the ‘Recommended installation method’ command from the previous page. Note: Success is indicated by about 9 lines of logging with no errors.
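
      For reference, the generated one-liner is typically a helm command shaped roughly like this sketch (illustration only - the token is a placeholder and flags may differ by GitLab version; always run the exact command GitLab generated for you):

        helm repo add gitlab https://charts.gitlab.io && helm repo update
        # Install the agent into its own namespace; <your-registration-token> is a placeholder
        helm upgrade --install spot2az-agent1 gitlab/gitlab-agent \
          --namespace gitlab-agent --create-namespace \
          --set config.token=<your-registration-token> \
          --set config.kasAddress=wss://kas.gitlab.com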

    5. Select the browser tab you previously left open (‘classgroup/cluster-management’ > Cluster Management > Kubernetes). If the popup ‘Connect a Kubernetes cluster’ is still displaying, Click Close

    6. Refresh the page.

    7. Under ‘Agents’ your agent named spot2az-agent1 (or whatever your actual name is) should have ‘Connected’ in the ‘Connection status’ column. If it does not show connected yet, keep refreshing until it does.

      KUBE_CONTEXT and KUBE_NAMESPACE are used for both agent registration and agent usage in Auto DevOps - therefore we configure them at the top group level so that the agent is usable for Auto DevOps in all downbound groups and projects.

    8. Navigate away from the Cluster Management project, to ‘classgroup’ (there should be a clickable breadcrumb trail on the cluster page to go there directly)

    9. Click Settings > CI/CD (Be sure you do this from the ‘classgroup’, not a project).

      IMPORTANT: This menu is nested under “Settings”; it is NOT the direct top-level menu choice “CI/CD”

    10. Next to ‘Variables’ Click Expand

    11. Click Add variable once for each table row and specify the variable settings as indicated in the table. Be sure to substitute your actual class group path in KUBE_CONTEXT.

      Use the variable references in KUBE_NAMESPACE exactly (literally) as documented in the table.

      | Key                            | Value                                        | Protect | Mask |
      |--------------------------------|----------------------------------------------|---------|------|
      | KUBE_CONTEXT                   | classgroup/cluster-management:spot2az-agent1 | No      | No   |
      | KUBE_NAMESPACE                 | $CI_PROJECT_NAME-$CI_PROJECT_ID              | No      | No   |
      | AUTO_DEPLOY_IMAGE_VERSION      | v2.25.0                                      | No      | No   |
      | DAST_AUTO_DEPLOY_IMAGE_VERSION | v2.25.0                                      | No      | No   |

These variable references in KUBE_NAMESPACE ensure that all branches in all projects in the downbound group hierarchy remain unique and therefore isolated on Kubernetes.
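
As a hypothetical example, a job running in a project named my-app with project ID 4217 deploys into a namespace unique to that project:

    # GitLab expands the variable references at job time (the name and ID here are hypothetical):
    # KUBE_NAMESPACE = $CI_PROJECT_NAME-$CI_PROJECT_ID  ->  my-app-4217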

Warning
AUTO_DEPLOY_IMAGE_VERSION and DAST_AUTO_DEPLOY_IMAGE_VERSION version pegging is currently necessary to prevent deployment errors. It may become removable when this issue is resolved: Error when ingressClassName is supported by API but not by the ingress controller
Accomplished Outcomes
  1. Agent registration and installation.
  2. Configuration of the class group and all subgroups to use Traditional Runner CD Push to a Kubernetes cluster.

Configuring Ingress with Built-in nip.io SSL for Auto DevOps

Keyboard Time: 20 mins, Automation Wait Time: 30 min

Scenarios: Instructor-Led, Self-Paced

Update Cluster Management Project to Install the NGINX Ingress and Cert Manager

Not For Production
Production application setups would generally not use this specific Ingress install, nor cert-manager or nip.io - these are all used for the convenience of quick demo and training setups.
  1. In ‘classgroup/cluster-management’ Start the Web IDE.

  2. In the Web IDE file navigation, Click .gitlab-ci.yml

  3. Under the job ‘.base’, Locate the line starting with image:

  4. At the end of the line, ensure the portion after the colon (“:”) is set to v1.6.0 or higher. Do not change it if it is already higher.

    Final result: image: "registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications:v1.6.0"

  5. In the Web IDE file navigation, Click helmfile.yaml

    In the following steps, ONLY delete the hash character (#) so you preserve the needed YAML indentation

  6. To enable the ingress controller, Uncomment the line - path: applications/ingress/helmfile.yaml

  7. To enable cert manager with nip.io SSL, Uncomment the line - path: applications/cert-manager/helmfile.yaml
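
    Assuming the template still groups these entries under a helmfiles: list (an assumption - the surrounding key may differ by template version), the two uncommented lines should now look roughly like:

    helmfiles:
      - path: applications/cert-manager/helmfile.yaml
      - path: applications/ingress/helmfile.yaml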

  8. To configure cert-manager, Edit applications/cert-manager/helmfile.yaml (file is in the subdirectory path ‘applications/cert-manager’)

  9. IMPORTANT: At the bottom of the file find email: example@example.com and change example@example.com to a valid email address.

  10. To configure ingress, Edit applications/ingress/values.yaml

  11. At the bottom add the following (ENSURE that the indenting of config: aligns with podAnnotations: above it)

      config:
        # pass the X-Forwarded-* headers directly from the upstream
        use-forwarded-headers: "true"
    

    The result should roughly be (assuming the template has not changed since this writing):

    controller:
      stats:
        enabled: true
      podAnnotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
      config:
        # pass the X-Forwarded-* headers directly from the upstream
        use-forwarded-headers: "true" 
    
  12. Click Create commit…

  13. IMPORTANT: Select Commit to master branch (non-default selection)

  14. Click Commit

  15. In the very bottom left of the page, immediately after the text ‘Pipeline’, RIGHT Click [the pipeline number, which is preceded with a #], Select Open in new tab, then Click [the new tab]. (Pipelines can also be viewed by exiting the IDE; from the project view in the left navigation, Click CI/CD => Pipelines and Click [the status badge] or [pipeline #] for the latest running pipeline)

  16. Wait for the job “diff” to complete successfully by watching the pipeline that was just created by your commit.

    Some Possible CI Errors
    In the CI log of the cluster-management project’s “diff” job:
    
    
    ```
    error: no context exists with the name: "gitlab-learn-labs/gitops/classgroup-abc:spot2az-agent1"
    ```
    
    Can be caused by:
    
    The agent path is not correct in the classgroup-level variable **KUBE_CONTEXT** (or the variable was not set). Notice that in this case it leaves out the name of the project “cluster-management”; the correct path would be gitlab-learn-labs/gitops/classgroup-abc**/cluster-management**:spot2az-agent1
    
  17. When the job “diff” has completed successfully, from the pipeline view, on the job “Sync”, Click [the play icon]

    Note: You can also use the bastion host to run kubectl get pods --all-namespaces and look for some pods starting with certmanager and some starting with ingress in the namespace gitlab-managed-apps.
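
    A healthy install might look roughly like this (the pod name suffixes and ages below are hypothetical):

      NAMESPACE             NAME                                     READY   STATUS    RESTARTS   AGE
      gitlab-managed-apps   certmanager-cert-manager-...             1/1     Running   0          3m
      gitlab-managed-apps   ingress-nginx-ingress-controller-...     1/1     Running   0          3m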

  18. To see the new AWS load balancer created by the ingress chart, in the ‘AWS EC2 Console’, in the left hand navigation, under ‘Load balancing’, Click Load Balancers

  19. If there is more than one load balancer, examine their ‘Tag’ details until you find one with the Key = kubernetes.io/cluster/spot2azuseast2 with Value = owned

  20. Once you have located the appropriate load balancer, in the details tabs Click Description

  21. Find ‘DNS Name’ and Copy <the Load Balancer DNS Name>

  22. In a web browser open https://dnschecker.org

  23. Under ‘DNS CHECK’, in the input field showing ‘example.com’, Paste <the Load Balancer DNS Name>

  24. Click Search

  25. If you set up EKS for 2 availability zones, you should see 2 IP addresses being returned by most or all locations and green checkmarks after the IPs.

  26. If there are red X’s instead of IPs, wait and keep clicking “Search” until you see IP addresses.

  27. Copy one of the returned IP addresses (referred to below as <the Load Balancer IP>)

  28. On https://dnschecker.org, Paste the <the Load Balancer IP> over the <the Load Balancer DNS Name> and append .nip.io (take note of both dots)

    The complete string should be <the Load Balancer IP>.nip.io
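
    nip.io requires no registration: any hostname ending in .<IP>.nip.io resolves straight back to <IP>. A quick check from any terminal (the IP below is a hypothetical documentation address - substitute your real load balancer IP):

      # nslookup is a standard DNS lookup tool available on most systems
      nslookup 203.0.113.10.nip.io
      # expected answer: 203.0.113.10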

  29. Click Search

    If nip.io is fast, you will see it resolve to the same IP address. Auto DevOps URLs and SSL will not work correctly until this name resolves, but you can continue with configuration - it will likely be resolving by the time the remaining configuration is complete.

  30. If there are red X’s instead of IPs, wait and keep clicking “Search” until you see IP addresses.

  31. Copy the DNS name <the Load Balancer IP>.nip.io

  32. In the GitLab group ‘classgroup’ Click Settings > CI/CD

    KUBE_INGRESS_BASE_DOMAIN is used for Auto DevOps - therefore we configure it at the top group level so that the cluster in question is usable for Auto DevOps in all downbound groups and projects.

  33. Next to ‘Variables’ Click Expand

  34. Click Add variable once for each table row (or update the variables if they are already there)

    | Key                      | Value                         | Protect | Mask |
    |--------------------------|-------------------------------|---------|------|
    | KUBE_INGRESS_BASE_DOMAIN | <the Load Balancer IP>.nip.io | No      | No   |
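
    With this base domain set, Auto DevOps can derive externally resolvable, SSL-capable hostnames for the environments it deploys. As a hypothetical illustration (the exact URL scheme depends on your GitLab version’s Auto DevOps template), a review app might be served at a host like:

      review-my-branch-abc123.<the Load Balancer IP>.nip.io
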
Accomplished Outcomes
  1. Integrate a Kubernetes Cluster with GitLab using the GitLab Agent.
  2. Create the integration at a group level so it works for all subgroups and projects.
  3. [Dev Environments Only] Ensure that applications deployed to the EKS cluster using Auto DevOps have full dynamic review environment support, which requires:
    1. Resolvable external, dynamic DNS names (via the dynamic DNS service nip.io combined with the Ingress).
    2. SSL URLs (via cert-manager).