Keyboard Time: 20 mins, Automation Wait Time: 20 mins
Scenarios: Instructor-Led, Self-Paced
Guides Through: Installing the agent for Kubernetes
IMPORTANT: This Guide assumes you have a runner available to configure the agent.
This guide uses the GitLab CI/CD workflow and the Single project approach for agent install and cluster management.
While in ‘classgroup’:
Guides Through: Create a new project based on the Cluster Management Template
IMPORTANT: From here on these instructions will refer to this as ‘classgroup/cluster-management’.
Register the agent by:
Guides Through: Create an agent configuration file and Register the agent with GitLab
Open ‘classgroup/cluster-management’ (this should already be open from the previous step above)
At the top of the left navigation bar, Click Cluster Management (the project title in the nav bar)
Near the top right of the page Click Web IDE (button)
After opening in the Web IDE, Locate the word Edit
In the next step, you do NOT have to create the directory structure first - all missing subdirectories will be created automatically.
Next to ‘Edit’, Click the file with a plus sign icon
In the path, specify .gitlab/agents/spot2az-agent1/config.yaml
and Click Create file.
In the Web IDE file selector, Click the new file ‘config.yaml’
Place these contents in the file - substitute your actual class group name for the text _replace_with_path_to_classgroup_
For instance, for a group https://gitlab.com/awesomeclassgroup you would specify only awesomeclassgroup.
Despite the id: YAML parameter name, this value is the group path as text, not the numeric group ID.
# id: = Full group path without instance url
# and without leading or trailing slashes.
# for https://gitlab.com/this/is/an/example, id would be:
# - id: this/is/an/example
ci_access:
  groups:
    - id: _replace/with/path/to/classgroup_
observability:
  logging:
    level: debug
Important: The above configuration grants traditional runner CD push access to every project below the root-level group classgroup. The scope can be tightened by specifying a deeper GitLab group hierarchy, such as classgroup/mysubgroup. Use the projects: keyword instead of groups: if scoping directly to a single project. Documentation: Authorize the agent.
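For reference, a minimal sketch of project-level scoping (the project path shown is hypothetical):
```
ci_access:
  projects:
    - id: classgroup/some-project   # hypothetical project path
```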
Click Create commit…
IMPORTANT: Select Commit to master branch (non-default selection)
Click Commit
Ignore pipeline failures.
To exit the editor, in the left navigation bar at the top, Click Cluster Management (the project name)
From the left sidebar, Select Infrastructure > Kubernetes clusters.
Click Connect a cluster (button). (DO NOT Click the down arrow next to the button)
In the field ‘Select an agent or enter a new name to create new’, Click the dropdown list arrow, select spot2az-agent1, and Click Register (button)
GitLab generates a registration token for this agent. Securely store this secret token. You need it to install the agent in your cluster and to update the agent to another version.
IMPORTANT: Copy the command under Recommended installation method. You need it when you use the one-liner installation method to install the agent in your cluster. (or you can leave this popup open to copy the command in the next section)
Do not close the dialog nor browser window until you have successfully run the command in the next steps.
Install the agent by:
Guides Through: Install the agent in the cluster
In the EC2 Instances console, locate the instance named EKSBastion
Right-click the instance, then select Connect => Session Manager => Connect (button)
After the command prompt appears, Paste the ‘Recommended installation method’ command from the previous page. Note: Success is indicated by about 9 lines of logging with no errors.
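For orientation only, the copied command is typically a Helm one-liner roughly of this shape; the token, KAS address, chart version, and namespace in your copy are the authoritative values, so paste the copied command rather than retyping this sketch:
```
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install spot2az-agent1 gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set config.token=<your-agent-registration-token> \
  --set config.kasAddress=wss://kas.gitlab.com
```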
Select the browser tab you previously left open (‘classgroup/cluster-management’ > Cluster Management > Kubernetes). If the popup ‘Connect a Kubernetes cluster’ is still displaying, Click Close
Refresh the page.
Under ‘Agents’ your agent named spot2az-agent1 (or whatever your actual name is) should have ‘Connected’ in the ‘Connection status’ column. If it does not show connected yet, keep refreshing until it does.
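You can also confirm the agent is running from the bastion host. Assuming the namespace used by the recommended installation command (often gitlab-agent), the check looks roughly like:
```
kubectl get pods -n gitlab-agent
# Expect one or more pods named like spot2az-agent1-gitlab-agent-... in Running status
```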
KUBE_CONTEXT and KUBE_NAMESPACE are used for both agent registration and agent usage in Auto DevOps - therefore we configure them at the top group level so that the agent is usable for Auto DevOps by all downbound groups and projects.
Navigate away from the Cluster Management project to ‘classgroup’ (there should be a clickable breadcrumb trail on the cluster page to go there directly)
Click Settings > CI/CD (Be sure you do this from the ‘classgroup’, not a project).
IMPORTANT: This menu is nested under “Settings”; it is NOT the direct menu choice “CI/CD”
Next to ‘Variables’ Click Expand
Click Add variable once for each table row and specify the variable settings as indicated in the table. Be sure to substitute your actual class group path in KUBE_CONTEXT.
Use the variable references in KUBE_NAMESPACE exactly (literally) as documented in the table.
Key | Value | Protect | Mask |
---|---|---|---|
KUBE_CONTEXT | classgroup/cluster-management:spot2az-agent1 | No | No |
KUBE_NAMESPACE | $CI_PROJECT_NAME-$CI_PROJECT_ID | No | No |
AUTO_DEPLOY_IMAGE_VERSION | v2.25.0 | No | No |
DAST_AUTO_DEPLOY_IMAGE_VERSION | v2.25.0 | No | No |
These variable references in KUBE_NAMESPACE give every project in the downbound group hierarchy its own namespace (the project ID keeps namespaces unique even when project names collide), so deployments remain isolated on Kubernetes.
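For example, with a hypothetical project named store-frontend that has project ID 4711, the value expands at pipeline time like this:
```
# KUBE_NAMESPACE = $CI_PROJECT_NAME-$CI_PROJECT_ID
# expands to:
store-frontend-4711
```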
Keyboard Time: 20 mins, Automation Wait Time: 30 min
Scenarios: Instructor-Led, Self-Paced
In ‘classgroup/cluster-management’ Start the Web IDE.
In the Web IDE file navigation, Click .gitlab-ci.yml
Under the job ‘.base’, Locate the line starting with image:
At the end of the line, ensure the version tag after the final colon (“:”) is v1.6.0 or higher. Do not change it if it is already higher.
Final result: image: “registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications:v1.6.0”
In the Web IDE file navigation, Click helmfile.yaml
In the following steps, ONLY delete the hash character (#) so that the needed YAML indentation is preserved
To enable the ingress controller Uncomment the line - path: applications/ingress/helmfile.yaml
To enable cert manager with nip.io SSL Uncomment the line - path: applications/cert-manager/helmfile.yaml
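After uncommenting, and assuming the template has not changed since this writing, the two entries should look roughly like this under the helmfiles: key (other entries stay commented out):
```
helmfiles:
  - path: applications/cert-manager/helmfile.yaml
  - path: applications/ingress/helmfile.yaml
```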
To configure cert-manager, Edit applications/cert-manager/helmfile.yaml (file is in the subdirectory path ‘applications/cert-manager’)
IMPORTANT: At the bottom of the file find email: example@example.com and change example@example.com to a valid email address.
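The edited line should end up looking something like the following; the address shown is only a placeholder, and Let’s Encrypt will send certificate expiry notices to whatever address you supply:
```
email: your.name@your-real-domain.example   # placeholder - use an address you actually monitor
```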
To configure ingress, Edit applications/ingress/values.yaml
At the bottom add the following (ENSURE that the indenting of config: aligns with podAnnotations: above it)
  config:
    # pass the X-Forwarded-* headers directly from the upstream
    use-forwarded-headers: "true"
The result should roughly be (assuming the template has not changed since this writing):
controller:
  stats:
    enabled: true
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  config:
    # pass the X-Forwarded-* headers directly from the upstream
    use-forwarded-headers: "true"
Click Create commit…
IMPORTANT: Select Commit to master branch (non-default selection)
Click Commit
In the very bottom left of the page, immediately after the text ‘Pipeline’, RIGHT-Click [the pipeline number, which is preceded with a #], Select Open in new tab, and Click [the new tab]. (Pipelines can also be viewed by exiting the IDE; from the project view in the left navigation, Click CI/CD => Pipelines and Click [the status badge] or [pipeline #] for the latest running pipeline)
Wait for the job “diff” to complete successfully by watching the pipeline that was just created by your commit.
In the CI log of the cluster-management project’s “diff” job:
```
error: no context exists with the name: "gitlab-learn-labs/gitops/classgroup-abc:spot2az-agent1"
```
Can be caused by:
The agent path is not correct in the classgroup-level variable **KUBE_CONTEXT** (or the variable was not set). Notice that in this case the value leaves out the name of the project “cluster-management”; the correct path would be gitlab-learn-labs/gitops/classgroup-abc**/cluster-management**:spot2az-agent1
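If you need to see what the agent context is actually called, a quick debug option (a hypothetical step, not part of the lab flow) is to list the contexts available to the job, for example by temporarily adding this to the diff job script:
```
kubectl config get-contexts
# Agent contexts are named <path-to-the-agent-config-project>:<agent-name>,
# e.g. classgroup/cluster-management:spot2az-agent1
```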
When the job “diff” has completed successfully, from the pipeline view, on the job “Sync”, Click [the play icon]
Note: You can also use the bastion host to run kubectl get pods --all-namespaces
and look for some pods starting with certmanager and some starting with ingress in the namespace gitlab-managed-apps.
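A narrower check for the same thing, using the namespace named above:
```
kubectl get pods -n gitlab-managed-apps
# Look for pods whose names start with certmanager and ingress
```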
To see the new AWS load balancer created by the ingress chart, in the ‘AWS EC2 Console’, in the left hand navigation, under ‘Load balancing’, Click Load Balancers
If there is more than one load balancer, examine their ‘Tag’ details until you find one with the Key = kubernetes.io/cluster/spot2azuseast2 with Value = owned
Once you have located the appropriate load balancer, in the details tabs Click Description
Find ‘DNS Name’ and Copy <the Load Balancer DNS Name>
In a web browser open https://dnschecker.org
Under ‘DNS CHECK’, over the placeholder text ‘example.com’, Paste <the Load Balancer DNS Name>
Click Search
If you set up EKS for 2 availability zones, you should see 2 IP addresses being returned by most or all locations and green checkmarks after the IPs.
If there are red X’s instead of IPs, wait and keep clicking “Search” until you see IP addresses.
Copy one of the returned IP addresses; the steps below refer to it as <the Load Balancer IP>
On https://dnschecker.org, Paste the <the Load Balancer IP> over the <the Load Balancer DNS Name> and append .nip.io (take note of both dots)
The complete string should be <the Load Balancer IP>.nip.io
Click Search
If nip.io is fast, you will see it resolve to the same IP address. Auto DevOps URLs and SSL will not work correctly until this resolves, but you can continue with configuration; it will likely be resolving by the time all configuration is done.
If there are red X’s instead of IPs, wait and keep clicking “Search” until you see IP addresses.
Copy the DNS name <the Load Balancer IP>.nip.io
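If you prefer a command line check, you can run the same lookups from the bastion host (the placeholders below are the same ones used above); if nslookup is not installed there, getent hosts <name> works as a fallback:
```
nslookup <the Load Balancer DNS Name>
nslookup <the Load Balancer IP>.nip.io
# Both should return the load balancer IP address(es) once DNS has propagated
```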
In the GitLab group ‘classgroup’ Click Settings > CI/CD
KUBE_INGRESS_BASE_DOMAIN is used by Auto DevOps - therefore we configure it at the top group level so that the cluster in question is usable for Auto DevOps by all downbound groups and projects.
Next to ‘Variables’ Click Expand
Click Add variable once for each table row (or update the variables if they are already there)
Key | Value | Protect | Mask |
---|---|---|---|
KUBE_INGRESS_BASE_DOMAIN | <the Load Balancer IP>.nip.io | No | No |
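Once this is set, Auto DevOps builds application URLs from the project path slug plus this base domain. As a rough, hypothetical illustration (project slug and IP invented):
```
# Project path slug: store-frontend
# KUBE_INGRESS_BASE_DOMAIN: 203.0.113.10.nip.io
# The production environment URL ends up roughly like:
http://store-frontend.203.0.113.10.nip.io
```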