Blockmonkey
Using EKS
📚 Kubernetes Architecture

🔹 Master Node (Control Plane)
Acts as the brain that manages the entire Kubernetes cluster.
- API Server: the gateway that accepts requests such as kubectl commands
- Scheduler: decides which Pod is placed on which Node
- Controller Manager: automatically reconciles toward the desired state (e.g., restarts a Pod that dies)
- etcd: the key-value store that holds the cluster's configuration and state
🔹 Worker Nodes
The compute resources (e.g., EC2 instances) where the actual applications are deployed.
- Run and manage Pods according to instructions from the Master
- One or more Worker Nodes can exist (the target of scaling)
🔹 Pods
The smallest execution unit in Kubernetes (can contain one or more containers).
- Where the actual apps run: a Spring app, ArgoCD, Redis, etc.
- If a Pod dies, it is automatically recreated (by a ReplicaSet, etc.)
🔹 Service
Abstracts a set of Pods behind a stable access path.
- Pod IPs can change, so a Service acts as a fixed IP/domain
- ClusterIP: for in-cluster communication only
- NodePort: externally reachable via node IP + port
- LoadBalancer: provisions a cloud load balancer (for external traffic)
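As a quick illustration of the three Service types, each can be created against an existing Deployment with `kubectl expose` (the deployment name `my-app` here is a placeholder, not from this article):

```shell
# Hedged sketch: exposing one deployment as each Service type (requires a running cluster)
kubectl expose deployment my-app --port 80 --target-port 8080 --type ClusterIP --name my-app-internal
kubectl expose deployment my-app --port 80 --target-port 8080 --type NodePort --name my-app-nodeport
kubectl expose deployment my-app --port 80 --target-port 8080 --type LoadBalancer --name my-app-public
```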
📚 Kubernetes - Building EKS
- Prerequisites
- Install and configure the AWS CLI
- Install Helm
- Install Helm with the brew install helm command
- Configure a VPC
- Use subnets in at least two Availability Zones (EKS requires this)
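Before moving on, it is worth confirming the prerequisites are actually in place. A quick sanity check (assumes the AWS CLI and Helm are installed and credentials are configured):

```shell
# Sanity-check the prerequisites (requires configured AWS credentials)
aws --version                 # AWS CLI installed?
aws sts get-caller-identity   # do the configured credentials work?
helm version                  # Helm installed?
```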
*Other notes
The subnets in the VPC used by EKS must carry the following two tags, or it will not work:
| Tag key | Value |
|---|---|
| kubernetes.io/cluster/eks-test-cluster | shared |
| kubernetes.io/role/elb | 1 |
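If you prefer the CLI over the console, the tags can be attached like this (the subnet IDs below are placeholders):

```shell
# Attach the two required tags to the EKS subnets (subnet IDs are placeholders)
aws ec2 create-tags \
  --resources subnet-aaaa1111 subnet-bbbb2222 \
  --tags Key=kubernetes.io/cluster/eks-test-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1
```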
- Create the EKS IAM role
- Required managed policies
- AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly
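A sketch of attaching these managed policies with the AWS CLI; the role name `eks-test-node-role` is hypothetical, not from the article:

```shell
# Attach the required managed policies to the node role (role name is a placeholder)
for POLICY in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name eks-test-node-role \
    --policy-arn arn:aws:iam::aws:policy/$POLICY
done
```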
- Example
- The IAM policy required by the AWS Load Balancer Controller (referenced again in the Ingress section):
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:CreateServiceLinkedRole" ], "Resource": "*", "Condition": { "StringEquals": { "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com" } } }, { "Effect": "Allow", "Action": [ "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeInternetGateways", "ec2:DescribeVpcs", "ec2:DescribeVpcPeeringConnections", "ec2:DescribeSubnets", "ec2:DescribeSecurityGroups", "ec2:DescribeInstances", "ec2:DescribeNetworkInterfaces", "ec2:DescribeTags", "ec2:GetCoipPoolUsage", "ec2:DescribeCoipPools", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeListenerCertificates", "elasticloadbalancing:DescribeSSLPolicies", "elasticloadbalancing:DescribeRules", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeListenerAttributes" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "cognito-idp:DescribeUserPoolClient", "acm:ListCertificates", "acm:DescribeCertificate", "iam:ListServerCertificates", "iam:GetServerCertificate", "waf-regional:GetWebACL", "waf-regional:GetWebACLForResource", "waf-regional:AssociateWebACL", "waf-regional:DisassociateWebACL", "wafv2:GetWebACL", "wafv2:GetWebACLForResource", "wafv2:AssociateWebACL", "wafv2:DisassociateWebACL", "shield:GetSubscriptionState", "shield:DescribeProtection", "shield:CreateProtection", "shield:DeleteProtection" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:AuthorizeSecurityGroupIngress", "ec2:RevokeSecurityGroupIngress" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:CreateSecurityGroup" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:CreateTags" ], "Resource": 
"arn:aws:ec2:*:*:security-group/*", "Condition": { "StringEquals": { "ec2:CreateAction": "CreateSecurityGroup" }, "Null": { "aws:RequestTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "ec2:CreateTags", "ec2:DeleteTags" ], "Resource": "arn:aws:ec2:*:*:security-group/*", "Condition": { "Null": { "aws:RequestTag/elbv2.k8s.aws/cluster": "true", "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "ec2:AuthorizeSecurityGroupIngress", "ec2:RevokeSecurityGroupIngress", "ec2:DeleteSecurityGroup" ], "Resource": "*", "Condition": { "Null": { "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateTargetGroup" ], "Resource": "*", "Condition": { "Null": { "aws:RequestTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:CreateListener", "elasticloadbalancing:DeleteListener", "elasticloadbalancing:CreateRule", "elasticloadbalancing:DeleteRule" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:AddTags", "elasticloadbalancing:RemoveTags" ], "Resource": [ "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*", "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*", "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*" ], "Condition": { "Null": { "aws:RequestTag/elbv2.k8s.aws/cluster": "true", "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:AddTags", "elasticloadbalancing:RemoveTags" ], "Resource": [ "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*", "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*", "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*", "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*" ] }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:AddTags" ], "Resource": [ "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*", 
"arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*", "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*" ], "Condition": { "StringEquals": { "elasticloadbalancing:CreateAction": [ "CreateTargetGroup", "CreateLoadBalancer" ] }, "Null": { "aws:RequestTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:SetIpAddressType", "elasticloadbalancing:SetSecurityGroups", "elasticloadbalancing:SetSubnets", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:DeleteTargetGroup" ], "Resource": "*", "Condition": { "Null": { "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" } } }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets" ], "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*" }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:SetWebAcl", "elasticloadbalancing:ModifyListener", "elasticloadbalancing:AddListenerCertificates", "elasticloadbalancing:RemoveListenerCertificates", "elasticloadbalancing:ModifyRule" ], "Resource": "*" } ] }
- EKS - Create Cluster - eks-test-cluster
- Settings
- Proceed with Custom Configuration.
- EKS Auto Mode: a managed mode in which EKS handles ALB and EBS integration for you → disable
- Kubernetes version settings: where you choose the Kubernetes version
- Standard: the version is supported for 14 months → automatic upgrade afterwards (no extra cost)
- Extended: support is extended to 26 months → charges apply after the first 14 months
- Recommended: Standard for development, Extended for production
- Auto Mode Compute: whether nodes are added and managed manually or automatically
- We will manage this later via Helm and ArgoCD → disable
- EKS - Create Node Group - eks-test-node-group
- Created under Cluster - Compute - Node groups - Add Node Group.
- The minimum practical instance size is small.
- To add ArgoCD and similar tools, use at least 2 nodes (based on t3.small).
- Connect kubectl to EKS and verify
- Connect
aws eks update-kubeconfig --region {aws-region} --name {eks-cluster-name}
- Verify
# check the node list
kubectl get nodes
- Set up ECR and push the image to it
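The ECR push flow is roughly the following (registry address taken from the article's later examples):

```shell
# Authenticate Docker against ECR, then build, tag, and push
aws ecr get-login-password --region ap-southeast-1 | \
  docker login --username AWS --password-stdin 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com
docker build -t eks-test-ecr .
docker tag eks-test-ecr:latest 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com/eks-test-ecr:latest
docker push 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com/eks-test-ecr:latest
```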
- Create the Kubernetes resources (Deployment + Service)
- Basic template example
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: spring-container
          image: 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com/eks-test-ecr:latest
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-app-service
spec:
  type: LoadBalancer
  selector:
    app: spring-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
- Apply and connect
# apply
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
# connect
Run kubectl get svc, then open the EXTERNAL-IP address to verify.
📚 Kubernetes - Setting up CI/CD on EKS
| Component | Role |
|---|---|
| Github Action (CI) | On a GitHub push, builds and registers the image in ECR (CI) |
| ArgoCD (CD) | After CI, a commit is made to the Helm repository via a sed command; ArgoCD detects it and performs the deployment (CD) |
| Helm | Manages everything deployment-related (Service, Deployment, PV, PVC, etc.) as YAML files |
Setting up EKS with Github Action (CI)
Prerequisites
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- AWS_ACCOUNT_ID
1. Create .github/workflows/deploy.yml (the values above must be registered as GitHub Secrets)
name: Deploy Spring App to ECR
on:
  push:
    branches:
      - main
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to Amazon ECR
        id: ecr-login
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to ECR
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
        run: |
          docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest .
          docker tag $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest \
            $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:${{ github.sha }}
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:${{ github.sha }}
      - name: Update Kubernetes deployment yaml
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
        run: |
          export NEW_IMAGE="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:${{ github.sha }}"
          echo "New Image: $NEW_IMAGE"
          sed -i "s|image:.*|image: $NEW_IMAGE|" ./k8s/deployment.yaml
      - name: Push updated deployment.yaml
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          git add ./k8s/deployment.yaml
          git commit -m "Update image to ${{ github.sha }}"
          git push
2. Verify
- Push a commit to GitHub; if a new image appears in ECR, it worked!
Setting up EKS with ArgoCD (CD)
ArgoCD integrates with GitHub: it reads the Helm deployment scripts and performs the automatic deployment (Continuous Deployment). A GitHub access token is required beforehand.
1. Create the ArgoCD namespace in Kubernetes
$ kubectl create namespace argocd
2. Install ArgoCD into the cluster
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
3. Verify
kubectl get pods -n argocd
# if a pod is Pending, inspect the cause
kubectl describe pod {pod name} -n argocd
# port-forward ArgoCD 443 -> localhost 8080
kubectl port-forward svc/argocd-server -n argocd 8080:443
# fetch the initial ArgoCD admin password (prints the password)
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
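Optionally, the same password can be used to log in with the argocd CLI (assumes the port-forward above is running and the CLI is installed):

```shell
# Log in to ArgoCD through the local port-forward
PASS=$(kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath="{.data.password}" | base64 --decode)
argocd login localhost:8080 --username admin --password "$PASS" --insecure
```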
4. Register the GitHub repository in ArgoCD (to watch commits, it needs access rights)
- In the ArgoCD UI, go to Settings - Repositories
- A GitHub token is required (enter the Username and the token as the secret)
- If the repository shows up in the connection list, it succeeded


5. Register an Application in ArgoCD (the target that Continuous Deployment will act on)
- Click Applications - + New App, fill in the values below, then press Create.
| Field | Value |
|---|---|
| Application Name | spring-server (any name) |
| Project | default |
| Sync Policy | Manual (deploy by hand) or Auto (automatic deploy) |
| Prune Propagation Policy | background |
| Repository URL | select the GitHub repository connected above |
| Revision (branch) | main |
| Path (k8s script folder) | k8s |
| Cluster | https://kubernetes.default.svc (the in-cluster default address) |
| Namespace | default, or any namespace you want (where the target app will be deployed) |

6. Change the argocd-server Service type from ClusterIP to LoadBalancer so ArgoCD is reachable externally.
# change the type
$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
# verify
$ kubectl get svc -n argocd
*By default, unless configured otherwise, ArgoCD polls the deployment manifests on GitHub every 3 minutes and synchronizes any changes.
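If you chose Manual sync above but later want fully automatic deployment, the argocd CLI can switch the policy (the app name follows the article's example):

```shell
# Switch the Application to automated sync with pruning and self-healing
argocd app set spring-server --sync-policy automated --auto-prune --self-heal
```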
📚 Helm setup (managing the YAML files)
- Prerequisites
- Create a GitHub repository for Helm (e.g., test-infra)
- Install Helm
# install (macOS)
$ brew install helm
# verify the installation
$ helm version
Initialize Helm
$ helm create {app-name}
Helm chart folder structure (for reference)
| File | Purpose |
|---|---|
| Chart.yaml | Chart name, version, and other metadata |
| values.yaml | User-configured values (image, env, etc.) |
| charts/ | Dependency charts folder |
| templates/deployment.yaml | Deployment resource (runs the containers) |
| templates/service.yaml | Service resource (network access) |
| templates/ingress.yaml | Ingress resource (for domain routing) |
| templates/hpa.yaml | Horizontal Pod Autoscaler resource (autoscaling) |
| templates/serviceaccount.yaml | ServiceAccount resource (permissions); needed for AWS IAM integration |
| templates/_helpers.tpl | Shared template helpers (labels, names, etc.) |
| templates/NOTES.txt | Post-install notes; safe to ignore |
| templates/tests/test-connection.yaml | Helm test resource (for helm test) |
Writing the basic Helm templates
# 📁 templates/deployment.yaml
apiVersion: apps/v1   ### API version
kind: Deployment      ### resource type
### name used to identify the resource with kubectl
metadata:
  name: spring-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: spring-container
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: "{{ $value }}"
            {{- end }}
          imagePullPolicy: Always
# 📁 templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-app
spec:
  selector:
    app: spring-app
  type: {{ .Values.service.type }}
  ports:
    - protocol: TCP
      port: {{ .Values.service.port }}
      targetPort: 8080
# 📁 Chart.yaml
apiVersion: v2
name: spring-app
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
# 📁 values.yaml
# Default values for spring-app.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### image registry settings
image:
  repository: 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com/eks-test-ecr
  tag: latest
### number of Pods to deploy
replicaCount: 1
### service type settings
service:
  type: LoadBalancer
  port: 80
### container environment variables
env:
  SPRING_PROFILES_ACTIVE: "prod"
  AWS_REGION: "ap-southeast-1"
  TEST_VALUE: "100"
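Before wiring this into CI, the chart can be rendered locally to confirm the values are substituted correctly (assumes the chart lives in ./spring-app):

```shell
# Render the chart without installing it
helm template spring-app ./spring-app
# Simulate the tag bump that CI will later perform via sed
helm template spring-app ./spring-app --set image.tag=abc1234 | grep "image:"
```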
Earlier, the ArgoCD Application was connected to the application's GitHub repository; now change it to point at the Helm repository instead.
Then configure GitHub Actions to commit to the Helm repository via a sed command:
- when pushing, the image is tagged with the commit hash, and
- the tag in the Helm deployment values is updated to the same commit hash.
name: Deploy Spring App to ECR
on:
  push:
    branches:
      - main
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to Amazon ECR
        id: ecr-login
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to ECR
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
        run: |
          COMMIT_HASH=$(echo $GITHUB_SHA | cut -c1-7)
          docker build --no-cache -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest .
          docker tag $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest \
            $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:$COMMIT_HASH
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:latest
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/eks-test-ecr:$COMMIT_HASH
      - name: Update Helm values.yaml in Helm Repo
        env:
          GH_HELM_REPO_PAT: ${{ secrets.GH_HELM_REPO_PAT }}
        run: |
          COMMIT_HASH=$(echo $GITHUB_SHA | cut -c1-7)
          git clone https://x-access-token:${GH_HELM_REPO_PAT}@github.com/MG8-Project/eks-test-infra.git
          cd eks-test-infra/spring-app
          sed -i "s/tag: .*/tag: $COMMIT_HASH/" values.yaml
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git commit -am "Update image tag to $COMMIT_HASH"
          git push https://x-access-token:${GH_HELM_REPO_PAT}@github.com/MG8-Project/eks-test-infra.git HEAD:main
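The tag-bumping sed line is easy to sanity-check locally before trusting it in CI (GNU sed assumed; the file path and sample hash are throwaway values):

```shell
# Reproduce the workflow's tag update against a throwaway values.yaml
mkdir -p /tmp/helm-demo
cat > /tmp/helm-demo/values.yaml <<'EOF'
image:
  repository: 211125374993.dkr.ecr.ap-southeast-1.amazonaws.com/eks-test-ecr
  tag: latest
EOF
# mimic cutting GITHUB_SHA down to 7 characters
COMMIT_HASH=$(echo "0123456789abcdef0123456789abcdef01234567" | cut -c1-7)
sed -i "s/tag: .*/tag: $COMMIT_HASH/" /tmp/helm-demo/values.yaml
grep "tag:" /tmp/helm-demo/values.yaml   # -> tag: 0123456
```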
Setup complete!
Let's recap the structure built so far.
1. A code commit lands in the application repository.
2. GitHub Actions builds the Docker image and stores it in the ECR image registry.
3. GitHub Actions then pushes a commit to the Helm repository via the sed command.
4. ArgoCD, which watches the Helm repository, picks up the new commit, reads the deployment scripts, and deploys the registered application.
Below is how to attach a custom domain through Helm's Ingress configuration.
Do it if you need it; since most setups probably will, it is included in this post.
📚 Helm - Ingress (domain & HTTPS setup)
An Ingress is a set of rules that defines how external requests are routed to services inside the cluster; it provides traffic load balancing, SSL/TLS termination, and domain-based virtual hosting.
An Ingress Controller is the program running in the cluster that watches those rules and actually handles the requests. In short: the Ingress is the traffic-handling rule set, and the Ingress Controller is the program that performs the routing.
1. Install eksctl
- There are several Ingress Controller implementations (AWS, NGINX, etc.); to use an AWS load balancer, we use the AWS Load Balancer Controller.
$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl
2. Create the IAM role
- AmazonEKSLoadBalancerControllerRole (copy in the permissions from the IAM policy defined above)
3. Create the Service Account (IRSA, IAM Roles for Service Accounts)
*What is a Service Account?
The identity a Pod uses when calling the API server; by default every Pod uses the default ServiceAccount. To reach external resources (AWS), the Pod needs appropriate permissions, which is why we bind an IAM role to a dedicated ServiceAccount.
# --name sets the service account name
eksctl create iamserviceaccount \
  --cluster eks-test-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-role-arn arn:aws:iam::<ACCOUNT_ID>:role/AmazonEKSLoadBalancerControllerRole \
  --approve
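Under the hood, the eksctl command above boils down to a ServiceAccount annotated with the IAM role ARN; roughly this object (a sketch; the account ID is a placeholder):

```yaml
# What IRSA ultimately relies on: the role-arn annotation on the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/AmazonEKSLoadBalancerControllerRole
```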
4. Install aws-load-balancer-controller via Helm
You must pass the cluster name, region, vpcId, and the service account created above.
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eks-test-cluster \
  --set region=ap-southeast-1 \
  --set vpcId=vpc-0fa9b98d41c276bdf \
  --set serviceAccount.name=aws-load-balancer-controller
### if it already exists, upgrade in place and reuse the service account
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eks-test-cluster \
  --set region=ap-southeast-1 \
  --set vpcId=vpc-0fa9b98d41c276bdf \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set serviceAccount.create=false
### verify -> check that the serviceAccount you configured is printed
kubectl get pod -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller -o jsonpath="{.items[0].spec.serviceAccountName}"
### after modifying and re-applying (restart aws-load-balancer-controller)
kubectl rollout restart deployment aws-load-balancer-controller -n kube-system
### check the logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller
5. Write the application Ingress in the Helm chart
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-app-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.ingress.certificateArn }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-path: /health  # load balancer health check
spec:
  ingressClassName: alb
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spring-app
                port:
                  number: 80
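The template above reads .Values.ingress.host and .Values.ingress.certificateArn, which the earlier values.yaml does not define yet; add something like the following to values.yaml (both values are placeholders for your own domain and ACM certificate):

```yaml
### Ingress settings consumed by templates/ingress.yaml (placeholder values)
ingress:
  host: test-app.example.com
  certificateArn: arn:aws:acm:ap-southeast-1:211125374993:certificate/your-cert-id
```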
6. Ingress for ArgoCD
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:211125374993:certificate/9382fcc5-245e-46be-a1ec-c36189171073  # ACM certificate ARN
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS":443}]'
    alb.ingress.kubernetes.io/force-ssl-redirect: 'true'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-path: /login
    alb.ingress.kubernetes.io/success-codes: '200-399'
spec:
  ingressClassName: alb
  rules:
    - host: test-argo.luckypanda.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
7. Apply
### apply the manifests
$ kubectl apply -f {file-path}
### restart the server
$ kubectl rollout restart deployment {server-name} -n {namespace}
Really done!
About an error encountered along the way..
(Err) ArgoCD - fixing the err_too_many_redirects error
- When you open ArgoCD over HTTPS in a browser, a too_many_redirects error can occur.
- Cause
- The ALB's redirect and ArgoCD's own redirect overlap:
- HTTPS → ALB → HTTP → ArgoCD → redirect back to HTTPS → ALB → …
- → an infinite redirect loop
- Fix
- Add insecure: true to the ConfigMap
- so that ArgoCD itself no longer redirects to HTTPS
- Edit the ConfigMap
### edit with vi
kubectl edit configmap argocd-cmd-params-cm -n argocd
### add the following
data:
  server.insecure: "true"
# restart the ArgoCD server to apply
kubectl rollout restart deployment argocd-server -n argocd