#chaos
### Prerequisites
- Kubernetes 1.15 or later
- 20 GB of Persistent Volume (PV) storage
- Helm 3 or kubectl
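A quick way to sanity-check these against the target cluster (a minimal sketch; which StorageClass backs the 20 GB PV depends on the environment):
```shell-session
$ kubectl version --short    # server version should be v1.15 or later
$ kubectl get storageclass   # some provisioner must be able to back a 20GB PV
$ helm version --short       # v3.x if installing via Helm
```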
### Install
[Litmus Chaos Control Plane | Litmus Docs](https://litmusdocs-beta.netlify.app/docs/litmus-install)
[litmus-2-0-0-beta 2.0.20-Beta7 · helm/litmuschaos](https://artifacthub.io/packages/helm/litmuschaos/litmus-2-0-0-beta)
```shell-session
~/s/g/a/m/d/kubernetes [litmus]× » helm install litmus-portal litmuschaos/litmus-2-0-0-beta
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable" via:
WARNING: helm repo add "stable" "https://charts.helm.sh/stable" --force-update
Error: failed to download "litmuschaos/litmus-2-0-0-beta" (hint: running `helm repo update` may help)
~/s/g/a/m/d/kubernetes [litmus]× » helm repo add "stable" "https://charts.helm.sh/stable" --force-update
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable" via:
WARNING: helm repo add "stable" "https://charts.helm.sh/stable" --force-update
"stable" has been added to your repositories
~/s/g/a/m/d/kubernetes [litmus]× » helm install litmus-portal litmuschaos/litmus-2-0-0-beta
Error: failed to download "litmuschaos/litmus-2-0-0-beta" (hint: running `helm repo update` may help)
~/s/g/a/m/d/kubernetes [litmus]× » helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "litmuschaos" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
~/s/g/a/m/d/kubernetes [litmus]× » helm install --devel litmus-portal litmuschaos/litmus-2-0-0-beta
NAME: litmus-portal
LAST DEPLOYED: Wed Jun 9 16:45:16 2021
NAMESPACE: sock-shop
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing litmus-2-0-0-beta 😀
Your release is named litmus-portal and it's installed to namespace: sock-shop.
Visit https://litmusdocs-beta.netlify.app/docs/introduction to find more info.
```
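Note that the release above landed in the `sock-shop` namespace only because that was the current context's default namespace. Before redoing the install, it presumably has to be removed first; a minimal cleanup sketch:
```shell-session
$ helm uninstall litmus-portal -n sock-shop
```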
Create a dedicated namespace and redo the install.
```shell-session
$ kubectl create ns litmus
$ git submodule add https://github.com/litmuschaos/litmus-helm
$ cd litmus-helm
$ helm install litmuschaos --namespace litmus ./charts/litmus-2-0-0-beta/
NAME: litmuschaos
LAST DEPLOYED: Wed Jun 9 17:33:42 2021
NAMESPACE: litmus
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing litmus-2-0-0-beta 😀
Your release is named litmuschaos and it's installed to namespace: litmus.
Visit https://litmusdocs-beta.netlify.app/docs/introduction to find more info.
```
Check whether the installation succeeded.
```shell-session
$ kubectl get pods -n litmus
NAME                                                      READY   STATUS    RESTARTS   AGE
litmuschaos-litmus-2-0-0-beta-frontend-6bdfcbf949-l9pp2   1/1     Running   0          23s
litmuschaos-litmus-2-0-0-beta-mongo-0                     1/1     Running   0          23s
litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68     1/2     Error     0          23s
```
The server pod ended up in Error.
```shell-session
$ kubectl describe pod -n litmus litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68 | tail -n 14
Events:
  Type    Reason     Age                    From               Message
  ----    ------     ----                   ----               -------
  Normal  Scheduled  3m3s                   default-scheduler  Successfully assigned litmus/litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68 to gke-microservices-experi-control-pool-7934097e-h7cp
  Normal  Pulled     3m                     kubelet            Successfully pulled image "litmuschaos/litmusportal-server:2.0.0-Beta7" in 2.73593102s
  Normal  Pulled     2m55s                  kubelet            Successfully pulled image "litmuschaos/litmusportal-auth-server:2.0.0-Beta7" in 4.827971315s
  Normal  Pulling    2m41s (x2 over 3m)     kubelet            Pulling image "litmuschaos/litmusportal-auth-server:2.0.0-Beta7"
  Normal  Created    2m39s (x2 over 2m55s)  kubelet            Created container auth-server
  Normal  Pulled     2m39s                  kubelet            Successfully pulled image "litmuschaos/litmusportal-auth-server:2.0.0-Beta7" in 2.4974486s
  Normal  Pulling    2m38s (x2 over 3m3s)   kubelet            Pulling image "litmuschaos/litmusportal-server:2.0.0-Beta7"
  Normal  Started    2m38s (x2 over 2m55s)  kubelet            Started container auth-server
  Normal  Pulled     2m36s                  kubelet            Successfully pulled image "litmuschaos/litmusportal-server:2.0.0-Beta7" in 2.487265738s
  Normal  Created    2m35s (x2 over 3m)     kubelet            Created container graphql-server
  Normal  Started    2m35s (x2 over 3m)     kubelet            Started container graphql-server
```
`kubectl describe` shows nothing abnormal either.
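Since `describe` gives no hint, the next step would be the logs of whichever container exited; a sketch (the pod has two containers, `auth-server` and `graphql-server`, per the events above, and `--previous` shows the output of the crashed instance):
```shell-session
$ kubectl logs -n litmus litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68 \
    -c graphql-server --previous
```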
```shell-session
$ kubectl get pods -n litmus
NAME                                                      READY   STATUS    RESTARTS   AGE
litmuschaos-litmus-2-0-0-beta-frontend-6bdfcbf949-l9pp2   1/1     Running   0          6m8s
litmuschaos-litmus-2-0-0-beta-mongo-0                     1/1     Running   0          6m8s
litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68     2/2     Running   2          6m8s
```
By the time I looked again, it had recovered to Running after a couple of restarts.
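Instead of polling `kubectl get pods`, one can block until everything is ready; a sketch:
```shell-session
$ kubectl wait pod --all -n litmus --for=condition=Ready --timeout=300s
```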
```shell-session
$ kubectl get svc -n litmus
NAME                                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE
litmuschaos-litmus-2-0-0-beta-mongo   ClusterIP   10.0.3.23     <none>        27017/TCP                       7m8s
litmusportal-frontend-service         NodePort    10.0.4.182    <none>        9091:30231/TCP                  7m8s
litmusportal-server-service           NodePort    10.0.11.217   <none>        9002:30138/TCP,9003:31756/TCP   7m8s
```
```shell-session
$ kubectl get pods -n litmus -o wide
NAME                                                      READY   STATUS    RESTARTS   AGE     IP          NODE                                                  NOMINATED NODE   READINESS GATES
litmuschaos-litmus-2-0-0-beta-frontend-6bdfcbf949-l9pp2   1/1     Running   0          9m15s   10.8.3.15   gke-microservices-experi-control-pool-7934097e-h7cp   <none>           <none>
litmuschaos-litmus-2-0-0-beta-mongo-0                     1/1     Running   0          9m15s   10.8.3.16   gke-microservices-experi-control-pool-7934097e-h7cp   <none>           <none>
litmuschaos-litmus-2-0-0-beta-server-656b758975-2qx68     2/2     Running   2          9m15s   10.8.3.14   gke-microservices-experi-control-pool-7934097e-h7cp   <none>           <none>
```
```shell-session
$ kubectl get nodes -o wide
NAME                                                  STATUS   ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-microservices-experi-control-pool-7934097e-h7cp   Ready    <none>   80m     v1.19.10-gke.1700   10.146.15.221   35.187.223.31   Container-Optimized OS from Google   5.4.89+          docker://19.3.14
gke-microservices-experi-default-pool-66a015a7-8rzc   Ready    <none>   3h14m   v1.19.10-gke.1700   10.146.15.220   34.85.99.48     Container-Optimized OS from Google   5.4.89+          docker://19.3.14
gke-microservices-experi-default-pool-66a015a7-s07i   Ready    <none>   3h18m   v1.19.10-gke.1700   10.146.15.219   34.84.191.116   Container-Optimized OS from Google   5.4.89+          docker://19.3.14
gke-microservices-experi-default-pool-66a015a7-xhqv   Ready    <none>   3h26m   v1.19.10-gke.1700   10.146.15.217   34.85.90.158    Container-Optimized OS from Google   5.4.89+          docker://19.3.14
gke-microservices-experi-default-pool-66a015a7-zhsy   Ready    <none>   3h22m   v1.19.10-gke.1700   10.146.15.218   35.189.133.67   Container-Optimized OS from Google   5.4.89+          docker://19.3.14
```
The frontend Service is a NodePort (9091:30231), so it should be reachable at the node's external IP, http://35.187.223.31:30231/, but it is not.
```shell-session
$ kubectl exec -it deploy/litmuschaos-litmus-2-0-0-beta-frontend -n litmus -- /bin/sh
/ $ ps axufww
PID USER TIME COMMAND
1 nginx 0:00 nginx: master process nginx -g daemon off;
22 nginx 0:00 nginx: worker process
23 nginx 0:00 nginx: worker process
30 nginx 0:00 /bin/sh
36 nginx 0:00 ps axufww
/ $ curl localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
/ $ ss -tln
/bin/sh: ss: not found
/ $ netstat -tan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
/ $ curl localhost:8080
<!doctype html><html><head><meta charset="utf-8"/> ... <title>Litmus Portal</title></head><body><noscript>You need to enable JavaScript to run this app.</noscript> ... </body></html>   (rest of the React bundle HTML omitted)
```
So the frontend itself serves fine on port 8080 inside the pod; the request is being blocked before it reaches the cluster.
Most likely GKE's firewall is blocking the NodePort. [Service | Kubernetes](https://kubernetes.io/ja/docs/concepts/services-networking/service/)
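If the firewall is indeed the culprit, opening the NodePort should confirm it; a sketch (the rule name is arbitrary, and in a real setup you would scope the rule to the node pool with `--target-tags`):
```shell-session
$ gcloud compute firewall-rules create litmus-portal-nodeport --allow tcp:30231
```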
```shell-session
helm show values -n litmus litmuschaos/litmus > helm-conf/values.yaml
```
[Install LitmusPortal with Ingress | Litmus Docs](https://litmusdocs-beta.netlify.app/docs/litmus-with-ingress)
Use an nginx Ingress controller instead of NodePort.
→ That also looks like too much hassle after all, so instead change the portal frontend Service from NodePort to LoadBalancer through the Helm values.
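The idea, sketched with `--set` instead of an edited values file (the exact values path is an assumption about this chart's layout; check the dumped values.yaml for the real key):
```shell-session
$ helm upgrade litmuschaos ./charts/litmus-2-0-0-beta/ -n litmus \
    --set portal.frontend.service.type=LoadBalancer   # values key is assumed
```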
```shell-session
$ helm upgrade litmuschaos -n litmus litmuschaos/litmus -f litmus-conf/values.yaml
Release "litmuschaos" has been upgraded. Happy Helming!
NAME: litmuschaos
LAST DEPLOYED: Wed Jun 9 21:19:52 2021
NAMESPACE: litmus
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
## Additional Steps (Verification)
----------------------------------
You can run the following commands if you wish to verify if all desired components are installed successfully.
- Check if chaos api-resources are available:

  root@demo:~# kubectl api-resources | grep litmus
  chaosengines       litmuschaos.io   true   ChaosEngine
  chaosexperiments   litmuschaos.io   true   ChaosExperiment
  chaosresults       litmuschaos.io   true   ChaosResult

- Check if the litmus chaos operator deployment is running successfully

  root@demo:~# kubectl get pods -n litmus
  NAME                      READY   STATUS    RESTARTS   AGE
  litmus-7d998b6568-nnlcd   1/1     Running   0          106s
## Start Running Chaos Experiments
----------------------------------
With this, you are good to go!! Refer to the chaos experiment documentation @ https://docs.litmuschaos.io
to start executing your first experiment.
```
After this, the portal stopped coming up. Looking at the state below, the upgrade apparently targeted the wrong chart: `litmuschaos/litmus` is the standalone chaos-operator chart, not `litmus-2-0-0-beta`, so the portal components were replaced by a lone operator deployment (note the `litmus-...` pod and the `chaos-operator-metrics` Service).
```shell-session
$ kubectl get pods -n litmus -o wide
NAME                    READY   STATUS             RESTARTS   AGE     IP          NODE                                                  NOMINATED NODE   READINESS GATES
litmus-9574f67d-vhhrt   0/1     CrashLoopBackOff   5          5m30s   10.8.3.17   gke-microservices-experi-control-pool-7934097e-h7cp   <none>           <none>
$ kubectl get svc -n litmus
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
chaos-operator-metrics   ClusterIP   10.0.12.168   <none>        8383/TCP   5m26s
```
Uninstall and start over. [Litmus Chaos Control Plane Uninstall | Litmus Docs](https://litmusdocs-beta.netlify.app/docs/litmus-uninstall)
```shell-session
$ helm uninstall litmuschaos --namespace litmus
```
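Then reinstall the 2.0.0-beta chart the same way as before, from the litmus-helm submodule, so the portal deployments come back:
```shell-session
$ helm install litmuschaos --namespace litmus ./charts/litmus-2-0-0-beta/
```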
Switch to `kubectl port-forward` for external access instead.
[kubernetesに外部アクセスするときによく使うコマンド | みんなに幸あれ!](https://hakengineer.xyz/2019/08/15/post-2122/)
```shell-session
$ kubectl port-forward deployment/litmuschaos-litmus-2-0-0-beta-frontend -n litmus --address 0.0.0.0 8080:8080
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
```
The portal is now reachable from the local machine at http://localhost:8080. Log in as admin:litmus.
### Installing the Agent
[Litmus Chaos Agent Install | Litmus Docs](https://litmusdocs-beta.netlify.app/docs/agent-install/)
```shell-session
$ curl -s https://litmusctl-bucket.s3-eu-west-1.amazonaws.com/litmusctl-darwin-amd64-master.tar.gz | tar -C ~/tmp -xz
$ chmod +x ~/tmp/litmusctl
$ mv ~/tmp/litmusctl ~/bin/
$ litmusctl version
Litmusctl version: v0.1.0
```
```shell-session
$ litmusctl agent connect
🔥 Connecting LitmusChaos agent
📶 Please enter LitmusChaos details --
👉 Host URL where litmus is installed: http://localhost:8080
🤔 Username [admin]: admin
🙈 Password:
✅ Login Successful!
✨ Projects List:
1. admin's project
🔎 Select Project: 1
🔌 Installation Modes:
1. Cluster
2. Namespace
👉 Select Mode [cluster]: 2
🏃 Running prerequisites check....
🔑 role - ✅
🔑 rolebinding - ✅
🌟 Sufficient permissions. Connecting Agent
🔗 Enter the details of the agent ----
🤷 Agent Name: test-agent
📘 Agent Description:
📦 Platform List
1. AWS
2. GKE
3. Openshift
4. Rancher
5. Others
🔎 Select Platform [GKE]:
📁 Enter the namespace (new or existing) [litmus]:
🚫 Subscriber already present. Please enter a different namespace
📁 Enter the namespace (new or existing) [litmus]: sockshop
🔑 Enter service account [litmus]:
📌 Summary --------------------------
Agent Name: test-agent
Agent Description:
Platform Name: GKE
Namespace: sockshop (new)
Service Account: litmus (new)
Installation Mode: namespace
-------------------------------------
🤷 Do you want to continue with the above details? [Y/N]: y
👍 Continuing agent connection!!
Applying YAML:
http://localhost:8080/api/file/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbHVzdGVyX2lkIjoiYmE2YThkOTEtZWJiZC00ODc4LWE1MDYtYTUxMWIzMjRiNjBhIn0.Q8D-YIxMQeLbdgABUwZ_MZlKtHggevD50m3foKznMw8.yaml
❌ Failed in applying connection yaml: [Error: exit status 1]
```
The final step, applying the generated YAML, failed with nothing more than `exit status 1`.
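`litmusctl` reports nothing beyond the exit status, so fetching the generated manifest and applying it by hand should surface the underlying `kubectl` error; a sketch (substitute the token URL printed above):
```shell-session
$ curl -s http://localhost:8080/api/file/<token>.yaml | kubectl apply -f -
```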
Next, try cluster mode instead.
```shell-session
$ litmusctl agent connect
🔥 Connecting LitmusChaos agent
📶 Please enter LitmusChaos details --
👉 Host URL where litmus is installed: http://localhost:8080
🤔 Username [admin]:
🙈 Password:
✅ Login Successful!
✨ Projects List:
1. admin's project
🔎 Select Project: ❗ Invalid Project. Please select a correct one.
🔎 Select Project: 1
🔌 Installation Modes:
1. Cluster
2. Namespace
👉 Select Mode [cluster]: 1
🏃 Running prerequisites check....
🔑 clusterrole - ✅
🔑 clusterrolebinding - ✅
🌟 Sufficient permissions. Connecting Agent
🔗 Enter the details of the agent ----
🤷 Agent Name: test-agent
test-agent
🚫 Agent with the given name already exists.
📘 Connected agents list -----------
- Self-Agent
- test-agent
-------------------------------------
❗ Please enter a different name.
🤷 Agent Name: test2-agent
📘 Agent Description:
📦 Platform List
1. AWS
2. GKE
3. Openshift
4. Rancher
5. Others
🔎 Select Platform [GKE]:
📁 Enter the namespace (new or existing) [litmus]:
🚫 Subscriber already present. Please enter a different namespace
📁 Enter the namespace (new or existing) [litmus]: sockshop
🔑 Enter service account [litmus]:
📌 Summary --------------------------
Agent Name: test2-agent
Agent Description:
Platform Name: GKE
Namespace: sockshop (new)
Service Account: litmus (new)
Installation Mode: cluster
-------------------------------------
🤷 Do you want to continue with the above details? [Y/N]: y
👍 Continuing agent connection!!
Applying YAML:
http://localhost:8080/api/file/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbHVzdGVyX2lkIjoiYWQ4M2EyNWQtYjg5Zi00MjBkLTk5MmQtYTM4MmI2YmU2NzE4In0.dyJp3CCdBZLisBWUCMhmImXwzD6bf6EVe4o2hzDdj1Q.yaml
namespace/sockshop created
serviceaccount/litmus created
configmap/agent-config created
deployment.apps/subscriber created
deployment.apps/event-tracker created
service/argo-server created
service/workflow-controller-metrics created
deployment.apps/argo-server created
deployment.apps/workflow-controller created
Warning: resource customresourcedefinitions/chaosengines.litmuschaos.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/chaosengines.litmuschaos.io configured
Warning: resource customresourcedefinitions/chaosexperiments.litmuschaos.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/chaosexperiments.litmuschaos.io configured
Warning: resource customresourcedefinitions/chaosresults.litmuschaos.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/chaosresults.litmuschaos.io configured
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Warning: resource customresourcedefinitions/eventtrackerpolicies.eventtracker.litmuschaos.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/eventtrackerpolicies.eventtracker.litmuschaos.io configured
serviceaccount/litmus-admin created
Warning: resource clusterroles/litmus-admin is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/litmus-admin configured
Warning: resource clusterrolebindings/litmus-admin is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/litmus-admin configured
serviceaccount/argo-chaos created
Warning: resource clusterroles/chaos-cluster-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/chaos-cluster-role configured
Warning: resource clusterrolebindings/chaos-cluster-role-binding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/chaos-cluster-role-binding configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
Warning: resource clusterroles/subscriber-cluster-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/subscriber-cluster-role configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
Warning: resource clusterrolebindings/subscriber-cluster-role-binding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/subscriber-cluster-role-binding configured
serviceaccount/event-tracker-sa created
Warning: resource clusterroles/event-tracker-cluster-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/event-tracker-cluster-role configured
Warning: resource clusterrolebindings/event-tracker-clusterole-binding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/event-tracker-clusterole-binding configured
serviceaccount/litmus-cluster-scope created
Warning: resource clusterroles/litmus-cluster-scope is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/litmus-cluster-scope configured
Warning: resource clusterrolebindings/litmus-cluster-scope is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/litmus-cluster-scope configured
serviceaccount/argo created
serviceaccount/argo-server created
Warning: resource clusterroles/argo-aggregate-to-admin is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-admin configured
Warning: resource clusterroles/argo-aggregate-to-edit is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-edit configured
Warning: resource clusterroles/argo-aggregate-to-view is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-view configured
Warning: resource clusterroles/argo-cluster-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/argo-cluster-role configured
Warning: resource clusterroles/argo-server-cluster-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/argo-server-cluster-role configured
Warning: resource clusterrolebindings/argo-binding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/argo-binding configured
Warning: resource clusterrolebindings/argo-server-binding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/argo-server-binding configured
deployment.apps/chaos-operator-ce created
deployment.apps/chaos-exporter created
service/chaos-exporter created
Warning: resource customresourcedefinitions/clusterworkflowtemplates.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/clusterworkflowtemplates.argoproj.io configured
Warning: resource customresourcedefinitions/cronworkflows.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/cronworkflows.argoproj.io configured
Warning: resource customresourcedefinitions/workflows.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/workflows.argoproj.io configured
Warning: resource customresourcedefinitions/workflowtemplates.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/workflowtemplates.argoproj.io configured
configmap/workflow-controller-configmap created
💡 Connecting agent to Litmus Portal.
🏃 Agents running!!
🚀 Agent Connection Successful!! 🎉
👉 Litmus agents can be accessed here: http://localhost:8080/targets
```
This time it succeeded.
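As a final check, the agent components created above should be running in the new namespace; a sketch:
```shell-session
$ kubectl get pods -n sockshop
```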