# Installing a Nextcloud container on GCP GKE

```
2024-05-23 + initial deployment demo /A
2024-09-15 * review and refresh doc /A this will install a Nextcloud instance with a Cloud SQL DB backend, without SSL and with no storage configured /A
2024-09-16 * review and refresh /A new organization in GCP and fresh deployment
```

APIs to enable (the last two entries are the console errors pointing at the missing APIs):

- Cloud SQL
- Cloud SQL - network connect
- Artifact Registry: `[artifactregistry.googleapis.com] not enabled on project [metal-sky-xx]`
- Kubernetes Engine: `message=Kubernetes Engine API has not been used in project metal-sky-xx` https://console.cloud.google.com/apis/library/container.googleapis.com?project=metal-sky-xx

## Create DB and DB user in Cloud SQL

```
# takes 10-15 minutes
public IP address:   34.88.xx.xx
internal IP address: 172.21.xx.xx
DB: hub2_2dz_fi_nextcloud
u:  hub2_2dz_fi_nextcloud_nc
p:  (StrongPass)
```

Preparations (check where you are running commands from)

```shell
uname -a
hostname
gcloud init
gcloud auth list
gcloud auth login (GCP account)
gcloud config set account (GCP account)
gcloud projects list
# reauthenticate
gcloud config list project
gcloud config set project spry-analyzer-xxxxxx
gcloud config set accessibility/screen_reader false
gcloud config set compute/region europe-north1
gcloud config set compute/zone europe-north1-c
```

Make a local tmp dir and clone the repo

```bash
cd
mkdir -p delme/GCP.2024-09-16.1155
cd delme/GCP.2024-09-16.1155
git clone https://github.com/nextcloud/docker.git
```

Copy the templates

```bash
cd docker
cp .examples/dockerfiles/full/apache/Dockerfile .
cp .examples/dockerfiles/full/apache/supervisord.conf .
cp .examples/docker-compose/insecure/mariadb/apache/db.env .
cp .examples/docker-compose/insecure/mariadb/apache/docker-compose.yml .
```

Provide the DB credentials created earlier and configure the settings (which internal port is published to which external port)

```bash
vi db.env
vi docker-compose.yml
```

Check the port mapping (should be the defaults)

```yaml
app:
  ports:
    - 127.0.0.1:8080:80
```

Create a repository in Artifact Registry and check it

```bash
gcloud auth configure-docker europe-north1-docker.pkg.dev
gcloud artifacts repositories create nc-docker-local \
  --repository-format=docker \
  --mode=standard-repository \
  --location=europe-north1
gcloud artifacts repositories list
```

Get the URL of the repository, it will be needed later

```bash
gcloud artifacts repositories describe nc-docker-local --location=europe-north1
```

```
Registry URL: europe-north1-docker.pkg.dev/spry-analyzer-xx/nc-docker-local
```

Install docker on Debian, then check/grant local permissions

```bash
# https://docs.docker.com/engine/install/debian/#install-using-the-repository
cat /etc/group | grep docker
sudo groupadd docker
sudo usermod -aG docker (your username)
logout
# login again
```

Install docker on macOS

```zsh
brew install --cask docker
```

Build the application image, tagging it with the repo URL extracted earlier (it will be pushed to the repository in a later step), then check

```bash
cd
cd delme/GCP.2024-09-16.1155/docker   # the directory containing the Dockerfile copied above
# note the dot at the end
docker build -t (! repo URL here without https !)/nc-docker-app:v1 .
docker images
```

Grant permissions. Get the project number (not the project name, not the project ID) and substitute it below. Grant the default compute service account permission to read from (and write to) the "nc-docker-local" repository.
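Optionally, the project number and the service account member string can be derived programmatically instead of being copied by hand (a minimal sketch; `spry-analyzer-xxxxxx` is the placeholder project ID used throughout this document, and the default compute service account is assumed):

```bash
# look up the project number for the project and build the default compute
# service account member string from it
PROJECT_ID=spry-analyzer-xxxxxx
PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}" --format='value(projectNumber)')
SA_MEMBER="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
echo "${SA_MEMBER}"
```

Otherwise, read the project number from the `gcloud projects list` output below.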
```bash
gcloud projects list
```

```
PROJECT_ID            NAME       PROJECT_NUMBER
spry-analyzer-xxxxxx  infra-pvt  853xxxxxxx34
```

```bash
gcloud artifacts repositories add-iam-policy-binding nc-docker-local \
  --location=europe-north1 \
  --member=serviceAccount:853xxxxxxx34-compute@developer.gserviceaccount.com \
  --role="roles/artifactregistry.reader"

gcloud artifacts repositories add-iam-policy-binding nc-docker-local \
  --location=europe-north1 \
  --member=serviceAccount:853xxxxxxx34-compute@developer.gserviceaccount.com \
  --role="roles/artifactregistry.writer"
```

If the deployment machine is itself in GCP, open the necessary firewall ports (basically from everywhere to the deployment machine on tcp/8081):

```bash
gcloud compute --project=spry-analyzer-xxxxxx firewall-rules create \
  untrust--gcp1mx1-tcp8081 \
  --description="temporary testing internal image docker" \
  --direction=INGRESS \
  --priority=1000 \
  --network=default \
  --action=ALLOW \
  --rules=tcp:8081 \
  --source-ranges=0.0.0.0/0 \
  --destination-ranges=10.xx.0.xx/32 \
  --enable-logging
```

Run docker locally (the container will be exposed on port 8081)

```bash
tmux a   # attach to an existing session
tmux     # or start a new one
# in this example we publish internal port 80 (inside the container) on port 8081 (host machine)
docker images
docker run --rm -p 8081:80 (repo URL)/nc-docker-app:v1
# C-b c: open a new tmux window for the checks below
docker ps -a
sudo ss -ntap | grep docker
sudo ss -ntap | grep 8081
curl http://127.0.0.1:8081
curl ifconfig.io
```

Open with the workstation's local browser

```bash
open -a firefox http://(IP address from output above):8081/
```

At this point, if the local deployment is successful, we are ready to publish the image to the repo (Artifact Registry).

Push the docker image into Artifact Registry

```bash
gcloud auth configure-docker europe-north1-docker.pkg.dev
docker push (repo URL)/nc-docker-app:v1
```

List the contents of the repository

```bash
gcloud artifacts repositories list
gcloud artifacts files list \
  --location=europe-north1 \
  --project=spry-analyzer-xxxxxx \
  --repository=nc-docker-local
```

Create a GKE cluster

```shell
# for Debian
sudo apt-get install kubectl google-cloud-cli-gke-gcloud-auth-plugin
# for Mac
gcloud components install gke-gcloud-auth-plugin
gcloud components install kubectl

gcloud container clusters list
# add scale and autoscale parameters to the creation process
gcloud container clusters create \
  twodz-nc-demo \
  --machine-type=e2-micro \
  --zone=europe-north1-c
# will take some time
gcloud container clusters list
```

Get authentication credentials for the cluster (in order to manage it)

```shell
gcloud container clusters get-credentials twodz-nc-demo --zone=europe-north1-c
kubectl cluster-info
```

Deploy the application to the cluster

```shell
kubectl create deployment nc-demo-app \
  --image=europe-north1-docker.pkg.dev/spry-analyzer-xxxxxx/nc-docker-local/nc-docker-app:v1
kubectl edit deployment nc-demo-app
kubectl get deployments
kubectl scale deployment nc-demo-app --replicas=1
# kubectl autoscale deployment nc-demo-app --cpu-percent=80 --min=1 --max=3
kubectl autoscale deployment nc-demo-app --min=1 --max=1
```

Get into the pods

```bash
kubectl get pods --output=wide
kubectl get pods -o=wide
kubectl exec --stdin --tty nc-demo-app-xx-yy -- /bin/bash
```

## Publish to the Internet (create a load balancer)

```bash
kubectl expose deployment \
  nc-demo-app \
  --name=nc-demo-app-service \
  --type=LoadBalancer \
  --port 80 \
  --target-port 80
# wait for the external IP to be assigned (out of the '<pending>' state)
watch -n1 kubectl get services --output=wide
kubectl get services --output=wide
```

When the external IP is assigned, open it with the local browser

```bash
open -a firefox http://(external load balancer's IP address)
```
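As an alternative to watching `kubectl get services`, the external IP can be polled with a JSONPath query and used once it leaves the `<pending>` state (a sketch, assuming the `nc-demo-app-service` name created above):

```bash
# poll the LoadBalancer service until an external IP shows up, then print the URL
SVC=nc-demo-app-service
until EXTERNAL_IP=$(kubectl get service "${SVC}" \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "${EXTERNAL_IP}" ]; do
  echo "waiting for external IP..."
  sleep 5
done
echo "Nextcloud should be reachable at http://${EXTERNAL_IP}/"
```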
## Cleaning up

```shell
# takes some time...
kubectl delete deployment nc-demo-app
gcloud container clusters list
# takes some time...
gcloud container clusters delete twodz-nc-demo --zone=europe-north1-c
# remove the local image (substitute your own image ID)
docker rmi -f 0fa923cc879e
```

Issue:

```
Memory limit of 512 MiB exceeded with 512 MiB used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits
```

Solution: increase the memory (RAM) limit. Create a volume in Cloud Storage (bucket) for storage.

## Troubleshooting

```bash
kubectl get pods --output=wide
kubectl exec --stdin --tty nc-demo-app-54dc479f5-crvhx -- /bin/bash
apt update
apt install net-tools
netstat -ntap
```

On the welcome page, provide the **internal** IP address of the Cloud SQL instance created earlier.
login: admin
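Before filling in the welcome page, it can help to verify from inside the pod that the Cloud SQL instance is reachable on its internal IP (a sketch only; the IP, DB name and DB user are the placeholders created earlier, a Cloud SQL for MySQL/MariaDB-compatible instance on port 3306 is assumed, and the Debian package names assume the image's Debian base):

```bash
# run inside the pod (kubectl exec ... -- /bin/bash): test TCP reachability of the
# Cloud SQL port, then try a real login with the DB user created earlier
apt update && apt install -y netcat-openbsd default-mysql-client
nc -vz 172.21.xx.xx 3306
mysql -h 172.21.xx.xx -u hub2_2dz_fi_nextcloud_nc -p hub2_2dz_fi_nextcloud
```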