# Docker Swarm
## 1. Compute Modules
### Install OS / headless installation
1. Flash the OS image to an SD card / eMMC.
2. Open the boot partition.
3. In `/boot/extlinux/extlinux.conf`, change the DTB entry ending in `-cm4.dtb` to the one ending in `-model-a.dtb` (see the sketch after this list).
4. Insert the SD card / eMMC into the preferred carrier board and boot.
5. The first boot takes a while because the user partition is resized to fill the whole storage.
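> A minimal sketch of step 3, assuming the image ships a single `extlinux.conf` whose DTB entry only differs in its suffix (back it up first):
```
sudo cp /boot/extlinux/extlinux.conf /boot/extlinux/extlinux.conf.bak
# Swap the CM4 device tree for the Model A one
sudo sed -i 's/-cm4\.dtb/-model-a.dtb/' /boot/extlinux/extlinux.conf
```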
> SSH in as `root` and complete the setup:
```
sudo apt update && sudo apt upgrade
# Remove any conflicting container packages first
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove -y $pkg; done
sudo apt install -y docker-compose docker git vim screenfetch net-tools wget curl
sudo apt autoremove
```
#### Edit
> sudo vim /etc/sudoers.d/pleb
```
pleb ALL=(ALL:ALL) NOPASSWD: ALL
```
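> Optionally check the drop-in's syntax before closing the session, since a broken sudoers file can lock you out:
```
sudo visudo -cf /etc/sudoers.d/pleb
```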
### Set Static IP
#### Create/Edit
> sudo vim /etc/systemd/network/end0.network
```
[Match]
Name=end0
[Network]
Address=192.168.1.21/24
Gateway=192.168.1.254
# OpenDNS
DNS=208.67.222.222
DNS=208.67.220.220
```
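> Apply the configuration (a sketch, assuming systemd-networkd manages the interface on this image):
```
sudo systemctl enable --now systemd-networkd
sudo networkctl reload
networkctl status end0
```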
#### Edit
> sudo vim /etc/hosts
```
# Host addresses
127.0.0.1 localhost
127.0.1.1 shell1
# Nodes
192.168.1.186 shell1 shell1.local
192.168.1.243 shell2 shell2.local
192.168.1.194 shell3 shell3.local
192.168.1.222 shell4 shell4.local
```
#### Reboot
> On the client machine:
```
ssh-keygen -t ed25519
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell2
ssh pleb@shell2
```
#### Edit
> sudo vim /etc/ssh/sshd_config
```
PasswordAuthentication no
```
#### Restart service
```
sudo systemctl restart sshd
```
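> A quick check from the client: force password authentication, which should now be rejected:
```
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no pleb@shell2
```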
---
```
sudo systemctl enable iptables
sudo systemctl start iptables
sudo iptables -I INPUT -p tcp -m tcp --dport 2376 -j ACCEPT # Docker daemon TLS
sudo iptables -I INPUT -p tcp -m tcp --dport 2377 -j ACCEPT # swarm cluster management
sudo iptables -I INPUT -p tcp -m tcp --dport 7946 -j ACCEPT # node communication (TCP)
sudo iptables -I INPUT -p udp -m udp --dport 7946 -j ACCEPT # node communication (UDP)
sudo iptables -I INPUT -p udp -m udp --dport 4789 -j ACCEPT # VXLAN overlay traffic
sudo iptables -I INPUT -p 50 -j ACCEPT # allows ipsec when secure overlay is enabled
```
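> Rules inserted this way are lost on reboot unless saved; a sketch, assuming the netfilter-persistent layout behind the enabled `iptables` service:
```
sudo iptables-save | sudo tee /etc/iptables/rules.v4
```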
#### Restart...
---
## 2. Docker Swarm
### Install docker...
```
sudo apt install -y docker docker-compose git
sudo usermod -aG docker pleb
sudo systemctl enable docker.service
sudo systemctl start docker.service
# Convenience only: a world-writable socket is insecure; logging out and back in after usermod is the cleaner fix
sudo chmod 666 /var/run/docker.sock
```
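> Sanity check before forming the swarm:
```
docker --version
docker run --rm hello-world
```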
---
#### On master
```
docker swarm init --advertise-addr 192.168.1.186
```
> *Swarm initialized: current node (3kdxixaa86m8pvag6jn0b70ut) is now a manager*.
#### On nodes
```
docker swarm join --token SWMTKN-1-2px1bindhl41x9h6l4ve7x15iwjryr0uf3ekmu7hz4bezjewwh-ae9vv4657zcki160s71vjn75z 192.168.1.186:2377
```
> *To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions*.
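> If the join token is lost, it can be printed again on the manager:
```
docker swarm join-token worker
docker swarm join-token manager
```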
> If the default ingress subnet conflicts with your LAN, recreate the ingress network in a different range:
```
docker network rm ingress
# Recreate in a range that does not overlap the physical 192.168.1.0/24 LAN (10.11.0.0/16 is a placeholder)
docker network create --driver overlay --ingress --subnet 10.11.0.0/16 --gateway 10.11.0.1 ingress
```
---
```
docker node ls
docker network ls
```
---
## 3. Keepalived
> *Load balancer: install on all nodes*
```
sudo apt-get -y install keepalived
```
### On master node (shell1)
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived

global_defs {
    notification_email {
        jme69@pm.me
    }
    notification_email_from admin@delmar.bzh
    smtp_server pro1.mail.ovh.net
    smtp_connect_timeout 30
    router_id docker_ingress
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface end0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass transatlantique
    }
    virtual_ipaddress {
        192.168.1.30
    }
}
```
### On Node2
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived

global_defs {
    notification_email {
        jme69@pm.me
    }
    notification_email_from admin@delmar.bzh
    smtp_server pro1.mail.ovh.net
    smtp_connect_timeout 30
    router_id docker_ingress
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface end0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass transatlantique
    }
    virtual_ipaddress {
        192.168.1.30
    }
}
```
### On Node3
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived

global_defs {
    notification_email {
        jme69@pm.me
    }
    notification_email_from admin@delmar.bzh
    smtp_server pro1.mail.ovh.net
    smtp_connect_timeout 30
    router_id docker_ingress
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface end0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass transatlantique
    }
    virtual_ipaddress {
        192.168.1.30
    }
}
```
### On Node4
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived

global_defs {
    notification_email {
        jme69@pm.me
    }
    notification_email_from admin@delmar.bzh
    smtp_server pro1.mail.ovh.net
    smtp_connect_timeout 30
    router_id docker_ingress
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface end0
    virtual_router_id 51
    priority 70
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass transatlantique
    }
    virtual_ipaddress {
        192.168.1.30
    }
}
```
#### Start/Enable Service
```
sudo systemctl enable keepalived
sudo systemctl start keepalived
ip a show end0
```
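> To verify failover, stop keepalived on the master and watch the virtual IP move to the highest-priority backup, then restore it:
```
# On shell1
sudo systemctl stop keepalived
# On shell2: 192.168.1.30 should now appear on end0
ip a show end0
# On shell1, once done
sudo systemctl start keepalived
```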
---
## 4. GlusterFS
> *Distributed storage: install on all nodes*
```
sudo apt-get -y install glusterfs-server
sudo systemctl enable glusterd
sudo systemctl start glusterd
```
> sudo fdisk -l
```
# Format your disk (skip the format and mount steps if you use the internal storage)
# Assumes a single partition /dev/sda1 was created with fdisk beforehand
sudo mkfs.xfs -f /dev/sda1
# Create the brick mount point
sudo mkdir -p /data/glusterfs/volume/brick
# Add a line to /etc/fstab to auto-mount the disk on boot
echo "/dev/sda1 /data/glusterfs/volume/brick xfs defaults 0 0" | sudo tee -a /etc/fstab
# Optionally create a brick folder inside the mount
# sudo mkdir -p /data/glusterfs/volume/brick/brick
# Mount
sudo mount -a
sudo systemctl daemon-reload
sudo gluster peer probe shell2
sudo gluster peer probe shell3
sudo gluster peer probe shell4
```
> Check
```
df -h /data/glusterfs/volume/brick
sudo gluster pool list
sudo gluster peer status
```
> For 2 nodes:
```
sudo gluster volume create dockervolume replica 2 transport tcp shell1:/data/glusterfs/volume/brick shell2:/data/glusterfs/volume/brick force
```
> For 3 nodes:
```
sudo gluster volume create dockervolume disperse 3 redundancy 1 shell1:/data/glusterfs/volume/brick shell2:/data/glusterfs/volume/brick shell3:/data/glusterfs/volume/brick force
```
#### Start Gluster Volume
```
sudo gluster volume info dockervolume
sudo gluster volume start dockervolume
```
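> Once started, brick and process health can be checked with:
```
sudo gluster volume status dockervolume
```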
> On all nodes:
```
sudo mkdir /mnt/data
echo "localhost:/dockervolume /mnt/data glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a
sudo systemctl daemon-reload
```
> Check
> > Persistent shared storage will be `/mnt/data`
```
df -h /mnt/data
# Restrict client access to localhost and the cluster nodes
sudo gluster volume set dockervolume auth.allow 127.0.0.1,192.168.1.186,192.168.1.243,192.168.1.194,192.168.1.222
```
> When adding a new brick:
```
sudo gluster volume add-brick <volume> replica <count> <server>:<brick-path>
```
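> For example, growing a replica-2 volume to replica 3 with a brick on shell3 (illustrative values, matching the 2-node layout above):
```
sudo gluster volume add-brick dockervolume replica 3 shell3:/data/glusterfs/volume/brick force
```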
> Optional: use a Docker volume plugin instead of the fstab mount:
```
docker plugin install --alias glusterfs mikebarkmin/glusterfs SERVERS=shell1,shell2,shell3,shell4 VOLNAME=dockervolume
```
## 5. Portainer
> Create the container directories:
```
sudo mkdir -p /var/lib/docker/volumes
# Create the full path used as the bind-mount source below
sudo mkdir -p /mnt/data/portainer/portainer_data
```
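> The agent stack file can be fetched from Portainer first (path shown for CE 2.x; adjust to the current release):
```
curl -L https://downloads.portainer.io/ce2-19/portainer-agent-stack.yml -o portainer-agent-stack.yml
```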
> Edit
```
sudo vim portainer-agent-stack.yml
```
> At the bottom of the file, remove:
```
volumes:
portainer_data:
```
> And in section services -> portainer -> volumes, change it to:
```
volumes:
  - type: bind
    source: /mnt/data/portainer/portainer_data
    target: /data
```
> Deploy
```
docker stack deploy -c portainer-agent-stack.yml portainer
```
#### Upgrade
```
docker service ls
docker pull portainer/portainer-ce:latest
docker service update --image portainer/portainer-ce:latest --publish-add 9443:9443 --force portainer_portainer
docker pull portainer/agent:latest
docker service update --image portainer/agent:latest --force portainer_agent
```