This commit is contained in:
2026-01-28 16:54:06 +01:00
parent 213c2836f9
commit 493eb0ed90
73 changed files with 0 additions and 0 deletions

View File

@@ -0,0 +1,6 @@
SOQuartz BLADE Baseboard
===
**Input power :**
- DC 5V @ 3A
- 3.5 mm OD / 1.35 mm ID (IEC 60130-10 Type H)
- Barrel DC Jack connector

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,107 @@
# Baseboard Management Controller (BMC)
### bikiniBottom
```
Login : root
Password : hardiness-mystify-pretext
```
[http://192.168.1.69](http://192.168.1.69 "http://192.168.1.69")
```bash
# tpi -h
Usage: tpi [host] <options...>
Options:
-p, --power (on off status) Power management
-u, --usb (host device status) USB mode, must be used with the node option
-n, --node (1 2 3 4) USB selected node
-r, --resetsw reset switch
-U, --uart uart opt get or set
-C, --cmd uart set cmd
-F, --upgrade upgrade fw
-f, --flash todo
-h, --help usage
example:
$ tpi -p on //power on
$ tpi -p off //power off
$ tpi -u host -n 1 //USB uses host mode to connect to Node1
$ tpi --uart=get -n 1 //get node1 uart info
$ tpi --uart=set -n 1 --cmd=ls //set node1 uart cmd
$ tpi --upgrade=/mnt/sdcard/xxxx.swu //upgrade fw
$ tpi -r //reset switch
```
`vi /etc/network/interfaces`
```
# interface file auto-generated by buildroot
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
hwaddress ether 02:e4:fd:09:22:fb
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.254
pre-up /etc/network/nfs_check
wait-delay 15
hostname $(hostname)
```
`vi /etc/resolv.conf`
```
search lan # eth0
nameserver 192.168.1.254 # eth0
```
```bash
/etc/init.d/S40network restart
date -s @"$(curl -s "http://worldtimeapi.org/api/timezone/Europe/Paris" | sed -n 's/.*"unixtime":\([0-9]*\).*/\1/p')"
hwclock --systohc
```
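The `sed` expression above extracts the `unixtime` field from the API's JSON response. A minimal dry run on a sample payload (the field values below are made up for illustration):

```shell
# Sample payload mimicking worldtimeapi's JSON (hypothetical values)
json='{"abbreviation":"CET","unixtime":1706457246,"utc_offset":"+01:00"}'
# Same sed expression as used with date -s above
ts=$(printf '%s' "$json" | sed -n 's/.*"unixtime":\([0-9]*\).*/\1/p')
echo "$ts"   # → 1706457246
```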
- MAC Address : 02:e4:fd:09:22:fb
- Hosts password : transatlantique
#### Update
Check the root partition in use: run `df -h` to verify that `/mnt/sdcard` is mounted and note which root partition is active (`ubi0_5` or `ubi0_6`).
```bash
scp path/to/the/swu/file root@192.168.1.10:/tmp
```
Build and run the upgrade command: combine the firmware path with the partition information. If the active partition is `ubi0_5`, flash `ubi0_6`, and vice versa.
```bash
swupdate -i /tmp/turing_pi_____.swu -e stable,upgrade_ubi6
```
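Picking the `-e` selector can be scripted from the active partition. A hedged sketch: the `active` value would in practice be parsed from `df` output, and the complementary selector name `stable,upgrade_ubi5` is an assumption not shown in the notes above:

```shell
# Hypothetical: active root partition as seen in df output (ubi0_5 or ubi0_6)
active="ubi0_5"
# Flash the partition NOT currently in use
if [ "$active" = "ubi0_5" ]; then
  target="stable,upgrade_ubi6"
else
  target="stable,upgrade_ubi5"   # assumed selector name
fi
echo "$target"   # selector to pass to: swupdate -i <file>.swu -e "$target"
```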
#### Powering On All Nodes at Startup
By default, the Turing Pi 2 doesn't power on any nodes at startup. While this can be beneficial in certain scenarios, it might not be ideal in situations like recovering from a power outage. To alter this default behavior, a small startup script can be added. Create the file `S99zallnodeson.sh` in the `/etc/init.d` directory with the following content:
```bash
ssh root@192.168.1.10
vi /etc/init.d/S99zallnodeson.sh
```
```sh
#!/bin/sh
# Wait until the BMC web server is listening on port 80
while ! netstat -tuln | grep LISTEN | grep -q ':80 '; do sleep 1; done
# Turn on all nodes
tpi -p on
```
##### Turn on all nodes
Make the script executable (`chmod +x /etc/init.d/S99zallnodeson.sh`). The file name must sort alphabetically after `S99hello.sh`, because the BMC server is started from `S99hello.sh`.

View File

@@ -0,0 +1,8 @@
/srv/nfs4 192.168.1.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/cloud 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/media 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/ebooks 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/git 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/photos 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/downloads 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/backups 192.168.1.0/24(ro,sync,no_subtree_check)
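For reference, the anatomy of the first export line above — this is standard `exports(5)` syntax, not specific to this setup:

```
/srv/nfs4  192.168.1.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
# |          |               |
# |          |               └─ options: fsid=0 marks this as the NFSv4 pseudo-root;
# |          |                  crossmnt lets clients traverse into the bind mounts below it
# |          └─ client spec: any host in the 192.168.1.0/24 subnet
# └─ exported directory
```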

TuringPi/compute-modules.md Normal file
View File

@@ -0,0 +1,169 @@
Compute Modules
===
### Install OS / headless installation
#### SoQuartz (RK3566)
[![soquartz-cm.webp](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/scaled-1680-/qocUKTE93iLdKVDU-soquartz-cm.webp)](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/qocUKTE93iLdKVDU-soquartz-cm.webp)
##### Specifications
##### CPU Architecture
- [Quad-core ARM Cortex-A55@1.8GHz](https://developer.arm.com/ip-products/processors/cortex-a/cortex-a55)
- AArch32 for full backwards compatibility with ARMv7
- ARM Neon Advanced SIMD (single instruction, multiple data) support for accelerated media and signal processing computation
- Includes VFP hardware to support single and double-precision operations
- ARMv8 Cryptography Extensions
- Integrated 32KB L1 instruction cache and 32KB L1 data cache per core
- 512KB unified system L3 cache
##### GPU (Graphics Processing Unit) Capabilities
- [Mali-G52 2EE Bifrost GPU@800MHz](https://developer.arm.com/ip-products/graphics-and-multimedia/mali-gpus/mali-g52-gpu)
- 4x Multi-Sampling Anti-Aliasing (MSAA) with minimal performance drop
- 128KB L2 Cache configurations
- Supports OpenGL ES 1.1, 2.0, and 3.2
- Supports Vulkan 1.0 and 1.1
- Supports OpenCL 2.0 Full Profile
- Supports 1600 Mpix/s fill rate when at 800MHz clock frequency
- Supports 38.4 GFLOP/s when at 800MHz clock frequency
##### Neural Process Unit (NPU) Capability
- Neural network acceleration engine with processing performance of up to 0.8 TOPS
- Supports integer 8 and integer 16 convolution operations
- Supports the following deep learning frameworks: TensorFlow, TF-lite, Pytorch, Caffe, ONNX, MXNet, Keras, Darknet
##### System Memory
- RAM Memory Variants: 2GB, 4GB, 8GB LPDDR4.
- Storage Memory: optional 128Mb SPI Flash and optional eMMC module from 8GB up to 128GB
##### Network
- 10/100/1000Mbps Ethernet
- WiFi 802.11 b/g/n/ac with Bluetooth 5.0
##### Setup
1. Flash the Plebian image to an SD card / eMMC.
2. Boot.
3. SSH in as "pleb/pleb" and complete the setup.
```bash
sudo vim /etc/sudoers.d/pleb
```
```
pleb ALL=(ALL:ALL) NOPASSWD: ALL
```
```
sudo dpkg-reconfigure locales
sudo timedatectl set-timezone Europe/Paris
```
```bash
sudo apt update && sudo apt install -y curl
curl -O https://overviewer.org/~pillow/up/75bea78e59/devicetrees-plebian-quartz64-20230601130309-arm64.deb
sudo dpkg -i devicetrees-plebian-quartz64-20230601130309-arm64.deb
# sudo sysctl -w net.core.rmem_default=262144
# sudo sysctl -w net.core.rmem_max=262144
#
# W: Possible missing firmware /lib/firmware/rockchip/dptx.bin for module rockchipdrm
#
sudo mkdir -p /lib/firmware/rockchip/
wget https://raw.githubusercontent.com/wkennington/linux-firmware/master/rockchip/dptx.bin
sudo cp ./dptx.bin /lib/firmware/rockchip/dptx.bin
```
```bash
sudo apt install -y cowsay fortune
sudo vim /etc/update-motd.d/01-cowsay
```
```bash
#!/bin/bash
/usr/games/fortune | /usr/games/cowsay -f tux
```
```bash
sudo chmod +x /etc/update-motd.d/01-cowsay
```
#### Networking
> [Networking](networking.md)
#### Sensors
```bash
sudo apt install lm-sensors fancontrol read-edid i2c-tools libi2c-dev python3-smbus -y
sudo sensors-detect
```
---
#### Turing RK1 (RK3588)
[![turing-rk1-cm.webp](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/scaled-1680-/q91BzNH9bujT572b-turing-rk1-cm.webp)](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/q91BzNH9bujT572b-turing-rk1-cm.webp)
##### Specifications
<table id="bkmrk-instruction-set-armv"><tbody><tr><td>**Instruction Set**</td><td>ARMv8-A (64-bit)</td></tr><tr><td>**OS Support**</td><td>Ubuntu Server 22.04 LTS</td></tr><tr><td>**CPUs**</td><td>8× | 4× ARM Cortex-A76 | 4× ARM Cortex-A55 | DynamIQ</td></tr><tr><td>**GPU**</td><td>G610 GPU
Supports OpenGL ES 1.1, 2.0, and 3.2, OpenCL up to 2.2, and Vulkan 1.2
Proprietary 2D hardware acceleration engine</td></tr><tr><td>**NPU**</td><td>6 TOPS</td></tr><tr><td>**RAM**</td><td>LPDDR4 up to 32 GB</td></tr><tr><td>**Storage**</td><td>32 G eMMC 5.1, SD 3.0</td></tr><tr><td>**Ethernet**</td><td>1000Mbps</td></tr><tr><td>**USB**</td><td>2x USB 3.0, 2x USB 2.0</td></tr><tr><td>**PCIe**</td><td>PCIe Gen3</td></tr><tr><td>**MIPI**</td><td>4-Lane MIPI-DSI, 4-Lane MIPI-CSI</td></tr><tr><td>**Digital Audio port**</td><td>2× I2S</td></tr><tr><td>**HDMI**</td><td>HDMI 2.1, 8K@60fps</td></tr><tr><td>**Video input port**</td><td>2× MIPI-CSI RX 4× lanes
1× MIPI-CSI DPHY RX 4× lanes</td></tr><tr><td>**Display output port**</td><td>1× MIPI-DSI DPHY 2× lanes
1× HDMI 2.1
1× DP 1.4</td></tr><tr><td>**VPU**</td><td>VPU 2.0, supporting 8K video</td></tr><tr><td>**Video Encoder**</td><td>H264, 8K@30FPS
VP9/H265, 8k@60FPS
AV1/AVS2, 4k@60FPS</td></tr><tr><td>**Video Decoder**</td><td>H265/H264/VP9/AV1/AVS2 up to 8K@60fps</td></tr><tr><td>**Power**</td><td>5V/3A via USB Type-C</td></tr><tr><td>**Operating Temp**</td><td>-20°C to 70°C</td></tr><tr><td>**Storage Temp**</td><td>-40°C to 85°C</td></tr><tr><td>**Weight**</td><td>17g</td></tr><tr><td>**Form Factor**</td><td>69.6mm x 45mm
260-pin SO-DIMM connector</td></tr></tbody></table>
##### Setup
[https://joshua-riek.github.io/ubuntu-rockchip-download/boards/turing-rk1.html](https://joshua-riek.github.io/ubuntu-rockchip-download/boards/turing-rk1.html)
> sandy (192.168.1.14)
> pleb / transatlantique
```bash
sudo useradd pleb
sudo groupadd sudo
sudo usermod -aG sudo pleb
```
```bash
sudo hostnamectl set-hostname [hostname]
sudo dpkg-reconfigure tzdata
(. /etc/lsb-release &&
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh |
sudo env os=ubuntu dist="${DISTRIB_CODENAME}" bash)
sudo apt update && sudo apt install -y git-all git-lfs curl wget screenfetch net-tools open-iscsi python3 python3-pip build-essential libssl-dev libffi-dev python3-dev wireguard
```
**PS :** [Ubuntu Pro](https://ubuntu.com/pro/tutorial)
---
### Documentation
[https://docs.turingpi.com](https://docs.turingpi.com)
##### help.turingpi.com
jme69@pm.me
Ln4@iJMNn85h2Xrx
[https://help.turingpi.com/hc/en-us](https://help.turingpi.com/hc/en-us "https://help.turingpi.com/hc/en-us")
##### Server World
[https://www.server-world.info/en/note?os=Debian_12&p=download&f=1](https://www.server-world.info/en/note?os=Debian_12&p=download&f=1)

TuringPi/disks.md Normal file
View File

@@ -0,0 +1,109 @@
Disk(s)
===
#### Format and mount the disks
```bash
sudo fdisk -l
# With disk <xxx>:
sudo fdisk /dev/<xxx>    # create a new primary partition
sudo mkfs -t ext4 /dev/<xxxp1>
sudo blkid -s UUID -o value /dev/<xxxp1>
```
#### If needed, check and repair the filesystem
```bash
sudo e2fsck -fDC0 /dev/sdaX
```
#### Add mount point to fstab
```bash
sudo mkdir -p /mnt/<folder>
echo "UUID=... /mnt/<folder> ext4 defaults 0 0" | sudo tee -a /etc/fstab
```
#### bob (nvme0n1)
```
UUID=5855d1bc-99fd-41f2-9d11-7c00a1c0ad00 / ext4 defaults,x-systemd.growfs 0 1
# NVME
UUID=2423d885-a583-44d2-bd45-80d847c625b0 /mnt/ssd ext4 defaults 0 0
# Caddy Websites
/mnt/ssd/www /var/www auto defaults,nofail,nobootwait,bind 0 2
```
#### carlo (nvme0n1)
```
UUID=5855d1bc-99fd-41f2-9d11-7c00a1c0ad00 / ext4 defaults,x-systemd.growfs 0 1
# SSD
UUID=1bf5f58a-b0d9-4af2-8118-64ccf24fa0ce /mnt/ssd ext4 defaults 0 0
```
#### sandy (sda / nvme0n1) ...
```
UUID=1cf633ab-4f5f-42da-b347-31282732a446 / ext4 defaults,x-systemd.growfs 0 1
# SSD
UUID=6fab06af-f38f-493a-87ab-512f52a6616c /mnt/ssd ext4 defaults 0 0
# HDD
UUID=22446d47-8d74-451e-8774-e1c8e2617b7e /mnt/hdd ext4 defaults 0 0
```
#### gary (nvme0n1) ...
```
UUID=1cf633ab-4f5f-42da-b347-31282732a446 / ext4 defaults,x-systemd.growfs 0 1
# SSD
UUID=7f348b05-b44e-4b85-8445-657dc95e72df /mnt/data ext4 defaults 0 0
# NFS
krabs:/srv/nfs4/media /nfs/media nfs _netdev,x-systemd.automount,nofail,noatime,nolock,tcp,bg,actimeo=1800
krabs:/srv/nfs4/downloads /nfs/downloads nfs defaults
```
#### sheldon (sda / sdb / nvme0n1) ...
```
UUID=1cf633ab-4f5f-42da-b347-31282732a446 / ext4 defaults,x-systemd.growfs 0 1
# SSD
UUID=a612717e-ca95-44a3-9b10-10e6be26112f /mnt/docker ext4 defaults 0 0
# LVM
UUID=dbc636b9-e77b-4fb5-a192-a17c244857a2 /mnt/data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
```
#### krabs (nfs server)
```
UUID=0c390345-ca52-45fd-9097-6e931d651a8f / ext4 defaults,x-systemd.growfs 0 1
# LVM
/dev/hdds/datas /mnt/data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# NFS exports (cf. /etc/exports)
/mnt/data/media /srv/nfs4/media none bind,nofail 0 0
/mnt/data/media/downloads /srv/nfs4/downloads none bind,nofail 0 0
/opt/backups /srv/nfs4/backups none bind,nofail 0 0
```
#### bernie (nvme0n1) ...
```
UUID=5855d1bc-99fd-41f2-9d11-7c00a1c0ad00 / ext4 defaults,x-systemd.growfs 0 1
# SSD
UUID=e8d74f7a-642a-486d-92db-d39388d1545f /mnt/ssd ext4 defaults 0 0
```
#### patrick (nvme0n1)
```
UUID=0ED4-57AA /config vfat defaults,x-systemd.automount,fmask=0077,dmask=0077 0 2
UUID=0ED4-A6E5 /boot/efi vfat defaults,x-systemd.automount,fmask=0077,dmask=0077 0 2
UUID=8ac0019e-35c5-4065-828a-804da615ee7a / ext4 defaults 0 1
```
#### Check
```bash
cat /etc/fstab
```
#### Mount disk...
```bash
sudo mount -a
sudo systemctl daemon-reload
df -h /mnt/<folder>
```

TuringPi/docker-swarm.md Normal file
View File

@@ -0,0 +1,487 @@
# Docker Swarm
## 1. Compute Modules
### Install OS / headless installation
1. Flash the image to an SD card / eMMC.
2. Go to the boot partition.
3. Change the DTB listed in /boot/extlinux/extlinux.conf to -model-a.dtb instead of -cm4.dtb.
4. Put the SD card / eMMC into the preferred baseboard and boot.
5. The first boot takes a while, as the user partition resizes to fill the whole storage.
> ssh as "root" and complete setup
```
sudo apt update && sudo apt upgrade
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
sudo apt install -y docker-compose docker git vim screenfetch net-tools wget curl
sudo apt autoremove
```
#### Edit
> sudo vim /etc/sudoers.d/pleb
```
pleb ALL=(ALL:ALL) NOPASSWD: ALL
```
### Set Static IP
#### Create/Edit
> sudo vim /etc/systemd/network/end0.network
```
[Match]
Name=end0
[Network]
Address=192.168.1.21/24
Gateway=192.168.1.254
# OpenDNS
DNS=208.67.222.222
DNS=208.67.220.220
```
#### Edit
> sudo vim /etc/hosts
```
# Host addresses
127.0.0.1 localhost
127.0.1.1 shell1
# Nodes
192.168.1.186 shell1 shell1.local
192.168.1.243 shell2 shell2.local
192.168.1.194 shell3 shell3.local
192.168.1.222 shell4 shell4.local
```
#### Reboot
> On remote :
```
ssh-keygen -t ed25519
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell2
ssh pleb@shell2
```
#### Edit
> sudo vim /etc/ssh/sshd_config
```
PasswordAuthentication no
```
#### Restart service
```
sudo systemctl restart sshd
```
---
```
sudo systemctl enable iptables
sudo systemctl start iptables
sudo iptables -I INPUT -p tcp -m tcp --dport 2376 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
sudo iptables -I INPUT -p udp -m udp --dport 7946 -j ACCEPT
sudo iptables -I INPUT -p udp -m udp --dport 4789 -j ACCEPT
sudo iptables -I INPUT -p 50 -j ACCEPT # allows ipsec when secure overlay is enabled
```
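The same rules can be expressed as a loop for easier maintenance. This sketch only *prints* the iptables commands (a dry run), so it can be reviewed before applying; the port list is the standard Swarm set (2376 daemon TLS, 2377 cluster management, 7946 node communication, 4789 overlay/VXLAN):

```shell
for p in 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp; do
  port="${p%/*}"    # strip the /proto suffix
  proto="${p#*/}"   # strip the port prefix
  echo sudo iptables -I INPUT -p "$proto" -m "$proto" --dport "$port" -j ACCEPT
done
```

Remove the `echo` to actually apply the rules.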
#### Restart...
---
## 2. Docker Swarm
### Install docker...
```
sudo apt install -y docker docker-compose git
sudo usermod -aG docker pleb
sudo systemctl enable docker.service
sudo systemctl start docker.service
sudo chmod 666 /var/run/docker.sock
```
---
#### On master
```
docker swarm init --advertise-addr 192.168.1.186
```
> *Swarm initialized: current node (3kdxixaa86m8pvag6jn0b70ut) is now a manager*.
#### On nodes
```
docker swarm join --token SWMTKN-1-2px1bindhl41x9h6l4ve7x15iwjryr0uf3ekmu7hz4bezjewwh-ae9vv4657zcki160s71vjn75z 192.168.1.186:2377
```
> *To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions*.
> Change the ingress network's IP range if necessary:
```
docker network rm ingress
# Create in different range
docker network create --driver overlay --ingress --subnet 192.168.1.0/16 --gateway 192.168.1.254 ingress
```
---
```
docker node ls
docker network ls
```
---
## 3. Keepalived
> *Load Balancer : install on all nodes*
```
sudo apt-get -y install keepalived
```
### On master node (shell1)
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived
global_defs {
notification_email {
jme69@pm.me
}
notification_email_from admin@delmar.bzh
smtp_server pro1.mail.ovh.net
smtp_connect_timeout 30
router_id docker_ingress
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER
interface end0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass transatlantique
}
virtual_ipaddress {
192.168.1.30
}
}
```
### On Node2
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived
global_defs {
notification_email {
jme69@pm.me
}
notification_email_from admin@delmar.bzh
smtp_server pro1.mail.ovh.net
smtp_connect_timeout 30
router_id docker_ingress
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface end0
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass transatlantique
}
virtual_ipaddress {
192.168.1.30
}
}
```
### On Node3
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived
global_defs {
notification_email {
jme69@pm.me
}
notification_email_from admin@delmar.bzh
smtp_server pro1.mail.ovh.net
smtp_connect_timeout 30
router_id docker_ingress
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface end0
virtual_router_id 51
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass transatlantique
}
virtual_ipaddress {
192.168.1.30
}
}
```
### On Node4
#### Create/Edit
> sudo vim /etc/keepalived/keepalived.conf
```
! Configuration File for keepalived
global_defs {
notification_email {
jme69@pm.me
}
notification_email_from admin@delmar.bzh
smtp_server pro1.mail.ovh.net
smtp_connect_timeout 30
router_id docker_ingress
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface end0
virtual_router_id 51
priority 70
advert_int 1
authentication {
auth_type PASS
auth_pass transatlantique
}
virtual_ipaddress {
192.168.1.30
}
}
```
#### Start/Enable Service
```
sudo systemctl enable keepalived
sudo systemctl start keepalived
ip a show end0
```
---
## 4. Glusterfs
> *Distributed Storage : install on all nodes*
```
sudo apt-get -y install glusterfs-server
sudo systemctl enable glusterd
sudo systemctl start glusterd
```
> sudo fdisk -l
```
# Format the partition (if you decided to use your internal storage, just ignore the mount and format steps)
sudo mkfs.xfs -f /dev/sda1
# Create brick folder location
sudo mkdir -p /data/glusterfs/volume/brick
# Add line to /etc/fstab to auto mount this disk to /data on boot
echo "/dev/sda1 /data/glusterfs/volume/brick xfs defaults 0 0" | sudo tee -a /etc/fstab
# Create brick folder
# sudo mkdir -p /data/glusterfs/volume/brick/brick
# Mount
sudo mount -a
sudo systemctl daemon-reload
sudo gluster peer probe shell2
sudo gluster peer probe shell3
sudo gluster peer probe shell4
```
> Check
```
df -h /data/glusterfs/volume/brick
sudo gluster pool list
sudo gluster peer status
```
> For 2 nodes:
```
sudo gluster volume create docker-volume replica 2 transport tcp shell1:/mnt/data shell3:/mnt/data force
```
> For 3 nodes:
```
sudo gluster volume create dockervolume disperse 3 redundancy 1 shell1:/data/glusterfs/volume/brick shell2:/data/glusterfs/volume/brick shell3:/data/glusterfs/volume/brick force
```
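With `disperse 3 redundancy 1`, any one brick can fail without data loss, and usable capacity is (bricks − redundancy)/bricks of the raw total. Quick arithmetic, with a made-up 1 TB brick size for illustration:

```shell
bricks=3; redundancy=1; brick_gb=1000   # hypothetical 1 TB bricks
usable=$(( (bricks - redundancy) * brick_gb ))
echo "${usable} GB usable of $(( bricks * brick_gb )) GB raw"   # → 2000 GB usable of 3000 GB raw
```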
#### Start Gluster Volume
```
sudo gluster volume info dockervolume
sudo gluster volume start dockervolume
```
> On all nodes :
```
sudo mkdir /mnt/data
echo "localhost:/dockervolume /mnt/data glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a
sudo systemctl daemon-reload
```
> Check
> > Persistent shared storage will be "/mnt/data"
```
df -h /mnt/data
sudo gluster volume set dockervolume auth.allow 127.0.0.1,192.168.1.186,192.168.1.243,192.168.1.194,192.168.1.222
```
> When adding a new brick:
```
sudo gluster volume add-brick <volume> replica X <server>:<brick>
```
> Option
```
docker plugin install --alias glusterfs mikebarkmin/glusterfs SERVERS=shell1,shell2,shell3,shell4 VOLNAME=dockervolume
```
## 5. Portainer
Create container directories
```
sudo mkdir -p /var/lib/docker/volumes
sudo mkdir -p /mnt/data/portainer
```
> Edit
```
sudo vim portainer-agent-stack.yml
```
> Down at the bottom, remove :
```
volumes:
portainer_data:
```
> And in the services -> portainer -> volumes section, change it to :
```
volumes:
- type: bind
source: /mnt/data/portainer/portainer_data
target: /data
```
> Deploy
```
docker stack deploy -c portainer-agent-stack.yml portainer
```
#### Upgrade
```
docker service ls
docker pull portainer/portainer-ce:latest
docker service update --image portainer/portainer-ce:latest --publish-add 9443:9443 --force portainer_portainer
docker pull portainer/agent:latest
docker service update --image portainer/agent:latest --force portainer_agent
```

TuringPi/glusterfs-check.sh Normal file
View File

@@ -0,0 +1,134 @@
#!/bin/bash
LOG_FILE="/var/log/glusterfs-check.log"
STEPS=7 # Total number of steps in the script

# Remove old logs
if [ -f "$LOG_FILE" ]; then
    rm "$LOG_FILE"
fi

# Logging function
log() {
    echo "$(date) : $1" >> "$LOG_FILE"
}

# Progress display function
progress() {
    local step=$1
    local total=$2
    local percent=$(( 100 * step / total ))
    echo -ne "Progress: [${percent}%]...\r"
}

step=1
log "GlusterFS check script started."
progress $step $STEPS

# Check that the glusterd service is active
if ! systemctl is-active --quiet glusterd; then
    log "The glusterd service is not active. Attempting to start it."
    sudo systemctl start glusterd
    if ! systemctl is-active --quiet glusterd; then
        log "Failed to start the glusterd service."
        exit 1
    else
        log "glusterd service started successfully."
    fi
else
    log "The glusterd service is already active."
fi
step=$((step + 1))
progress $step $STEPS

# Check GlusterFS peer status
peers_status=$(sudo gluster peer status | grep "Peer in Cluster" | wc -l)
if [ "$peers_status" -ne 2 ]; then
    log "Not all peers are online. Attempting to reconnect them."
    # IP addresses of the peers
    peers=("192.168.1.50" "192.168.1.80" "192.168.1.110")
    for peer in "${peers[@]}"; do
        sudo gluster peer probe "$peer"
    done
    # Re-check peer status
    peers_status=$(sudo gluster peer status | grep "Peer in Cluster" | wc -l)
    if [ "$peers_status" -ne 2 ]; then
        log "Failed to reconnect all peers."
        exit 1
    else
        log "All peers are now online."
    fi
else
    log "All peers are online."
fi
step=$((step + 1))
progress $step $STEPS

# Check volume status
volume_status=$(sudo gluster volume info volume_applicatif | grep "Status: Started" | wc -l)
if [ "$volume_status" -ne 1 ]; then
    log "Volume volume_applicatif is not started. Attempting to start it."
    sudo gluster volume start volume_applicatif
    if [ "$?" -ne 0 ]; then
        log "Failed to start volume volume_applicatif."
        exit 1
    else
        log "Volume volume_applicatif started successfully."
    fi
else
    log "Volume volume_applicatif is already started."
fi
step=$((step + 1))
progress $step $STEPS

# Check the volume's bricks
brick_status=$(sudo gluster volume status volume_applicatif detail | grep "N/A" | wc -l)
if [ "$brick_status" -ne 0 ]; then
    log "Some bricks of volume volume_applicatif are not online."
    # Attempt to heal the bricks
    sudo gluster volume heal volume_applicatif
fi
step=$((step + 1))
progress $step $STEPS

# Check that the volume is mounted
if mountpoint -q /mnt/montage_applicatif; then
    log "The GlusterFS volume is already mounted."
else
    log "Attempting to mount the GlusterFS volume."
    sudo mount -t glusterfs 192.168.1.50:/volume_applicatif /mnt/montage_applicatif
    if [ "$?" -ne 0 ]; then
        log "Failed to mount the GlusterFS volume. Running diagnostics."
        # Additional diagnostics
        mount_output=$(sudo mount -t glusterfs 192.168.1.50:/volume_applicatif /mnt/montage_applicatif 2>&1)
        log "Mount command output: $mount_output"
        # Check volume access
        glusterfs_output=$(sudo gluster volume status volume_applicatif 2>&1)
        log "GlusterFS volume status: $glusterfs_output"
        exit 1
    else
        log "GlusterFS volume mounted successfully."
    fi
fi
step=$((step + 1))
progress $step $STEPS

log "GlusterFS check script completed successfully."
progress $STEPS $STEPS
echo ""

TuringPi/hosts.md Normal file
View File

@@ -0,0 +1,257 @@
Hosts
===
## Hosts / IPs
```bash
sudo vim /etc/hosts
```
```bash
127.0.0.1 localhost
127.0.1.1 <host>.local <host>
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
#
192.168.1.11 bob
192.168.1.12 carlo
192.168.1.13 gary
192.168.1.14 sandy
192.168.1.15 krabs
192.168.1.16 sheldon
192.168.1.17 bernie
192.168.1.18 patrick
#
192.168.1.53 retropie
```
### SSH
#### On remote (PC) :
```bash
ssh-keygen -t ed25519
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@192.168.1.xxx
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@<hostname>
```
---
### Map
```markdown
. bikiniBottom
├── bob (192.168.1.11) # caddy
├── carlo (192.168.1.12) # stream
├── turing-pi (192.168.1.10)
| ├── gary (192.168.1.13)
| ├── sandy (192.168.1.14)
| ├── sheldon (192.168.1.16)
| └── patrick (192.168.1.18)
├── krabs (192.168.1.15) # nfs server
├── bernie (192.168.1.17) # octoprint
| ------------------------------
└── retropie (192.168.1.53)
```
[![bikiniBottom.webp](https://bookstack.delmar.bzh/uploads/images/gallery/2025-01/scaled-1680-/FAAaSpaRwVvu0HVc-bikinibottom.webp)](https://bookstack.delmar.bzh/uploads/images/gallery/2025-01/FAAaSpaRwVvu0HVc-bikinibottom.webp)
### bob
```bash
_,met$$$$$gg. hostname: bob
,g$$$$$$$$$$$$$$$P. OS: Debian 12 bookworm
,g$$P"" """Y$$.". Kernel: aarch64 Linux 6.1.0-41-arm64
,$$P' `$$$. Uptime:
',$$P ,ggs. `$$b: Packages: 619
`d$$' ,$P"' . $$$ Shell: bash 5.2.15
$$P d$' , $$P Disk: 148G
$$: $$. - ,d$$' CPU: ARM Cortex-A55 @ 4x 1.8GHz
$$\; Y$b._ _,d$P' RAM: 3737MiB
Y$$. `.`"Y$$$$P"'
`$$b "-.__
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`""""
```
### carlo
```bash
_,met$$$$$gg. hostname: carlo
,g$$$$$$$$$$$$$$$P. OS: Debian 12 bookworm
,g$$P"" """Y$$.". Kernel: aarch64 Linux 6.1.0-41-arm64
,$$P' `$$$. Uptime:
',$$P ,ggs. `$$b: Packages: 1339
`d$$' ,$P"' . $$$ Shell: bash 5.2.15
$$P d$' , $$P Disk: 265G
$$: $$. - ,d$$' CPU: ARM Cortex-A55 @ 4x 1.8GHz
$$\; Y$b._ _,d$P' RAM: 3737MiB
Y$$. `.`"Y$$$$P"'
`$$b "-.__
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`""""
```
### gary
```bash
./+o+- hostname: gary
yyyyy- -yyyyyy+ OS: Ubuntu 25.10 questing
://+//////-yyyyyyo Kernel: aarch64 Linux 6.11.0-1006-rockchip
.++ .:/++++++/-.+sss/` Uptime:
.:++o: /++++++++/:--:/- Packages: 1447
o:+o+:++.`..```.-/oo+++++/ Shell: bash 5.2.37
.:+o:+o/. `+sssoo+/ Disk: 1TB
.++/+:+oo+o:` /sssooo. CPU: ARM Cortex-A55 Cortex-A76 @ 8x 1.8GHz
/+++//+:`oo+o /::--:. GPU: Mali G610
\+/+o+++`o++o ++////. RAM: 15959MiB
.++.o+++oo+:` /dddhhh.
.+.o+oo:. `oddhhhh+
\+.++o+o``-````.:ohdhhhhh+
`:o+++ `ohhhhhhhhyo++os:
.o:`.syhhhhhhh/.oo++o`
/osyyyyyyo++ooo+++/
````` +oo+++o\:
`oo++.
```
### sandy
```bash
./+o+- hostname: sandy
yyyyy- -yyyyyy+ OS: Ubuntu 25.10 questing
://+//////-yyyyyyo Kernel: aarch64 Linux 6.11.0-1006-rockchip
.++ .:/++++++/-.+sss/` Uptime:
.:++o: /++++++++/:--:/- Packages: 1262
o:+o+:++.`..```.-/oo+++++/ Shell: bash 5.2.37
.:+o:+o/. `+sssoo+/ Disk: 1TB / 1TB
.++/+:+oo+o:` /sssooo. CPU: ARM Cortex-A55 Cortex-A76 @ 8x 1.8GHz
/+++//+:`oo+o /::--:. RAM: 7929MiB
\+/+o+++`o++o ++////.
.++.o+++oo+:` /dddhhh.
.+.o+oo:. `oddhhhh+
\+.++o+o``-````.:ohdhhhhh+
`:o+++ `ohhhhhhhhyo++os:
.o:`.syhhhhhhh/.oo++o`
/osyyyyyyo++ooo+++/
````` +oo+++o\:
`oo++.
```
### krabs ([NFS Server](https://bookstack.delmar.bzh/books/turing-pi/page/nfs-server "NFS Server"))
```bash
_,met$$$$$gg. hostname: krabs
,g$$$$$$$$$$$$$$$P. OS: Debian 12 bookworm
,g$$P"" """Y$$.". Kernel: aarch64 Linux 6.1.0-41-arm64
,$$P' `$$$. Uptime:
',$$P ,ggs. `$$b: Packages: 585
`d$$' ,$P"' . $$$ Shell: bash 5.2.15
$$P d$' , $$P Disk: 7.2TB
$$: $$. - ,d$$' CPU: ARM Cortex-A55 @ 4x 1.8GHz
$$\; Y$b._ _,d$P' RAM: 7690MiB
Y$$. `.`"Y$$$$P"'
`$$b "-.__
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`""""
```
### sheldon
```bash
./+o+- hostname: sheldon
yyyyy- -yyyyyy+ OS: Ubuntu 25.10 questing
://+//////-yyyyyyo Kernel: aarch64 Linux 6.11.0-1006-rockchip
.++ .:/++++++/-.+sss/` Uptime:
.:++o: /++++++++/:--:/- Packages: 1053
o:+o+:++.`..```.-/oo+++++/ Shell: bash 5.2.37
.:+o:+o/. `+sssoo+/ Disk: 1TB / 1.8TB
.++/+:+oo+o:` /sssooo. CPU: ARM Cortex-A55 Cortex-A76 @ 8x 1.8GHz
/+++//+:`oo+o /::--:. RAM: 15959MiB
\+/+o+++`o++o ++////.
.++.o+++oo+:` /dddhhh.
.+.o+oo:. `oddhhhh+
\+.++o+o``-````.:ohdhhhhh+
`:o+++ `ohhhhhhhhyo++os:
.o:`.syhhhhhhh/.oo++o`
/osyyyyyyo++ooo+++/
````` +oo+++o\:
`oo++.
```
### bernie
```bash
_,met$$$$$gg. hostname: bernie
,g$$$$$$$$$$$$$$$P. OS: Debian 12 bookworm
,g$$P"" """Y$$.". Kernel: aarch64 Linux 6.1.0-41-arm64
,$$P' `$$$. Uptime:
',$$P ,ggs. `$$b: Packages: 988
`d$$' ,$P"' . $$$ Shell: bash 5.2.15
$$P d$' , $$P Disk: 265G
$$: $$. - ,d$$' CPU: ARM Cortex-A55 @ 4x 1.8GHz
$$\; Y$b._ _,d$P' RAM: 3737MiB
Y$$. `.`"Y$$$$P"'
`$$b "-.__
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`""""
```
### patrick
```bash
_,met$$$$$gg. pleb@patrick
,g$$$$$$$$$$$$$$$P. OS: Debian 12 bookworm
,g$$P"" """Y$$.". Kernel: aarch64 Linux 6.6.89-3-sky1
,$$P' `$$$. Uptime: 14m
',$$P ,ggs. `$$b: Packages: 1621
`d$$' ,$P"' . $$$ Shell: bash 5.2.15
$$P d$' , $$P Disk: 7,2G / 955G (1%)
$$: $$. - ,d$$' CPU: CIX P1 CD8180 @ 12x 2,6GHz
$$\; Y$b._ _,d$P' RAM: 7166MiB / 31675MiB
Y$$. `.`"Y$$$$P"'
`$$b "-.__
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`""""
```
---
## Upcoming
### pearl
```bash
```

TuringPi/kubernetes.md Normal file
View File

@@ -0,0 +1,483 @@
# Kubernetes
## 1. Compute Modules
### Install OS / headless installation
1. Flash the Plebian image to an SD card / eMMC.
2. Boot.
3. SSH in as "pleb/pleb" and complete the setup: `sudo vim /etc/sudoers.d/pleb`
```
pleb ALL=(ALL:ALL) NOPASSWD: ALL
```
```bash
sudo apt update && sudo apt install -y curl
curl -O https://overviewer.org/~pillow/up/75bea78e59/devicetrees-plebian-quartz64-20230601130309-arm64.deb
sudo dpkg -i devicetrees-plebian-quartz64-20230601130309-arm64.deb
# sudo sysctl -w net.core.rmem_max=2500000
sudo apt update && sudo apt -y upgrade
sudo hostnamectl set-hostname shell1
sudo apt install -y git wget screenfetch net-tools open-iscsi python3 python3-pip build-essential libssl-dev libffi-dev python3-dev
sudo apt install -y docker docker-compose
```
#### Reboot
`sudo blkid -s UUID -o value /dev/sda1`
```
shell1 | CHANGED | rc=0 >>
UUID=a4e3d8f1-11cc-482b-9596-57c44b122e48
shell2 | CHANGED | rc=0 >>
UUID=e264ad26-0767-4e6c-9131-0671f4ecfad4
shell3 | CHANGED | rc=0 >>
UUID=2056f43e-f41c-4772-9bf9-8f5a09dc911c
```
```bash
sudo mkfs.ext4 /dev/sda1
sudo mkdir -p /mnt/data
echo "UUID=... /mnt/data ext4 defaults 0 0" | sudo tee -a /etc/fstab
cat /etc/fstab
sudo mount -a
sudo systemctl daemon-reload
df -h /mnt/data
```
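The UUID-to-fstab step above can be sketched as a small helper; the UUID below is the one reported for shell1, and the mountpoint and options mirror the fstab line in this section:

```shell
# Sketch: build the fstab entry for a given UUID (values taken from this section)
uuid="a4e3d8f1-11cc-482b-9596-57c44b122e48"   # shell1's /dev/sda1 UUID from the blkid output above
fstab_line="UUID=${uuid} /mnt/data ext4 defaults 0 0"
echo "$fstab_line"
# In practice: echo "$fstab_line" | sudo tee -a /etc/fstab && sudo mount -a
```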
`sudo vim /etc/hosts`
```
# ----------------------------------------
# Host addresses
127.0.0.1 localhost shell1
# Nodes
192.168.1.109 shell1 shell1.local
192.168.1.163 shell2 shell2.local
#192.168.1. shell3 shell3.local
#192.168.1. shell4 shell4.local
# ----------------------------------------
```
#### On remote (PC):
```bash
ssh-keygen -t ed25519
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell1
ssh pleb@shell1
```
`vim /etc/ssh/sshd_config`
```
# ----------------------------------------
PasswordAuthentication no
# ----------------------------------------
```
#### Restart service
`sudo systemctl restart sshd`
#### On shell1 :
```bash
ssh-keygen -t ed25519
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell2
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell3
ssh-copy-id -f -i ~/.ssh/id_ed25519.pub pleb@shell4
ssh pleb@shell2
ssh pleb@shell3
ssh pleb@shell4
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user
python3 -m pip install --user ansible
```
`vim hosts.ini`
```ini
[master]
shell1 ansible_connection=local
[node]
shell2 ansible_connection=ssh
shell3 ansible_connection=ssh
shell4 ansible_connection=ssh
[k3s_cluster:children]
master
node
```
## UFW
```bash
sudo apt install -y ufw
sudo ufw allow "OpenSSH"
sudo ufw enable
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp
sudo ufw allow 179/tcp
sudo ufw allow 4789/udp
sudo ufw allow 4789/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 30000:32767/tcp
sudo ufw status
```
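Since the rules differ only in the port, they can be generated from a single list; this sketch only echoes the equivalent commands (the standalone 2379/tcp rule above is already covered by the 2379:2380 range):

```shell
# Sketch: generate the ufw rules above from one port list (echoed, not executed)
k8s_ports="6443/tcp 2379:2380/tcp 10250/tcp 10257/tcp 10259/tcp 179/tcp 4789/tcp 4789/udp 30000:32767/tcp"
for p in $k8s_ports; do
  echo "sudo ufw allow $p"
done
```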
```bash
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo service docker start
sudo docker run hello-world
sudo chmod 666 /var/run/docker.sock
sudo groupadd docker
sudo usermod -aG docker $USER
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```
```bash
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ee:latest
```
## 2. Install Kubernetes (via Ansible)
### k3s-ansible
#### Install Ansible
- [https://wiki.archlinux.org/title/Ansible#Installation](https://wiki.archlinux.org/title/Ansible#Installation "https://wiki.archlinux.org/title/Ansible#Installation")
- [https://docs.ansible.com/ansible/latest/installation\_guide/index.html](https://docs.ansible.com/ansible/latest/installation_guide/index.html "https://docs.ansible.com/ansible/latest/installation_guide/index.html")
#### On remote (PC):
`apt install -y ansible`
#### Clone Git repo: [https://github.com/k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible "https://github.com/k3s-io/k3s-ansible")
```bash
cd www
git clone https://github.com/k3s-io/k3s-ansible.git
cp -R k3s-ansible/inventory/sample k3s-ansible/inventory/sandy
```
`vim www/k3s-ansible/inventory/sandy/hosts.ini`
```ini
[master]
192.168.1.209 ansible_connection=ssh var_hostname=shell1 var_disk=sda1 var_uuid=e13c29b3-5263-4ae7-82df-8ccdcc78e0b2
[node]
192.168.1.224 ansible_connection=ssh var_hostname=shell2 var_disk=sda1 var_uuid=85efb43b-5386-4a2d-9128-9a89b10538fa
192.168.1.123 ansible_connection=ssh var_hostname=shell3 var_disk=sda1 var_uuid=08af04f0-a35c-447d-9651-c46675a27142
192.168.1.233 ansible_connection=ssh var_hostname=shell4
[k3s_cluster:children]
master
node
```
`vim www/k3s-ansible/inventory/sandy/group_vars/all.yml`
> Change `ansible_user` to `pleb`.
#### Install playbook
```bash
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -m ping
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -m apt -a "name=iptables state=present" --become
ansible-playbook ~/www/k3s-ansible/site.yml -i ~/www/k3s-ansible/inventory/sandy/hosts.ini
```
#### On master:
```bash
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
mkdir ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER: ~/.kube/config
export KUBECONFIG=~/.kube/config
```
#### On PC:
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
scp pleb@shell1:~/.kube/config ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes
kubectl cluster-info
```
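The download check above follows a generic pattern: hash the artifact, store the hash, then verify with `sha256sum --check`. A self-contained illustration of the same pattern on a throwaway file:

```shell
# Sketch of the kubectl checksum verification pattern, using a local throwaway file
tmpdir=$(mktemp -d)
printf 'demo payload\n' > "$tmpdir/demo.bin"
# record the hash, as kubectl.sha256 does for kubectl
sha256sum "$tmpdir/demo.bin" | awk '{print $1}' > "$tmpdir/demo.sha256"
# same shape as: echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
( cd "$tmpdir" && echo "$(cat demo.sha256)  demo.bin" | sha256sum --check )
```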
#### Setting labels:
```bash
# sudo k3s kubectl label nodes shell1 kubernetes.io/role=worker
sudo k3s kubectl label nodes shell2 kubernetes.io/role=worker
sudo k3s kubectl label nodes shell3 kubernetes.io/role=worker
sudo k3s kubectl label nodes shell4 kubernetes.io/role=worker
# sudo k3s kubectl label nodes shell1 node-type=worker
sudo k3s kubectl label nodes shell2 node-type=worker
sudo k3s kubectl label nodes shell3 node-type=worker
sudo k3s kubectl label nodes shell4 node-type=worker
```
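Only the hostname changes between those commands, so they can be generated in a loop; this sketch echoes them (drop the `echo` to actually run them on the master):

```shell
# Sketch: generate the per-node label commands above
for n in shell2 shell3 shell4; do
  for label in kubernetes.io/role=worker node-type=worker; do
    echo "sudo k3s kubectl label nodes $n $label"
  done
done
```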
#### ! Ping !
`ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -m ping`
#### ! Reset !
```bash
ansible-playbook ~/www/k3s-ansible/reset.yml -i ~/www/k3s-ansible/inventory/sandy/hosts.ini
```
#### ! Restart !
`ansible all -i ~/www/k3s-ansible/inventory/sandy/hosts.ini -a "shutdown -r now" -b`
## 3. Helm
#### On master
```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
```
## 4. MetalLb
```bash
helm repo add metallb https://metallb.github.io/metallb
helm search repo metallb
helm upgrade --install metallb metallb/metallb --create-namespace --namespace metallb-system --wait
```
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.30-192.168.1.49
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF
```
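A quick sanity check on the pool declared above: 192.168.1.30-192.168.1.49 leaves 20 LoadBalancer IPs for services (the Longhorn UI later takes .31):

```shell
# Sketch: count the addresses in the MetalLB pool above
pool_start=30
pool_end=49
pool_size=$(( pool_end - pool_start + 1 ))
echo "$pool_size LoadBalancer IPs available"
```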
## 5. Local Storage Provider (Longhorn)
```bash
sudo systemctl enable iscsid.service
sudo systemctl start iscsid.service
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -b -m apt -a "name=nfs-common state=present"
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -b -m apt -a "name=open-iscsi state=present"
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -b -m apt -a "name=util-linux state=present"
```
```bash
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -b -m shell -a "lsblk -f"
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -b -m shell -a "blkid -s UUID -o value /dev/sda1"
```
#### Ansible mount:
```bash
ansible -i ~/www/k3s-ansible/inventory/sandy/hosts.ini k3s_cluster -m ansible.posix.mount -a "path=/mnt/data src=UUID={{ var_uuid }} fstype=ext4 state=mounted" -b
```
### Longhorn
```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.4.2 --set defaultSettings.defaultDataPath="/mnt/data" --set service.ui.loadBalancerIP="192.168.1.31" --set service.ui.type="LoadBalancer"
sudo k3s kubectl -n longhorn-system get pod
sudo k3s kubectl get storageclass
```
- local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 111m
- longhorn (default) driver.longhorn.io Delete Immediate true 65m

Mark `local-path` as "non-default":
```bash
sudo k3s kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
Mark `longhorn` as "default":
```bash
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
#### UI
```bash
USER=admin; PASSWORD=transatlantique; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
sudo k3s kubectl -n longhorn-system create secret generic basic-auth --from-file=auth
```
`vim longhorn-ingress.yml`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
    nginx.ingress.kubernetes.io/proxy-body-size: 10000m
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    # add an annotation indicating the issuer to use
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
  - host: dashboard.delmar.bzh
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 8090
  tls:
  - # cert-manager will store the certificate and key in this secret
    secretName: dashboard-delmar-bzh-cert
    hosts:
    - dashboard.delmar.bzh
```
```bash
sudo k3s kubectl -n longhorn-system apply -f longhorn-ingress.yml
sudo k3s kubectl -n longhorn-system get ingress
```
## 6. HAProxy
```bash
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm install haproxy haproxytech/kubernetes-ingress -n kube-system \
--set controller.service.nodePorts.http=30000 \
--set controller.service.nodePorts.https=30001 \
--set controller.service.nodePorts.stat=30002 \
--set controller.service.type=LoadBalancer
```
## 7. Services
...
## Uninstall
> On master:
`/usr/local/bin/k3s-uninstall.sh`
> On workers:
`/usr/local/bin/k3s-agent-uninstall.sh`
---
`TuringPi/networking.md`
# Networking
#### Hosts
```bash
sudo vim /etc/hosts
```
```conf
127.0.0.1 localhost
127.0.1.1 <hostname>.local <hostname>
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
#
192.168.1.254 mabbox.bytel.fr
# local
192.168.1.10 bikinibottom
192.168.1.11 bob
192.168.1.12 carlo
192.168.1.13 gary
192.168.1.14 sandy
192.168.1.15 krabs
192.168.1.16 sheldon
192.168.1.17 bernie
#
192.168.1.53 recalbox
# Optional
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
```
### Broadcom (WiFi)
Ref.: [https://wiki.debian.org/WiFi/HowToUse](https://wiki.debian.org/WiFi/HowToUse)
```bash
cd /lib/firmware/brcm/
ln -s brcmfmac43455-sdio.AW-CM256SM.txt brcmfmac43455-sdio.pine64,rockpro64-v2.0.txt
ln -s brcmfmac43455-sdio.AW-CM256SM.txt brcmfmac43455-sdio.pine64,rockpro64-v2.1.txt
sudo rmmod brcmfmac
sudo modprobe brcmfmac
```
```bash
sudo vim /etc/systemd/network/25-wlan.network
```
```systemd
[Match]
Name=wlan0

[Network]
DHCP=ipv4

[DHCP]
UseDNS=yes
```
```bash
sudo vim /etc/network/interfaces.d/local
```
```conf
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
# allow-hotplug end0
auto wlan0
iface wlan0 inet static
    address 192.168.1.xx
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 45.90.28.0
    dns-nameservers 45.90.30.0

# This is an autoconfigured IPv6 interface
iface end0 inet6 auto
```
```bash
sudo vim /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
```
```conf
ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="BikiniBottom"
#psk="transatlantique"
psk=e5b692f43ab186494b7f37949d9084e1c0e97fd74d8f14a6325be2e62309067e
}
```
```bash
sudo systemctl enable --now wpa_supplicant@wlan0.service
sudo su -l -c "systemctl enable --now systemd-networkd"
```
### Ethernet
#### Debian
```bash
sudo vim /etc/network/interfaces
```
```bash
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
```
```bash
sudo vim /etc/network/interfaces.d/local
```
```conf
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
# allow-hotplug end0
auto end0
iface end0 inet static
    address 192.168.1.xx
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 45.90.28.0
    dns-nameservers 45.90.30.0

# This is an autoconfigured IPv6 interface
iface end0 inet6 auto
```
```bash
sudo systemctl restart networking.service
```
#### Ubuntu
```bash
sudo vim /etc/netplan/50-cloud-init.yaml
```
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    end0:
      dhcp4: no
      addresses:
        - 192.168.1.xx/24
      routes:
        - to: default
          via: 192.168.1.254
      nameservers:
        addresses: [45.90.28.0, 45.90.30.0]
```
```bash
sudo netplan apply
```
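All the static configurations above assume the node address and the gateway sit in the same /24; a quick shell check of that assumption (the address below is a hypothetical placeholder, like the `192.168.1.xx` in the configs):

```shell
# Sketch: verify an address and its gateway share the same /24 prefix
addr="192.168.1.42"   # hypothetical node address
gw="192.168.1.254"
if [ "${addr%.*}" = "${gw%.*}" ]; then
  echo "same /24"
else
  echo "different subnets - check the netmask/gateway"
fi
```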
#### DNS Servers
| DNS | IPv4 addresses | IPv6 addresses |
| --- | --- | --- |
| 1 **Cloudflare 1.1.1.1** | 1.1.1.1<br />1.0.0.1 | 2606:4700:4700::1111<br />2606:4700:4700::1001 |
| 2 [Cisco OpenDNS Home](https://www.opendns.com/) | 208.67.222.222<br />208.67.220.220 | 2620:119:35::35<br />2620:119:53::53 |
| 3 [Neustar UltraDNS Public](https://www.publicdns.neustar/) | 64.6.64.6<br />64.6.65.6 | 2620:74:1b::1:1<br />2620:74:1c::2:2 |
| 4 [NextDNS](https://nextdns.io/fr) | 45.90.28.0<br />45.90.30.0 | 2a07:a8c0::<br />2a07:a8c1::|
| 5 [Google Public DNS](https://developers.google.com/speed/public-dns) | 8.8.8.8<br />8.8.4.4 | 2001:4860:4860::8888<br />2001:4860:4860::8844 |
| 6 [**Quad9**](https://www.quad9.net/) | 9.9.9.9<br />149.112.112.112 | 2620:fe::fe<br />2620:fe::9 |
| 7 [Comodo Secure DNS](https://www.comodo.com/secure-dns/) | 8.26.56.26<br />8.20.247.20 | |
| 8 [Yandex.DNS](https://dns.yandex.com/) | 77.88.8.8<br />77.88.8.1 | 2a02:6b8::feed:0ff<br />2a02:6b8:0:1::feed:0ff |
| 9 [SafeDNS](https://www.safedns.com/fr/) | 195.46.39.39<br />195.46.39.40 | 2001:67c:2778::3939<br />2001:67c:2778::3940 |
---
`TuringPi/nfs-server.md`
# NFS Server
### Structure
| NAME | MAJ:MIN | RM | SIZE | RO | TYPE | MOUNTPOINTS |
| --- | --- | --- | --- | --- | --- | --- |
| sdb | 8:16 | 0 | 3.6T | 0 | disk | |
| └─hdds-datas | 253:0 | 0 | 7.3T | 0 | lvm | /export/media<br />/srv/dev-disk-by-uuid-a67ae390-f36c-4b7c-98a0-4a3b5601c107 |
| sdc | 8:32 | 0 | 3.6T | 0 | disk | |
| └─hdds-datas | 253:0 | 0 | 7.3T | 0 | lvm | /export/media<br />/srv/dev-disk-by-uuid-a67ae390-f36c-4b7c-98a0-4a3b5601c107 |
```bash
apt install -y lvm2
```
### Disks
```bash
fdisk -l
```
```
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN006-3CW1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN006-3CW1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/hdds-datas: 7.28 TiB, 8001566015488 bytes, 15628058624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
```
### LVM
#### Physical Volumes (partitions)
```
pvcreate /dev/sdb
pvcreate /dev/sdc
```
#### Volume Group
```
vgcreate hdds /dev/sdb /dev/sdc
```
#### Logical Volume(s)
```
lvcreate --extents 100%FREE -n datas hdds
```
```
pvdisplay
```
```
--- Physical volume ---
PV Name               /dev/sdb
VG Name               hdds
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               zHiPKx-t2BO-15r3-xPfQ-DPlB-7K40-YxIdM1
--- Physical volume ---
PV Name               /dev/sdc
VG Name               hdds
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               bGAdMs-JsRy-6r3M-0zWt-CYQJ-GPsB-KCnCo6
```
```bash
vgdisplay
```
```
--- Volume group ---
VG Name               hdds
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                2
Act PV                2
VG Size               <7.28 TiB
PE Size               4.00 MiB
Total PE              1907722
Alloc PE / Size       1907722 / <7.28 TiB
Free  PE / Size       0 / 0
VG UUID               DPZPo2-OwJt-R1wG-wJRB-HD38-JeVl-5ZFfnI
```
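The pvdisplay/vgdisplay numbers above are consistent: two PVs of 953861 extents each, at 4 MiB per extent, add up to the <7.28 TiB volume group:

```shell
# Sketch: re-derive the VG size from the physical extent counts reported above
pe_per_pv=953861
pe_size_mib=4
total_pe=$(( pe_per_pv * 2 ))
size_gib=$(( total_pe * pe_size_mib / 1024 ))
echo "Total PE: $total_pe"        # matches the 1907722 reported by vgdisplay
echo "VG size: ${size_gib} GiB"   # ~7452 GiB, i.e. <7.28 TiB
```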
```
lvdisplay
```
```
--- Logical volume ---
LV Path                /dev/hdds/datas
LV Name                datas
VG Name                hdds
LV UUID                qgdjwr-9Aau-p4mj-rksl-IeBB-SjD1-z6hede
LV Write Access        read/write
LV Creation host, time polito, 2023-08-01 10:10:14 +0200
LV Status              available
# open                 0
LV Size                <7.28 TiB
Current LE             1907722
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           254:0
```
#### Format LV
```bash
sudo mkfs.ext4 /dev/hdds/datas
```
### NFS
```
# mkdir -p /mnt/data/cloud
# mkdir -p /mnt/data/gitea
# mkdir -p /mnt/data/media
# mkdir -p /mnt/data/media/downloads
sudo vim /etc/exports
```
```
# /etc/exports: the access control list for filesystems which may be exported to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
# /export 192.168.1.0/24(rw,no_root_squash,no_subtree_check)
/srv/nfs4 192.168.1.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/media 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/downloads 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/backups 192.168.1.0/24(ro,sync,no_subtree_check)
```
```bash
sudo systemctl restart nfs-server
```
##### Get LVM's UUID
```bash
sudo blkid -s UUID -o value /dev/mapper/hdds-datas
```
##### Mount LVM
```
sudo vim /etc/fstab
```
```
UUID=0c390345-ca52-45fd-9097-6e931d651a8f / ext4 defaults,x-systemd.growfs 0 1
# LVM
/dev/hdds/datas /mnt/data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# NFS exports
/mnt/data/media /srv/nfs4/media none bind,nofail 0 0
/mnt/data/media/downloads /srv/nfs4/downloads none bind,nofail 0 0
/opt/backups /srv/nfs4/backups none bind,nofail 0 0
```
**PS: check the content of `/var/lib/nfs/etab` to list the NFS shares and all the applied permission details**
```bash
cat /var/lib/nfs/etab
```
```shell
/srv/nfs4 192.168.1.0/24(rw,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4/backups 192.168.1.0/24(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs4/downloads 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
/srv/nfs4/media 192.168.1.0/24(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
```
#### Mount the needed `<folder>` on client(s)
```bash
sudo apt install nfs-common -y
sudo mkdir -p /nfs/<folder>
sudo vim /etc/fstab
```
```shell
krabs:/srv/nfs4/<folder> /nfs/<folder> nfs _netdev,x-systemd.automount,noauto,nofail,noatime,nolock,tcp,bg,actimeo=1800 0 0
```
```bash
sudo mount /nfs/<folder>
sudo systemctl daemon-reload
```
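Every export follows the same client-side fstab pattern, so the lines can be generated; the folder names below are examples matching the exports defined earlier:

```shell
# Sketch: generate client fstab lines for several NFS exports (example folder names)
nfs_server=krabs
for folder in media downloads backups; do
  printf '%s:/srv/nfs4/%s /nfs/%s nfs _netdev,x-systemd.automount,noauto,nofail,noatime,nolock,tcp,bg,actimeo=1800 0 0\n' \
    "$nfs_server" "$folder" "$folder"
done
```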
### Monitoring NFS Server
#### Netstat
```bash
watch -n 5 "netstat -plaute | grep nfs | sort -r -n -k 3,2"
```
#### NFSWatch
```bash
sudo apt install -y nfswatch
```
Add these aliases to your .bashrc file:
```bash
# NFS watch clients
alias nfsw='nfswatch -allif -clients -usage 2>/dev/null'
# NFS watch users
alias nfsu='nfswatch -allif -auth -usage 2>/dev/null'
```
```bash
source ~/.bashrc
```
---
`TuringPi/odroid-hc4.md`
Odroid HC4
===
[![hc4-board.webp](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/scaled-1680-/4kugAUKiY1KhcIBa-hc4-board.webp)](https://bookstack.delmar.bzh/uploads/images/gallery/2024-05/4kugAUKiY1KhcIBa-hc4-board.webp)
##### Specifications
| Item | Specification |
| --- | --- |
| Form Factor | Board dimensions: 84mm x 90.5mm x 25.0mm<br />Heatsink dimensions: 40mm x 32mm x 10mm<br />Weight: 280g including heatsink, fan and case |
| Processor | Amlogic S905X3, Quad-Core Cortex-A55 (1.8GHz)<br />ARMv8-A architecture with Neon and Crypto extensions<br />L1 instruction cache: 32 KB, 4-way set associative (128 sets), 64-byte lines, per core<br />L1 data cache: 32 KB, 4-way set associative (128 sets), 64-byte lines, per core<br />L3 data cache: 512 KB, 16-way set associative (512 sets), 64-byte lines, shared by 4 cores<br />Mali-G31 MP2 GPU with 4 execution engines (650MHz) |
| Memory | DDR4 4GiB with 32-bit bus width<br />Data rate: 2640 MT/s (PC4-21333 grade)<br />1.2V low-power design |
| Storage | 1x Micro SD slot (DS/HS mode up to UHS-I SDR104) |
| Networking | 1x GbE LAN port (RJ45, 10/100/1000 Mbps)<br />Realtek RTL8211F Ethernet transceiver<br />LED indicators: green flashing on traffic at 100Mbps, amber flashing on traffic at 1000Mbps |
| Video | 1x HDMI 2.0 (up to 4K@60Hz with HDR, CEC, EDID)<br />The +5V power pin supplies a minimum of 300mA, but a sink shall not draw more than 50mA from it |
| Audio | 1x HDMI digital output |
| Cooler | 1x passive heatsink (40x32x10 mm) on the CPU<br />1x hydraulic-bearing fan: rated 5V, 100mA, max 4000 RPM, 4.3 CFM air flow, 0.053 inH2O air pressure, 23.9 dBA, 40x40x10.6 mm, 80mm wire with 4-pin 1.25mm-pitch connector |
| External I/O | 1x USB 2.0 host port<br />1x debug serial console (UART)<br />2x SATA connectors |
| Other features | IR receiver for remote controller<br />System LEDs: red (POWER) solid when DC power is connected; blue (ALIVE) flashes like a heartbeat while the kernel runs, solid in the u-boot stage, off when no bootable media is found; red (HDD/SSD) flashes while SATA HDDs/SSDs read or write |
| Power | 1x DC jack: outer (negative) diameter 5.5mm, inner (positive) diameter 2.1mm<br />DC input: 14.5V ~ 15.5V (DC 15V/4A power adapter recommended)<br />Consumption with two ST2000DM006 HDDs: idle (HDDs spun down) ≃ 5.88W, active (SAMBA) ≃ 15.59W, suspend ≃ 0.29W |

##### Setup
> sheldon (192.168.1.16)
> pleb/transatlantique
```
Choose "Exit to shell"
(switch to console then)
# netboot_default
# exit
(back to petitboot menu)
```
> Choose an OS from the net-installable image list...
---
```bash
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLk24u7FT8PhAdM8EVUFGlOi0hle4CW8L284E1foUhS julien@julien-pc
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE6wr+DUbcfVTltoWT6gbPRY3geUYNhgN7/CLcMaMu0B eliot@toile-win
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmttayKqj6Z290hMCc97v4dMZTSUz4lYgXR0NtcRr8U delmar@thinkpad
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/ZRSzTgEBh+NbLKKxjW5F0Gj/j7GJylnMnGlf96Wpy pleb@bob
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIbDwpvEZ+pvVSBQryt6tGNQ25+z1P2UJO45cPHmDkj0 pleb@carlo
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKL7A0xvHSfmWo+LUHdWWb03a5NXN1IlbLS5iSHxs3zw pleb@sandy
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFHgj5whXrhaK3feCmvHvNTZp3zpJmyD2a7ooh13Fj91 pleb@gary
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII9USnPFJx/PbV+KnlTN4o3f4SwiuHCWaAlE8aKcY4Ne pleb@sheldon
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMfd7PrJ50jHrG6yjIJ5u7jKTyXi9mPn8/oa+HNAVNsf pleb@krabs
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJtrn0Oj2IstMwzheIZcJGBA8RNWTyNtksaK2LhvjNul pleb@bernie
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+kB6/xx0HmTT6NDt0H3+bHs7aFzXSqCiJcLCbqzp5E pleb@patrick
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEQyj/qu5P/SFOK6V3p/BXZ4qCXisJfEWFev2IINpqx root@nextcloud-aio-borgbackup
```
---
`TuringPi/webcam.md`
Webcam
===
### (carlo)
```bash
echo 'deb http://download.opensuse.org/repositories/hardware:/razer/Debian_12/ /' | sudo tee /etc/apt/sources.list.d/hardware:razer.list
curl -fsSL https://download.opensuse.org/repositories/hardware:razer/Debian_12/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/hardware_razer.gpg > /dev/null
sudo apt update
sudo apt install usbutils openrazer-meta vlc
lsusb | grep '1532:'
```
```
Bus 001 Device 004: ID 1532:0e06 Razer USA, Ltd Razer Kiyo X
```
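For scripting, the vendor:product ID can be extracted from that output; the line below is copied verbatim from the lsusb result above:

```shell
# Sketch: extract the vendor:product ID from the lsusb output above
line='Bus 001 Device 004: ID 1532:0e06 Razer USA, Ltd Razer Kiyo X'
usb_id=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p')
echo "$usb_id"   # → 1532:0e06
```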
```bash
cvlc v4l2:///dev/video3
```
```bash
sudo vim /usr/share/camera-streamer/examples/camera-streamer-generic-usb-cam.service
```
```
[Unit]
Description=camera-streamer web camera for USB camera on Generic platform
After=network.target
ConditionPathExistsGlob=/dev/v4l/by-id/usb-*-video-index0
[Service]
ExecStart=/usr/bin/camera-streamer \
-camera-path=/dev/video3 \
-camera-format=JPEG \
-camera-width=1920 -camera-height=1080 \
-camera-fps=30 \
; use three memory buffers to optimise usage
-camera-nbufs=3 \
--http-listen=0.0.0.0 \
--http-port=8080 \
; disable video streaming (WebRTC, RTSP, H264)
; on non-supported platforms
-camera-video.disabled
DynamicUser=yes
SupplementaryGroups=video i2c
Restart=always
RestartSec=10
Nice=10
IOSchedulingClass=idle
IOSchedulingPriority=7
CPUWeight=20
AllowedCPUs=1-2
MemoryMax=250M
[Install]
WantedBy=multi-user.target
```
```bash
sudo systemctl enable /usr/share/camera-streamer/examples/camera-streamer-generic-usb-cam.service
sudo systemctl start camera-streamer-generic-usb-cam.service
```
---
x730 script (Geekworm)
========================
#### Install
```bash
mkdir x730
cd x730
wget https://raw.githubusercontent.com/geekworm-com/x730-script/master/x730.sh
sudo chmod +x x730.sh
sudo bash x730.sh
printf "%s\\n" "alias piOff='sudo x730shutdown.sh'" >> ~/.bashrc
sudo reboot
```
#### Safe shutdown
```bash
piOff
```