Added Divers

This commit is contained in:
2025-04-24 16:06:40 +02:00
parent 2abf8e30cf
commit b867b58f01
20 changed files with 1357 additions and 0 deletions

Divers/Ansible.md
# Ansible
```bash
sudo vim ~/.ansible/inventory.yaml
```
```yaml
all:
  hosts:
    bob:
      ansible_connection: ssh
      ansible_user: pleb
    carlo:
      ansible_connection: ssh
      ansible_user: pleb
    patrick:
      ansible_connection: ssh
      ansible_user: pleb
    sandy:
      ansible_connection: ssh
      ansible_user: pleb
    krabs:
      ansible_connection: ssh
      ansible_user: pleb
    sheldon:
      ansible_connection: ssh
      ansible_user: pleb
```
```bash
sudo vim ~/.ansible/inventory
```
```
[all]
bob ansible_connection=ssh ansible_user=pleb
carlo ansible_connection=ssh ansible_user=pleb
gary ansible_connection=ssh ansible_user=pleb
sandy ansible_connection=ssh ansible_user=pleb
krabs ansible_connection=ssh ansible_user=pleb
sheldon ansible_connection=ssh ansible_user=pleb
```
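The two inventories describe the same hosts in two syntaxes. When the host list grows, a small loop can emit the INI form instead of maintaining it by hand (a minimal sketch, assuming the same `pleb` user on every node and writing to a local `inventory` file):

```shell
# Generate the INI inventory from a plain host list
hosts="bob carlo gary sandy krabs sheldon"
{
    echo "[all]"
    for h in $hosts; do
        echo "$h ansible_connection=ssh ansible_user=pleb"
    done
} > inventory
cat inventory
```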
```bash
sudo vim /etc/ansible/ansible.cfg
```
```conf
# Since Ansible 2.12 (core):
# To generate an example config file (a "disabled" one with all default settings, commented out):
# $ ansible-config init --disabled > ansible.cfg
#
# Also you can now have a more complete file by including existing plugins:
# ansible-config init --disabled -t all > ansible.cfg
# For previous versions of Ansible you can check for examples in the 'stable' branches of each version
# Note that this file was always incomplete and lagging changes to configuration settings
# for example, for 2.9: https://github.com/ansible/ansible/blob/stable-2.9/examples/ansible.cfg
#
# config file for ansible -- https://ansible.com/
# ===============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
# some basic default values...
inventory = ~/.ansible/inventory.yaml
#library = /usr/share/my_modules/
#module_utils = /usr/share/my_module_utils/
#remote_tmp = ~/.ansible/tmp
#local_tmp = ~/.ansible/tmp
#plugin_filters_cfg = /etc/ansible/plugin_filters.yml
#forks = 5
#poll_interval = 15
#sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
#transport = smart
#remote_port = 22
#module_lang = C
#module_set_locale = False
# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit
# This only affects the gathering done by a play's gather_facts directive,
# by default gathering retrieves all facts subsets
# all - gather all subsets
# network - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual - gather min and virtual facts
# facter - import facts from facter
# ohai - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
#gather_subset = all
# some hardware related facts are collected
# with a maximum timeout of 10 seconds. This
# option lets you increase or decrease that
# timeout to something more suitable for the
# environment.
# gather_timeout = 10
# Ansible facts are available inside the ansible_facts.* dictionary
# namespace. This setting maintains the behaviour which was the default prior
# to 2.5, duplicating these variables into the main namespace, each with a
# prefix of 'ansible_'.
# This variable is set to True by default for backwards compatibility. It
# will be changed to a default of 'False' in a future release.
# ansible_facts.
# inject_facts_as_vars = True
# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles
# uncomment this to disable SSH key host checking
#host_key_checking = False
# change the default callback, you can only have one 'stdout' type enabled at a time.
#stdout_callback = skippy
## Ansible ships with some plugins that require whitelisting,
## this is done to avoid running all of a type by default.
## These setting lists those that you want enabled for your system.
## Custom plugins should not need this unless plugin author specifies it.
# enable callback plugins, they can output to stdout but cannot be 'stdout' type.
#callback_whitelist = timer, mail
# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
# 1.x versions.
#task_includes_static = False
#handler_includes_static = False
# Controls if a missing handler for a notification event is an error or a warning
#error_on_missing_handler = True
# change this for alternative sudo implementations
#sudo_exe = sudo
# What flags to pass to sudo
# WARNING: leaving out the defaults might create unexpected behaviours
#sudo_flags = -H -S -n
# SSH timeout
#timeout = 10
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root
remote_user = pleb
# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log
# default module name for /usr/bin/ansible
#module_name = command
# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh
# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace
# by default, variables from roles will be visible in the global variable
# scope. To prevent this, the following option can be enabled, and only
# tasks and handlers within the role will see the variables there
#private_role_vars = yes
# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file
# If set, configures the path to the Vault password file as an alternative to
# specifying --vault-password-file on the command line.
#vault_password_file = /path/to/vault_password_file
# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# {file}, {host}, {uid}, and the timestamp can all interfere with idempotence
# in some situations so the default is a static string:
#ansible_managed = Ansible managed
# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True
# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args. This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed. If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
#display_args_to_stdout = False
# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False
# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# to disable these warnings, set the following value to False:
#system_warnings = True
# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# to disable these warnings, set the following value to False:
#deprecation_warnings = True
# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead. These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string. This will for example suggest using the git module
# instead of shelling out to the git command.
# command_warnings = False
# set plugin path directories here, separate with colons
#action_plugins = /usr/share/ansible/plugins/action
#become_plugins = /usr/share/ansible/plugins/become
#cache_plugins = /usr/share/ansible/plugins/cache
#callback_plugins = /usr/share/ansible/plugins/callback
#connection_plugins = /usr/share/ansible/plugins/connection
#lookup_plugins = /usr/share/ansible/plugins/lookup
#inventory_plugins = /usr/share/ansible/plugins/inventory
#vars_plugins = /usr/share/ansible/plugins/vars
#filter_plugins = /usr/share/ansible/plugins/filter
#test_plugins = /usr/share/ansible/plugins/test
#terminal_plugins = /usr/share/ansible/plugins/terminal
#strategy_plugins = /usr/share/ansible/plugins/strategy
# by default, ansible will use the 'linear' strategy but you may want to try
# another one
#strategy = free
# by default callbacks are not loaded for /bin/ansible, enable this if you
# want, for example, a notification or logging callback to also apply to
# /bin/ansible runs
#bin_ansible_callbacks = False
# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1
# set which cowsay stencil you'd like to use by default. When set to 'random',
# a random stencil will be selected for each task. The selection will be filtered
# against the `cow_whitelist` option below.
#cow_selection = default
#cow_selection = random
# when using the 'random' option for cowsay, stencils will be restricted to this list.
# it should be formatted as a comma-separated list with no spaces between names.
# NOTE: line continuations here are for formatting purposes only, as the INI parser
# in python does not support them.
#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
# hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
# stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www
# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1
# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored. This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
#fact_caching = memory
#This option tells Ansible where to cache facts. The value is plugin dependent.
#For the jsonfile plugin, it should be a path to a local directory.
#For the redis plugin, the value is a host:port:database triplet: fact_caching_connection = localhost:6379:0
#fact_caching_connection=/tmp
# retry files
# When a playbook fails a .retry file can be created that will be placed in ~/
# You can enable this feature by setting retry_files_enabled to True
# and you can change the location of the files by setting retry_files_save_path
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry
# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
#squash_actions = apk,apt,dnf,homebrew,pacman,pkgng,yum,zypper
# prevents logging of task data, off by default
#no_log = False
# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = False
# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine. This option is False by default for security. Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x. See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False
# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9
# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system. The compression types depend on having
# support compiled into both the controller's python and the client's python.
# The names should match with the python Zipfile compression types:
# * ZIP_STORED (no compression. available everywhere)
# * ZIP_DEFLATED (uses zlib, the default)
# These values may be set per host via the ansible_module_compression inventory
# variable
#module_compression = 'ZIP_DEFLATED'
# This controls the cutoff point (in bytes) on --diff for files
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576
# This controls how ansible handles multiple --tags and --skip-tags arguments
# on the CLI. If this is True then multiple arguments are merged together. If
# it is False, then the last specified argument is used and the others are ignored.
# This option will be removed in 2.8.
#merge_multiple_cli_flags = True
# Controls showing custom stats at the end, off by default
#show_custom_stats = True
# Controls which files to ignore when using a directory as inventory with
# possibly multiple sources (both static and dynamic)
#inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo
# This family of modules use an alternative execution path optimized for network appliances
# only update this setting if you know how this works, otherwise it can break module execution
#network_group_modules=eos, nxos, ios, iosxr, junos, vyos
# When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as
# a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain
# jinja2 templating language which will be run through the templating engine.
# ENABLING THIS COULD BE A SECURITY RISK
#allow_unsafe_lookups = False
# set default errors for all plays
#any_errors_fatal = False
[inventory]
# enable inventory plugins, default: 'host_list', 'script', 'auto', 'yaml', 'ini', 'toml'
#enable_plugins = host_list, virtualbox, yaml, constructed
# ignore these extensions when parsing a directory as inventory source
#ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry
# ignore files matching these patterns when parsing a directory as inventory source
#ignore_patterns=
# If 'true' unparsed inventory sources become fatal errors, they are warnings otherwise.
#unparsed_is_failed=False
[privilege_escalation]
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False
[paramiko_connection]
# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.
#record_host_keys=False
# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
# line to disable this behaviour.
#pty=False
# paramiko will default to looking for SSH keys initially when trying to
# authenticate to remote devices. This is a problem for some network devices
# that close the connection after a key failure. Uncomment this line to
# disable the Paramiko look for keys function
#look_for_keys = False
# When using persistent connections with Paramiko, the connection runs in a
# background process. If the host doesn't already have a valid SSH key, by
# default Ansible will prompt to add the host key. This will cause connections
# running in background processes to fail. Uncomment this line to have
# Paramiko automatically add host keys.
#host_key_auto_add = True
[connection]
# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it, -C controls compression use
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
# The base directory for the ControlPath sockets.
# This is the "%(directory)s" in the control_path option
#
# Example:
# control_path_dir = /tmp/.ansible/cp
#control_path_dir = ~/.ansible/cp
# The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname,
# port and username (empty string in the config). The hash mitigates a common problem users
# found with long hostnames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format.
# In those cases, a "too long for Unix domain socket" ssh error would occur.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path =
# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False
# Control the mechanism for transferring files (old)
# * smart = try sftp and then try scp [default]
# * True = use scp only
# * False = use sftp only
#scp_if_ssh = smart
# Control the mechanism for transferring files (new)
# If set, this will override the scp_if_ssh option
# * sftp = use sftp to transfer files
# * scp = use scp to transfer files
# * piped = use 'dd' over SSH to transfer files
# * smart = try sftp, scp, and piped, in that order [default]
#transfer_method = smart
# if False, sftp will not use batch mode to transfer files. This may cause some
# types of file transfer failures impossible to catch however, and should
# only be disabled if your sftp version has problems with batch mode
#sftp_batch_mode = False
# The -tt argument is passed to ssh when pipelining is not enabled because sudo
# requires a tty by default.
#usetty = True
# Number of times to retry an SSH connection to a host, in case of UNREACHABLE.
# For each retry attempt, there is an exponential backoff,
# so after the first attempt there is 1s wait, then 2s, 4s etc. up to 30s (max).
#retries = 3
[persistent_connection]
# Configures the persistent connection timeout value in seconds. This value is
# how long the persistent connection will remain idle before it is destroyed.
# If the connection doesn't receive a request before the timeout value
# expires, the connection is shutdown. The default value is 30 seconds.
#connect_timeout = 30
# The command timeout value defines the amount of time to wait for a command
# or RPC call before timing out. The value for the command timeout must
# be less than the value of the persistent connection idle timeout (connect_timeout)
# The default value is 30 second.
#command_timeout = 30
[accelerate]
#accelerate_port = 5099
#accelerate_timeout = 30
#accelerate_connect_timeout = 5.0
# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30
# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes
[selinux]
# file systems that require special treatment when dealing with security context
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs,9p,vfat
# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes
[colors]
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan
[diff]
# Always print diff when running ( same as always running with -D/--diff )
# always = no
# Set how many context lines to show in diff
# context = 3
```
#### Working with Rx and Tx Ring Parameters
##### Viewing the current configuration :
```bash
ansible all -m shell -a "sudo ethtool -g end0"
```
##### Configuring the Rx (Receive) Ring Parameter :
```bash
ansible all -m shell -a "sudo ethtool -G end0 rx 1024"
```
##### Configuring the Tx (Transmit) Ring Parameter :
```bash
ansible all -m shell -a "sudo ethtool -G end0 tx 1024"
```
##### Configuring the Rx (Receive) and Tx (Transmit) Ring Parameters :
```bash
ansible all -m shell -a "sudo ethtool -G end0 rx 1024 tx 1024"
```
##### Making changes permanent (on every node) :
```bash
sudo vim /etc/network/interfaces
```
###### Add :
```bash
post-up /sbin/ethtool -G end0 rx 1024 tx 1024
```
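Re-running the edit should not add the line twice. A hedged sketch of an idempotent append, using a local `interfaces` file as a stand-in for `/etc/network/interfaces`:

```shell
# Append the post-up line only if it is not already present (idempotent)
line='post-up /sbin/ethtool -G end0 rx 1024 tx 1024'
file=interfaces   # stand-in path; the real target is /etc/network/interfaces
touch "$file"
grep -qxF -- "$line" "$file" || echo "$line" >> "$file"
grep -qxF -- "$line" "$file" || echo "$line" >> "$file"   # second run is a no-op
```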
###### Ping :
```bash
vim ~/ansible/ping.yaml
```
```yaml
- name: Pinging Turing Pi (shells)
  hosts: all
  tasks:
    - name: Ping my hosts
      ansible.builtin.ping:
    - name: Print message
      ansible.builtin.debug:
        msg: Hello world
```
```bash
ansible-playbook -i ~/.ansible/inventory.yaml ~/ansible/ping.yaml
```
###### Update :
```bash
vim ~/ansible/update.yaml
```
```yaml
- name: Updating Bikini Bottom
  hosts: all
  become: true
  tasks:
    - name: Updating and upgrading apt packages
      become: true
      apt:
        upgrade: yes
        update_cache: yes
        autoremove: yes
        autoclean: yes
      register: result
    # - name: List installed and updated packages
    #   shell: grep -E "^$(date +%Y-%m-%d).+ (install|upgrade) " /var/log/dpkg.log | cut -d " " -f 3-5
    #   register: result
    - name: Show Output
      debug: msg="{{ result.stdout_lines }}"
```
```bash
ansible-playbook -i ~/.ansible/inventory.yaml ~/ansible/update.yaml
```

# Container migration
### Docker Container Migration
#### On old host (example : Vaultwarden)
```
docker ps
docker commit <container_id> vaultwarden
docker save vaultwarden | gzip > vaultwarden.tar.gz
scp vaultwarden.tar.gz pleb@<new_host>:/mnt/data/docker/
```
#### On new host
```
gunzip -c /mnt/data/docker/vaultwarden/vaultwarden.tar.gz | docker load
docker run -d --name=vaultwarden vaultwarden/server
```
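The same steps can be parameterised. A minimal dry-run sketch that only prints the commands to run, without executing anything (the container name, target host and destination path are hypothetical placeholders):

```shell
# Print the migration commands for a given container without executing them
migration_plan() {
    local name=$1 target=$2 dest=$3
    cat <<EOF
docker commit \$(docker ps -qf name=$name) $name
docker save $name | gzip > $name.tar.gz
scp $name.tar.gz $target:$dest
EOF
}
migration_plan vaultwarden pleb@newhost /mnt/data/docker
```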
# Moving docker containers between hosts
Script : [https://github.com/ricardobranco777/docker-volumes.sh](https://github.com/ricardobranco777/docker-volumes.sh "https://github.com/ricardobranco777/docker-volumes.sh")
##### Usage
```bash
docker-volumes.sh [-v|--verbose] CONTAINER [save|load] TARBALL
```
##### Example
```bash
# Stop the container
docker stop $CONTAINER
# Create a new image
docker commit $CONTAINER $CONTAINER
# Save image
docker save -o $CONTAINER.tar $CONTAINER
# Save the volumes (use ".tar.gz" if you want compression)
docker-volumes.sh $CONTAINER save $CONTAINER-volumes.tar
# Copy image and volumes to another host
scp $CONTAINER.tar $CONTAINER-volumes.tar $USER@$HOST:
# On the other host:
docker load -i $CONTAINER.tar
docker create --name $CONTAINER [<PREVIOUS CONTAINER OPTIONS>] $CONTAINER
# Load the volumes
docker-volumes.sh $CONTAINER load $CONTAINER-volumes.tar
# Start container
docker start $CONTAINER
```

# Update Docker Containers
```bash
sudo vim /mnt/data/docker/update_all_docker_containers.sh
```
#### 1. Automatic updates
```bash
#!/usr/bin/env bash
set -e

# Update all running containers that use the given base image
update_container() {
    local image=$1
    docker pull "$image"
    local updated_containers=0
    # Loop through all running containers
    for container in $(docker ps --format "{{.Names}}"); do
        local container_image
        container_image=$(docker inspect --format '{{.Config.Image}}' "$container")
        # Check if the current container's image matches the updated image
        if [[ "$container_image" == "$image" ]]; then
            local latest running
            latest=$(docker inspect --format "{{.Id}}" "$image")
            running=$(docker inspect --format "{{.Image}}" "$container")
            if [[ "$running" != "$latest" ]]; then
                echo "Upgrading $container"
                docker rm -f "$container"
                # -d detaches so the loop is not blocked by the new container
                docker run -d --name "$container" "$image"
                # Note: ((updated_containers++)) returns status 1 when the
                # value is 0, which would abort the script under 'set -e'
                updated_containers=$((updated_containers + 1))
            fi
        fi
    done
    if [[ $updated_containers -eq 0 ]]; then
        echo "No containers updated for $image"
    else
        echo "$updated_containers container(s) updated for $image"
    fi
}

# Main: check for updates to all images used by running containers
for image in $(docker ps --format '{{.Image}}' | sort | uniq); do
    echo "Checking updates for $image"
    update_container "$image"
done
echo "Container update check complete."
```
#### 2. Preserving configurations during automatic updates
```bash
#!/usr/bin/env bash
set -e

# Pull the latest image for a container and recreate it with its
# environment variables, volumes and network preserved
preserve_and_update_container() {
    local container=$1
    local image
    image=$(docker inspect --format '{{.Config.Image}}' "$container")
    # Pull the latest image version
    docker pull "$image"
    # Compare image IDs to determine if an update is needed
    local latest_image_id container_image_id
    latest_image_id=$(docker inspect --format '{{.Id}}' "$image")
    container_image_id=$(docker inspect --format '{{.Image}}' "$container")
    if [[ "$latest_image_id" != "$container_image_id" ]]; then
        echo "Updating $container..."
        # Capture current configurations
        local env_vars volumes network
        env_vars=$(docker inspect "$container" --format '{{range .Config.Env}}{{println .}}{{end}}')
        volumes=$(docker inspect "$container" --format '{{range .Mounts}}{{println .Source ":" .Destination}}{{end}}')
        network=$(docker network ls --filter id=$(docker inspect "$container" --format '{{.HostConfig.NetworkMode}}') --format '{{.Name}}')
        # Remove the outdated container
        docker rm -f "$container"
        # Recreate the container with the same configurations
        docker run -d --name "$container" \
            $(echo "$env_vars" | xargs -I {} echo --env '{}') \
            $(echo "$volumes" | xargs -I {} echo -v '{}') \
            --network="$network" "$image"
        echo "$container updated successfully."
    else
        echo "$container is already up to date."
    fi
}

# Iterate over all running containers
for container in $(docker ps --format "{{.Names}}"); do
    preserve_and_update_container "$container"
done
echo "Container update check complete while preserving existing container configurations."
```
```bash
sudo chmod +x /mnt/data/docker/update_all_docker_containers.sh
/mnt/data/docker/update_all_docker_containers.sh
```
#### 3. Crontab
##### Option 1
```bash
crontab -e
```
```
0 3 * * * /mnt/data/docker/update_all_docker_containers.sh
```
##### Option 2
```bash
crontab -e
```
```
0 0 * * * /usr/bin/docker system prune -af
```
##### Option 3
```bash
cd /etc/cron.daily
sudo vim docker-prune
```
```
#!/bin/bash
docker system prune -af --filter "until=$((30*24))h"
```
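The `$((30*24))` arithmetic expands to 720, so the filter the script passes to Docker reads `until=720h` (i.e. prune objects older than 30 days). A quick check of the expansion:

```shell
# Verify what the shell arithmetic inside the prune filter expands to
filter="until=$((30*24))h"
echo "$filter"
```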
```bash
sudo chmod +x /etc/cron.daily/docker-prune
```
#### Test
```bash
run-parts /etc/cron.daily
```

Divers/DockerHub.md
# Docker Hub
jumaxotl
w6sXvkEV43iYSZ
RKL73LZQRIBCBYTR7GQINCSTHSUVWCLI
[https://hub.docker.com/](https://hub.docker.com/)
Note : 21HV2QZGGJ8ZT
```
Access token (bob) : dckr_pat_lB2GFtARuoI0EbOHaRWnNERiQ3w
```

# Flatpak downgrade
##### View installed flatpaks
```
flatpak list --app
```
##### View runtimes only
```
flatpak list --runtime
```
##### Show info for the app to downgrade
```
flatpak remote-info --log flathub <Application ID>
```
> **Application ID example :** com.github.zocker_160.SyncThingy
```
sudo flatpak update --commit=<commit_code> <Application ID>
```
##### Mask app (disallow update)
```
sudo flatpak mask <Application ID>
```
##### Remove mask (allow update)
```
sudo flatpak mask --remove <Application ID>
```

Divers/OVH.md
# OVH
jme69@pm.me
L3yBoEwY9th9m*MS
NOKXYTQGJYXFLWI4
[https://www.ovh.com/manager/#/hub](https://www.ovh.com/manager/#/hub "https://www.ovh.com/manager/#/hub")
```
AFX2-AGML-OW1M-LL6L
DUJK-J8CH-GZT1-QM9U
A3HB-YTK0-XMDC-QJJA
JBDA-C51T-GN1T-0MRP
OCTT-0JEE-4FL5-SOU6
2KN8-5HE7-CMKV-1A6D
IT0W-IBMT-CRHW-UOZI
44WV-YWDJ-DT5Y-XQ5Y
GMD3-8OO8-TVZD-NL4E
Y5HD-PWDS-6TFT-EWCH
```
- Account number : mj2413081-ovh
- DynHost username : delmar.bzh-jme
- DynHost password : !8onUAZNn*ug8cMa
#### OVH letsencrypt credentials
```
dns_ovh_endpoint = ovh-eu
dns_ovh_application_key = 3f8bdfed17f848d8
dns_ovh_application_secret = 6946758d7515ecef108aeb286bf3c7d0
dns_ovh_consumer_key = 94b2ddf482d36421a33aa6b3aa515956
```

Divers/Outlook (admin).md
# Outlook (admin@delmar.bzh)
admin@delmar.bzh
qmPAzYgJWQqaY&zo3cXhjhVD%agcLZG5
[https://pro1.mail.ovh.net/owa/auth/logon.aspx?replaceCurrent=1&url=https%3A%2F%2Fpro1.mail.ovh.net%2Fowa%2F](https://pro1.mail.ovh.net/owa/auth/logon.aspx?replaceCurrent=1&url=https%3a%2f%2fpro1.mail.ovh.net%2fowa%2f)
- POP : pro1.mail.ovh.net:995 (SSL)
- IMAP : pro1.mail.ovh.net:993 (SSL)
- SMTP : pro1.mail.ovh.net:587 (SSL)

Divers/Proton VPN.md
# Proton VPN
#### VPN (host)
```bash
sudo apt install wireguard resolvconf -y
```
> Download the WireGuard configuration from [https://account.protonvpn.com](https://account.protonvpn.com) and move it to /etc/wireguard
```bash
sudo vim /etc/wireguard/wg0.conf
```
```conf
[Interface]
# Key for sandy
# Bouncing = 9
# NetShield = 1
# Moderate NAT = off
# NAT-PMP (Port Forwarding) = off
# VPN Accelerator = on
PrivateKey = ADq//ojZkfqe9XVtjzfiolEmbtXqU2AmlsZWGDTeqEk=
Address = 10.2.0.2/32
DNS = 10.2.0.1
[Peer]
# FR#167
PublicKey = zeGY3uQTDqTiaxp6vGqFzXck1TPNnzY+JZ2iNI2BrRU=
AllowedIPs = 0.0.0.0/0
Endpoint = 146.70.194.98:51820
```
```bash
sudo wg-quick up wg0
```
#### Enable at boot
```bash
sudo systemctl enable --now wg-quick@wg0
```
#### Check...
```
sudo wg
curl https://ip.m
```
---
## OpenVPN
```
sudo apt install -y openvpn openresolv
sudo wget "https://raw.githubusercontent.com/ProtonVPN/scripts/master/update-resolv-conf.sh" -O "/etc/openvpn/update-resolv-conf"
```
Download configuration file : [https://account.protonvpn.com/downloads](https://account.protonvpn.com/downloads)
Enter username and password : [https://account.protonvpn.com/account](https://account.protonvpn.com/account)
```bash
sudo vim /etc/openvpn/fr.protonvpn.tcp.conf
```
```conf
# ==============================================================================
# Copyright (c) 2023 Proton AG (Switzerland)
# Email: contact@protonvpn.com
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR # OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
# ==============================================================================
# If you are a paying user you can also enable the ProtonVPN ad blocker (NetShield) or Moderate NAT:
# Use: "u97gzcUx2gei2toi+f1" as username to enable anti-malware filtering
# Use: "u97gzcUx2gei2toi+f2" as username to additionally enable ad-blocking filtering
# Use: "u97gzcUx2gei2toi+nr" as username to enable Moderate NAT
# Note that you can combine the "+nr" suffix with other suffixes.
client
dev tun
proto tcp
remote 149.102.245.129 8443
remote 146.70.194.98 8443
remote 185.246.211.72 8443
remote 149.102.245.156 443
remote 146.70.194.34 7770
remote 146.70.194.34 8443
remote 146.70.194.114 7770
remote 149.102.245.129 443
remote 146.70.194.18 7770
remote 146.70.194.34 8443
remote 149.102.245.156 443
remote 149.102.245.156 8443
remote 146.70.194.66 8443
remote 146.70.194.82 8443
remote 146.70.194.18 443
remote 149.102.245.129 7770
remote 146.70.194.34 443
remote 146.70.194.34 7770
remote 146.70.194.98 443
remote 185.246.211.72 443
remote 146.70.194.2 8443
remote 149.102.245.156 7770
remote 146.70.194.98 8443
remote 149.102.245.129 7770
remote 146.70.194.34 7770
remote 149.102.245.156 7770
remote 149.102.245.129 443
remote 149.102.245.156 8443
remote 146.70.194.18 443
remote 149.102.245.129 8443
remote 146.70.194.98 443
remote 149.102.245.156 8443
remote 146.70.194.82 7770
remote 146.70.194.82 8443
remote 146.70.194.66 443
remote 149.102.245.129 7770
remote 146.70.194.98 7770
remote 149.102.245.156 443
remote 149.102.245.156 443
remote 146.70.194.34 443
remote 146.70.194.82 7770
remote 146.70.194.114 443
remote 146.70.194.34 443
remote 146.70.194.2 7770
remote 146.70.194.82 443
remote 146.70.194.98 7770
remote 146.70.194.82 443
remote 149.102.245.129 7770
remote 185.246.211.72 7770
remote 149.102.245.129 443
remote 146.70.194.18 7770
remote 149.102.245.129 443
remote 146.70.194.66 7770
remote 149.102.245.129 8443
remote 149.102.245.156 7770
remote 146.70.194.114 8443
remote 149.102.245.156 8443
remote 149.102.245.129 8443
remote 146.70.194.18 8443
remote 146.70.194.34 8443
remote 146.70.194.18 8443
remote 149.102.245.156 7770
remote 146.70.194.2 443
server-poll-timeout 20
remote-random
resolv-retry infinite
nobind
cipher AES-256-GCM
setenv CLIENT_CERT 0
tun-mtu 1500
mssfix 0
persist-key
persist-tun
reneg-sec 0
remote-cert-tls server
auth-user-pass .password
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
<ca>
-----BEGIN CERTIFICATE-----
MIIFnTCCA4WgAwIBAgIUCI574SM3Lyh47GyNl0WAOYrqb5QwDQYJKoZIhvcNAQEL
BQAwXjELMAkGA1UEBhMCQ0gxHzAdBgNVBAoMFlByb3RvbiBUZWNobm9sb2dpZXMg
QUcxEjAQBgNVBAsMCVByb3RvblZQTjEaMBgGA1UEAwwRUHJvdG9uVlBOIFJvb3Qg
Q0EwHhcNMTkxMDE3MDgwNjQxWhcNMzkxMDEyMDgwNjQxWjBeMQswCQYDVQQGEwJD
SDEfMB0GA1UECgwWUHJvdG9uIFRlY2hub2xvZ2llcyBBRzESMBAGA1UECwwJUHJv
dG9uVlBOMRowGAYDVQQDDBFQcm90b25WUE4gUm9vdCBDQTCCAiIwDQYJKoZIhvcN
AQEBBQADggIPADCCAgoCggIBAMkUT7zMUS5C+NjQ7YoGpVFlfbN9HFgG4JiKfHB8
QxnPPRgyTi0zVOAj1ImsRilauY8Ddm5dQtd8qcApoz6oCx5cFiiSQG2uyhS/59Zl
5wqIkw1o+CgwZgeWkq04lcrxhhfPgJZRFjrYVezy/Z2Ssd18s3/FFNQ+2iV1KC2K
z8eSPr50u+l9vEKsKiNGkJTdlWjoDKZM2C15i/h8Smi+PdJlx7WMTtYoVC1Fzq0r
aCPDQl18kspu11b6d8ECPWghKcDIIKuA0r0nGqF1GvH1AmbC/xUaNrKgz9AfioZL
MP/l22tVG3KKM1ku0eYHX7NzNHgkM2JKnBBannImQQBGTAcvvUlnfF3AHx4vzx7H
ahpBz8ebThx2uv+vzu8lCVEcKjQObGwLbAONJN2enug8hwSSZQv7tz7onDQWlYh0
El5fnkrEQGbukNnSyOqTwfobvBllIPzBqdO38eZFA0YTlH9plYjIjPjGl931lFAA
3G9t0x7nxAauLXN5QVp1yoF1tzXc5kN0SFAasM9VtVEOSMaGHLKhF+IMyVX8h5Iu
IRC8u5O672r7cHS+Dtx87LjxypqNhmbf1TWyLJSoh0qYhMr+BbO7+N6zKRIZPI5b
MXc8Be2pQwbSA4ZrDvSjFC9yDXmSuZTyVo6Bqi/KCUZeaXKof68oNxVYeGowNeQd
g/znAgMBAAGjUzBRMB0GA1UdDgQWBBR44WtTuEKCaPPUltYEHZoyhJo+4TAfBgNV
HSMEGDAWgBR44WtTuEKCaPPUltYEHZoyhJo+4TAPBgNVHRMBAf8EBTADAQH/MA0G
CSqGSIb3DQEBCwUAA4ICAQBBmzCQlHxOJ6izys3TVpaze+rUkA9GejgsB2DZXIcm
4Lj/SNzQsPlZRu4S0IZV253dbE1DoWlHanw5lnXwx8iU82X7jdm/5uZOwj2NqSqT
bTn0WLAC6khEKKe5bPTf18UOcwN82Le3AnkwcNAaBO5/TzFQVgnVedXr2g6rmpp9
gdedeEl9acB7xqfYfkrmijqYMm+xeG2rXaanch3HjweMDuZdT/Ub5G6oir0Kowft
lA1ytjXRg+X+yWymTpF/zGLYfSodWWjMKhpzZtRJZ+9B0pWXUyY7SuCj5T5SMIAu
x3NQQ46wSbHRolIlwh7zD7kBgkyLe7ByLvGFKa2Vw4PuWjqYwrRbFjb2+EKAwPu6
VTWz/QQTU8oJewGFipw94Bi61zuaPvF1qZCHgYhVojRy6KcqncX2Hx9hjfVxspBZ
DrVH6uofCmd99GmVu+qizybWQTrPaubfc/a2jJIbXc2bRQjYj/qmjE3hTlmO3k7V
EP6i8CLhEl+dX75aZw9StkqjdpIApYwX6XNDqVuGzfeTXXclk4N4aDPwPFM/Yo/e
KnvlNlKbljWdMYkfx8r37aOHpchH34cv0Jb5Im+1H07ywnshXNfUhRazOpubJRHn
bjDuBwWS1/Vwp5AJ+QHsPXhJdl3qHc1szJZVJb3VyAWvG/bWApKfFuZX18tiI4N0
EA==
-----END CERTIFICATE-----
</ca>
<tls-crypt>
-----BEGIN OpenVPN Static key V1-----
6acef03f62675b4b1bbd03e53b187727
423cea742242106cb2916a8a4c829756
3d22c7e5cef430b1103c6f66eb1fc5b3
75a672f158e2e2e936c3faa48b035a6d
e17beaac23b5f03b10b868d53d03521d
8ba115059da777a60cbfd7b2c9c57472
78a15b8f6e68a3ef7fd583ec9f398c8b
d4735dab40cbd1e3c62a822e97489186
c30a0b48c7c38ea32ceb056d3fa5a710
e10ccc7a0ddb363b08c3d2777a3395e1
0c0b6080f56309192ab5aacd4b45f55d
a61fc77af39bd81a19218a79762c3386
2df55785075f37d8c71dc8a42097ee43
344739a0dd48d03025b0450cf1fb5e8c
aeb893d9a96d1f15519bb3c4dcb40ee3
16672ea16c012664f8a9f11255518deb
-----END OpenVPN Static key V1-----
</tls-crypt>
```
```bash
sudo vim /etc/openvpn/.password
```
```txt
u97gzcUx2gei2toi
I78VQcnazNoOY8jzpqBwuw38HglJHK21
```
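OpenVPN reads this file as root, so it is worth locking it down (`sudo chmod 600 /etc/openvpn/.password`). A local sketch of the same two-line format (username, then password) with owner-only permissions, using placeholder credentials:

```bash
# Placeholder credentials; on the real host:
#   sudo chmod 600 /etc/openvpn/.password
umask 077                                          # new files: owner-only
printf '%s\n' '<openvpn_username>' '<openvpn_password>' > .password
```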
```bash
sudo systemctl enable --now openvpn@fr.protonvpn.tcp.service
```

Divers/Rsync.md Normal file

@@ -0,0 +1,55 @@
# Rsync
### target : krabs (nfs server)
#### tv-shows
```bash
rsync -r -v --remove-source-files --progress -e ssh /home/julien/Téléchargements/torrents/tv-shows/* pleb@krabs:/mnt/data/jellystack/media/tv-shows/ && find /home/julien/Téléchargements/torrents/tv-shows/* -type d -empty -delete
```
#### movies
```bash
rsync -r -v --remove-source-files --progress -e ssh /home/julien/Téléchargements/torrents/movies/* pleb@krabs:/mnt/data/jellystack/media/movies/ && find /home/julien/Téléchargements/torrents/movies/* -type d -empty -delete
```
### target : sandy
#### ebooks (uploads)
```bash
rsync -r -v --rsync-path='sudo rsync' --remove-source-files --progress -e ssh /home/julien/Téléchargements/torrents/ebooks/* pleb@sandy:/mnt/data/docker/calibre/uploads/ && find /home/julien/Téléchargements/torrents/ebooks/* -type d -empty -delete
```
### target : bob (caddy | websites)
#### nsns.delmar.bzh
```bash
rsync -r -v -p --rsync-path='sudo rsync' --progress -e ssh ~/Sync/delmar.bzh/nsns/public/* pleb@bob:/var/www/nsns/
```
#### www.delmar.bzh
```bash
rsync -r -v -p --rsync-path='sudo rsync' --progress -e ssh ~/Sync/delmar.bzh/hugo/public/* pleb@bob:/var/www/hugo/
```
#### julien.delmar.bzh
```bash
rsync -r -v -p --rsync-path='sudo rsync' --progress -e ssh ~/Sync/delmar.bzh/julien/public/* pleb@bob:/var/www/julien/
```
#### nds.delmar.bzh
```bash
rsync -r -v -p --rsync-path='sudo rsync' --progress -e ssh ~/Sync/delmar.bzh/nds/public/* pleb@bob:/var/www/nds/
```
### Copy from <folder> to another <host>
```bash
rsync -av --rsync-path='sudo rsync' --progress -e ssh /mnt/data/docker/<folder> pleb@<host>:/mnt/data/docker/
```
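The recurring pattern above — rsync with `--remove-source-files` followed by `find … -type d -empty -delete` — moves files and then prunes the directories rsync leaves behind. A local demo with throwaway paths:

```bash
# rsync --remove-source-files deletes transferred files but not their
# directories; the find afterwards prunes the now-empty subdirectories
mkdir -p src/show1 dst
echo ep1 > src/show1/ep1.mkv
rsync -r --remove-source-files src/ dst/
find src/* -type d -empty -delete
```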

Divers/SMTP.md Normal file

@@ -0,0 +1,25 @@
# SMTP
#### admin@delmar.bzh
```
Send mode : SMTP (TLS)
From address : admin@delmar.bzh
Auth : Login (required)
Server : pro1.mail.ovh.net
Port : 587
User : admin@delmar.bzh
Pwd : qmPAzYgJWQqaY&zo3cXhjhVD%agcLZG5
```
#### jme120269@gmail.com
```
Send mode : SMTP (SSL/TLS)
From address : jme69@pm.me
Auth : Login (required)
Server : smtp.gmail.com
Port : 465
User : jme120269@gmail.com
Pwd : eaimbyaogkzsigwh
```
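For scripted mail, these settings map directly onto an msmtp profile; a sketch for the first account, assuming msmtp is installed and the password is kept in a separate 600-mode file rather than inline (file name is illustrative):

```conf
# ~/.msmtprc (sketch; chmod 600)
defaults
auth           on
tls            on
account        ovh
host           pro1.mail.ovh.net
port           587
from           admin@delmar.bzh
user           admin@delmar.bzh
passwordeval   "cat ~/.msmtp-ovh-pass"
```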


@@ -0,0 +1,15 @@
# Snap (full uninstall)
```bash
snap list
sudo snap remove ...
sudo systemctl stop snapd
sudo systemctl disable snapd
sudo systemctl mask snapd
sudo apt purge snapd -y
sudo apt-mark hold snapd
rm -rf ~/snap
sudo rm -rf /snap
sudo rm -rf /var/snap
sudo rm -rf /var/lib/snapd
```

Divers/ffmpeg.md Normal file

@@ -0,0 +1,15 @@
# ffmpeg
#### Image sequence (png) to mp4 :
```bash
ffmpeg -framerate 24 -i image_%03d.png -vcodec mpeg4 video.mp4
```
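`mpeg4` works but is a dated codec; when broad player support matters, an H.264 variant is a reasonable swap — same input pattern, `yuv420p` because many players refuse other pixel formats. Sketch (the first command just synthesizes a test sequence):

```bash
# (optional) generate a short synthetic PNG sequence to test with
ffmpeg -y -f lavfi -i testsrc=duration=1:rate=24:size=320x240 image_%03d.png
# encode the PNG sequence with H.264; yuv420p for broad compatibility
ffmpeg -y -framerate 24 -i image_%03d.png -c:v libx264 -pix_fmt yuv420p video.mp4
```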
#### WebM loop image sequence (png) with music (mp3)
```bash
ffmpeg -stream_loop -1 -t 106 -framerate 10 -f image2 -i frame%04d.png -i dance.mp3 -c:v libvpx-vp9 -pix_fmt yuva420p dance.webm
```
> Total duration : 1'46"

Divers/tar.md Normal file

@@ -0,0 +1,7 @@
# tar
##### Compress folder with progress bar...
```bash
sudo tar -cPf - /mnt/data/cloud | pv -s "$(sudo du -sb /mnt/data/cloud | awk '{print $1}')" | gzip > data.tar.gz
```
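Worth checking the resulting archive before deleting anything. A local demo on a throwaway directory (tar/gzip only, pv omitted; swap in `data.tar.gz` from the command above for the real check):

```bash
# Build a small archive the same way (minus pv), then verify it
mkdir -p demo && echo hello > demo/file.txt
tar -czf data.tar.gz demo
gzip -t data.tar.gz            # integrity check of the gzip stream
tar -tzf data.tar.gz           # list members without extracting
```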