diff --git a/On host/AIServerSetup/01-Ubuntu Server Setup/01-BaseSetup-WebminAndDocker.md b/On host/AIServerSetup/01-Ubuntu Server Setup/01-BaseSetup-WebminAndDocker.md
new file mode 100644
index 0000000..c063d6c
--- /dev/null
+++ b/On host/AIServerSetup/01-Ubuntu Server Setup/01-BaseSetup-WebminAndDocker.md
@@ -0,0 +1,143 @@
+# Installing Webmin and Docker on Ubuntu
+
+This guide walks you through installing Webmin on Ubuntu and expanding logical volumes via Webmin’s interface. Additionally, it covers Docker installation on Ubuntu.
+
+---
+
+## Part 1: Installing Webmin on Ubuntu
+
+Webmin is a web-based interface for managing Unix-like systems, making tasks such as user management, server configuration, and software installation easier.
+
+### Step 1: Update Your System
+
+Before installing Webmin, update your system to ensure all packages are up to date.
+
+```bash
+sudo apt update && sudo apt upgrade -y
+```
+
+### Step 2: Add the Webmin Repository and Key
+
+To add the Webmin repository, download and run the setup script.
+
+```bash
+curl -o setup-repos.sh https://raw.githubusercontent.com/webmin/webmin/master/setup-repos.sh
+sudo sh setup-repos.sh
+```
+
+### Step 3: Install Webmin
+
+With the repository set up, install Webmin:
+
+```bash
+sudo apt-get install webmin --install-recommends
+```
+
+### Step 4: Access Webmin
+
+Once installed, Webmin runs on port 10000. You can access it by opening a browser and navigating to:
+
+```
+https://<server-ip>:10000
+```
+
+If you are using a firewall, allow traffic on port 10000:
+
+```bash
+sudo ufw allow 10000
+```
+
+You can now log in to Webmin using your system's root credentials or any account with sudo privileges.
+
+---
+
+## Part 2: Expanding a Logical Volume Using Webmin
+
+Expanding a logical volume through Webmin’s Logical Volume Management (LVM) interface is a simple process.
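For reference, the Webmin steps below correspond to a handful of LVM commands. The sketch only *prints* the equivalent CLI plan so you can review it before running anything; the volume group `vg0`, logical volume `root`, and device `/dev/sdb1` are hypothetical placeholders (check yours with `sudo vgs` and `sudo lvs`), and an ext4 filesystem is assumed:

```bash
# Print (not run) the CLI equivalent of the Webmin LVM expansion steps.
# $1 = volume group, $2 = logical volume, $3 = new partition/device
lvm_expand_plan() {
  echo "sudo pvcreate $3"                      # initialize the new partition as a physical volume
  echo "sudo vgextend $1 $3"                   # add it to the volume group (Step 2)
  echo "sudo lvextend -l +100%FREE /dev/$1/$2" # grow the logical volume into all free space (Step 3)
  echo "sudo resize2fs /dev/$1/$2"             # grow the ext4 filesystem to match (Step 4)
}

lvm_expand_plan vg0 root /dev/sdb1
```

Review the printed commands, then run them one by one — or simply follow the Webmin steps below.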
+ +### Step 1: Access Logical Volume Management + +Log in to Webmin and navigate to: + +**Hardware > Logical Volume Management** + +Here, you can manage physical volumes, volume groups, and logical volumes. + +### Step 2: Add a New Physical Volume + +If you've added a new disk or partition to your system, you need to allocate it to a volume group before expanding the logical volume. To do this: +1. Locate your volume group in the Logical Volume Management module. +2. Click **Add Physical Volume**. +3. Select the new partition or RAID device and click **Add to volume group**. This action increases the available space in the group. + +### Step 3: Resize the Logical Volume + +To extend a logical volume: +1. In the **Logical Volumes** section, locate the logical volume you wish to extend. +2. Select **Resize**. +3. Specify the additional space or use all available free space in the volume group. +4. Click **Apply** to resize the logical volume. + +### Step 4: Resize the Filesystem + +After resizing the logical volume, expand the filesystem to match: +1. Click on the logical volume to view its details. +2. For supported filesystems like ext2, ext3, or ext4, click **Resize Filesystem**. The filesystem will automatically adjust to the new size of the logical volume. + +--- + +## Part 3: Installing Docker on Ubuntu + +This section covers installing Docker on Ubuntu. 
+ +### Step 1: Remove Older Versions + +If you have previous versions of Docker installed, remove them: + +```bash +sudo apt remove docker docker-engine docker.io containerd runc +``` + +### Step 2: Add Docker's Official GPG Key and Repository + +Add Docker’s GPG key and repository to your system’s Apt sources: + +```bash +sudo apt-get update +sudo apt-get install ca-certificates curl +sudo install -m 0755 -d /etc/apt/keyrings +sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc +sudo chmod a+r /etc/apt/keyrings/docker.asc + +echo \ +"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ +$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ +sudo tee /etc/apt/sources.list.d/docker.list > /dev/null + +sudo apt-get update +``` + +### Step 3: Install Docker + +Now, install Docker: + +```bash +sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin +``` + +### Step 4: Post-Installation Steps + +To allow your user to run Docker commands without `sudo`, add your user to the Docker group: + +```bash +sudo usermod -aG docker $USER +newgrp docker +``` + +Test your Docker installation by running the following command: + +```bash +docker run hello-world +``` + +For more information, visit the official [Docker installation page](https://docs.docker.com/engine/install/ubuntu/). 
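If you script this setup, a quick read-only sanity check after installation can catch the common pitfalls — a missing package, or forgetting to log out and back in after `usermod`. A minimal sketch (the function name is an arbitrary choice, not part of Docker's tooling):

```bash
# Report whether the tools used in this guide are on PATH, and whether the
# current user is in the docker group (required to run docker without sudo).
docker_setup_report() {
  user="${USER:-$(id -un)}"
  for tool in docker curl; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: missing"
    fi
  done
  if id -nG "$user" 2>/dev/null | tr ' ' '\n' | grep -qx docker; then
    echo "group: $user is in the docker group"
  else
    echo "group: $user is not in the docker group yet (log out and back in after usermod)"
  fi
}

docker_setup_report
```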
\ No newline at end of file diff --git a/On host/AIServerSetup/01-Ubuntu Server Setup/02-NvidiaDriversSetup.md b/On host/AIServerSetup/01-Ubuntu Server Setup/02-NvidiaDriversSetup.md new file mode 100644 index 0000000..d5b2341 --- /dev/null +++ b/On host/AIServerSetup/01-Ubuntu Server Setup/02-NvidiaDriversSetup.md @@ -0,0 +1,182 @@ +# How to Install the Latest Version of NVIDIA CUDA on Ubuntu 22.04 LTS + +If you’re looking to unlock the power of your NVIDIA GPU for scientific computing, machine learning, or other parallel workloads, CUDA is essential. Follow this step-by-step guide to install the latest CUDA release on Ubuntu 22.04 LTS. + +## Prerequisites + +Before proceeding with installing CUDA, ensure your system meets the following requirements: + +- **Ubuntu 22.04 LTS** – This version is highly recommended for stability and compatibility. +- **NVIDIA GPU + Drivers** – CUDA requires having an NVIDIA GPU along with proprietary Nvidia drivers installed. + +To check for an NVIDIA GPU, open a terminal and run: +```bash +lspci | grep -i NVIDIA +``` +If an NVIDIA GPU is present, it will be listed. If not, consult NVIDIA’s documentation on installing the latest display drivers. + +## Step 1: Install Latest NVIDIA Drivers + +Install the latest NVIDIA drivers matched to your GPU model and CUDA version using Ubuntu’s built-in Additional Drivers utility: + +1. Open **Settings -> Software & Updates -> Additional Drivers** +2. Select the recommended driver under the NVIDIA heading +3. Click **Apply Changes** and **Reboot** + +Verify the driver installation by running: +```bash +nvidia-smi +``` +This should print details on your NVIDIA GPU and driver version. + +## Step 2: Add the CUDA Repository + +Add NVIDIA’s official repository to your system to install CUDA: + +1. Visit NVIDIA’s CUDA Download Page and select "Linux", "x86_64", "Ubuntu", "22.04", "deb(network)" +2. 
Copy the repository installation commands for Ubuntu 22.04: +```bash +wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb +sudo dpkg -i cuda-keyring_1.1-1_all.deb +sudo apt-get update +``` +Run these commands to download repository metadata and add the apt source. + +## Step 3: Install CUDA Toolkit + +Install CUDA using apt: +```bash +sudo apt-get -y install cuda +``` +Press **Y** to proceed and allow the latest supported version of the CUDA toolkit to install. + +## Step 4: Configure Environment Variables + +Update environment variables to recognize the CUDA compiler, tools, and libraries: + +Open `/etc/profile.d/cuda.sh` and add the following configuration: +```bash +export PATH=/usr/local/cuda/bin:$PATH +export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH +``` +Save changes and refresh environment variables: +```bash +source /etc/profile.d/cuda.sh +``` +Alternatively, reboot to load the updated environment variables. + +## Step 5: Verify Installation + +Validate the installation: + +1. Check the `nvcc` compiler version: + ```bash + nvcc --version + ``` + This should display details on the CUDA compile driver, including the installed version. + +2. Verify GPU details with NVIDIA SMI: + ```bash + nvidia-smi + ``` + +# Optional: Setting Up cuDNN with CUDA: A Comprehensive Guide + +This guide will walk you through downloading cuDNN from NVIDIA's official site, extracting it, copying the necessary files to the CUDA directory, and setting up environment variables for CUDA. + +## Step 1: Download cuDNN + +1. **Visit the NVIDIA cuDNN Archive**: + Navigate to the [NVIDIA cuDNN Archive](https://developer.nvidia.com/rdp/cudnn-archive). + +2. **Select the Version**: + Choose the appropriate version of cuDNN compatible with your CUDA version. For this guide, we'll assume you are downloading `cudnn-linux-x86_64-8.9.7.29_cuda12-archive`. + +3. 
**Download the Archive**: + Download the `tar.xz` file to your local machine. + +## Step 2: Extract cuDNN + +1. **Navigate to the Download Directory**: + Open a terminal and navigate to the directory where the archive was downloaded. + + ```bash + cd ~/Downloads + ``` + +2. **Extract the Archive**: + Use the `tar` command to extract the contents of the archive. + + ```bash + tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz + ``` + + This will create a directory named `cudnn-linux-x86_64-8.9.7.29_cuda12-archive`. + +## Step 3: Copy cuDNN Files to CUDA Directory + +1. **Navigate to the Extracted Directory**: + Move into the directory containing the extracted cuDNN files. + + ```bash + cd cudnn-linux-x86_64-8.9.7.29_cuda12-archive + ``` + +2. **Copy Header Files**: + Copy the header files to the CUDA include directory. + + ```bash + sudo cp include/cudnn*.h /usr/local/cuda-12.5/include/ + ``` + +3. **Copy Library Files**: + Copy the library files to the CUDA lib64 directory. + + ```bash + sudo cp lib/libcudnn* /usr/local/cuda-12.5/lib64/ + ``` + +4. **Set Correct Permissions**: + Ensure the copied files have the appropriate permissions. + + ```bash + sudo chmod a+r /usr/local/cuda-12.5/include/cudnn*.h /usr/local/cuda-12.5/lib64/libcudnn* + ``` + +## Step 4: Set Up Environment Variables + +1. **Open Your Shell Profile**: + Open your `.bashrc` or `.bash_profile` file in a text editor. + + ```bash + nano ~/.bashrc + ``` + +2. **Add CUDA to PATH and LD_LIBRARY_PATH**: + Add the following lines to set the environment variables for CUDA. This example assumes CUDA 12.5. + + ```bash + export PATH=/usr/local/cuda-12.5/bin:$PATH + export LD_LIBRARY_PATH=/usr/local/cuda-12.5/lib64:$LD_LIBRARY_PATH + ``` + +3. **Apply the Changes**: + Source the file to apply the changes immediately. + + ```bash + source ~/.bashrc + ``` + +## Verification + +1. **Check CUDA Installation**: + Verify that CUDA is correctly set up by running: + + ```bash + nvcc --version + ``` + +2. 
**Check cuDNN Installation**: + Optionally, you can compile and run a sample program to ensure cuDNN is working correctly. + +By following these steps, you will have downloaded and installed cuDNN, integrated it into your CUDA setup, and configured your environment variables for smooth operation. This ensures that applications requiring both CUDA and cuDNN can run without issues. \ No newline at end of file diff --git a/On host/AIServerSetup/01-Ubuntu Server Setup/03-PowerLimitNvidiaGPU.md b/On host/AIServerSetup/01-Ubuntu Server Setup/03-PowerLimitNvidiaGPU.md new file mode 100644 index 0000000..9a5a73d --- /dev/null +++ b/On host/AIServerSetup/01-Ubuntu Server Setup/03-PowerLimitNvidiaGPU.md @@ -0,0 +1,85 @@ +# OPTIONAL: Setting NVIDIA GPU Power Limit at System Startup + +## Overview + +This guide explains how to set the power limit for NVIDIA GPUs at system startup using a systemd service. This ensures the power limit setting is persistent across reboots. + +## Steps + +### 1. Create and Configure the Service File + +1. Open a terminal and create a new systemd service file: + + ```bash + sudo nano /etc/systemd/system/nvidia-power-limit.service + ``` + +2. Add the following content to the file, replacing `270` with the desired power limit (e.g., 270 watts for your GPUs): + + - For Dual GPU Setup: + + ```ini + [Unit] + Description=Set NVIDIA GPU Power Limit + + [Service] + Type=oneshot + ExecStart=/usr/bin/nvidia-smi -i 0 -pl 270 + ExecStart=/usr/bin/nvidia-smi -i 1 -pl 270 + + [Install] + WantedBy=multi-user.target + ``` + + - For Quad GPU Setup: + + ```ini + [Unit] + Description=Set NVIDIA GPU Power Limit + + [Service] + Type=oneshot + ExecStart=/usr/bin/nvidia-smi -i 0 -pl 270 + ExecStart=/usr/bin/nvidia-smi -i 1 -pl 270 + ExecStart=/usr/bin/nvidia-smi -i 2 -pl 270 + ExecStart=/usr/bin/nvidia-smi -i 3 -pl 270 + + [Install] + WantedBy=multi-user.target + ``` + + Save and close the file. + +### 2. Apply and Enable the Service + +1. 
Reload the systemd manager configuration: + + ```bash + sudo systemctl daemon-reload + ``` + +2. Enable the service to ensure it runs at startup: + + ```bash + sudo systemctl enable nvidia-power-limit.service + ``` + +### 3. (Optional) Start the Service Immediately + +To apply the power limit immediately without rebooting: + +```bash +sudo systemctl start nvidia-power-limit.service +``` + +## Verification + +Check the power limits using `nvidia-smi`: + +```bash +nvidia-smi -q -d POWER +``` + +Look for the "Power Management" section to verify the new power limits. + +By following this guide, you can ensure that your NVIDIA GPUs have a power limit set at every system startup, providing consistent and controlled power usage for your GPUs. \ No newline at end of file diff --git a/On host/AIServerSetup/02-Ollama And OpenWebUI/01-OllamaAndOpenWebUISetup.md b/On host/AIServerSetup/02-Ollama And OpenWebUI/01-OllamaAndOpenWebUISetup.md new file mode 100644 index 0000000..b0b4c09 --- /dev/null +++ b/On host/AIServerSetup/02-Ollama And OpenWebUI/01-OllamaAndOpenWebUISetup.md @@ -0,0 +1,103 @@ +# Ollama & OpenWebUI Docker Setup + +## Ollama with Nvidia GPU + +Ollama makes it easy to get up and running with large language models locally. +To run Ollama using an Nvidia GPU, follow these steps: + +### Step 1: Install the NVIDIA Container Toolkit + +#### Install with Apt + +1. **Configure the repository**: + + ```bash + curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \ + | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg + curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \ + | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \ + | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list + sudo apt-get update + ``` + +2. 
**Install the NVIDIA Container Toolkit packages**: + + ```bash + sudo apt-get install -y nvidia-container-toolkit + ``` + +#### Install with Yum or Dnf + +1. **Configure the repository**: + + ```bash + curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \ + | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo + ``` + +2. **Install the NVIDIA Container Toolkit packages**: + + ```bash + sudo yum install -y nvidia-container-toolkit + ``` + +### Step 2: Configure Docker to Use Nvidia Driver + +```bash +sudo nvidia-ctk runtime configure --runtime=docker +sudo systemctl restart docker +``` + +### Step 3: Start the Container + +```bash +docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --restart always --name ollama ollama/ollama +``` + +## Running Multiple Instances with Specific GPUs + +You can run multiple instances of the Ollama server and assign specific GPUs to each instance. In my server, I have 4 Nvidia 3090 GPUs, which I use as described below: + +### Ollama Server for GPUs 0 and 1 + +```bash +docker run -d --gpus '"device=0,1"' -v ollama:/root/.ollama -p 11435:11434 --restart always --name ollama1 --network ollama-network ollama/ollama +``` + +### Ollama Server for GPUs 2 and 3 + +```bash +docker run -d --gpus '"device=2,3"' -v ollama:/root/.ollama -p 11436:11434 --restart always --name ollama2 --network ollama-network ollama/ollama +``` + +## Running Models Locally + +Once the container is up and running, you can execute models using: + +```bash +docker exec -it ollama ollama run llama3.1 +``` + +```bash +docker exec -it ollama ollama run llama3.1:70b +``` + +```bash +docker exec -it ollama ollama run qwen2.5-coder:1.5b +``` + +```bash +docker exec -it ollama ollama run deepseek-v2 +``` + +### Try Different Models + +Explore more models available in the [Ollama library](https://github.com/ollama/ollama). 
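The two per-GPU-pair commands above differ only in GPU set, host port, and container name, so when running several instances it can help to generate the command from those three values. A sketch: the shared `ollama` volume and the `ollama-network` Docker network are taken from the commands above (note the network must already exist — `docker network create ollama-network` creates it if needed):

```bash
# Print the docker run command for an Ollama instance pinned to specific GPUs.
# $1 = container name, $2 = comma-separated GPU ids, $3 = host port
ollama_run_cmd() {
  printf "docker run -d --gpus '\"device=%s\"' -v ollama:/root/.ollama -p %s:11434 --restart always --name %s --network ollama-network ollama/ollama\n" \
    "$2" "$3" "$1"
}

ollama_run_cmd ollama1 0,1 11435
ollama_run_cmd ollama2 2,3 11436
```

Printing the command first keeps the pattern easy to audit; piping the output to `sh` would execute it.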
+ +## OpenWebUI Installation + +To install and run OpenWebUI, use the following command: + +```bash +docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main +``` \ No newline at end of file diff --git a/On host/AIServerSetup/02-Ollama And OpenWebUI/02-Update-Ollama-OpenwebUI.md b/On host/AIServerSetup/02-Ollama And OpenWebUI/02-Update-Ollama-OpenwebUI.md new file mode 100644 index 0000000..1e5f9a9 --- /dev/null +++ b/On host/AIServerSetup/02-Ollama And OpenWebUI/02-Update-Ollama-OpenwebUI.md @@ -0,0 +1,48 @@ +### Wiki: Updating Docker Containers for Ollama and OpenWebUI + +This guide explains the steps to update Docker containers for **Ollama** and **OpenWebUI**. Follow the instructions below to stop, remove, pull new images, and run the updated containers. + +--- + +## Ollama + +### Steps to Update + +1. **Stop Existing Containers** +2. **Remove Existing Containers** +3. **Pull the Latest Ollama Image** +4. **Run Updated Containers** + +For GPU devices 0 and 1: + +```bash +docker stop ollama +docker rm ollama +docker pull ollama/ollama +docker run -d --gpus '"device=0,1"' -v ollama:/root/.ollama -p 11434:11434 --restart always --name ollama -e OLLAMA_KEEP_ALIVE=1h ollama/ollama +``` + +For NVIDIA jetson/cpu + +```bash +docker stop ollama +docker rm ollama +docker pull ollama/ollama +docker run -d -v ollama:/root/.ollama -p 11434:11434 --restart always --name ollama -e OLLAMA_KEEP_ALIVE=1h ollama/ollama +``` +--- + +## OpenWebUI + +```bash +docker stop open-webui +docker rm open-webui +docker pull ghcr.io/open-webui/open-webui:main +docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main +``` + +--- + +### Notes +- Make sure to adjust GPU allocation or port numbers as necessary for your setup. 
+- The `OLLAMA_KEEP_ALIVE` environment variable is set to `1h` so that a loaded model stays in memory for an hour after the last request instead of being unloaded immediately.
diff --git a/On host/AIServerSetup/03-SearXNG/SearXNGSetup.md b/On host/AIServerSetup/03-SearXNG/SearXNGSetup.md
new file mode 100644
index 0000000..c5fcea4
--- /dev/null
+++ b/On host/AIServerSetup/03-SearXNG/SearXNGSetup.md
@@ -0,0 +1,58 @@
+# Running SearXNG with Custom Settings in Docker
+
+## Overview
+
+This guide walks you through the steps to run a SearXNG instance in Docker using a custom `settings.yml` configuration file. This setup is ideal for users who want to customize their SearXNG instance without needing to rebuild the Docker image every time they make a change.
+
+## Prerequisites
+
+- **Docker**: Ensure Docker is installed on your machine. Verify the installation by running `docker --version`.
+- **Git**: Only needed if you plan to clone the SearXNG repository and build a custom image.
+
+## Steps
+
+### 1. Use the Official Image or Clone the SearXNG Repository
+
+You can pull the official image directly from Docker Hub:
+
+```bash
+docker pull docker.io/searxng/searxng:latest
+```
+
+### 2. Customize `settings.yml`
+
+Place your custom `settings.yml` file in the directory of your choice. Ensure that this file is configured according to your needs, including enabling JSON responses if required.
+
+### 3. Run the SearXNG Docker Container
+
+Run the Docker container using your custom `settings.yml` file, from the directory that contains it (the `./settings.yml` bind mount is relative to your current directory). Choose the appropriate command based on whether you are using the official image or a custom build.
+
+#### For the Official Image:
+
+```bash
+docker run -d -p 4000:8080 --restart always --name searxng -v ./settings.yml:/etc/searxng/settings.yml searxng/searxng:latest
+```
+
+#### Command Breakdown:
+- `-d`: Runs the container in detached mode.
+- `-p 4000:8080`: Maps port 8080 in the container to port 4000 on your host machine.
+- `-v ./settings.yml:/etc/searxng/settings.yml`: Mounts the custom `settings.yml` file into the container.
+- `searxng/searxng:latest` or `searxng/searxng`: The Docker image being used.
+
+### 4. Access SearXNG
+
+Once the container is running, you can access your SearXNG instance by navigating to `http://<server-ip>:4000` in your web browser.
+
+### 5. Testing JSON Output
+
+To verify that the JSON output is correctly configured, you can use `curl` or a similar tool (quote the URL so the shell does not treat `&` as a background operator):
+
+```bash
+curl "http://<server-ip>:4000/search?q=python&format=json"
+```
+
+This should return search results in JSON format.
+
+### 6. Configuration URL for OpenWebUI
+
+http://<server-ip>:4000/search?q=<query>
\ No newline at end of file
diff --git a/On host/AIServerSetup/03-SearXNG/settings.yml b/On host/AIServerSetup/03-SearXNG/settings.yml
new file mode 100644
index 0000000..da973c1
--- /dev/null
+++ b/On host/AIServerSetup/03-SearXNG/settings.yml
@@ -0,0 +1,2356 @@
+general:
+  # Debug mode, only for development. Is overwritten by ${SEARXNG_DEBUG}
+  debug: false
+  # displayed name
+  instance_name: 'searxng'
+  # For example: https://example.com/privacy
+  privacypolicy_url: false
+  # use true to use your own donation page written in searx/info/en/donate.md
+  # use false to disable the donation link
+  donation_url: false
+  # mailto:contact@example.com
+  contact_url: false
+  # record stats
+  enable_metrics: true
+
+brand:
+  new_issue_url: https://github.com/searxng/searxng/issues/new
+  docs_url: https://docs.searxng.org/
+  public_instances: https://searx.space
+  wiki_url: https://github.com/searxng/searxng/wiki
+  issue_url: https://github.com/searxng/searxng/issues
+  # custom:
+  #   maintainer: "Jon Doe"
+  #   # Custom entries in the footer: [title]: [link]
+  #   links:
+  #     Uptime: https://uptime.searxng.org/history/darmarit-org
+  #     About: "https://searxng.org"
+
+search:
+  # Filter results. 0: None, 1: Moderate, 2: Strict
+  safe_search: 0
+  # Existing autocomplete backends: "dbpedia", "duckduckgo", "google", "yandex", "mwmbl",
+  # "seznam", "startpage", "stract", "swisscows", "qwant", "wikipedia" - leave blank to turn it off
+  # by default.
+ autocomplete: 'google' + # minimun characters to type before autocompleter starts + autocomplete_min: 4 + # Default search language - leave blank to detect from browser information or + # use codes from 'languages.py' + default_lang: 'auto' + # max_page: 0 # if engine supports paging, 0 means unlimited numbers of pages + # Available languages + # languages: + # - all + # - en + # - en-US + # - de + # - it-IT + # - fr + # - fr-BE + # ban time in seconds after engine errors + ban_time_on_fail: 5 + # max ban time in seconds after engine errors + max_ban_time_on_fail: 120 + suspended_times: + # Engine suspension time after error (in seconds; set to 0 to disable) + # For error "Access denied" and "HTTP error [402, 403]" + SearxEngineAccessDenied: 86400 + # For error "CAPTCHA" + SearxEngineCaptcha: 86400 + # For error "Too many request" and "HTTP error 429" + SearxEngineTooManyRequests: 3600 + # Cloudflare CAPTCHA + cf_SearxEngineCaptcha: 1296000 + cf_SearxEngineAccessDenied: 86400 + # ReCAPTCHA + recaptcha_SearxEngineCaptcha: 604800 + + # remove format to deny access, use lower case. + # formats: [html, csv, json, rss] + formats: + - html + - json + +server: + # Is overwritten by ${SEARXNG_PORT} and ${SEARXNG_BIND_ADDRESS} + port: 8888 + bind_address: '127.0.0.1' + # public URL of the instance, to ensure correct inbound links. Is overwritten + # by ${SEARXNG_URL}. + base_url: / # "http://example.com/location" + limiter: false # rate limit the number of request on the instance, block some bots + public_instance: false # enable features designed only for public instances + + # If your instance owns a /etc/searxng/settings.yml file, then set the following + # values there. 
+ + secret_key: 'a2fb23f1b02e6ee83875b09826990de0f6bd908b6638e8c10277d415f6ab852b' # Is overwritten by ${SEARXNG_SECRET} + # Proxying image results through searx + image_proxy: false + # 1.0 and 1.1 are supported + http_protocol_version: '1.0' + # POST queries are more secure as they don't show up in history but may cause + # problems when using Firefox containers + method: 'POST' + default_http_headers: + X-Content-Type-Options: nosniff + X-Download-Options: noopen + X-Robots-Tag: noindex, nofollow + Referrer-Policy: no-referrer + +redis: + # URL to connect redis database. Is overwritten by ${SEARXNG_REDIS_URL}. + # https://docs.searxng.org/admin/settings/settings_redis.html#settings-redis + url: false + +ui: + # Custom static path - leave it blank if you didn't change + static_path: '' + static_use_hash: false + # Custom templates path - leave it blank if you didn't change + templates_path: '' + # query_in_title: When true, the result page's titles contains the query + # it decreases the privacy, since the browser can records the page titles. + query_in_title: false + # infinite_scroll: When true, automatically loads the next page when scrolling to bottom of the current page. + infinite_scroll: false + # ui theme + default_theme: simple + # center the results ? + center_alignment: false + # URL prefix of the internet archive, don't forget trailing slash (if needed). + # cache_url: "https://webcache.googleusercontent.com/search?q=cache:" + # Default interface locale - leave blank to detect from browser information or + # use codes from the 'locales' config section + default_locale: '' + # Open result links in a new tab by default + # results_on_new_tab: false + theme_args: + # style of simple theme: auto, light, dark + simple_style: auto + # Perform search immediately if a category selected. + # Disable to select multiple categories at once and start the search manually. 
+ search_on_category_select: true + # Hotkeys: default or vim + hotkeys: default + +# Lock arbitrary settings on the preferences page. To find the ID of the user +# setting you want to lock, check the ID of the form on the page "preferences". +# +# preferences: +# lock: +# - language +# - autocomplete +# - method +# - query_in_title + +# searx supports result proxification using an external service: +# https://github.com/asciimoo/morty uncomment below section if you have running +# morty proxy the key is base64 encoded (keep the !!binary notation) +# Note: since commit af77ec3, morty accepts a base64 encoded key. +# +# result_proxy: +# url: http://127.0.0.1:3000/ +# # the key is a base64 encoded string, the YAML !!binary prefix is optional +# key: !!binary "your_morty_proxy_key" +# # [true|false] enable the "proxy" button next to each result +# proxify_results: true + +# communication with search engines +# +outgoing: + # default timeout in seconds, can be override by engine + request_timeout: 3.0 + # the maximum timeout in seconds + # max_request_timeout: 10.0 + # suffix of searx_useragent, could contain information like an email address + # to the administrator + useragent_suffix: '' + # The maximum number of concurrent connections that may be established. + pool_connections: 100 + # Allow the connection pool to maintain keep-alive connections below this + # point. 
+ pool_maxsize: 20 + # See https://www.python-httpx.org/http2/ + enable_http2: true + # uncomment below section if you want to use a custom server certificate + # see https://www.python-httpx.org/advanced/#changing-the-verification-defaults + # and https://www.python-httpx.org/compatibility/#ssl-configuration + # verify: ~/.mitmproxy/mitmproxy-ca-cert.cer + # + # uncomment below section if you want to use a proxyq see: SOCKS proxies + # https://2.python-requests.org/en/latest/user/advanced/#proxies + # are also supported: see + # https://2.python-requests.org/en/latest/user/advanced/#socks + # + # proxies: + # all://: + # - http://proxy1:8080 + # - http://proxy2:8080 + # + # using_tor_proxy: true + # + # Extra seconds to add in order to account for the time taken by the proxy + # + # extra_proxy_timeout: 10.0 + # + # uncomment below section only if you have more than one network interface + # which can be the source of outgoing search requests + # + # source_ips: + # - 1.1.1.1 + # - 1.1.1.2 + # - fe80::/126 + +# External plugin configuration, for more details see +# https://docs.searxng.org/dev/plugins.html +# +# plugins: +# - plugin1 +# - plugin2 +# - ... + +# Comment or un-comment plugin to activate / deactivate by default. +# +# enabled_plugins: +# # these plugins are enabled if nothing is configured .. +# - 'Hash plugin' +# - 'Self Information' +# - 'Tracker URL remover' +# - 'Ahmia blacklist' # activation depends on outgoing.using_tor_proxy +# # these plugins are disabled if nothing is configured .. +# - 'Hostname replace' # see hostname_replace configuration below +# - 'Open Access DOI rewrite' +# - 'Tor check plugin' +# # Read the docs before activate: auto-detection of the language could be +# # detrimental to users expectations / users can activate the plugin in the +# # preferences if they want. 
+# - 'Autodetect search language' + +# Configuration of the "Hostname replace" plugin: +# +# hostname_replace: +# '(.*\.)?youtube\.com$': 'invidious.example.com' +# '(.*\.)?youtu\.be$': 'invidious.example.com' +# '(.*\.)?youtube-noocookie\.com$': 'yotter.example.com' +# '(.*\.)?reddit\.com$': 'teddit.example.com' +# '(.*\.)?redd\.it$': 'teddit.example.com' +# '(www\.)?twitter\.com$': 'nitter.example.com' +# # to remove matching host names from result list, set value to false +# 'spam\.example\.com': false + +checker: + # disable checker when in debug mode + off_when_debug: true + + # use "scheduling: false" to disable scheduling + # scheduling: interval or int + + # to activate the scheduler: + # * uncomment "scheduling" section + # * add "cache2 = name=searxngcache,items=2000,blocks=2000,blocksize=4096,bitmap=1" + # to your uwsgi.ini + + # scheduling: + # start_after: [300, 1800] # delay to start the first run of the checker + # every: [86400, 90000] # how often the checker runs + + # additional tests: only for the YAML anchors (see the engines section) + # + additional_tests: + rosebud: &test_rosebud + matrix: + query: rosebud + lang: en + result_container: + - not_empty + - ['one_title_contains', 'citizen kane'] + test: + - unique_results + + android: &test_android + matrix: + query: ['android'] + lang: ['en', 'de', 'fr', 'zh-CN'] + result_container: + - not_empty + - ['one_title_contains', 'google'] + test: + - unique_results + + # tests: only for the YAML anchors (see the engines section) + tests: + infobox: &tests_infobox + infobox: + matrix: + query: ['linux', 'new york', 'bbc'] + result_container: + - has_infobox + +categories_as_tabs: + general: + images: + videos: + news: + map: + music: + it: + science: + files: + social media: + +engines: + - name: 9gag + engine: 9gag + shortcut: 9g + disabled: true + + - name: annas archive + engine: annas_archive + disabled: true + shortcut: aa + + # - name: annas articles + # engine: annas_archive + # shortcut: aaa + 
# # https://docs.searxng.org/dev/engines/online/annas_archive.html + # aa_content: 'journal_article' # book_any .. magazine, standards_document + # aa_ext: 'pdf' # pdf, epub, .. + # aa_sort: 'newest' # newest, oldest, largest, smallest + + - name: apk mirror + engine: apkmirror + timeout: 4.0 + shortcut: apkm + disabled: true + + - name: apple app store + engine: apple_app_store + shortcut: aps + disabled: true + + # Requires Tor + - name: ahmia + engine: ahmia + categories: onions + enable_http: true + shortcut: ah + + - name: anaconda + engine: xpath + paging: true + first_page_num: 0 + search_url: https://anaconda.org/search?q={query}&page={pageno} + results_xpath: //tbody/tr + url_xpath: ./td/h5/a[last()]/@href + title_xpath: ./td/h5 + content_xpath: ./td[h5]/text() + categories: it + timeout: 6.0 + shortcut: conda + disabled: true + + - name: arch linux wiki + engine: archlinux + shortcut: al + + - name: artic + engine: artic + shortcut: arc + timeout: 4.0 + + - name: arxiv + engine: arxiv + shortcut: arx + timeout: 4.0 + + - name: ask + engine: ask + shortcut: ask + disabled: true + + # tmp suspended: dh key too small + # - name: base + # engine: base + # shortcut: bs + + - name: bandcamp + engine: bandcamp + shortcut: bc + categories: music + + - name: wikipedia + engine: wikipedia + shortcut: wp + # add "list" to the array to get results in the results list + display_type: ['infobox'] + base_url: 'https://{language}.wikipedia.org/' + categories: [general] + + - name: bilibili + engine: bilibili + shortcut: bil + disabled: true + + - name: bing + engine: bing + shortcut: bi + disabled: true + + - name: bing images + engine: bing_images + shortcut: bii + + - name: bing news + engine: bing_news + shortcut: bin + + - name: bing videos + engine: bing_videos + shortcut: biv + + - name: bitbucket + engine: xpath + paging: true + search_url: https://bitbucket.org/repo/all/{pageno}?name={query} + url_xpath: 
+      //article[@class="repo-summary"]//a[@class="repo-link"]/@href
+    title_xpath: //article[@class="repo-summary"]//a[@class="repo-link"]
+    content_xpath: //article[@class="repo-summary"]/p
+    categories: [it, repos]
+    timeout: 4.0
+    disabled: true
+    shortcut: bb
+    about:
+      website: https://bitbucket.org/
+      wikidata_id: Q2493781
+      official_api_documentation: https://developer.atlassian.com/bitbucket
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: bpb
+    engine: bpb
+    shortcut: bpb
+    disabled: true
+
+  - name: btdigg
+    engine: btdigg
+    shortcut: bt
+    disabled: true
+
+  - name: ccc-tv
+    engine: xpath
+    paging: false
+    search_url: https://media.ccc.de/search/?q={query}
+    url_xpath: //div[@class="caption"]/h3/a/@href
+    title_xpath: //div[@class="caption"]/h3/a/text()
+    content_xpath: //div[@class="caption"]/h4/@title
+    categories: videos
+    disabled: true
+    shortcut: c3tv
+    about:
+      website: https://media.ccc.de/
+      wikidata_id: Q80729951
+      official_api_documentation: https://github.com/voc/voctoweb
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+    # We don't set language: de here because media.ccc.de is not just
+    # for a German audience. It contains many English videos and many
+    # German videos have English subtitles.
+
+  - name: openverse
+    engine: openverse
+    categories: images
+    shortcut: opv
+
+  - name: chefkoch
+    engine: chefkoch
+    shortcut: chef
+    # to show premium or plus results too:
+    # skip_premium: false
+
+  # - name: core.ac.uk
+  #   engine: core
+  #   categories: science
+  #   shortcut: cor
+  #   # get your API key from: https://core.ac.uk/api-keys/register/
+  #   api_key: 'unset'
+
+  - name: crossref
+    engine: crossref
+    shortcut: cr
+    timeout: 30
+    disabled: true
+
+  - name: crowdview
+    engine: json_engine
+    shortcut: cv
+    categories: general
+    paging: false
+    search_url: https://crowdview-next-js.onrender.com/api/search-v3?query={query}
+    results_query: results
+    url_query: link
+    title_query: title
+    content_query: snippet
+    disabled: true
+    about:
+      website: https://crowdview.ai/
+
+  - name: yep
+    engine: yep
+    shortcut: yep
+    categories: general
+    search_type: web
+    disabled: true
+
+  - name: yep images
+    engine: yep
+    shortcut: yepi
+    categories: images
+    search_type: images
+    disabled: true
+
+  - name: yep news
+    engine: yep
+    shortcut: yepn
+    categories: news
+    search_type: news
+    disabled: true
+
+  - name: curlie
+    engine: xpath
+    shortcut: cl
+    categories: general
+    disabled: true
+    paging: true
+    lang_all: ''
+    search_url: https://curlie.org/search?q={query}&lang={lang}&start={pageno}&stime=92452189
+    page_size: 20
+    results_xpath: //div[@id="site-list-content"]/div[@class="site-item"]
+    url_xpath: ./div[@class="title-and-desc"]/a/@href
+    title_xpath: ./div[@class="title-and-desc"]/a/div
+    content_xpath: ./div[@class="title-and-desc"]/div[@class="site-descr"]
+    about:
+      website: https://curlie.org/
+      wikidata_id: Q60715723
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: currency
+    engine: currency_convert
+    categories: general
+    shortcut: cc
+
+  - name: bahnhof
+    engine: json_engine
+    search_url: https://www.bahnhof.de/api/stations/search/{query}
+    url_prefix: https://www.bahnhof.de/
+    url_query: slug
+    title_query: name
+    content_query: state
+    shortcut: bf
+    disabled: true
+    about:
+      website: https://www.bahn.de
+      wikidata_id: Q22811603
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+    language: de
+
+  - name: deezer
+    engine: deezer
+    shortcut: dz
+    disabled: true
+
+  - name: destatis
+    engine: destatis
+    shortcut: destat
+    disabled: true
+
+  - name: deviantart
+    engine: deviantart
+    shortcut: da
+    timeout: 3.0
+
+  - name: ddg definitions
+    engine: duckduckgo_definitions
+    shortcut: ddd
+    weight: 2
+    disabled: true
+    tests: *tests_infobox
+
+  # cloudflare protected
+  # - name: digbt
+  #   engine: digbt
+  #   shortcut: dbt
+  #   timeout: 6.0
+  #   disabled: true
+
+  - name: docker hub
+    engine: docker_hub
+    shortcut: dh
+    categories: [it, packages]
+
+  - name: erowid
+    engine: xpath
+    paging: true
+    first_page_num: 0
+    page_size: 30
+    search_url: https://www.erowid.org/search.php?q={query}&s={pageno}
+    url_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/@href
+    title_xpath: //dl[@class="results-list"]/dt[@class="result-title"]/a/text()
+    content_xpath: //dl[@class="results-list"]/dd[@class="result-details"]
+    categories: []
+    shortcut: ew
+    disabled: true
+    about:
+      website: https://www.erowid.org/
+      wikidata_id: Q1430691
+      official_api_documentation:
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  # - name: elasticsearch
+  #   shortcut: es
+  #   engine: elasticsearch
+  #   base_url: http://localhost:9200
+  #   username: elastic
+  #   password: changeme
+  #   index: my-index
+  #   # available options: match, simple_query_string, term, terms, custom
+  #   query_type: match
+  #   # if query_type is set to custom, provide your query here
+  #   #custom_query_json: {"query":{"match_all": {}}}
+  #   #show_metadata: false
+  #   disabled: true
+
+  - name: wikidata
+    engine: wikidata
+    shortcut: wd
+    timeout: 3.0
+    weight: 2
+    # add "list" to the array to get results in the results list
+    display_type: ['infobox']
+    tests: *tests_infobox
+    categories: [general]
+
+  - name: duckduckgo
+    engine: duckduckgo
+    shortcut: ddg
+
+  - name: duckduckgo images
+    engine: duckduckgo_extra
+    categories: [images, web]
+    ddg_category: images
+    shortcut: ddi
+    disabled: true
+
+  - name: duckduckgo videos
+    engine: duckduckgo_extra
+    categories: [videos, web]
+    ddg_category: videos
+    shortcut: ddv
+    disabled: true
+
+  - name: duckduckgo news
+    engine: duckduckgo_extra
+    categories: [news, web]
+    ddg_category: news
+    shortcut: ddn
+    disabled: true
+
+  - name: duckduckgo weather
+    engine: duckduckgo_weather
+    shortcut: ddw
+    disabled: true
+
+  - name: apple maps
+    engine: apple_maps
+    shortcut: apm
+    disabled: true
+    timeout: 5.0
+
+  - name: emojipedia
+    engine: emojipedia
+    timeout: 4.0
+    shortcut: em
+    disabled: true
+
+  - name: tineye
+    engine: tineye
+    shortcut: tin
+    timeout: 9.0
+    disabled: true
+
+  - name: etymonline
+    engine: xpath
+    paging: true
+    search_url: https://etymonline.com/search?page={pageno}&q={query}
+    url_xpath: //a[contains(@class, "word__name--")]/@href
+    title_xpath: //a[contains(@class, "word__name--")]
+    content_xpath: //section[contains(@class, "word__defination")]
+    first_page_num: 1
+    shortcut: et
+    categories: [dictionaries]
+    about:
+      website: https://www.etymonline.com/
+      wikidata_id: Q1188617
+      official_api_documentation:
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  # - name: ebay
+  #   engine: ebay
+  #   shortcut: eb
+  #   base_url: 'https://www.ebay.com'
+  #   disabled: true
+  #   timeout: 5
+
+  - name: 1x
+    engine: www1x
+    shortcut: 1x
+    timeout: 3.0
+    disabled: true
+
+  - name: fdroid
+    engine: fdroid
+    shortcut: fd
+    disabled: true
+
+  - name: flickr
+    categories: images
+    shortcut: fl
+    # You can use the engine using the official stable API, but you need an API
+    # key, see: https://www.flickr.com/services/apps/create/
+    # engine: flickr
+    # api_key: 'apikey' # required!
+    # Or you can use the html non-stable engine, activated by default
+    engine: flickr_noapi
+
+  - name: free software directory
+    engine: mediawiki
+    shortcut: fsd
+    categories: [it, software wikis]
+    base_url: https://directory.fsf.org/
+    search_type: title
+    timeout: 5.0
+    disabled: true
+    about:
+      website: https://directory.fsf.org/
+      wikidata_id: Q2470288
+
+  # - name: freesound
+  #   engine: freesound
+  #   shortcut: fnd
+  #   disabled: true
+  #   timeout: 15.0
+  #   # API key required, see: https://freesound.org/docs/api/overview.html
+  #   api_key: MyAPIkey
+
+  - name: frinkiac
+    engine: frinkiac
+    shortcut: frk
+    disabled: true
+
+  - name: fyyd
+    engine: fyyd
+    shortcut: fy
+    timeout: 8.0
+    disabled: true
+
+  - name: genius
+    engine: genius
+    shortcut: gen
+
+  - name: gentoo
+    engine: gentoo
+    shortcut: ge
+    timeout: 10.0
+
+  - name: gitlab
+    engine: json_engine
+    paging: true
+    search_url: https://gitlab.com/api/v4/projects?search={query}&page={pageno}
+    url_query: web_url
+    title_query: name_with_namespace
+    content_query: description
+    page_size: 20
+    categories: [it, repos]
+    shortcut: gl
+    timeout: 10.0
+    disabled: true
+    about:
+      website: https://about.gitlab.com/
+      wikidata_id: Q16639197
+      official_api_documentation: https://docs.gitlab.com/ee/api/
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: github
+    engine: github
+    shortcut: gh
+
+  # This is a Gitea service. If you would like to use a different instance,
+  # change codeberg.org to the URL of the desired Gitea host. Or you can create a
+  # new engine by copying this and changing the name, shortcut and search_url.
+
+  - name: codeberg
+    engine: json_engine
+    search_url: https://codeberg.org/api/v1/repos/search?q={query}&limit=10
+    url_query: html_url
+    title_query: name
+    content_query: description
+    categories: [it, repos]
+    shortcut: cb
+    disabled: true
+    about:
+      website: https://codeberg.org/
+      wikidata_id:
+      official_api_documentation: https://try.gitea.io/api/swagger
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: goodreads
+    engine: goodreads
+    shortcut: good
+    timeout: 4.0
+    disabled: true
+
+  - name: google
+    engine: google
+    shortcut: go
+    # additional_tests:
+    #   android: *test_android
+
+  - name: google images
+    engine: google_images
+    shortcut: goi
+    # additional_tests:
+    #   android: *test_android
+    #   dali:
+    #     matrix:
+    #       query: ['Dali Christ']
+    #       lang: ['en', 'de', 'fr', 'zh-CN']
+    #     result_container:
+    #       - ['one_title_contains', 'Salvador']
+
+  - name: google news
+    engine: google_news
+    shortcut: gon
+    # additional_tests:
+    #   android: *test_android
+
+  - name: google videos
+    engine: google_videos
+    shortcut: gov
+    # additional_tests:
+    #   android: *test_android
+
+  - name: google scholar
+    engine: google_scholar
+    shortcut: gos
+
+  - name: google play apps
+    engine: google_play
+    categories: [files, apps]
+    shortcut: gpa
+    play_categ: apps
+    disabled: true
+
+  - name: google play movies
+    engine: google_play
+    categories: videos
+    shortcut: gpm
+    play_categ: movies
+    disabled: true
+
+  - name: material icons
+    engine: material_icons
+    categories: images
+    shortcut: mi
+    disabled: true
+
+  - name: gpodder
+    engine: json_engine
+    shortcut: gpod
+    timeout: 4.0
+    paging: false
+    search_url: https://gpodder.net/search.json?q={query}
+    url_query: url
+    title_query: title
+    content_query: description
+    page_size: 19
+    categories: music
+    disabled: true
+    about:
+      website: https://gpodder.net
+      wikidata_id: Q3093354
+      official_api_documentation: https://gpoddernet.readthedocs.io/en/latest/api/
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: habrahabr
+    engine: xpath
+    paging: true
+    search_url: https://habr.com/en/search/page{pageno}/?q={query}
+    results_xpath: //article[contains(@class, "tm-articles-list__item")]
+    url_xpath: .//a[@class="tm-title__link"]/@href
+    title_xpath: .//a[@class="tm-title__link"]
+    content_xpath: .//div[contains(@class, "article-formatted-body")]
+    categories: it
+    timeout: 4.0
+    disabled: true
+    shortcut: habr
+    about:
+      website: https://habr.com/
+      wikidata_id: Q4494434
+      official_api_documentation: https://habr.com/en/docs/help/api/
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: hackernews
+    engine: hackernews
+    shortcut: hn
+    disabled: true
+
+  - name: hoogle
+    engine: xpath
+    paging: true
+    search_url: https://hoogle.haskell.org/?hoogle={query}&start={pageno}
+    results_xpath: '//div[@class="result"]'
+    title_xpath: './/div[@class="ans"]//a'
+    url_xpath: './/div[@class="ans"]//a/@href'
+    content_xpath: './/div[@class="from"]'
+    page_size: 20
+    categories: [it, packages]
+    shortcut: ho
+    about:
+      website: https://hoogle.haskell.org/
+      wikidata_id: Q34010
+      official_api_documentation: https://hackage.haskell.org/api
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: imdb
+    engine: imdb
+    shortcut: imdb
+    timeout: 6.0
+    disabled: true
+
+  - name: imgur
+    engine: imgur
+    shortcut: img
+    disabled: true
+
+  - name: ina
+    engine: ina
+    shortcut: in
+    timeout: 6.0
+    disabled: true
+
+  - name: invidious
+    engine: invidious
+    # Instances will be selected randomly, see https://api.invidious.io/ for
+    # instances that are stable (good uptime) and close to you.
+    base_url:
+      - https://invidious.io.lol
+      - https://invidious.fdn.fr
+      - https://yt.artemislena.eu
+      - https://invidious.tiekoetter.com
+      - https://invidious.flokinet.to
+      - https://vid.puffyan.us
+      - https://invidious.privacydev.net
+      - https://inv.tux.pizza
+    shortcut: iv
+    timeout: 3.0
+    disabled: true
+
+  - name: jisho
+    engine: jisho
+    shortcut: js
+    timeout: 3.0
+    disabled: true
+
+  - name: kickass
+    engine: kickass
+    base_url:
+      - https://kickasstorrents.to
+      - https://kickasstorrents.cr
+      - https://kickasstorrent.cr
+      - https://kickass.sx
+      - https://kat.am
+    shortcut: kc
+    timeout: 4.0
+
+  - name: lemmy communities
+    engine: lemmy
+    lemmy_type: Communities
+    shortcut: leco
+
+  - name: lemmy users
+    engine: lemmy
+    network: lemmy communities
+    lemmy_type: Users
+    shortcut: leus
+
+  - name: lemmy posts
+    engine: lemmy
+    network: lemmy communities
+    lemmy_type: Posts
+    shortcut: lepo
+
+  - name: lemmy comments
+    engine: lemmy
+    network: lemmy communities
+    lemmy_type: Comments
+    shortcut: lecom
+
+  - name: library genesis
+    engine: xpath
+    # search_url: https://libgen.is/search.php?req={query}
+    search_url: https://libgen.rs/search.php?req={query}
+    url_xpath: //a[contains(@href,"book/index.php?md5")]/@href
+    title_xpath: //a[contains(@href,"book/")]/text()[1]
+    content_xpath: //td/a[1][contains(@href,"=author")]/text()
+    categories: files
+    timeout: 7.0
+    disabled: true
+    shortcut: lg
+    about:
+      website: https://libgen.fun/
+      wikidata_id: Q22017206
+      official_api_documentation:
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: z-library
+    engine: zlibrary
+    shortcut: zlib
+    categories: files
+    timeout: 7.0
+
+  - name: library of congress
+    engine: loc
+    shortcut: loc
+    categories: images
+
+  - name: lingva
+    engine: lingva
+    shortcut: lv
+    # set lingva instance in url, by default it will use the official instance
+    # url: https://lingva.thedaviddelta.com
+
+  - name: lobste.rs
+    engine: xpath
+    search_url:
+      https://lobste.rs/search?utf8=%E2%9C%93&q={query}&what=stories&order=relevance
+    results_xpath: //li[contains(@class, "story")]
+    url_xpath: .//a[@class="u-url"]/@href
+    title_xpath: .//a[@class="u-url"]
+    content_xpath: .//a[@class="domain"]
+    categories: it
+    shortcut: lo
+    timeout: 5.0
+    disabled: true
+    about:
+      website: https://lobste.rs/
+      wikidata_id: Q60762874
+      official_api_documentation:
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: mastodon users
+    engine: mastodon
+    mastodon_type: accounts
+    base_url: https://mastodon.social
+    shortcut: mau
+
+  - name: mastodon hashtags
+    engine: mastodon
+    mastodon_type: hashtags
+    base_url: https://mastodon.social
+    shortcut: mah
+
+  # - name: matrixrooms
+  #   engine: mrs
+  #   # https://docs.searxng.org/dev/engines/online/mrs.html
+  #   # base_url: https://mrs-api-host
+  #   shortcut: mtrx
+  #   disabled: true
+
+  - name: mdn
+    shortcut: mdn
+    engine: json_engine
+    categories: [it]
+    paging: true
+    search_url: https://developer.mozilla.org/api/v1/search?q={query}&page={pageno}
+    results_query: documents
+    url_query: mdn_url
+    url_prefix: https://developer.mozilla.org
+    title_query: title
+    content_query: summary
+    about:
+      website: https://developer.mozilla.org
+      wikidata_id: Q3273508
+      official_api_documentation: null
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: metacpan
+    engine: metacpan
+    shortcut: cpan
+    disabled: true
+    number_of_results: 20
+
+  # - name: meilisearch
+  #   engine: meilisearch
+  #   shortcut: mes
+  #   enable_http: true
+  #   base_url: http://localhost:7700
+  #   index: my-index
+
+  - name: mixcloud
+    engine: mixcloud
+    shortcut: mc
+
+  # MongoDB engine
+  # Required dependency: pymongo
+  # - name: mymongo
+  #   engine: mongodb
+  #   shortcut: md
+  #   exact_match_only: false
+  #   host: '127.0.0.1'
+  #   port: 27017
+  #   enable_http: true
+  #   results_per_page: 20
+  #   database: 'business'
+  #   collection: 'reviews' # name of the db collection
+  #   key: 'name' #
+  #     key in the collection to search for
+
+  - name: mozhi
+    engine: mozhi
+    base_url:
+      - https://mozhi.aryak.me
+      - https://translate.bus-hit.me
+      - https://nyc1.mz.ggtyler.dev
+    # mozhi_engine: google - see https://mozhi.aryak.me for supported engines
+    timeout: 4.0
+    shortcut: mz
+    disabled: true
+
+  - name: mwmbl
+    engine: mwmbl
+    # api_url: https://api.mwmbl.org
+    shortcut: mwm
+    disabled: true
+
+  - name: npm
+    engine: json_engine
+    paging: true
+    first_page_num: 0
+    search_url: https://api.npms.io/v2/search?q={query}&size=25&from={pageno}
+    results_query: results
+    url_query: package/links/npm
+    title_query: package/name
+    content_query: package/description
+    page_size: 25
+    categories: [it, packages]
+    disabled: true
+    timeout: 5.0
+    shortcut: npm
+    about:
+      website: https://npms.io/
+      wikidata_id: Q7067518
+      official_api_documentation: https://api-docs.npms.io/
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: nyaa
+    engine: nyaa
+    shortcut: nt
+    disabled: true
+
+  - name: mankier
+    engine: json_engine
+    search_url: https://www.mankier.com/api/v2/mans/?q={query}
+    results_query: results
+    url_query: url
+    title_query: name
+    content_query: description
+    categories: it
+    shortcut: man
+    about:
+      website: https://www.mankier.com/
+      official_api_documentation: https://www.mankier.com/api
+      use_official_api: true
+      require_api_key: false
+      results: JSON
+
+  - name: odysee
+    engine: odysee
+    shortcut: od
+    disabled: true
+
+  - name: openairedatasets
+    engine: json_engine
+    paging: true
+    search_url: https://api.openaire.eu/search/datasets?format=json&page={pageno}&size=10&title={query}
+    results_query: response/results/result
+    url_query: metadata/oaf:entity/oaf:result/children/instance/webresource/url/$
+    title_query: metadata/oaf:entity/oaf:result/title/$
+    content_query: metadata/oaf:entity/oaf:result/description/$
+    content_html_to_text: true
+    categories: 'science'
+    shortcut: oad
+    timeout: 5.0
+    about:
+      website:
+        https://www.openaire.eu/
+      wikidata_id: Q25106053
+      official_api_documentation: https://api.openaire.eu/
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  - name: openairepublications
+    engine: json_engine
+    paging: true
+    search_url: https://api.openaire.eu/search/publications?format=json&page={pageno}&size=10&title={query}
+    results_query: response/results/result
+    url_query: metadata/oaf:entity/oaf:result/children/instance/webresource/url/$
+    title_query: metadata/oaf:entity/oaf:result/title/$
+    content_query: metadata/oaf:entity/oaf:result/description/$
+    content_html_to_text: true
+    categories: science
+    shortcut: oap
+    timeout: 5.0
+    about:
+      website: https://www.openaire.eu/
+      wikidata_id: Q25106053
+      official_api_documentation: https://api.openaire.eu/
+      use_official_api: false
+      require_api_key: false
+      results: JSON
+
+  # - name: opensemanticsearch
+  #   engine: opensemantic
+  #   shortcut: oss
+  #   base_url: 'http://localhost:8983/solr/opensemanticsearch/'
+
+  - name: openstreetmap
+    engine: openstreetmap
+    shortcut: osm
+
+  - name: openrepos
+    engine: xpath
+    paging: true
+    search_url: https://openrepos.net/search/node/{query}?page={pageno}
+    url_xpath: //li[@class="search-result"]//h3[@class="title"]/a/@href
+    title_xpath: //li[@class="search-result"]//h3[@class="title"]/a
+    content_xpath: //li[@class="search-result"]//div[@class="search-snippet-info"]//p[@class="search-snippet"]
+    categories: files
+    timeout: 4.0
+    disabled: true
+    shortcut: or
+    about:
+      website: https://openrepos.net/
+      wikidata_id:
+      official_api_documentation:
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: packagist
+    engine: json_engine
+    paging: true
+    search_url: https://packagist.org/search.json?q={query}&page={pageno}
+    results_query: results
+    url_query: url
+    title_query: name
+    content_query: description
+    categories: [it, packages]
+    disabled: true
+    timeout: 5.0
+    shortcut: pack
+    about:
+      website: https://packagist.org
+      wikidata_id: Q108311377
+      official_api_documentation: https://packagist.org/apidoc
+      use_official_api: true
+      require_api_key: false
+      results: JSON
+
+  - name: pdbe
+    engine: pdbe
+    shortcut: pdb
+    # Hide obsolete PDB entries. Default is not to hide obsolete structures
+    # hide_obsolete: false
+
+  - name: photon
+    engine: photon
+    shortcut: ph
+
+  - name: pinterest
+    engine: pinterest
+    shortcut: pin
+
+  - name: piped
+    engine: piped
+    shortcut: ppd
+    categories: videos
+    piped_filter: videos
+    timeout: 3.0
+
+    # URL to use as link and for embeds
+    frontend_url: https://srv.piped.video
+    # Instance will be selected randomly, for more see https://piped-instances.kavin.rocks/
+    backend_url:
+      - https://pipedapi.kavin.rocks
+      - https://pipedapi-libre.kavin.rocks
+      - https://pipedapi.adminforge.de
+
+  - name: piped.music
+    engine: piped
+    network: piped
+    shortcut: ppdm
+    categories: music
+    piped_filter: music_songs
+    timeout: 3.0
+
+  - name: piratebay
+    engine: piratebay
+    shortcut: tpb
+    # You may need to change this URL to a proxy if piratebay is blocked in your
+    # country
+    url: https://thepiratebay.org/
+    timeout: 3.0
+
+  - name: podcastindex
+    engine: podcastindex
+    shortcut: podcast
+
+  # Required dependency: psycopg2
+  # - name: postgresql
+  #   engine: postgresql
+  #   database: postgres
+  #   username: postgres
+  #   password: postgres
+  #   limit: 10
+  #   query_str: 'SELECT * from my_table WHERE my_column = %(query)s'
+  #   shortcut: psql
+
+  - name: presearch
+    engine: presearch
+    search_type: search
+    categories: [general, web]
+    shortcut: ps
+    timeout: 4.0
+    disabled: true
+
+  - name: presearch images
+    engine: presearch
+    network: presearch
+    search_type: images
+    categories: [images, web]
+    timeout: 4.0
+    shortcut: psimg
+    disabled: true
+
+  - name: presearch videos
+    engine: presearch
+    network: presearch
+    search_type: videos
+    categories: [general, web]
+    timeout: 4.0
+    shortcut: psvid
+    disabled: true
+
+  - name: presearch news
+    engine: presearch
+    network: presearch
+    search_type: news
+    categories: [news, web]
+    timeout: 4.0
+    shortcut: psnews
+    disabled: true
+
+  - name: pub.dev
+    engine: xpath
+    shortcut: pd
+    search_url: https://pub.dev/packages?q={query}&page={pageno}
+    paging: true
+    results_xpath: //div[contains(@class,"packages-item")]
+    url_xpath: ./div/h3/a/@href
+    title_xpath: ./div/h3/a
+    content_xpath: ./div/div/div[contains(@class,"packages-description")]/span
+    categories: [packages, it]
+    timeout: 3.0
+    disabled: true
+    first_page_num: 1
+    about:
+      website: https://pub.dev/
+      official_api_documentation: https://pub.dev/help/api
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: pubmed
+    engine: pubmed
+    shortcut: pub
+    timeout: 3.0
+
+  - name: pypi
+    shortcut: pypi
+    engine: xpath
+    paging: true
+    search_url: https://pypi.org/search/?q={query}&page={pageno}
+    results_xpath: /html/body/main/div/div/div/form/div/ul/li/a[@class="package-snippet"]
+    url_xpath: ./@href
+    title_xpath: ./h3/span[@class="package-snippet__name"]
+    content_xpath: ./p
+    suggestion_xpath: /html/body/main/div/div/div/form/div/div[@class="callout-block"]/p/span/a[@class="link"]
+    first_page_num: 1
+    categories: [it, packages]
+    about:
+      website: https://pypi.org
+      wikidata_id: Q2984686
+      official_api_documentation: https://warehouse.readthedocs.io/api-reference/index.html
+      use_official_api: false
+      require_api_key: false
+      results: HTML
+
+  - name: qwant
+    qwant_categ: web
+    engine: qwant
+    shortcut: qw
+    categories: [general, web]
+    additional_tests:
+      rosebud: *test_rosebud
+
+  - name: qwant news
+    qwant_categ: news
+    engine: qwant
+    shortcut: qwn
+    categories: news
+    network: qwant
+
+  - name: qwant images
+    qwant_categ: images
+    engine: qwant
+    shortcut: qwi
+    categories: [images, web]
+    network: qwant
+
+  - name: qwant videos
+    qwant_categ: videos
+    engine: qwant
+    shortcut: qwv
+    categories: [videos, web]
+    network: qwant
+
+  # - name: library
+  #   engine: recoll
+  #   shortcut: lib
+  #   base_url: 'https://recoll.example.org/'
+  #   search_dir: ''
+  #   mount_prefix: /export
+  #   dl_prefix: 'https://download.example.org'
+  #   timeout: 30.0
+  #   categories: files
+  #   disabled: true
+
+  # - name: recoll library reference
+  #   engine: recoll
+  #   base_url: 'https://recoll.example.org/'
+  #   search_dir: reference
+  #   mount_prefix: /export
+  #   dl_prefix: 'https://download.example.org'
+  #   shortcut: libr
+  #   timeout: 30.0
+  #   categories: files
+  #   disabled: true
+
+  - name: radio browser
+    engine: radio_browser
+    shortcut: rb
+
+  - name: reddit
+    engine: reddit
+    shortcut: re
+    page_size: 25
+
+  - name: rottentomatoes
+    engine: rottentomatoes
+    shortcut: rt
+    disabled: true
+
+  # Required dependency: redis
+  # - name: myredis
+  #   shortcut: rds
+  #   engine: redis_server
+  #   exact_match_only: false
+  #   host: '127.0.0.1'
+  #   port: 6379
+  #   enable_http: true
+  #   password: ''
+  #   db: 0
+
+  # tmp suspended: bad certificate
+  # - name: scanr structures
+  #   shortcut: scs
+  #   engine: scanr_structures
+  #   disabled: true
+
+  - name: sepiasearch
+    engine: sepiasearch
+    shortcut: sep
+
+  - name: soundcloud
+    engine: soundcloud
+    shortcut: sc
+
+  - name: stackoverflow
+    engine: stackexchange
+    shortcut: st
+    api_site: 'stackoverflow'
+    categories: [it, q&a]
+
+  - name: askubuntu
+    engine: stackexchange
+    shortcut: ubuntu
+    api_site: 'askubuntu'
+    categories: [it, q&a]
+
+  - name: internetarchivescholar
+    engine: internet_archive_scholar
+    shortcut: ias
+    timeout: 5.0
+
+  - name: superuser
+    engine: stackexchange
+    shortcut: su
+    api_site: 'superuser'
+    categories: [it, q&a]
+
+  - name: searchcode code
+    engine: searchcode_code
+    shortcut: scc
+    disabled: true
+
+  # - name: searx
+  #   engine: searx_engine
+  #   shortcut: se
+  #   instance_urls:
+  #     - http://127.0.0.1:8888/
+  #     - ...
+  #   disabled: true
+
+  - name: semantic scholar
+    engine: semantic_scholar
+    disabled: true
+    shortcut: se
+
+  # Spotify needs API credentials
+  # - name: spotify
+  #   engine: spotify
+  #   shortcut: stf
+  #   api_client_id: *******
+  #   api_client_secret: *******
+
+  # - name: solr
+  #   engine: solr
+  #   shortcut: slr
+  #   base_url: http://localhost:8983
+  #   collection: collection_name
+  #   sort: '' # sorting: asc or desc
+  #   field_list: '' # comma separated list of field names to display on the UI
+  #   default_fields: '' # default field to query
+  #   query_fields: '' # query fields
+  #   enable_http: true
+
+  # - name: springer nature
+  #   engine: springer
+  #   # get your API key from: https://dev.springernature.com/signup
+  #   # working API key, for test & debug: "a69685087d07eca9f13db62f65b8f601"
+  #   api_key: 'unset'
+  #   shortcut: springer
+  #   timeout: 15.0
+
+  - name: startpage
+    engine: startpage
+    shortcut: sp
+    timeout: 6.0
+    disabled: true
+    additional_tests:
+      rosebud: *test_rosebud
+
+  - name: tokyotoshokan
+    engine: tokyotoshokan
+    shortcut: tt
+    timeout: 6.0
+    disabled: true
+
+  - name: solidtorrents
+    engine: solidtorrents
+    shortcut: solid
+    timeout: 4.0
+    base_url:
+      - https://solidtorrents.to
+      - https://bitsearch.to
+
+  # For this demo of the sqlite engine download:
+  #   https://liste.mediathekview.de/filmliste-v2.db.bz2
+  # and unpack into searx/data/filmliste-v2.db
+  # Query to test: "!demo concert"
+  #
+  # - name: demo
+  #   engine: sqlite
+  #   shortcut: demo
+  #   categories: general
+  #   result_template: default.html
+  #   database: searx/data/filmliste-v2.db
+  #   query_str: >-
+  #     SELECT title || ' (' || time(duration, 'unixepoch') || ')' AS title,
+  #       COALESCE( NULLIF(url_video_hd,''), NULLIF(url_video_sd,''), url_video) AS url,
+  #       description AS content
+  #     FROM film
+  #     WHERE title LIKE :wildcard OR description LIKE :wildcard
+  #     ORDER BY duration DESC
+
+  - name: tagesschau
+    engine: tagesschau
+    # when set to false, display URLs from Tagesschau, and not the
+    # actual source
+    # (e.g. NDR, WDR, SWR, HR, ...)
+    use_source_url: true
+    shortcut: ts
+    disabled: true
+
+  - name: tmdb
+    engine: xpath
+    paging: true
+    categories: movies
+    search_url: https://www.themoviedb.org/search?page={pageno}&query={query}
+    results_xpath: //div[contains(@class,"movie") or contains(@class,"tv")]//div[contains(@class,"card")]
+    url_xpath: .//div[contains(@class,"poster")]/a/@href
+    thumbnail_xpath: .//img/@src
+    title_xpath: .//div[contains(@class,"title")]//h2
+    content_xpath: .//div[contains(@class,"overview")]
+    shortcut: tm
+    disabled: true
+
+  # Requires Tor
+  - name: torch
+    engine: xpath
+    paging: true
+    search_url: http://xmh57jrknzkhv6y3ls3ubitzfqnkrwxhopf5aygthi7d6rplyvk3noyd.onion/cgi-bin/omega/omega?P={query}&DEFAULTOP=and
+    results_xpath: //table//tr
+    url_xpath: ./td[2]/a
+    title_xpath: ./td[2]/b
+    content_xpath: ./td[2]/small
+    categories: onions
+    enable_http: true
+    shortcut: tch
+
+  # torznab engine lets you query any torznab compatible indexer. Using this
+  # engine in combination with Jackett opens the possibility to query a lot of
+  # public and private indexers directly from SearXNG.
+  # More details at:
+  # https://docs.searxng.org/dev/engines/online/torznab.html
+  #
+  # - name: Torznab EZTV
+  #   engine: torznab
+  #   shortcut: eztv
+  #   base_url: http://localhost:9117/api/v2.0/indexers/eztv/results/torznab
+  #   enable_http: true # if using localhost
+  #   api_key: xxxxxxxxxxxxxxx
+  #   show_magnet_links: true
+  #   show_torrent_files: false
+  #   # https://github.com/Jackett/Jackett/wiki/Jackett-Categories
+  #   torznab_categories: # optional
+  #     - 2000
+  #     - 5000
+
+  # tmp suspended - too slow, too many errors
+  # - name: urbandictionary
+  #   engine: xpath
+  #   search_url: https://www.urbandictionary.com/define.php?term={query}
+  #   url_xpath: //*[@class="word"]/@href
+  #   title_xpath: //*[@class="def-header"]
+  #   content_xpath: //*[@class="meaning"]
+  #   shortcut: ud
+
+  - name: unsplash
+    engine: unsplash
+    shortcut: us
+
+  - name: yandex music
+    engine: yandex_music
+    shortcut: ydm
+    disabled: true
+    # https://yandex.com/support/music/access.html
+    inactive: true
+
+  - name: yahoo
+    engine: yahoo
+    shortcut: yh
+    disabled: true
+
+  - name: yahoo news
+    engine: yahoo_news
+    shortcut: yhn
+
+  - name: youtube
+    shortcut: yt
+    # You can use the engine using the official stable API, but you need an API
+    # key. See: https://console.developers.google.com/project
+    #
+    # engine: youtube_api
+    # api_key: 'apikey' # required!
+    #
+    # Or you can use the html non-stable engine, activated by default
+    engine: youtube_noapi
+
+  - name: dailymotion
+    engine: dailymotion
+    shortcut: dm
+
+  - name: vimeo
+    engine: vimeo
+    shortcut: vm
+
+  - name: wiby
+    engine: json_engine
+    paging: true
+    search_url: https://wiby.me/json/?q={query}&p={pageno}
+    url_query: URL
+    title_query: Title
+    content_query: Snippet
+    categories: [general, web]
+    shortcut: wib
+    disabled: true
+    about:
+      website: https://wiby.me/
+
+  - name: alexandria
+    engine: json_engine
+    shortcut: alx
+    categories: general
+    paging: true
+    search_url: https://api.alexandria.org/?a=1&q={query}&p={pageno}
+    results_query: results
+    title_query: title
+    url_query: url
+    content_query: snippet
+    timeout: 1.5
+    disabled: true
+    about:
+      website: https://alexandria.org/
+      official_api_documentation: https://github.com/alexandria-org/alexandria-api/raw/master/README.md
+      use_official_api: true
+      require_api_key: false
+      results: JSON
+
+  - name: wikibooks
+    engine: mediawiki
+    weight: 0.5
+    shortcut: wb
+    categories: [general, wikimedia]
+    base_url: 'https://{language}.wikibooks.org/'
+    search_type: text
+    disabled: true
+    about:
+      website: https://www.wikibooks.org/
+      wikidata_id: Q367
+
+  - name: wikinews
+    engine: mediawiki
+    shortcut: wn
+    categories: [news, wikimedia]
+    base_url: 'https://{language}.wikinews.org/'
+    search_type: text
+    srsort: create_timestamp_desc
+    about:
+      website: https://www.wikinews.org/
+      wikidata_id: Q964
+
+  - name: wikiquote
+    engine: mediawiki
+    weight: 0.5
+    shortcut: wq
+    categories: [general, wikimedia]
+    base_url: 'https://{language}.wikiquote.org/'
+    search_type: text
+    disabled: true
+    additional_tests:
+      rosebud: *test_rosebud
+    about:
+      website: https://www.wikiquote.org/
+      wikidata_id: Q369
+
+  - name: wikisource
+    engine: mediawiki
+    weight: 0.5
+    shortcut: ws
+    categories: [general, wikimedia]
+    base_url: 'https://{language}.wikisource.org/'
+    search_type: text
+    disabled: true
+    about:
website: https://www.wikisource.org/ + wikidata_id: Q263 + + - name: wikispecies + engine: mediawiki + shortcut: wsp + categories: [general, science, wikimedia] + base_url: 'https://species.wikimedia.org/' + search_type: text + disabled: true + about: + website: https://species.wikimedia.org/ + wikidata_id: Q13679 + + - name: wiktionary + engine: mediawiki + shortcut: wt + categories: [dictionaries, wikimedia] + base_url: 'https://{language}.wiktionary.org/' + search_type: text + about: + website: https://www.wiktionary.org/ + wikidata_id: Q151 + + - name: wikiversity + engine: mediawiki + weight: 0.5 + shortcut: wv + categories: [general, wikimedia] + base_url: 'https://{language}.wikiversity.org/' + search_type: text + disabled: true + about: + website: https://www.wikiversity.org/ + wikidata_id: Q370 + + - name: wikivoyage + engine: mediawiki + weight: 0.5 + shortcut: wy + categories: [general, wikimedia] + base_url: 'https://{language}.wikivoyage.org/' + search_type: text + disabled: true + about: + website: https://www.wikivoyage.org/ + wikidata_id: Q373 + + - name: wikicommons.images + engine: wikicommons + shortcut: wc + categories: images + number_of_results: 10 + + - name: wolframalpha + shortcut: wa + # You can use the engine using the official stable API, but you need an API + # key. 
See: https://products.wolframalpha.com/api/ + # + # engine: wolframalpha_api + # api_key: '' + # + # Or you can use the html non-stable engine, activated by default + engine: wolframalpha_noapi + timeout: 6.0 + categories: general + disabled: false + + - name: dictzone + engine: dictzone + shortcut: dc + + - name: mymemory translated + engine: translated + shortcut: tl + timeout: 5.0 + # You can use without an API key, but you are limited to 1000 words/day + # See: https://mymemory.translated.net/doc/usagelimits.php + # api_key: '' + + # Required dependency: mysql-connector-python + # - name: mysql + # engine: mysql_server + # database: mydatabase + # username: user + # password: pass + # limit: 10 + # query_str: 'SELECT * from mytable WHERE fieldname=%(query)s' + # shortcut: mysql + + - name: 1337x + engine: 1337x + shortcut: 1337x + disabled: true + + - name: duden + engine: duden + shortcut: du + disabled: true + + - name: seznam + shortcut: szn + engine: seznam + disabled: true + + # - name: deepl + # engine: deepl + # shortcut: dpl + # # You can use the engine using the official stable API, but you need an API key + # # See: https://www.deepl.com/pro-api?cta=header-pro-api + # api_key: '' # required! 
+ # timeout: 5.0 + # disabled: true + + - name: mojeek + shortcut: mjk + engine: xpath + paging: true + categories: [general, web] + search_url: https://www.mojeek.com/search?q={query}&s={pageno}&lang={lang}&lb={lang} + results_xpath: //ul[@class="results-standard"]/li/a[@class="ob"] + url_xpath: ./@href + title_xpath: ../h2/a + content_xpath: ..//p[@class="s"] + suggestion_xpath: //div[@class="top-info"]/p[@class="top-info spell"]/em/a + first_page_num: 0 + page_size: 10 + max_page: 100 + disabled: true + about: + website: https://www.mojeek.com/ + wikidata_id: Q60747299 + official_api_documentation: https://www.mojeek.com/services/api.html/ + use_official_api: false + require_api_key: false + results: HTML + + - name: moviepilot + engine: moviepilot + shortcut: mp + disabled: true + + - name: naver + shortcut: nvr + categories: [general, web] + engine: xpath + paging: true + search_url: https://search.naver.com/search.naver?where=webkr&sm=osp_hty&ie=UTF-8&query={query}&start={pageno} + url_xpath: //a[@class="link_tit"]/@href + title_xpath: //a[@class="link_tit"] + content_xpath: //a[@class="total_dsc"]/div + first_page_num: 1 + page_size: 10 + disabled: true + about: + website: https://www.naver.com/ + wikidata_id: Q485639 + official_api_documentation: https://developers.naver.com/docs/nmt/examples/ + use_official_api: false + require_api_key: false + results: HTML + language: ko + + - name: rubygems + shortcut: rbg + engine: xpath + paging: true + search_url: https://rubygems.org/search?page={pageno}&query={query} + results_xpath: /html/body/main/div/a[@class="gems__gem"] + url_xpath: ./@href + title_xpath: ./span/h2 + content_xpath: ./span/p + suggestion_xpath: /html/body/main/div/div[@class="search__suggestions"]/p/a + first_page_num: 1 + categories: [it, packages] + disabled: true + about: + website: https://rubygems.org/ + wikidata_id: Q1853420 + official_api_documentation: https://guides.rubygems.org/rubygems-org-api/ + use_official_api: false + 
require_api_key: false + results: HTML + + - name: peertube + engine: peertube + shortcut: ptb + paging: true + # alternatives see: https://instances.joinpeertube.org/instances + # base_url: https://tube.4aem.com + categories: videos + disabled: true + timeout: 6.0 + + - name: mediathekviewweb + engine: mediathekviewweb + shortcut: mvw + disabled: true + + - name: yacy + engine: yacy + categories: general + search_type: text + base_url: https://yacy.searchlab.eu + shortcut: ya + disabled: true + # required if you aren't using HTTPS for your local yacy instance + # https://docs.searxng.org/dev/engines/online/yacy.html + # enable_http: true + # timeout: 3.0 + # search_mode: 'global' + + - name: yacy images + engine: yacy + categories: images + search_type: image + base_url: https://yacy.searchlab.eu + shortcut: yai + disabled: true + + - name: rumble + engine: rumble + shortcut: ru + base_url: https://rumble.com/ + paging: true + categories: videos + disabled: true + + - name: livespace + engine: livespace + shortcut: ls + categories: videos + disabled: true + timeout: 5.0 + + - name: wordnik + engine: wordnik + shortcut: def + base_url: https://www.wordnik.com/ + categories: [dictionaries] + timeout: 5.0 + + - name: woxikon.de synonyme + engine: xpath + shortcut: woxi + categories: [dictionaries] + timeout: 5.0 + disabled: true + search_url: https://synonyme.woxikon.de/synonyme/{query}.php + url_xpath: //div[@class="upper-synonyms"]/a/@href + content_xpath: //div[@class="synonyms-list-group"] + title_xpath: //div[@class="upper-synonyms"]/a + no_result_for_http_status: [404] + about: + website: https://www.woxikon.de/ + wikidata_id: # No Wikidata ID + use_official_api: false + require_api_key: false + results: HTML + language: de + + - name: seekr news + engine: seekr + shortcut: senews + categories: news + seekr_category: news + disabled: true + + - name: seekr images + engine: seekr + network: seekr news + shortcut: seimg + categories: images + seekr_category: 
images + disabled: true + + - name: seekr videos + engine: seekr + network: seekr news + shortcut: sevid + categories: videos + seekr_category: videos + disabled: true + + - name: sjp.pwn + engine: sjp + shortcut: sjp + base_url: https://sjp.pwn.pl/ + timeout: 5.0 + disabled: true + + - name: stract + engine: stract + shortcut: str + disabled: true + + - name: svgrepo + engine: svgrepo + shortcut: svg + timeout: 10.0 + disabled: true + + - name: tootfinder + engine: tootfinder + shortcut: toot + + - name: wallhaven + engine: wallhaven + # api_key: abcdefghijklmnopqrstuvwxyz + shortcut: wh + + # wikimini: online encyclopedia for children + # The fulltext and title parameter is necessary for Wikimini because + # sometimes it will not show the results and redirect instead + - name: wikimini + engine: xpath + shortcut: wkmn + search_url: https://fr.wikimini.org/w/index.php?search={query}&title=Sp%C3%A9cial%3ASearch&fulltext=Search + url_xpath: //li/div[@class="mw-search-result-heading"]/a/@href + title_xpath: //li//div[@class="mw-search-result-heading"]/a + content_xpath: //li/div[@class="searchresult"] + categories: general + disabled: true + about: + website: https://wikimini.org/ + wikidata_id: Q3568032 + use_official_api: false + require_api_key: false + results: HTML + language: fr + + - name: wttr.in + engine: wttr + shortcut: wttr + timeout: 9.0 + + - name: yummly + engine: yummly + shortcut: yum + disabled: true + + - name: brave + engine: brave + shortcut: br + time_range_support: true + paging: true + categories: [general, web] + brave_category: search + # brave_spellcheck: true + + - name: brave.images + engine: brave + network: brave + shortcut: brimg + categories: [images, web] + brave_category: images + + - name: brave.videos + engine: brave + network: brave + shortcut: brvid + categories: [videos, web] + brave_category: videos + + - name: brave.news + engine: brave + network: brave + shortcut: brnews + categories: news + brave_category: news + + # - 
name: brave.goggles + # engine: brave + # network: brave + # shortcut: brgog + # time_range_support: true + # paging: true + # categories: [general, web] + # brave_category: goggles + # Goggles: # required! This should be a URL ending in .goggle + + - name: lib.rs + shortcut: lrs + engine: xpath + search_url: https://lib.rs/search?q={query} + results_xpath: /html/body/main/div/ol/li/a + url_xpath: ./@href + title_xpath: ./div[@class="h"]/h4 + content_xpath: ./div[@class="h"]/p + categories: [it, packages] + disabled: true + about: + website: https://lib.rs + wikidata_id: Q113486010 + use_official_api: false + require_api_key: false + results: HTML + + - name: sourcehut + shortcut: srht + engine: xpath + paging: true + search_url: https://sr.ht/projects?page={pageno}&search={query} + results_xpath: (//div[@class="event-list"])[1]/div[@class="event"] + url_xpath: ./h4/a[2]/@href + title_xpath: ./h4/a[2] + content_xpath: ./p + first_page_num: 1 + categories: [it, repos] + disabled: true + about: + website: https://sr.ht + wikidata_id: Q78514485 + official_api_documentation: https://man.sr.ht/ + use_official_api: false + require_api_key: false + results: HTML + + - name: goo + shortcut: goo + engine: xpath + paging: true + search_url: https://search.goo.ne.jp/web.jsp?MT={query}&FR={pageno}0 + url_xpath: //div[@class="result"]/p[@class='title fsL1']/a/@href + title_xpath: //div[@class="result"]/p[@class='title fsL1']/a + content_xpath: //p[contains(@class,'url fsM')]/following-sibling::p + first_page_num: 0 + categories: [general, web] + disabled: true + timeout: 4.0 + about: + website: https://search.goo.ne.jp + wikidata_id: Q249044 + use_official_api: false + require_api_key: false + results: HTML + language: ja + + - name: bt4g + engine: bt4g + shortcut: bt4g + + - name: pkg.go.dev + engine: xpath + shortcut: pgo + search_url: https://pkg.go.dev/search?limit=100&m=package&q={query} + results_xpath: 
/html/body/main/div[contains(@class,"SearchResults")]/div[not(@class)]/div[@class="SearchSnippet"] + url_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a/@href + title_xpath: ./div[@class="SearchSnippet-headerContainer"]/h2/a + content_xpath: ./p[@class="SearchSnippet-synopsis"] + categories: [packages, it] + timeout: 3.0 + disabled: true + about: + website: https://pkg.go.dev/ + use_official_api: false + require_api_key: false + results: HTML + +# The Doku engine lets you access any Doku wiki instance: +# a public one or a private/corporate one. +# - name: ubuntuwiki +# engine: doku +# shortcut: uw +# base_url: 'https://doc.ubuntu-fr.org' + +# Be careful when enabling this engine if you are +# running a public instance. Do not expose any sensitive +# information. You can restrict access by configuring a list +# of access tokens under tokens. +# - name: git grep +# engine: command +# command: ['git', 'grep', '{{QUERY}}'] +# shortcut: gg +# tokens: [] +# disabled: true +# delimiter: +# chars: ':' +# keys: ['filepath', 'code'] + +# Be careful when enabling this engine if you are +# running a public instance. Do not expose any sensitive +# information. You can restrict access by configuring a list +# of access tokens under tokens. +# - name: locate +# engine: command +# command: ['locate', '{{QUERY}}'] +# shortcut: loc +# tokens: [] +# disabled: true +# delimiter: +# chars: ' ' +# keys: ['line'] + +# Be careful when enabling this engine if you are +# running a public instance. Do not expose any sensitive +# information. You can restrict access by configuring a list +# of access tokens under tokens. +# - name: find +# engine: command +# command: ['find', '.', '-name', '{{QUERY}}'] +# query_type: path +# shortcut: fnd +# tokens: [] +# disabled: true +# delimiter: +# chars: ' ' +# keys: ['line'] + +# Be careful when enabling this engine if you are +# running a public instance. Do not expose any sensitive +# information.
You can restrict access by configuring a list +# of access tokens under tokens. +# - name: pattern search in files +# engine: command +# command: ['fgrep', '{{QUERY}}'] +# shortcut: fgr +# tokens: [] +# disabled: true +# delimiter: +# chars: ' ' +# keys: ['line'] + +# Be careful when enabling this engine if you are +# running a public instance. Do not expose any sensitive +# information. You can restrict access by configuring a list +# of access tokens under tokens. +# - name: regex search in files +# engine: command +# command: ['grep', '{{QUERY}}'] +# shortcut: gr +# tokens: [] +# disabled: true +# delimiter: +# chars: ' ' +# keys: ['line'] + +doi_resolvers: + oadoi.org: 'https://oadoi.org/' + doi.org: 'https://doi.org/' + doai.io: 'https://dissem.in/' + sci-hub.se: 'https://sci-hub.se/' + sci-hub.st: 'https://sci-hub.st/' + sci-hub.ru: 'https://sci-hub.ru/' + +default_doi_resolver: 'oadoi.org' diff --git a/On host/AIServerSetup/04-ComfyUI/ComfyUISetup.md b/On host/AIServerSetup/04-ComfyUI/ComfyUISetup.md new file mode 100644 index 0000000..5ef727f --- /dev/null +++ b/On host/AIServerSetup/04-ComfyUI/ComfyUISetup.md @@ -0,0 +1,123 @@ + +# ComfyUI Docker Setup with GGUF Support and ComfyUI Manager + +This guide provides detailed steps to build and run **ComfyUI** with **GGUF support** and **ComfyUI Manager** using Docker. The GGUF format is optimized for quantized models, and ComfyUI Manager is included for easy node management. + +## Prerequisites + +Before starting, ensure you have the following installed on your system: + +- **Docker** +- **NVIDIA GPU with CUDA support** (if using GPU acceleration) +- **Create Directory structure for git repo Models and Checkpoints** + +```bash +mkdir -p ~/dev-ai/vison/models/checkpoints +``` + +### 1. 
Clone the ComfyUI Repository + +First, navigate to the `~/dev-ai/vison` directory and clone the ComfyUI repository to your local machine: + +```bash +cd ~/dev-ai/vison +``` + +```bash +git clone https://github.com/comfyanonymous/ComfyUI.git +cd ComfyUI +``` + +### 2. Create the Dockerfile + +Copy the provided `Dockerfile` into the root of your ComfyUI directory. This file contains the necessary configuration for building the Docker container with GGUF support. + +### 3. Build the Docker Image + +```bash +docker build -t comfyui-gguf:latest . +``` + +This will create a Docker image named `comfyui-gguf:latest` with both **ComfyUI Manager** and **GGUF support** built in. + +### 4. Run the Docker Container + +Once the image is built, you can run the Docker container with volume mapping for your models. + +```bash +docker run --name comfyui -p 8188:8188 --gpus all \ + -v /home/mukul/dev-ai/vison/models:/app/models \ + -d comfyui-gguf:latest +``` + +This command maps your local `models` directory to `/app/models` inside the container and exposes ComfyUI on port `8188`. + +### 5. Download and Place Checkpoint Models + +Download your Civitai checkpoint models and place them in the `checkpoints` directory inside the container, e.g.: +https://civitai.com/models/139562/realvisxl-v50 + +To use GGUF models or other safetensors models, follow the steps below to download them directly into the `checkpoints` directory. + +1. **Navigate to the Checkpoints Directory**: + ```bash + cd /home/mukul/dev-ai/vison/models/checkpoints + ``` + +2. **Download `flux1-schnell-fp8.safetensors`**: + ```bash + wget "https://huggingface.co/Comfy-Org/flux1-schnell/resolve/main/flux1-schnell-fp8.safetensors?download=true" -O flux1-schnell-fp8.safetensors + ``` + +3.
**Download `flux1-dev-fp8.safetensors`**: + ```bash + wget "https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors?download=true" -O flux1-dev-fp8.safetensors + ``` + +These commands will place the corresponding `.safetensors` files into the `checkpoints` directory. + +### 6. Access ComfyUI + +After starting the container, access the ComfyUI interface in your web browser: + +``` +http://<server-ip>:8188 +``` + +Replace `<server-ip>` with your server's IP address, or use `localhost` if you're running it locally. + +### 7. Using GGUF Models + +In the ComfyUI interface: +- Use the **UnetLoaderGGUF** node (found in the `bootleg` category) to load GGUF models. +- Ensure your GGUF files are correctly named and placed in the `/app/models/checkpoints` directory for detection by the loader node. + +### 8. Managing Nodes with ComfyUI Manager + +With **ComfyUI Manager** built into the image: +- **Install** missing nodes as needed when uploading workflows. +- **Enable/Disable** conflicting nodes from the ComfyUI Manager interface. + +### 9. Stopping and Restarting the Docker Container + +To stop the running container: + +```bash +docker stop comfyui +``` + +To restart the container: + +```bash +docker start comfyui +``` + +### 10. Logs and Troubleshooting + +To view the container logs: + +```bash +docker logs comfyui +``` + +This will provide details if anything goes wrong or if you encounter issues with GGUF models or node management.
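A common cause of startup trouble is a checkpoint file that never finished downloading. The helper below checks that each expected model file exists and is non-empty before you restart the container; it is only a sketch, and the directory and file names are examples taken from this guide, so adjust them to your own setup. The demo at the end runs against a throwaway temp directory so the script is safe to try as-is.

```shell
# check_checkpoints: verify that each expected model file exists and is
# non-empty. First argument is the directory; the rest are file names.
check_checkpoints() {
  dir="$1"
  shift
  missing=0
  for f in "$@"; do
    if [ -s "$dir/$f" ]; then
      echo "OK: $f"
    else
      echo "MISSING: $f"
      missing=1
    fi
  done
  return "$missing"
}

# Demo against a throwaway directory standing in for models/checkpoints.
demo_dir=$(mktemp -d)
printf 'dummy' > "$demo_dir/flux1-schnell-fp8.safetensors"
printf 'dummy' > "$demo_dir/flux1-dev-fp8.safetensors"
check_checkpoints "$demo_dir" flux1-schnell-fp8.safetensors flux1-dev-fp8.safetensors
rm -rf "$demo_dir"
```

To use it for real, point it at your host checkpoints directory, e.g. `check_checkpoints ~/dev-ai/vison/models/checkpoints flux1-schnell-fp8.safetensors flux1-dev-fp8.safetensors`.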
diff --git a/On host/AIServerSetup/04-ComfyUI/Dockerfile b/On host/AIServerSetup/04-ComfyUI/Dockerfile new file mode 100644 index 0000000..b423ee1 --- /dev/null +++ b/On host/AIServerSetup/04-ComfyUI/Dockerfile @@ -0,0 +1,33 @@ +# Base image with Python 3.11 and CUDA 12.5 support +FROM nvidia/cuda:12.5.0-runtime-ubuntu22.04 + +# Install system dependencies +RUN apt-get update && apt-get install -y \ + git \ + python3-pip \ + libgl1-mesa-glx \ + && rm -rf /var/lib/apt/lists/* + +# Set working directory +WORKDIR /app + +# Copy the cloned ComfyUI repository +COPY . /app + +# Install Python dependencies +RUN pip install --upgrade pip +RUN pip install -r requirements.txt + +# Clone and install ComfyUI Manager +RUN git clone https://github.com/ltdrdata/ComfyUI-Manager.git /app/custom_nodes/ComfyUI-Manager && \ + pip install -r /app/custom_nodes/ComfyUI-Manager/requirements.txt + +# Clone and install GGUF support for ComfyUI +RUN git clone https://github.com/city96/ComfyUI-GGUF.git /app/custom_nodes/ComfyUI-GGUF && \ + pip install --upgrade gguf + +# Expose the port used by ComfyUI +EXPOSE 8188 + +# Run ComfyUI with the server binding to 0.0.0.0 +CMD ["python3", "main.py", "--listen", "0.0.0.0"] \ No newline at end of file diff --git a/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/01-ComfyUISetup.md b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/01-ComfyUISetup.md new file mode 100644 index 0000000..76f0014 --- /dev/null +++ b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/01-ComfyUISetup.md @@ -0,0 +1,113 @@ +## Running uncensored models on the NVIDIA Jetson Orin Nano Super Developer Kit + +This guide is aimed at helping you set up uncensored models seamlessly on your Jetson Orin Nano, ensuring you can run powerful image generation models on this compact, yet powerful device. + +This tutorial will walk you through each step of the process. 
Even if you're starting from a fresh installation, following along should ensure everything is set up correctly. And if anything doesn’t work as expected, feel free to reach out; I'll keep this guide updated to keep it running smoothly. + +--- + +## Let’s Dive In + +### Step 1: Installing Miniconda and Setting Up a Python Environment + +First, we need to install Miniconda on your Jetson Nano. This will allow us to create an isolated Python environment for managing dependencies. Let's set up our project environment. + +```bash +wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh +chmod +x Miniconda3-latest-Linux-aarch64.sh +./Miniconda3-latest-Linux-aarch64.sh + +conda update conda +``` + +Now, we create and activate a Python 3.10 environment for our project. + +```bash +conda create -n comfyui python=3.10 +conda activate comfyui +``` + +### Step 2: Installing CUDA, cuDNN, TensorRT, and Verifying nvcc + +CUDA, cuDNN, and TensorRT come preconfigured with JetPack 6.1, so no extra installation is needed. + +Next, confirm that CUDA is installed correctly by checking the `nvcc` version. + +```bash +nvcc --version +``` + +### Step 3: Installing PyTorch, TorchVision, and TorchAudio + +Now let's install the essential libraries for image generation (PyTorch, TorchVision, and TorchAudio) from the Jetson wheel index [devpi - cu12.6](http://jetson.webredirect.org/jp6/cu126): + +```bash +pip install https://pypi.jetson-ai-lab.dev/jp6/cu126/+f/5cf/9ed17e35cb752/torch-2.5.0-cp310-cp310-linux_aarch64.whl +pip install https://pypi.jetson-ai-lab.dev/jp6/cu126/+f/9d2/6fac77a4e832a/torchvision-0.19.1a0+6194369-cp310-cp310-linux_aarch64.whl +pip install https://pypi.jetson-ai-lab.dev/jp6/cu126/+f/812/4fbc4ba6df0a3/torchaudio-2.5.0-cp310-cp310-linux_aarch64.whl +``` + +### Step 4: Cloning the Project Repository + +Now, we clone the necessary source code for the project from GitHub. This will include the files for running uncensored models from civitai.com.
+ +```bash +git clone https://github.com/comfyanonymous/ComfyUI.git +cd ComfyUI +``` + +### Step 5: Installing Project Dependencies + +Next, install the project's required dependencies from the `requirements.txt` file. + +```bash +pip install -r requirements.txt +``` + +### Step 6: Resolving Issues with NumPy (if necessary) + +If you encounter issues with NumPy, such as compatibility problems, you can fix them by downgrading to a version below 2.0. + +```bash +pip install "numpy<2" +``` + +### Step 7: Running ComfyUI + +Finally, we can run ComfyUI to check that everything is set up properly. Start the app with the following command: + +```bash +python main.py --listen 0.0.0.0 +``` + +--- + +## Great! Now that you've got ComfyUI up and running, it’s time to load your first uncensored model. + +1. Navigate to [civitai.com](https://civitai.com) and select a model. For example, you can choose the following model: + + [RealVisionBabes v1.0](https://civitai.com/models/543456?modelVersionId=604282) + +2. Download the model file: [realvisionbabes_v10.safetensors](https://civitai.com/api/download/models/604282?type=Model&format=SafeTensor&size=pruned&fp=fp16) + +3. Place it inside the `models/checkpoints` folder. + +4. Download the VAE file: [ClearVAE_V2.3_fp16.pt](https://civitai.com/api/download/models/604282?type=VAE) + +5. Place it inside the `models/vae` folder. + +--- + +## You're all set to launch your first run! + +Visit the URL printed by ComfyUI (`http://jetson:8188`) on your Jetson Nano. + +Go to the [ControlNet reference demo](https://civitai.com/posts/3943573), download the workflow (also available in the repo as workflow-api.json), and import it in ComfyUI. + +Hit the "Queue Prompt" button and watch the magic unfold! + +Happy generating!
🎉 diff --git a/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/prompt.md b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/prompt.md new file mode 100644 index 0000000..14e3fb8 --- /dev/null +++ b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/prompt.md @@ -0,0 +1,91 @@ + + +You are an expert prompt generator. Your task is to transform user requests into detailed, vivid, and imaginative prompts that can be used to generate visually captivating images with a diffusion model. You should: + +- **Analyze** the user's request carefully and extract the key visual elements. +- **Generate a prompt** that describes the image in clear and evocative terms, ensuring it’s visually rich and imaginative. +- **Ensure details** are specific about the atmosphere, setting, colors, lighting, textures, and unique characteristics. +- **Use creative language** to enhance the visual quality, whether the style is realistic, surreal, or abstract. +- **Consider mood and style** (e.g., dark and moody, bright and lively, minimalist, detailed, etc.). + +Here are some examples to guide you: + +--- + +### **Few-Shot Examples:** + +**Example 1:** + +**User Request:** +"I want to see a futuristic cityscape at night with neon lights and flying cars." + +**Generated Prompt:** +"A vibrant futuristic cityscape at night, with towering skyscrapers that stretch towards a starless sky. The buildings are covered in shimmering neon lights—bright blues, purples, and pinks—casting colorful reflections onto the glossy streets below. Flying cars zip through the air, leaving glowing trails in their wake, while holographic billboards advertise virtual products. The atmosphere is electric, bustling with energy and technology, as a soft mist rises from the ground, adding a touch of mystery to the scene." + +--- + +**Example 2:** + +**User Request:** +"I’d love a serene mountain landscape with a calm lake and a small wooden cabin." 
+ +**Generated Prompt:** +"A tranquil mountain landscape at dawn, with majestic snow-capped peaks towering in the distance. A serene, glassy lake reflects the vibrant colors of the early morning sky—soft pinks, oranges, and purples. A small, rustic wooden cabin sits by the lakeshore, its smoke rising from the chimney, blending gently with the mist above the water. Pine trees surround the cabin, their dark green needles adding depth to the peaceful scene. The air is crisp, and the whole environment exudes a sense of quiet solitude." + +--- + +**Example 3:** + +**User Request:** +"I want a mystical creature in a dark enchanted forest." + +**Generated Prompt:** +"A mystical creature standing tall in the heart of a dark, enchanted forest. The creature has the body of a lion, but its fur is deep indigo, shimmering with silver flecks like stars. Its eyes glow with an ethereal light, casting an otherworldly glow across the forest floor. The forest is dense with towering trees whose bark is twisted, covered in glowing moss. Fog weaves through the trees, and mysterious flowers glow faintly in the shadows. The atmosphere is magical, filled with the sense of an ancient, forgotten world full of wonder." + +--- + +**Example 4:** + +**User Request:** +"Can you create a vibrant sunset on a tropical beach with palm trees?" + +**Generated Prompt:** +"A stunning tropical beach at sunset, where the sky is ablaze with fiery hues of red, orange, and pink, melting into the calm blue of the ocean. The golden sand is warm, and the gentle waves lap against the shore. Silhouetted palm trees frame the scene, their long leaves swaying in the soft breeze. The sun is just dipping below the horizon, casting a golden glow across the water. The atmosphere is peaceful yet vibrant, with the serene sounds of the ocean adding to the beauty of the moment." + +--- + +**Example 5:** + +**User Request:** +"Imagine an underwater scene with colorful coral reefs and exotic fish." 
+ +**Generated Prompt:** +"A vibrant underwater scene, where the sunlight filters down through crystal-clear water, illuminating the colorful coral reefs below. The corals are in shades of purple, pink, and yellow, teeming with life. Schools of exotic fish dart through the scene—brightly colored in hues of electric blue, orange, and green. The water is calm, with soft ripples distorting the light, while gentle seaweed sways with the current. The scene is peaceful and full of life, a kaleidoscope of color beneath the ocean's surface." + +--- + +### **User Request:** +"Please create a scene with a magical waterfall in a forest." + +**Generated Prompt:** +"A breathtaking magical waterfall cascading down from a high cliff, surrounded by an ancient forest. The water sparkles with iridescent hues, as if glowing with a soft, mystical light. Lush green foliage and towering trees frame the waterfall, with delicate vines hanging down like nature’s curtains. Mist rises from the base of the waterfall, creating a rainbow in the air. Sunlight filters through the canopy above, casting dappled light across the mossy rocks and the peaceful forest floor. The atmosphere is serene, almost dreamlike, filled with the sound of the water’s soothing rush." + +--- + +### **User Request:** +"I want to see an alien landscape on another planet with strange rock formations." + +**Generated Prompt:** +"A surreal alien landscape on a distant planet, bathed in the pale light of two suns setting on the horizon. The ground is rocky, with bizarre rock formations that defy gravity, twisting and spiraling upward like ancient sculptures. The sky above is a vibrant shade of purple, dotted with swirling clouds and distant stars. The air is thick with an otherworldly mist, and strange, bioluminescent plants glow faintly in the twilight. The scene is alien and unearthly, with a sense of wonder and curiosity as the landscape stretches endlessly into the unknown." 
+ +--- + +### **User Request:** +"Could you create a winter scene with a frozen lake and a snowman?" + +**Generated Prompt:** +"A peaceful winter scene with a frozen lake covered in a smooth sheet of ice, reflecting the soft pale blue of the overcast sky. Snow gently falls from the sky, coating the landscape in a thick layer of white. A cheerful snowman stands at the edge of the lake, its coal-black eyes and carrot nose adding a touch of whimsy to the quiet surroundings. Snow-covered pine trees line the shore, their branches weighed down by the snow. The air is crisp and fresh, and the entire scene feels calm, still, and full of the quiet beauty of winter." + +--- + +### **End of Few-Shot Examples** \ No newline at end of file diff --git a/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/workflow-api.json b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/workflow-api.json new file mode 100644 index 0000000..49fa94b --- /dev/null +++ b/On host/AIServerSetup/05-Jetson Orin Nano Developer Kit/workflow-api.json @@ -0,0 +1,129 @@ +{ + "1": { + "inputs": { + "ckpt_name": "realvisionbabes_v10.safetensors" + }, + "class_type": "CheckpointLoaderSimple", + "_meta": { + "title": "Load Checkpoint" + } + }, + "2": { + "inputs": { + "stop_at_clip_layer": -1, + "clip": [ + "1", + 1 + ] + }, + "class_type": "CLIPSetLastLayer", + "_meta": { + "title": "CLIP Set Last Layer" + } + }, + "3": { + "inputs": { + "text": "amateur, instagram photo, beautiful face", + "clip": [ + "2", + 0 + ] + }, + "class_type": "CLIPTextEncode", + "_meta": { + "title": "CLIP Text Encode (Prompt)" + } + }, + "4": { + "inputs": { + "text": "Watermark, Text, censored, deformed, bad anatomy, disfigured, poorly drawn face, mutated, ugly, cropped, worst quality, low quality, mutation, poorly drawn, abnormal eye proportion, bad\nart, ugly face, messed up face, high forehead, professional photo shoot, makeup, photoshop, doll, plastic_doll, silicone, anime, cartoon, fake, filter, airbrush, 3d max, 
infant, featureless, colourless, impassive, shaders, two heads, crop,", + "clip": [ + "2", + 0 + ] + }, + "class_type": "CLIPTextEncode", + "_meta": { + "title": "CLIP Text Encode (Prompt)" + } + }, + "5": { + "inputs": { + "seed": 411040191827786, + "steps": 30, + "cfg": 3, + "sampler_name": "dpmpp_2m_sde", + "scheduler": "normal", + "denoise": 1, + "model": [ + "1", + 0 + ], + "positive": [ + "3", + 0 + ], + "negative": [ + "4", + 0 + ], + "latent_image": [ + "6", + 0 + ] + }, + "class_type": "KSampler", + "_meta": { + "title": "KSampler" + } + }, + "6": { + "inputs": { + "width": 512, + "height": 768, + "batch_size": 1 + }, + "class_type": "EmptyLatentImage", + "_meta": { + "title": "Empty Latent Image" + } + }, + "7": { + "inputs": { + "samples": [ + "5", + 0 + ], + "vae": [ + "8", + 0 + ] + }, + "class_type": "VAEDecode", + "_meta": { + "title": "VAE Decode" + } + }, + "8": { + "inputs": { + "vae_name": "ClearVAE_V2.3_fp16.pt" + }, + "class_type": "VAELoader", + "_meta": { + "title": "Load VAE" + } + }, + "9": { + "inputs": { + "filename_prefix": "ComfyUI", + "images": [ + "7", + 0 + ] + }, + "class_type": "SaveImage", + "_meta": { + "title": "Save Image" + } + } +} \ No newline at end of file diff --git a/On host/AIServerSetup/06-DeepSeek-R1-0528/01-DeepSeek-R1-0528-KTransformers-Setup-Guide.md b/On host/AIServerSetup/06-DeepSeek-R1-0528/01-DeepSeek-R1-0528-KTransformers-Setup-Guide.md new file mode 100644 index 0000000..384c393 --- /dev/null +++ b/On host/AIServerSetup/06-DeepSeek-R1-0528/01-DeepSeek-R1-0528-KTransformers-Setup-Guide.md @@ -0,0 +1,316 @@ +# Running DeepSeek-R1-0528 (FP8 Hybrid) with KTransformers + +This guide provides instructions to run the DeepSeek-R1-0528 model locally using a hybrid FP8 (GPU) and Q4_K_M GGUF (CPU) approach with KTransformers, managed via Docker. This setup is optimized for high-end hardware (e.g., NVIDIA RTX 4090, high-core count CPU, significant RAM). 
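Because this setup splits the model between GPU (FP8) and CPU RAM (Q4_K_M experts), it is worth confirming the host is in the right ballpark before downloading hundreds of gigabytes. The sketch below only reports total RAM and visible NVIDIA GPUs; the ~512 GB figure is this guide's own sizing suggestion, not a hard requirement.

```shell
# Pre-flight sketch: report total system RAM and visible NVIDIA GPUs before
# attempting the hybrid FP8 + GGUF setup.
ram_gb=$(awk '/^MemTotal:/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Total system RAM: ${ram_gb} GB (this guide suggests ~512 GB)"

if command -v nvidia-smi >/dev/null 2>&1; then
  # List each GPU with its name and VRAM so FP8-capable cards are easy to spot.
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found: install the NVIDIA driver and container toolkit first"
fi
```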
+ +**Model Version:** DeepSeek-R1-0528 +**KTransformers Version (Working):** `approachingai/ktransformers:v0.2.4post1-AVX512` + +## Table of Contents + +1. [Prerequisites](#prerequisites) +2. [Model Preparation](#model-preparation) + * [Step 2a: Download FP8 Base Model (Host)](#step-2a-download-fp8-base-model-host) + * [Step 2b: Download Q4\_K\_M GGUF Model (Host)](#step-2b-download-q4_k_m-gguf-model-host) + * [Step 2c: Merge Models (Inside Docker)](#step-2c-merge-models-inside-docker) + * [Step 2d: Set Ownership & Permissions (Host)](#step-2d-set-ownership--permissions-host) +3. [Running the Model with KTransformers](#running-the-model-with-ktransformers) + * [Single GPU (e.g., 1x RTX 4090)](#single-gpu-eg-1x-rtx-4090) + * [Multi-GPU (e.g., 2x RTX 4090)](#multi-gpu-eg-2x-rtx-4090) +4. [Testing the Server](#testing-the-server) +5. [Key Server Parameters](#key-server-parameters) +6. [Notes on KTransformers v0.3.1](#notes-on-ktransformers-v031) +7. [Available Optimize Config YAMLs (for reference)](#available-optimize-config-yamls-for-reference) +8. [Troubleshooting Tips](#troubleshooting-tips) + +--- + +## 1. Prerequisites + +* **Hardware:** + * NVIDIA GPU with FP8 support (e.g., RTX 40-series, Hopper series). + * High core-count CPU (e.g., Intel Xeon, AMD Threadripper). + * Significant System RAM (ideally 512GB for larger GGUF experts and context). The Q4_K_M experts for a large model can consume 320GB+ alone. + * Fast SSD (NVMe recommended) for model storage. +* **Software (on Host):** + * Linux OS (Ubuntu 24.04 LTS recommended). + * NVIDIA Drivers (ensure they are up-to-date and support your GPU and CUDA version). + * Docker Engine. + * NVIDIA Container Toolkit (for GPU access within Docker). + * Conda or a Python virtual environment manager. + * Python 3.9+ + * `huggingface_hub` and `hf_transfer` + * Git (for cloning KTransformers if you need to inspect YAMLs or contribute). + +--- + +## 2. 
Model Preparation
+
+We assume your models will be downloaded and stored under `/home/mukul/dev-ai/models` on your host system. This path will be mounted into the Docker container as `/models`. Adjust paths if your setup differs.
+
+### Step 2a: Download FP8 Base Model (Host)
+
+Download the official DeepSeek-R1-0528 FP8 base model components.
+
+```bash
+# Ensure the required packages are installed. Conda is recommended for environment management.
+pip install -U huggingface_hub hf_transfer
+export HF_HUB_ENABLE_HF_TRANSFER=1 # For faster downloads
+```
+
+```bash
+# Define your host model directory
+HOST_MODEL_DIR="/home/mukul/dev-ai/models"
+BASE_MODEL_HF_ID="deepseek-ai/DeepSeek-R1-0528"
+LOCAL_BASE_MODEL_PATH="${HOST_MODEL_DIR}/${BASE_MODEL_HF_ID}"
+
+mkdir -p "${LOCAL_BASE_MODEL_PATH}"
+
+echo "Downloading base model to: ${LOCAL_BASE_MODEL_PATH}"
+huggingface-cli download --resume-download "${BASE_MODEL_HF_ID}" \
+    --local-dir "${LOCAL_BASE_MODEL_PATH}"
+```
+
+### Step 2b: Download Q4_K_M GGUF Model (Host)
+
+Download the Unsloth Q4_K_M GGUF version of DeepSeek-R1-0528 using the accompanying `download-gguf.py` script.
+
+### Step 2c: Merge Models (Inside Docker)
+
+This step uses the KTransformers Docker image to merge the FP8 base and Q4\_K\_M GGUF weights.
+
+```bash
+docker stop ktransformers
+docker run --rm --gpus '"device=1"' \
+  -v /home/mukul/dev-ai/models:/models \
+  --name ktransformers \
+  -itd approachingai/ktransformers:v0.2.4post1-AVX512
+
+docker exec -it ktransformers /bin/bash
+```
+
+```bash
+python merge_tensors/merge_safetensor_gguf.py \
+  --safetensor_path /models/deepseek-ai/DeepSeek-R1-0528 \
+  --gguf_path /models/unsloth/DeepSeek-R1-0528-GGUF/Q4_K_M \
+  --output_path /models/mukul/DeepSeek-R1-0528-GGML-FP8-Hybrid/Q4_K_M_FP8
+```
+
+### Step 2d: Set Ownership & Permissions (Host)
+
+After Docker creates the merged files, fix ownership and permissions on the host. 
+ +```bash +HOST_OUTPUT_DIR_QUANT="/home/mukul/dev-ai/models/mukul/DeepSeek-R1-0528-GGML-FP8-Hybrid/Q4_K_M_FP8" # As defined above + +echo "Setting ownership for merged files in: ${HOST_OUTPUT_DIR_QUANT}" +sudo chown -R $USER:$USER "${HOST_OUTPUT_DIR_QUANT}" +sudo find "${HOST_OUTPUT_DIR_QUANT}" -type f -exec chmod 664 {} \; +sudo find "${HOST_OUTPUT_DIR_QUANT}" -type d -exec chmod 775 {} \; + +echo "Ownership and permissions set. Verification:" +ls -la "${HOST_OUTPUT_DIR_QUANT}" +``` + +--- + +## 3. Running the Model with KTransformers + +Ensure the Docker image `approachingai/ktransformers:v0.2.4post1-AVX512` is pulled. + +### Single GPU (e.g., 1x RTX 4090) + +**1. Start Docker Container:** + +```bash +# Stop any previous instance +docker stop ktransformers || true # Allow if not running +docker rm ktransformers || true # Allow if not existing + +# Define your host model directory +HOST_MODEL_DIR="/home/mukul/dev-ai/models" +TARGET_GPU="1" # Specify GPU ID, e.g., "0", "1", or "all" + +docker run --rm --gpus "\"device=${TARGET_GPU}\"" \ + -v "${HOST_MODEL_DIR}:/models" \ + -p 10002:10002 \ + --name ktransformers \ + -itd approachingai/ktransformers:v0.2.4post1-AVX512 + +docker exec -it ktransformers /bin/bash +``` + +**2. 
Inside the Docker container shell, launch the server:**

+```bash
+# Set environment variable for PyTorch CUDA memory allocation
+export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
+
+# Define container paths
+CONTAINER_MERGED_MODEL_PATH="/models/mukul/DeepSeek-R1-0528-GGML-FP8-Hybrid/Q4_K_M_FP8"
+CONTAINER_BASE_MODEL_CONFIG_PATH="/models/deepseek-ai/DeepSeek-R1-0528"
+
+# Launch server
+python3 ktransformers/server/main.py \
+  --gguf_path "${CONTAINER_MERGED_MODEL_PATH}" \
+  --model_path "${CONTAINER_BASE_MODEL_CONFIG_PATH}" \
+  --model_name KVCache-ai/DeepSeek-R1-0528-q4km-fp8 \
+  --cpu_infer 57 \
+  --max_new_tokens 16384 \
+  --cache_lens 24576 \
+  --cache_q4 true \
+  --temperature 0.6 \
+  --top_p 0.95 \
+  --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat-fp8-linear-ggml-experts.yaml \
+  --force_think \
+  --use_cuda_graph \
+  --host 0.0.0.0 \
+  --port 10002
+```
+*Note: The `--optimize_config_path` still refers to a `DeepSeek-V3` YAML. This V3 config is compatible and recommended.*
+
+### Multi-GPU (e.g., 2x RTX 4090)
+
+**1. Start Docker Container:**
+
+```bash
+# Stop any previous instance
+docker stop ktransformers || true
+docker rm ktransformers || true
+
+# Define your host model directory
+HOST_MODEL_DIR="/home/mukul/dev-ai/models"
+TARGET_GPUS="0,1" # Specify GPU IDs
+
+docker run --rm --gpus "\"device=${TARGET_GPUS}\"" \
+  -v "${HOST_MODEL_DIR}:/models" \
+  -p 10002:10002 \
+  --name ktransformers \
+  -itd approachingai/ktransformers:v0.2.4post1-AVX512
+
+docker exec -it ktransformers /bin/bash
+```
+
+**2. 
Inside the Docker container shell, launch the server:** +```bash +# Set environment variable (optional for multi-GPU, but can be helpful) +# export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True + +# Define container paths +CONTAINER_MERGED_MODEL_PATH="/models/mukul/DeepSeek-R1-0528-GGML-FP8-Hybrid/Q4_K_M_FP8" +CONTAINER_BASE_MODEL_CONFIG_PATH="/models/deepseek-ai/DeepSeek-R1-0528" + +# Launch server +python3 ktransformers/server/main.py \ + --gguf_path "${CONTAINER_MERGED_MODEL_PATH}" \ + --model_path "${CONTAINER_BASE_MODEL_CONFIG_PATH}" \ + --model_name KVCache-ai/DeepSeek-R1-0528-q4km-fp8 \ + --cpu_infer 57 \ + --max_new_tokens 24576 \ + --cache_lens 32768 \ + --cache_q4 true \ + --temperature 0.6 \ + --top_p 0.95 \ + --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat-multi-gpu-fp8-linear-ggml-experts.yaml \ + --force_think \ + --use_cuda_graph \ + --host 0.0.0.0 \ + --port 10002 +``` +*Note: The `--optimize_config_path` still refers to a `DeepSeek-V3` YAML. This is intentional.* + +--- + +## 4. Testing the Server + +Once the server is running inside Docker (look for "Uvicorn running on http://0.0.0.0:10002"), open a **new terminal on your host machine** and test with `curl`: + +```bash +curl http://localhost:10002/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "KVCache-ai/DeepSeek-R1-0528-q4km-fp8", + "messages": [{"role": "user", "content": "Explain the concept of Mixture of Experts in large language models in a simple way."}], + "max_tokens": 250, + "temperature": 0.6, + "top_p": 0.95 + }' +``` +A JSON response containing the model's output indicates success. + +--- + +## 5. Key Server Parameters + +* `--gguf_path`: Path inside the container to your **merged** hybrid model files. +* `--model_path`: Path inside the container to the **original base model's** directory (containing `config.json`, `tokenizer.json`, etc.). KTransformers needs this for model configuration. 
+* `--model_name`: Arbitrary name for the API endpoint. Used in client requests.
+* `--cpu_infer`: Number of CPU threads for GGUF expert inference. Tune this to your CPU: e.g., `57` on a 56-core/112-thread machine leaves headroom for other tasks, and you can experiment with higher values.
+* `--max_new_tokens`: Maximum number of tokens the model can generate in a single response.
+* `--cache_lens`: Maximum KV cache size in tokens. Directly impacts context length capacity and VRAM usage.
+* `--cache_q4`: (Boolean) If `true`, quantizes the KV cache to 4-bit. **Crucial for saving VRAM**, especially with long contexts.
+* `--temperature`, `--top_p`: Control generation randomness.
+* `--optimize_config_path`: Path to the KTransformers YAML file defining the layer offloading strategy (FP8 on GPU, GGUF on CPU). **Essential for the hybrid setup.**
+* `--force_think`: (KTransformers specific) Forces the response to begin with the model's reasoning (`<think>`) block, as expected for DeepSeek-R1-style reasoning models.
+* `--use_cuda_graph`: Enables CUDA graphs for potentially faster GPU execution by reducing kernel launch overhead.
+* `--host`, `--port`: Network interface and port for the server.
+* `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`: Environment variable to help PyTorch manage CUDA memory more flexibly and potentially avoid OOM errors.
+
+---
+
+## 6. Notes on KTransformers v0.3.1
+
+As of 2025-06-02, the `approachingai/ktransformers:v0.3.1-AVX512` image was reported as **not working** with the provided single GPU or multi-GPU configurations. 
+ +**Attempted Docker Start Command (v0.3.1 - Non-Functional):** +```bash +# docker stop ktransformers # (if attempting to switch) +# docker run --rm --gpus '"device=0,1"' \ +# -v /home/mukul/dev-ai/models:/models \ +# -p 10002:10002 \ +# --name ktransformers \ +# -itd approachingai/ktransformers:v0.3.1-AVX512 +# +# docker exec -it ktransformers /bin/bash +``` + +**Attempted Server Launch (v0.3.1 - Non-Functional):** +```bash +# # Inside the v0.3.1 Docker container shell +# PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python3 ktransformers/server/main.py \ +# --gguf_path /models/mukul/DeepSeek-R1-0528-GGML-FP8-Hybrid/Q4_K_M_FP8 \ +# --model_path /models/deepseek-ai/DeepSeek-R1-0528 \ +# --model_name KVCache-ai/DeepSeek-R1-0528-q4km-fp8 \ +# --cpu_infer 57 \ +# --max_new_tokens 32768 \ +# --cache_lens 65536 \ +# --cache_q4 true \ +# --temperature 0.6 \ +# --top_p 0.95 \ +# --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat-multi-gpu-fp8-linear-ggml-experts.yaml \ +# --force_think \ +# --use_cuda_graph \ +# --host 0.0.0.0 \ +# --port 10002 +``` +Stick to `approachingai/ktransformers:v0.2.4post1-AVX512` for the configurations described above until compatibility issues with newer versions are resolved for this specific model and setup. + +--- + +## 7. Available Optimize Config YAMLs (for reference) + +The KTransformers repository contains various optimization YAML files. The ones used in this guide are for `DeepSeek-V3` but are being applied to `DeepSeek-R1-0528`. Their direct compatibility or optimality for R1-0528 should be verified. If KTransformers releases specific YAMLs for DeepSeek-R1-0528, those should be preferred. 
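+
+To see exactly which optimize-rule YAMLs ship with the image you are running (the list can change between releases), you can list them from inside the container. The relative path below assumes the container's default working directory contains the `ktransformers` source tree, as in the images used in this guide:
+
+```bash
+docker exec ktransformers ls ktransformers/optimize/optimize_rules/
+```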
+ +Reference list of some `DeepSeek-V3` YAMLs (path `ktransformers/optimize/optimize_rules/` inside the container): +``` +DeepSeek-V3-Chat-amx.yaml +DeepSeek-V3-Chat-fp8-linear-ggml-experts-serve-amx.yaml +DeepSeek-V3-Chat-fp8-linear-ggml-experts-serve.yaml +DeepSeek-V3-Chat-fp8-linear-ggml-experts.yaml +DeepSeek-V3-Chat-multi-gpu-4.yaml +DeepSeek-V3-Chat-multi-gpu-8.yaml +DeepSeek-V3-Chat-multi-gpu-fp8-linear-ggml-experts.yaml +DeepSeek-V3-Chat-multi-gpu-marlin.yaml +DeepSeek-V3-Chat-multi-gpu.yaml +DeepSeek-V3-Chat-serve.yaml +DeepSeek-V3-Chat.yaml +``` \ No newline at end of file diff --git a/On host/AIServerSetup/06-DeepSeek-R1-0528/download-gguf.py b/On host/AIServerSetup/06-DeepSeek-R1-0528/download-gguf.py new file mode 100644 index 0000000..0e04e66 --- /dev/null +++ b/On host/AIServerSetup/06-DeepSeek-R1-0528/download-gguf.py @@ -0,0 +1,65 @@ +from huggingface_hub import hf_hub_download, list_repo_files # Import list_repo_files +import os + +# Configuration +repo_id = "unsloth/DeepSeek-R1-0528-GGUF" +folder_in_repo = "Q4_K_M" +file_extension = ".gguf" +# Expand the tilde (~) to the user's home directory +local_base_dir = os.path.expanduser("~/dev-ai/models/unsloth/DeepSeek-R1-0528-GGUF") + +# Create base directory +# The hf_hub_download function will create the directory if it doesn't exist +# when local_dir_use_symlinks=False. However, explicit creation is fine. 
+os.makedirs(local_base_dir, exist_ok=True)
+
+# Download files
+print(f"Listing files from {repo_id} in folder {folder_in_repo} with extension {file_extension}...")
+try:
+    all_repo_files = list_repo_files(repo_id, repo_type='model')
+    files_to_download = [
+        f for f in all_repo_files
+        if f.startswith(folder_in_repo + "/") and f.endswith(file_extension)
+    ]
+
+    if not files_to_download:
+        print(f"No files found in '{folder_in_repo}' with extension '{file_extension}'.")
+    else:
+        print(f"Found {len(files_to_download)} file(s) to download.")
+
+    for filename_in_repo in files_to_download:
+        print(f"Downloading {filename_in_repo}...")
+        # `filename` is the file's path within the repo. With `local_dir` set, the
+        # repo's folder structure is preserved: "Q4_K_M/file.gguf" is saved as
+        # local_base_dir/Q4_K_M/file.gguf, and the full local path is returned.
+        try:
+            downloaded_file_path = hf_hub_download(
+                repo_id=repo_id,
+                filename=filename_in_repo,
+                local_dir=local_base_dir,
+                local_dir_use_symlinks=False,
+                # resume_download=True,  # uncomment to resume interrupted downloads
+            )
+
+            print(f"Successfully downloaded and saved to: {downloaded_file_path}")
+
+        except Exception as e:
+            print(f"Error downloading {filename_in_repo}: {str(e)}")
+
+except Exception as e:
+    print(f"Error listing files from repository: {str(e)}")
+
+print("Download process complete.")
diff --git a/On host/AIServerSetup/99-Tips-And-Tricks/01-port-forward-trick.md b/On host/AIServerSetup/99-Tips-And-Tricks/01-port-forward-trick.md
new file mode 100644
index 0000000..82060b9
--- /dev/null
+++ b/On host/AIServerSetup/99-Tips-And-Tricks/01-port-forward-trick.md
@@ -0,0 +1,137 @@
+# **Port Forwarding Magic: Set Up Bolt.New with Remote Ollama Server and Qwen2.5-Coder:32B**
+
+This guide demonstrates how to use **port forwarding** to connect your local **Bolt.New** setup to a **remote Ollama server**, solving issues with apps that don’t allow full customization. We’ll use the open-source [Bolt.New repository](https://github.com/coleam00/bolt.new-any-llm) as our example, and we’ll even show you how to extend the context length for the popular **Qwen2.5-Coder:32B** model.
+
+If you encounter installation issues, submit an [issue](https://github.com/coleam00/bolt.new-any-llm/issues) or contribute by forking and improving this guide.
+
+---
+
+## **What You'll Learn**
+- Clone and configure **Bolt.New** for your local development.
+- Use **SSH tunneling** to seamlessly forward traffic to a remote server.
+- Extend the context length of AI models for enhanced capabilities.
+- Run **Bolt.New** locally.
+
+---
+
+## **Prerequisites**
+
+Download and install Node.js from [https://nodejs.org/en/download/](https://nodejs.org/en/download/). This guide also uses `pnpm`; if it is not already installed, run `npm install -g pnpm`.
+
+---
+
+## **Step 1: Clone the Repository**
+
+1. Open Terminal.
+2. 
Clone the repository: + ```bash + git clone https://github.com/coleam00/bolt.new-any-llm.git + ``` + +--- + +## **Step 2: Stop Local Ollama Service** + +If Ollama is already running on your machine, stop it to avoid conflicts with the remote server. + +- **Stop the service**: + ```bash + sudo systemctl stop ollama.service + ``` +- **OPTIONAL: Disable it from restarting**: + ```bash + sudo systemctl disable ollama.service + ``` + +--- + +## **Step 3: Forward Local Traffic to the Remote Ollama Server** + +To forward all traffic from `localhost:11434` to your remote Ollama server (`ai.mtcl.lan:11434`), set up SSH tunneling: + +1. Open a terminal and run: + ```bash + ssh -L 11434:ai.mtcl.lan:11434 mukul@ai.mtcl.lan + ``` + - Replace `mukul` with your remote username. + - Replace `ai.mtcl.lan` with your server's hostname or IP. + +2. Keep this terminal session running while using Bolt.New. This ensures your app communicates with the remote server as if it’s local. + +--- + +## **Step 4: OPTIONAL: Extend Ollama Model Context Length** + +By default, Ollama models have a context length of 2048 tokens. For tasks requiring larger input, extend this limit for **Qwen2.5-Coder:32B**: + +1. SSH into your remote server: + ```bash + ssh mukul@ai.mtcl.lan + ``` +2. Access the Docker container running Ollama: + ```bash + docker exec -it ollama /bin/bash + ``` +3. Create a `Modelfile`: + + While inside the Docker container, run the following commands to create the Modelfile: + + ```bash + echo "FROM qwen2.5-coder:32b" > /tmp/Modelfile + echo "PARAMETER num_ctx 32768" >> /tmp/Modelfile + ``` + If you prefer, you can use cat to directly create the file: + ```bash + cat > /tmp/Modelfile << EOF + FROM qwen2.5-coder:32b + PARAMETER num_ctx 32768 + EOF + ``` + + +4. Create the new model: + ```bash + ollama create -f /tmp/Modelfile qwen2.5-coder-extra-ctx:32b + ``` +5. Verify the new model: + ```bash + ollama list + ``` + You should see `qwen2.5-coder-extra-ctx:32b` listed. + +6. 
Exit the Docker container: + ```bash + exit + ``` + +--- + +## **Step 5: Run Bolt.New Without Docker** + +1. **Install Dependencies** + Navigate to the cloned repository: + ```bash + cd bolt.new-any-llm + pnpm install + ``` + +2. **Start the Development Server** + Run: + ```bash + pnpm run dev + ``` + +--- + +## **Summary** + +This guide walks you through setting up **Bolt.New** with a **remote Ollama server**, ensuring seamless communication through SSH tunneling. We’ve also shown you how to extend the context length for **Qwen2.5-Coder:32B**, making it ideal for advanced development tasks. + +With this setup: +- You’ll offload heavy computation to your remote server. +- Your local machine remains light and responsive. +- Buggy `localhost` configurations? No problem—SSH tunneling has you covered. + +Credits: [Bolt.New repository](https://github.com/coleam00/bolt.new-any-llm). + +Let’s build something amazing! 🚀 \ No newline at end of file diff --git a/On host/AIServerSetup/99-Tips-And-Tricks/02-Set Up Bridge Networking on Ubuntu for Virtual Machines.md b/On host/AIServerSetup/99-Tips-And-Tricks/02-Set Up Bridge Networking on Ubuntu for Virtual Machines.md new file mode 100644 index 0000000..c9e8ca6 --- /dev/null +++ b/On host/AIServerSetup/99-Tips-And-Tricks/02-Set Up Bridge Networking on Ubuntu for Virtual Machines.md @@ -0,0 +1,107 @@ +### **Guide to Set Up Bridge Networking on Ubuntu for Virtual Machines** + +This guide explains how to configure bridge networking on Ubuntu to allow virtual machines (VMs) to directly access the network, obtaining their own IP addresses from the DHCP server. + +By following this guide, you can successfully set up bridge networking, enabling your virtual machines to directly access the network as if they were standalone devices. + +--- + +#### **Step 1: Identify Your Primary Network Interface** +The primary network interface is the one currently used by the server for network access. 
Identify it with the following command:
+
+```bash
+ip link show
+```
+
+Look for the name of the interface (e.g., `enp8s0`) with `state UP`.
+
+---
+
+#### **Step 2: Backup Your Current Network Configuration**
+Before making any changes, back up the existing netplan configuration file:
+
+```bash
+sudo cp /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bak
+```
+
+---
+
+#### **Step 3: Configure the Bridge**
+Edit the netplan configuration file:
+
+```bash
+sudo nano /etc/netplan/00-installer-config.yaml
+```
+
+Replace its content with the following, adjusted for your environment:
+
+```yaml
+network:
+  version: 2
+  ethernets:
+    enp8s0:
+      dhcp4: no
+  bridges:
+    br0:
+      interfaces: [enp8s0]
+      dhcp4: true
+```
+
+- `enp8s0`: Your physical network interface.
+- `br0`: The new bridge interface that will be used by the virtual machines and the host.
+
+Save and exit the file.
+
+---
+
+#### **Step 4: Apply the Configuration**
+Apply the new network configuration to create the bridge:
+
+```bash
+sudo netplan apply
+```
+
+---
+
+#### **Step 5: Verify the Bridge Configuration**
+Check that the bridge `br0` is active and has an IP address:
+
+```bash
+ip addr show br0
+```
+
+You should see output like this:
+
+```plaintext
+3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+    link/ether 46:10:cc:63:f4:37 brd ff:ff:ff:ff:ff:ff
+    inet 192.168.1.10/24 metric 100 brd 192.168.1.255 scope global dynamic br0
+       valid_lft 7102sec preferred_lft 7102sec
+```
+
+---
+
+#### **Step 6: Configure Virtual Machines to Use the Bridge**
+For VMs created with tools like `virt-manager` or `virsh`:
+1. When configuring the VM’s network interface, choose **Bridge** as the network source.
+2. Set `br0` as the bridge interface.
+3. The VM will now obtain an IP address dynamically from the same DHCP server as the host.
+
+For `virt-manager`:
+- Go to **Add Hardware > Network**.
+- Choose **Bridge br0** as the source. 
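+
+For VMs defined from the command line, `virt-install` can attach the bridge directly. This is a sketch only; the VM name, resources, and ISO path are placeholders for your own setup:
+
+```bash
+virt-install \
+  --name testvm \
+  --memory 4096 \
+  --vcpus 2 \
+  --disk size=20 \
+  --cdrom /path/to/ubuntu.iso \
+  --network bridge=br0,model=virtio \
+  --os-variant ubuntu22.04
+```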
+ +--- + +#### **Step 7: Test the Setup** +1. Start a VM and ensure it obtains a dynamic IP address from the network. +2. Test connectivity by pinging the gateway or external servers from the VM. + +--- + +### **Key Considerations** +1. **Dynamic IP for Host:** The host server's IP address will now be associated with the bridge (`br0`) instead of the physical interface (`enp8s0`). This is expected behavior. +2. **Backup Configuration:** Always maintain a backup of your original network configuration to revert changes if needed. +3. **Network Manager vs. Netplan:** Use only one method (`netplan` or `nmcli`) for managing network configurations to avoid conflicts. +4. **Alternative Access:** If you are working on a remote server, ensure alternative access (e.g., a second network interface) before applying network changes. +
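+
+If the bridge configuration goes wrong and you lose connectivity, the backup from Step 2 makes recovery straightforward:
+
+```bash
+sudo cp /etc/netplan/00-installer-config.yaml.bak /etc/netplan/00-installer-config.yaml
+sudo netplan apply
+```
+
+For risky changes on remote hosts, `sudo netplan try` is a safer alternative to `netplan apply`: it applies the configuration and automatically reverts after a timeout unless you confirm it.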