Added gitea-mirror
This commit is contained in:
56
Divers/gitea-mirror/scripts/README-dev.md
Normal file
56
Divers/gitea-mirror/scripts/README-dev.md
Normal file
@@ -0,0 +1,56 @@
|
||||
# Development Environment Setup
|
||||
|
||||
This directory contains scripts to help set up a development environment with a pre-configured Gitea instance.
|
||||
|
||||
## Default Credentials
|
||||
|
||||
For development convenience, the Gitea instance is pre-configured with:
|
||||
|
||||
- **Admin Username**: `admin`
|
||||
- **Admin Password**: `admin123`
|
||||
- **Gitea URL**: http://localhost:3001
|
||||
|
||||
## Files
|
||||
|
||||
- `gitea-app.ini` - Pre-configured Gitea settings for development
|
||||
- `gitea-dev-init.sh` - Initialization script that copies the config on first run
|
||||
- `gitea-init.sql` - SQL script to create default admin user (not currently used)
|
||||
|
||||
## Usage
|
||||
|
||||
1. Start the development environment:
|
||||
```bash
|
||||
docker compose -f docker-compose.dev.yml down
|
||||
docker volume rm gitea-mirror_gitea-data gitea-mirror_gitea-config
|
||||
docker compose -f docker-compose.dev.yml up -d
|
||||
```
|
||||
|
||||
2. Wait for Gitea to start (check logs):
|
||||
```bash
|
||||
docker logs -f gitea
|
||||
```
|
||||
|
||||
3. Access Gitea at http://localhost:3001 and login with:
|
||||
- Username: `admin`
|
||||
- Password: `admin123`
|
||||
|
||||
4. Generate an API token:
|
||||
- Go to Settings → Applications
|
||||
- Generate New Token
|
||||
- Give it a name like "gitea-mirror"
|
||||
- Select all permissions (for development)
|
||||
- Copy the token
|
||||
|
||||
5. Configure gitea-mirror with the token in your `.env` file or through the web UI.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If Gitea doesn't start properly:
|
||||
|
||||
1. Check logs: `docker logs gitea`
|
||||
2. Ensure volumes are clean: `docker volume rm gitea-mirror_gitea-data gitea-mirror_gitea-config`
|
||||
3. Restart: `docker compose -f docker-compose.dev.yml up -d`
|
||||
|
||||
## Security Note
|
||||
|
||||
⚠️ **These credentials are for development only!** Never use these settings in production.
|
||||
78
Divers/gitea-mirror/scripts/README-docker.md
Normal file
78
Divers/gitea-mirror/scripts/README-docker.md
Normal file
@@ -0,0 +1,78 @@
|
||||
# Scripts Directory
|
||||
|
||||
This directory contains utility scripts for the gitea-mirror project.
|
||||
|
||||
## Docker Build Script
|
||||
|
||||
### build-docker.sh
|
||||
|
||||
This script simplifies the process of building and publishing multi-architecture Docker images for the gitea-mirror project.
|
||||
|
||||
#### Usage
|
||||
|
||||
```bash
|
||||
./build-docker.sh [--load] [--push]
|
||||
```
|
||||
|
||||
Options:
|
||||
- `--load`: Load the built image into the local Docker daemon
|
||||
- `--push`: Push the image to the configured Docker registry
|
||||
|
||||
Without any flags, the script will build the image but leave it in the build cache only.
|
||||
|
||||
#### Configuration
|
||||
|
||||
The script uses environment variables from the `.env` file in the project root:
|
||||
|
||||
- `DOCKER_REGISTRY`: The Docker registry to push to (default: ghcr.io)
|
||||
- `DOCKER_IMAGE`: The image name (default: gitea-mirror)
|
||||
- `DOCKER_TAG`: The image tag (default: latest)
|
||||
|
||||
#### Examples
|
||||
|
||||
1. Build for multiple architectures and load into Docker:
|
||||
```bash
|
||||
./scripts/build-docker.sh --load
|
||||
```
|
||||
|
||||
2. Build and push to the registry:
|
||||
```bash
|
||||
./scripts/build-docker.sh --push
|
||||
```
|
||||
|
||||
3. Using with docker-compose:
|
||||
```bash
|
||||
# Ensure dependencies are installed and database is initialized
|
||||
bun run setup
|
||||
|
||||
# First build the image
|
||||
./scripts/build-docker.sh --load
|
||||
|
||||
# Then run using docker-compose for development
|
||||
docker-compose -f ../docker-compose.dev.yml up -d
|
||||
|
||||
# Or for production
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
## Diagnostics Script
|
||||
|
||||
### docker-diagnostics.sh
|
||||
|
||||
This utility script helps diagnose issues with your Docker setup for building and running Gitea Mirror.
|
||||
|
||||
#### Usage
|
||||
|
||||
```bash
|
||||
./scripts/docker-diagnostics.sh
|
||||
```
|
||||
|
||||
The script checks:
|
||||
- Docker and Docker Compose installation
|
||||
- Docker Buildx configuration
|
||||
- QEMU availability for multi-architecture builds
|
||||
- Docker resources (memory, CPU)
|
||||
- Environment configuration
|
||||
- Provides recommendations for building and troubleshooting
|
||||
|
||||
Run this script before building if you're experiencing issues with Docker builds or want to validate your environment.
|
||||
137
Divers/gitea-mirror/scripts/README-lxc.md
Normal file
137
Divers/gitea-mirror/scripts/README-lxc.md
Normal file
@@ -0,0 +1,137 @@
|
||||
# LXC Container Deployment Guide
|
||||
|
||||
## Overview
|
||||
Run **Gitea Mirror** in an isolated LXC container:
|
||||
|
||||
1. **Proxmox VE (Recommended)** – Using the community-maintained script
|
||||
2. **Local Development** – Using the local LXC script for testing
|
||||
|
||||
---
|
||||
|
||||
## 1. Proxmox VE Installation (Recommended)
|
||||
|
||||
### Prerequisites
|
||||
* Proxmox VE host with internet access
|
||||
* Root shell access on the Proxmox node
|
||||
|
||||
### One-command install
|
||||
|
||||
```bash
|
||||
# Community-maintained script from the Proxmox VE Community Scripts project
|
||||
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/gitea-mirror.sh)"
|
||||
```
|
||||
|
||||
### What the script does:
|
||||
|
||||
* Creates a privileged Alpine Linux LXC container
|
||||
* Installs Bun runtime environment
|
||||
* Clones the Gitea Mirror repository
|
||||
* Builds the application
|
||||
* Configures a systemd service for automatic startup
|
||||
* Sets up the application to run on port 4321
|
||||
* Generates a secure `JWT_SECRET` automatically
|
||||
|
||||
### Accessing Gitea Mirror:
|
||||
|
||||
```
|
||||
http://<container-ip>:4321
|
||||
```
|
||||
|
||||
### Additional Information:
|
||||
* **Script Source**: [Community Scripts for Proxmox VE](https://github.com/community-scripts/ProxmoxVE)
|
||||
* **Documentation**: [Gitea Mirror Script Documentation](https://community-scripts.github.io/ProxmoxVE/scripts?id=gitea-mirror)
|
||||
* **Support**: [Community Scripts Discord](https://discord.gg/fiXVvSHnBU)
|
||||
|
||||
---
|
||||
|
||||
## 2. Local testing (LXD on a workstation, works offline)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
* `lxd` installed (`sudo apt install lxd`; `lxd init --auto`)
|
||||
* Your repo cloned locally – e.g. `~/Development/gitea-mirror`
|
||||
* Bun ZIP downloaded once:
|
||||
`https://github.com/oven-sh/bun/releases/latest/download/bun-linux-x64.zip`
|
||||
|
||||
### Offline installer script
|
||||
|
||||
```bash
|
||||
git clone https://github.com/RayLabsHQ/gitea-mirror.git # if not already
|
||||
curl -fsSL https://raw.githubusercontent.com/raylabshq/gitea-mirror:/main/scripts/gitea-mirror-lxc-local.sh -o gitea-mirror-lxc-local.sh
|
||||
chmod +x gitea-mirror-lxc-local.sh
|
||||
|
||||
sudo LOCAL_REPO_DIR=~/Development/gitea-mirror \
|
||||
./gitea-mirror-lxc-local.sh
|
||||
```
|
||||
|
||||
What it does:
|
||||
|
||||
* Launches privileged LXC `gitea-test` (`lxc launch ubuntu:22.04 ...`)
|
||||
* Pushes **Bun ZIP** + tarred **local repo** into `/opt`
|
||||
* Unpacks, builds, initializes DB
|
||||
* Symlinks both `bun` and `bunx` → `/usr/local/bin`
|
||||
* Creates a root systemd unit and starts it
|
||||
|
||||
Access from host:
|
||||
|
||||
```
|
||||
http://$(lxc exec gitea-test -- hostname -I | awk '{print $1}'):4321
|
||||
```
|
||||
|
||||
(Optional) forward to host localhost:
|
||||
|
||||
```bash
|
||||
sudo lxc config device add gitea-test mirror proxy \
|
||||
listen=tcp:0.0.0.0:4321 connect=tcp:127.0.0.1:4321
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Health-check endpoint
|
||||
|
||||
Gitea Mirror includes a built-in health check endpoint at `/api/health` that provides:
|
||||
|
||||
- System status and uptime
|
||||
- Database connectivity check
|
||||
- Memory usage statistics
|
||||
- Environment information
|
||||
|
||||
You can use this endpoint for monitoring your deployment:
|
||||
|
||||
```bash
|
||||
# Basic check (returns 200 OK if healthy)
|
||||
curl -I http://<container-ip>:4321/api/health
|
||||
|
||||
# Detailed health information (JSON)
|
||||
curl http://<container-ip>:4321/api/health
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
| Check | Command |
|
||||
| -------------- | ----------------------------------------------------- |
|
||||
| Service status | `systemctl status gitea-mirror` |
|
||||
| Live logs | `journalctl -u gitea-mirror -f` |
|
||||
| Verify Bun | `bun --version && bunx --version` |
|
||||
| DB perms | `chown -R root:root /opt/gitea-mirror/data` (Proxmox) |
|
||||
|
||||
---
|
||||
|
||||
## Connecting LXC and Docker Containers
|
||||
|
||||
If you need your LXC container to communicate with Docker containers:
|
||||
|
||||
1. On your host machine, create a bridge network:
|
||||
```bash
|
||||
docker network create gitea-network
|
||||
```
|
||||
|
||||
2. Find the bridge interface created by Docker:
|
||||
```bash
|
||||
ip a | grep docker
|
||||
# Look for something like docker0 or br-xxxxxxxx
|
||||
```
|
||||
|
||||
3. In Proxmox, edit the LXC container's network configuration to use this bridge.
|
||||
140
Divers/gitea-mirror/scripts/README.md
Normal file
140
Divers/gitea-mirror/scripts/README.md
Normal file
@@ -0,0 +1,140 @@
|
||||
# Scripts Directory
|
||||
|
||||
This folder contains utility scripts for database management, event management, Docker builds, and LXC container deployment.
|
||||
|
||||
## Database Management
|
||||
|
||||
### Database Management Tool (manage-db.ts)
|
||||
|
||||
This is a consolidated database management tool that handles all database-related operations. It combines the functionality of the previous separate scripts into a single, more intelligent script that can check, fix, and initialize the database as needed.
|
||||
|
||||
#### Features
|
||||
|
||||
- **Check Mode**: Validates the existence and integrity of the database
|
||||
- **Init Mode**: Creates the database only if it doesn't already exist
|
||||
- **Fix Mode**: Corrects database file location issues
|
||||
- **Reset Users Mode**: Removes all users and their data
|
||||
- **Auto Mode**: Automatically checks, fixes, and initializes the database if needed
|
||||
|
||||
#### Running the Database Management Tool
|
||||
|
||||
You can execute the database management tool using your package manager with various commands:
|
||||
|
||||
```bash
|
||||
# Checks database status (default action if no command is specified)
|
||||
bun run manage-db
|
||||
|
||||
# Check database status
|
||||
bun run check-db
|
||||
|
||||
# Initialize the database (only if it doesn't exist)
|
||||
bun run init-db
|
||||
|
||||
# Fix database location issues
|
||||
bun run fix-db
|
||||
|
||||
# Automatic check, fix, and initialize if needed
|
||||
bun run db-auto
|
||||
|
||||
# Reset all users (for testing signup flow)
|
||||
bun run reset-users
|
||||
|
||||
# Remove database files completely
|
||||
bun run cleanup-db
|
||||
|
||||
# Complete setup (install dependencies and initialize database)
|
||||
bun run setup
|
||||
|
||||
# Start development server with a fresh database
|
||||
bun run dev:clean
|
||||
|
||||
# Start production server with a fresh database
|
||||
bun run start:fresh
|
||||
```
|
||||
|
||||
#### Database File Location
|
||||
|
||||
The database file should be located in the `./data/gitea-mirror.db` directory. If the file is found in the root directory, the fix mode will move it to the correct location.
|
||||
|
||||
## Event Management
|
||||
|
||||
The following scripts help manage events in the SQLite database:
|
||||
|
||||
> **Note**: For a more user-friendly approach, you can use the cleanup button in the Activity Log page of the web interface to delete all activities with a single click.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
### Remove Duplicate Events (remove-duplicate-events.ts)
|
||||
|
||||
Specifically removes duplicate events based on deduplication keys without affecting old events.
|
||||
|
||||
```bash
|
||||
# Remove duplicate events for all users
|
||||
bun scripts/remove-duplicate-events.ts
|
||||
|
||||
# Remove duplicate events for a specific user
|
||||
bun scripts/remove-duplicate-events.ts <userId>
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Fix Interrupted Jobs (fix-interrupted-jobs.ts)
|
||||
|
||||
Fixes interrupted jobs that might be preventing cleanup by marking them as failed.
|
||||
|
||||
```bash
|
||||
# Fix all interrupted jobs
|
||||
bun scripts/fix-interrupted-jobs.ts
|
||||
|
||||
# Fix interrupted jobs for a specific user
|
||||
bun scripts/fix-interrupted-jobs.ts <userId>
|
||||
```
|
||||
|
||||
Use this script if you're having trouble cleaning up activities due to "interrupted" jobs that won't delete.
|
||||
|
||||
### Startup Recovery (startup-recovery.ts)
|
||||
|
||||
Runs job recovery during application startup to handle any interrupted jobs from previous runs.
|
||||
|
||||
```bash
|
||||
# Run startup recovery (normal mode)
|
||||
bun scripts/startup-recovery.ts
|
||||
|
||||
# Force recovery even if recent attempt was made
|
||||
bun scripts/startup-recovery.ts --force
|
||||
|
||||
# Set custom timeout (default: 30000ms)
|
||||
bun scripts/startup-recovery.ts --timeout=60000
|
||||
|
||||
# Using npm scripts
|
||||
bun run startup-recovery
|
||||
bun run startup-recovery-force
|
||||
```
|
||||
|
||||
This script is automatically run by the Docker entrypoint during container startup. It ensures that any jobs interrupted by container restarts or application crashes are properly recovered or marked as failed.
|
||||
|
||||
## Deployment Scripts
|
||||
|
||||
### Docker Deployment
|
||||
|
||||
- **build-docker.sh**: Builds the Docker image for the application
|
||||
- **docker-diagnostics.sh**: Provides diagnostic information for Docker deployments
|
||||
|
||||
### LXC Container Deployment
|
||||
|
||||
Two deployment options are available for LXC containers:
|
||||
|
||||
1. **Proxmox VE (online)**: Using the community-maintained script by Tobias ([CrazyWolf13](https://github.com/CrazyWolf13))
|
||||
- Author: Tobias ([CrazyWolf13](https://github.com/CrazyWolf13))
|
||||
- Available at: [community-scripts/ProxmoxVED](https://github.com/community-scripts/ProxmoxVED/blob/main/install/gitea-mirror-install.sh)
|
||||
- Pulls everything from GitHub
|
||||
- Creates a privileged container with the application
|
||||
- Sets up systemd service
|
||||
|
||||
2. **gitea-mirror-lxc-local.sh**: For offline/LAN-only deployment on a developer laptop
|
||||
- Pushes your local checkout + Bun ZIP to the container
|
||||
- Useful for testing without internet access
|
||||
|
||||
For detailed instructions on LXC deployment, see [README-lxc.md](./README-lxc.md).
|
||||
104
Divers/gitea-mirror/scripts/build-docker.sh
Executable file
104
Divers/gitea-mirror/scripts/build-docker.sh
Executable file
@@ -0,0 +1,104 @@
|
||||
#!/bin/bash
|
||||
# Build and push the Gitea Mirror docker image for multiple architectures
|
||||
|
||||
set -e # Exit on any error
|
||||
|
||||
# Load environment variables if .env file exists
|
||||
if [ -f .env ]; then
|
||||
echo "Loading environment variables from .env"
|
||||
export $(grep -v '^#' .env | xargs)
|
||||
fi
|
||||
|
||||
# Set default values if not set in environment
|
||||
DOCKER_REGISTRY=${DOCKER_REGISTRY:-ghcr.io}
|
||||
DOCKER_IMAGE=${DOCKER_IMAGE:-gitea-mirror}
|
||||
DOCKER_TAG=${DOCKER_TAG:-latest}
|
||||
|
||||
FULL_IMAGE_NAME="$DOCKER_REGISTRY/$DOCKER_IMAGE:$DOCKER_TAG"
|
||||
echo "Building image: $FULL_IMAGE_NAME"
|
||||
|
||||
# Parse command line arguments
|
||||
LOAD=false
|
||||
PUSH=false
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
key="$1"
|
||||
case $key in
|
||||
--load)
|
||||
LOAD=true
|
||||
shift
|
||||
;;
|
||||
--push)
|
||||
PUSH=true
|
||||
shift
|
||||
;;
|
||||
*)
|
||||
echo "Unknown option: $key"
|
||||
echo "Usage: $0 [--load] [--push]"
|
||||
echo " --load Load the image into Docker after build"
|
||||
echo " --push Push the image to the registry after build"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Build command construction
|
||||
BUILD_CMD="docker buildx build --platform linux/amd64,linux/arm64 -t $FULL_IMAGE_NAME"
|
||||
|
||||
# Add load or push flag if specified
|
||||
if [ "$LOAD" = true ]; then
|
||||
BUILD_CMD="$BUILD_CMD --load"
|
||||
fi
|
||||
|
||||
if [ "$PUSH" = true ]; then
|
||||
BUILD_CMD="$BUILD_CMD --push"
|
||||
fi
|
||||
|
||||
# Add context directory
|
||||
BUILD_CMD="$BUILD_CMD ."
|
||||
|
||||
# Execute the build command
|
||||
echo "Executing: $BUILD_CMD"
|
||||
|
||||
# Function to execute with retries
|
||||
execute_with_retry() {
|
||||
local cmd="$1"
|
||||
local max_attempts=${2:-3}
|
||||
local attempt=1
|
||||
local delay=5
|
||||
|
||||
while [ $attempt -le $max_attempts ]; do
|
||||
echo "Attempt $attempt of $max_attempts..."
|
||||
if eval "$cmd"; then
|
||||
echo "Command succeeded!"
|
||||
return 0
|
||||
else
|
||||
echo "Command failed, waiting $delay seconds before retry..."
|
||||
sleep $delay
|
||||
attempt=$((attempt + 1))
|
||||
delay=$((delay * 2)) # Exponential backoff
|
||||
fi
|
||||
done
|
||||
|
||||
echo "All attempts failed!"
|
||||
return 1
|
||||
}
|
||||
|
||||
# Execute with retry
|
||||
execute_with_retry "$BUILD_CMD"
|
||||
BUILD_RESULT=$?
|
||||
|
||||
if [ $BUILD_RESULT -eq 0 ]; then
|
||||
echo "✅ Build successful!"
|
||||
else
|
||||
echo "❌ Build failed after multiple attempts."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Print help message if neither --load nor --push was specified
|
||||
if [ "$LOAD" = false ] && [ "$PUSH" = false ]; then
|
||||
echo
|
||||
echo "NOTE: Image was built but not loaded or pushed. To use this image, run again with:"
|
||||
echo " $0 --load # to load into local Docker"
|
||||
echo " $0 --push # to push to registry $DOCKER_REGISTRY"
|
||||
fi
|
||||
129
Divers/gitea-mirror/scripts/cleanup-duplicate-repos.ts
Normal file
129
Divers/gitea-mirror/scripts/cleanup-duplicate-repos.ts
Normal file
@@ -0,0 +1,129 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
/**
|
||||
* Script to find and clean up duplicate repositories in the database
|
||||
* Keeps the most recent entry and removes older duplicates
|
||||
*
|
||||
* Usage: bun scripts/cleanup-duplicate-repos.ts [--dry-run] [--repo-name=<name>]
|
||||
*/
|
||||
|
||||
import { db, repositories, mirrorJobs } from "@/lib/db";
|
||||
import { eq, and, desc } from "drizzle-orm";
|
||||
|
||||
const isDryRun = process.argv.includes("--dry-run");
|
||||
const specificRepo = process.argv.find(arg => arg.startsWith("--repo-name="))?.split("=")[1];
|
||||
|
||||
async function findDuplicateRepositories() {
|
||||
console.log("🔍 Finding duplicate repositories");
|
||||
console.log("=" .repeat(40));
|
||||
|
||||
if (isDryRun) {
|
||||
console.log("🔍 DRY RUN MODE - No changes will be made");
|
||||
console.log("");
|
||||
}
|
||||
|
||||
if (specificRepo) {
|
||||
console.log(`🎯 Targeting specific repository: ${specificRepo}`);
|
||||
console.log("");
|
||||
}
|
||||
|
||||
try {
|
||||
// Find all repositories, grouped by name and fullName
|
||||
let allRepos = await db.select().from(repositories);
|
||||
|
||||
if (specificRepo) {
|
||||
allRepos = allRepos.filter(repo => repo.name === specificRepo);
|
||||
}
|
||||
|
||||
// Group repositories by name and fullName
|
||||
const repoGroups = new Map<string, typeof allRepos>();
|
||||
|
||||
for (const repo of allRepos) {
|
||||
const key = `${repo.name}|${repo.fullName}`;
|
||||
if (!repoGroups.has(key)) {
|
||||
repoGroups.set(key, []);
|
||||
}
|
||||
repoGroups.get(key)!.push(repo);
|
||||
}
|
||||
|
||||
// Find groups with duplicates
|
||||
const duplicateGroups = Array.from(repoGroups.entries())
|
||||
.filter(([_, repos]) => repos.length > 1);
|
||||
|
||||
if (duplicateGroups.length === 0) {
|
||||
console.log("✅ No duplicate repositories found");
|
||||
return;
|
||||
}
|
||||
|
||||
console.log(`📋 Found ${duplicateGroups.length} sets of duplicate repositories:`);
|
||||
console.log("");
|
||||
|
||||
let totalDuplicates = 0;
|
||||
let totalRemoved = 0;
|
||||
|
||||
for (const [key, repos] of duplicateGroups) {
|
||||
const [name, fullName] = key.split("|");
|
||||
console.log(`🔄 Processing duplicates for: ${name} (${fullName})`);
|
||||
console.log(` Found ${repos.length} entries:`);
|
||||
|
||||
// Sort by updatedAt descending to keep the most recent
|
||||
repos.sort((a, b) => new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime());
|
||||
|
||||
const keepRepo = repos[0];
|
||||
const removeRepos = repos.slice(1);
|
||||
|
||||
console.log(` ✅ Keeping: ID ${keepRepo.id} (Status: ${keepRepo.status}, Updated: ${new Date(keepRepo.updatedAt).toISOString()})`);
|
||||
|
||||
for (const repo of removeRepos) {
|
||||
console.log(` ❌ Removing: ID ${repo.id} (Status: ${repo.status}, Updated: ${new Date(repo.updatedAt).toISOString()})`);
|
||||
|
||||
if (!isDryRun) {
|
||||
try {
|
||||
// First, delete related mirror jobs
|
||||
await db
|
||||
.delete(mirrorJobs)
|
||||
.where(eq(mirrorJobs.repositoryId, repo.id!));
|
||||
|
||||
// Then delete the repository
|
||||
await db
|
||||
.delete(repositories)
|
||||
.where(eq(repositories.id, repo.id!));
|
||||
|
||||
console.log(` 🗑️ Deleted repository and related mirror jobs`);
|
||||
totalRemoved++;
|
||||
} catch (error) {
|
||||
console.log(` ❌ Error deleting repository: ${error instanceof Error ? error.message : String(error)}`);
|
||||
}
|
||||
} else {
|
||||
console.log(` 🗑️ Would delete repository and related mirror jobs`);
|
||||
totalRemoved++;
|
||||
}
|
||||
}
|
||||
|
||||
totalDuplicates += removeRepos.length;
|
||||
console.log("");
|
||||
}
|
||||
|
||||
console.log("📊 Cleanup Summary:");
|
||||
console.log(` Duplicate sets found: ${duplicateGroups.length}`);
|
||||
console.log(` Total duplicates: ${totalDuplicates}`);
|
||||
console.log(` ${isDryRun ? 'Would remove' : 'Removed'}: ${totalRemoved}`);
|
||||
|
||||
if (isDryRun && totalRemoved > 0) {
|
||||
console.log("");
|
||||
console.log("💡 To apply these changes, run the script without --dry-run");
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
console.error("❌ Error during cleanup process:", error);
|
||||
}
|
||||
}
|
||||
|
||||
// Run the cleanup
|
||||
findDuplicateRepositories().then(() => {
|
||||
console.log("Cleanup process complete.");
|
||||
process.exit(0);
|
||||
}).catch((error) => {
|
||||
console.error("Fatal error:", error);
|
||||
process.exit(1);
|
||||
});
|
||||
125
Divers/gitea-mirror/scripts/docker-diagnostics.sh
Normal file
125
Divers/gitea-mirror/scripts/docker-diagnostics.sh
Normal file
@@ -0,0 +1,125 @@
|
||||
#!/bin/bash
|
||||
# Docker setup diagnostics tool for Gitea Mirror
|
||||
|
||||
# ANSI color codes
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[0;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
echo -e "${BLUE}=====================================================${NC}"
|
||||
echo -e "${BLUE} Gitea Mirror Docker Setup Diagnostics ${NC}"
|
||||
echo -e "${BLUE}=====================================================${NC}"
|
||||
|
||||
# Check if Docker is installed and running
|
||||
echo -e "\n${YELLOW}Checking Docker...${NC}"
|
||||
if command -v docker &> /dev/null; then
|
||||
echo -e "${GREEN}✓ Docker is installed${NC}"
|
||||
if docker info &> /dev/null; then
|
||||
echo -e "${GREEN}✓ Docker daemon is running${NC}"
|
||||
|
||||
# Get Docker version
|
||||
DOCKER_VERSION=$(docker version --format '{{.Server.Version}}')
|
||||
echo -e "${GREEN}✓ Docker version: $DOCKER_VERSION${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Docker daemon is not running${NC}"
|
||||
echo -e " Run: ${YELLOW}open -a Docker${NC}"
|
||||
fi
|
||||
else
|
||||
echo -e "${RED}✗ Docker is not installed${NC}"
|
||||
echo -e " Visit: ${BLUE}https://www.docker.com/products/docker-desktop${NC}"
|
||||
fi
|
||||
|
||||
# Check for Docker Compose
|
||||
echo -e "\n${YELLOW}Checking Docker Compose...${NC}"
|
||||
if docker compose version &> /dev/null; then
|
||||
COMPOSE_VERSION=$(docker compose version --short)
|
||||
echo -e "${GREEN}✓ Docker Compose is installed (v$COMPOSE_VERSION)${NC}"
|
||||
elif command -v docker-compose &> /dev/null; then
|
||||
COMPOSE_VERSION=$(docker-compose --version | awk '{print $3}' | sed 's/,//')
|
||||
echo -e "${GREEN}✓ Docker Compose is installed (v$COMPOSE_VERSION)${NC}"
|
||||
echo -e "${YELLOW}⚠ Using legacy docker-compose - consider upgrading${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Docker Compose is not installed${NC}"
|
||||
fi
|
||||
|
||||
# Check for Docker Buildx
|
||||
echo -e "\n${YELLOW}Checking Docker Buildx...${NC}"
|
||||
if docker buildx version &> /dev/null; then
|
||||
BUILDX_VERSION=$(docker buildx version | head -n1 | awk '{print $2}')
|
||||
echo -e "${GREEN}✓ Docker Buildx is installed (v$BUILDX_VERSION)${NC}"
|
||||
|
||||
# List available builders
|
||||
echo -e "\n${YELLOW}Available builders:${NC}"
|
||||
docker buildx ls
|
||||
else
|
||||
echo -e "${RED}✗ Docker Buildx is not installed or not activated${NC}"
|
||||
fi
|
||||
|
||||
# Check for QEMU
|
||||
echo -e "\n${YELLOW}Checking QEMU for multi-platform builds...${NC}"
|
||||
if docker run --rm --privileged multiarch/qemu-user-static --reset -p yes &> /dev/null; then
|
||||
echo -e "${GREEN}✓ QEMU is available for multi-architecture builds${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ QEMU setup issue - multi-platform builds may fail${NC}"
|
||||
echo -e " Run: ${YELLOW}docker run --rm --privileged multiarch/qemu-user-static --reset -p yes${NC}"
|
||||
fi
|
||||
|
||||
# Check Docker resources
|
||||
echo -e "\n${YELLOW}Checking Docker resources...${NC}"
|
||||
if [ "$(uname)" == "Darwin" ]; then
|
||||
# macOS
|
||||
if command -v osascript &> /dev/null; then
|
||||
SYS_MEM=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 ))
|
||||
echo -e "System memory: ${GREEN}$SYS_MEM GB${NC}"
|
||||
echo -e "NOTE: Check Docker Desktop settings to see allocated resources"
|
||||
echo -e "Recommended: At least 4GB RAM and 2 CPUs for multi-platform builds"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check environment file
|
||||
echo -e "\n${YELLOW}Checking environment configuration...${NC}"
|
||||
if [ -f .env ]; then
|
||||
echo -e "${GREEN}✓ .env file exists${NC}"
|
||||
|
||||
# Parse .env file safely
|
||||
if [ -f .env ]; then
|
||||
REGISTRY=$(grep DOCKER_REGISTRY .env | cut -d= -f2)
|
||||
IMAGE=$(grep DOCKER_IMAGE .env | cut -d= -f2)
|
||||
TAG=$(grep DOCKER_TAG .env | cut -d= -f2)
|
||||
|
||||
echo -e "Docker image configuration:"
|
||||
echo -e " Registry: ${BLUE}${REGISTRY:-"Not set (will use default)"}${NC}"
|
||||
echo -e " Image: ${BLUE}${IMAGE:-"Not set (will use default)"}${NC}"
|
||||
echo -e " Tag: ${BLUE}${TAG:-"Not set (will use default)"}${NC}"
|
||||
fi
|
||||
else
|
||||
echo -e "${YELLOW}⚠ .env file not found${NC}"
|
||||
echo -e " Run: ${YELLOW}cp .env.example .env${NC}"
|
||||
fi
|
||||
|
||||
# Conclusion and recommendations
|
||||
echo -e "\n${BLUE}=====================================================${NC}"
|
||||
echo -e "${BLUE} Recommendations ${NC}"
|
||||
echo -e "${BLUE}=====================================================${NC}"
|
||||
|
||||
echo -e "\n${YELLOW}For local development:${NC}"
|
||||
echo -e "1. ${GREEN}bun run setup${NC} (initialize database and install dependencies)"
|
||||
echo -e "2. ${GREEN}./scripts/build-docker.sh --load${NC} (build and load into Docker)"
|
||||
echo -e "3. ${GREEN}docker-compose -f docker-compose.dev.yml up -d${NC} (start the development container)"
|
||||
|
||||
echo -e "\n${YELLOW}For production deployment (using Docker Compose):${NC}"
|
||||
echo -e "1. ${GREEN}bun run setup${NC} (if not already done, to ensure database schema is ready)"
|
||||
echo -e "2. ${GREEN}docker-compose --profile production up -d${NC} (start the production container)"
|
||||
|
||||
echo -e "\n${YELLOW}For CI/CD builds:${NC}"
|
||||
echo -e "1. Use GitHub Actions workflow with retry mechanism"
|
||||
echo -e "2. If build fails, try running with: ${GREEN}DOCKER_BUILDKIT=1${NC}"
|
||||
echo -e "3. Consider breaking the build into multiple steps for better reliability"
|
||||
|
||||
echo -e "\n${YELLOW}For troubleshooting:${NC}"
|
||||
echo -e "1. Check container logs: ${GREEN}docker logs gitea-mirror-dev${NC} (for development) or ${GREEN}docker logs gitea-mirror${NC} (for production)"
|
||||
echo -e "2. Check health status: ${GREEN}docker inspect --format='{{.State.Health.Status}}' gitea-mirror-dev${NC} (for development) or ${GREEN}docker inspect --format='{{.State.Health.Status}}' gitea-mirror${NC} (for production)"
|
||||
echo -e "3. See full documentation: ${BLUE}.github/workflows/TROUBLESHOOTING.md${NC}"
|
||||
echo -e ""
|
||||
69
Divers/gitea-mirror/scripts/fix-interrupted-jobs.ts
Normal file
69
Divers/gitea-mirror/scripts/fix-interrupted-jobs.ts
Normal file
@@ -0,0 +1,69 @@
|
||||
#!/usr/bin/env bun
|
||||
/**
|
||||
* Script to fix interrupted jobs that might be preventing cleanup
|
||||
* This script marks all in-progress jobs as failed to allow them to be deleted
|
||||
*
|
||||
* Usage:
|
||||
* bun scripts/fix-interrupted-jobs.ts [userId]
|
||||
*
|
||||
* Where [userId] is optional - if provided, only fixes jobs for that user
|
||||
*/
|
||||
|
||||
import { db, mirrorJobs } from "../src/lib/db";
|
||||
import { eq, and } from "drizzle-orm";
|
||||
|
||||
// Parse command line arguments
|
||||
const args = process.argv.slice(2);
|
||||
const userId = args.length > 0 ? args[0] : undefined;
|
||||
|
||||
async function fixInterruptedJobs() {
|
||||
try {
|
||||
console.log("Checking for interrupted jobs...");
|
||||
|
||||
// Build the query
|
||||
const whereConditions = userId
|
||||
? and(eq(mirrorJobs.inProgress, true), eq(mirrorJobs.userId, userId))
|
||||
: eq(mirrorJobs.inProgress, true);
|
||||
|
||||
if (userId) {
|
||||
console.log(`Filtering for user: ${userId}`);
|
||||
}
|
||||
|
||||
// Find all in-progress jobs
|
||||
const inProgressJobs = await db
|
||||
.select()
|
||||
.from(mirrorJobs)
|
||||
.where(whereConditions);
|
||||
|
||||
if (inProgressJobs.length === 0) {
|
||||
console.log("No interrupted jobs found.");
|
||||
return;
|
||||
}
|
||||
|
||||
console.log(`Found ${inProgressJobs.length} interrupted jobs:`);
|
||||
inProgressJobs.forEach(job => {
|
||||
console.log(`- Job ${job.id}: ${job.message} (${job.repositoryName || job.organizationName || 'Unknown'})`);
|
||||
});
|
||||
|
||||
// Mark all in-progress jobs as failed
|
||||
await db
|
||||
.update(mirrorJobs)
|
||||
.set({
|
||||
inProgress: false,
|
||||
completedAt: new Date(),
|
||||
status: "failed",
|
||||
message: "Job interrupted and marked as failed by cleanup script"
|
||||
})
|
||||
.where(whereConditions);
|
||||
|
||||
console.log(`✅ Successfully marked ${inProgressJobs.length} interrupted jobs as failed.`);
|
||||
console.log("These jobs can now be deleted through the normal cleanup process.");
|
||||
|
||||
} catch (error) {
|
||||
console.error("Error fixing interrupted jobs:", error);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
// Run the fix
|
||||
fixInterruptedJobs();
|
||||
110
Divers/gitea-mirror/scripts/generate-better-auth-schema.ts
Normal file
110
Divers/gitea-mirror/scripts/generate-better-auth-schema.ts
Normal file
@@ -0,0 +1,110 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
import { betterAuth } from "better-auth";
|
||||
import { drizzleAdapter } from "better-auth/adapters/drizzle";
|
||||
import Database from "bun:sqlite";
|
||||
import { drizzle } from "drizzle-orm/bun-sqlite";
|
||||
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
|
||||
|
||||
// Create a minimal auth instance just for schema generation
|
||||
const tempDb = new Database(":memory:");
|
||||
const db = drizzle({ client: tempDb });
|
||||
|
||||
// Minimal auth config for schema generation
|
||||
const auth = betterAuth({
|
||||
database: drizzleAdapter(db, {
|
||||
provider: "sqlite",
|
||||
usePlural: true,
|
||||
}),
|
||||
emailAndPassword: {
|
||||
enabled: true,
|
||||
},
|
||||
});
|
||||
|
||||
// Generate the schema
|
||||
// Note: $internal API is not available in current better-auth version
|
||||
// const schema = auth.$internal.schema;
|
||||
|
||||
console.log("Better Auth Tables Required:");
|
||||
console.log("============================");
|
||||
|
||||
// Convert Better Auth schema to Drizzle schema definitions
|
||||
const drizzleSchemaCode = `// Better Auth Tables - Generated Schema
|
||||
import { sqliteTable, text, integer, index } from "drizzle-orm/sqlite-core";
|
||||
import { sql } from "drizzle-orm";
|
||||
|
||||
// Sessions table
|
||||
export const sessions = sqliteTable("sessions", {
|
||||
id: text("id").primaryKey(),
|
||||
token: text("token").notNull().unique(),
|
||||
userId: text("user_id").notNull().references(() => users.id),
|
||||
expiresAt: integer("expires_at", { mode: "timestamp" }).notNull(),
|
||||
ipAddress: text("ip_address"),
|
||||
userAgent: text("user_agent"),
|
||||
createdAt: integer("created_at", { mode: "timestamp" })
|
||||
.notNull()
|
||||
.default(sql\`(unixepoch())\`),
|
||||
updatedAt: integer("updated_at", { mode: "timestamp" })
|
||||
.notNull()
|
||||
.default(sql\`(unixepoch())\`),
|
||||
}, (table) => {
|
||||
return {
|
||||
userIdIdx: index("idx_sessions_user_id").on(table.userId),
|
||||
tokenIdx: index("idx_sessions_token").on(table.token),
|
||||
expiresAtIdx: index("idx_sessions_expires_at").on(table.expiresAt),
|
||||
};
|
||||
});
|
||||
|
||||
// Accounts table (for OAuth providers and credentials)
|
||||
export const accounts = sqliteTable("accounts", {
|
||||
id: text("id").primaryKey(),
|
||||
userId: text("user_id").notNull().references(() => users.id),
|
||||
providerId: text("provider_id").notNull(),
|
||||
providerUserId: text("provider_user_id").notNull(),
|
||||
accessToken: text("access_token"),
|
||||
refreshToken: text("refresh_token"),
|
||||
expiresAt: integer("expires_at", { mode: "timestamp" }),
|
||||
password: text("password"), // For credential provider
|
||||
createdAt: integer("created_at", { mode: "timestamp" })
|
||||
.notNull()
|
||||
.default(sql\`(unixepoch())\`),
|
||||
updatedAt: integer("updated_at", { mode: "timestamp" })
|
||||
.notNull()
|
||||
.default(sql\`(unixepoch())\`),
|
||||
}, (table) => {
|
||||
return {
|
||||
userIdIdx: index("idx_accounts_user_id").on(table.userId),
|
||||
providerIdx: index("idx_accounts_provider").on(table.providerId, table.providerUserId),
|
||||
};
|
||||
});
|
||||
|
||||
// Verification tokens table
|
||||
export const verificationTokens = sqliteTable("verification_tokens", {
|
||||
id: text("id").primaryKey(),
|
||||
token: text("token").notNull().unique(),
|
||||
identifier: text("identifier").notNull(),
|
||||
type: text("type").notNull(), // email, password-reset, etc
|
||||
expiresAt: integer("expires_at", { mode: "timestamp" }).notNull(),
|
||||
createdAt: integer("created_at", { mode: "timestamp" })
|
||||
.notNull()
|
||||
.default(sql\`(unixepoch())\`),
|
||||
}, (table) => {
|
||||
return {
|
||||
tokenIdx: index("idx_verification_tokens_token").on(table.token),
|
||||
identifierIdx: index("idx_verification_tokens_identifier").on(table.identifier),
|
||||
};
|
||||
});
|
||||
|
||||
// Future: SSO and OIDC Provider tables will be added when we enable those plugins
|
||||
`;
|
||||
|
||||
console.log(drizzleSchemaCode);
|
||||
|
||||
// Output information about the schema
|
||||
console.log("\n\nSummary:");
|
||||
console.log("=========");
|
||||
console.log("- Better Auth will modify the existing 'users' table");
|
||||
console.log("- New tables required: sessions, accounts, verification_tokens");
|
||||
console.log("\nNote: The 'users' table needs emailVerified field added");
|
||||
|
||||
tempDb.close();
|
||||
68
Divers/gitea-mirror/scripts/gitea-app.ini
Normal file
68
Divers/gitea-mirror/scripts/gitea-app.ini
Normal file
@@ -0,0 +1,68 @@
|
||||
APP_NAME = Gitea: Git with a cup of tea
|
||||
RUN_MODE = prod
|
||||
|
||||
[database]
|
||||
DB_TYPE = sqlite3
|
||||
PATH = /data/gitea/gitea.db
|
||||
|
||||
[repository]
|
||||
ROOT = /data/git/repositories
|
||||
|
||||
[server]
|
||||
SSH_DOMAIN = localhost
|
||||
DOMAIN = localhost
|
||||
HTTP_PORT = 3000
|
||||
ROOT_URL = http://localhost:3001/
|
||||
DISABLE_SSH = false
|
||||
SSH_PORT = 22
|
||||
LFS_START_SERVER = true
|
||||
LFS_JWT_SECRET = _oaWNP5sCH5cSECa-K_HvCXeXhg-zN5H0cU5vVQAZr4
|
||||
OFFLINE_MODE = false
|
||||
|
||||
[security]
|
||||
INSTALL_LOCK = true
|
||||
SECRET_KEY = vLu5OuX0EweZjDNxKPQ5V9DXXXX8cJiKpJyQylKkMVTrNdFAzlUlNdYLYfiCybu
|
||||
INTERNAL_TOKEN = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYmYiOjE3MjgzMTk1MDB9.Lz0cJB_DCLmJFh8FqDX0z9IUcxfY9jPftHEGvz_WeHo
|
||||
PASSWORD_HASH_ALGO = pbkdf2
|
||||
|
||||
[service]
|
||||
DISABLE_REGISTRATION = false
|
||||
REQUIRE_SIGNIN_VIEW = false
|
||||
REGISTER_EMAIL_CONFIRM = false
|
||||
ENABLE_NOTIFY_MAIL = false
|
||||
ALLOW_ONLY_EXTERNAL_REGISTRATION = false
|
||||
ENABLE_CAPTCHA = false
|
||||
DEFAULT_KEEP_EMAIL_PRIVATE = false
|
||||
DEFAULT_ALLOW_CREATE_ORGANIZATION = true
|
||||
DEFAULT_ENABLE_TIMETRACKING = true
|
||||
NO_REPLY_ADDRESS = noreply.localhost
|
||||
|
||||
[oauth2]
|
||||
JWT_SECRET = gQXt_D8B-VJGCvFfJ9xEj5yp8mOd6fAza8TKc9rJJYw
|
||||
|
||||
[lfs]
|
||||
PATH = /data/git/lfs
|
||||
|
||||
[mailer]
|
||||
ENABLED = false
|
||||
|
||||
[openid]
|
||||
ENABLE_OPENID_SIGNIN = true
|
||||
ENABLE_OPENID_SIGNUP = true
|
||||
|
||||
[session]
|
||||
PROVIDER = file
|
||||
|
||||
[log]
|
||||
MODE = console
|
||||
LEVEL = Info
|
||||
ROOT_PATH = /data/gitea/log
|
||||
|
||||
[repository.pull-request]
|
||||
DEFAULT_MERGE_STYLE = merge
|
||||
|
||||
[repository.signing]
|
||||
DEFAULT_TRUST_MODEL = committer
|
||||
|
||||
[actions]
|
||||
ENABLED = false
|
||||
14
Divers/gitea-mirror/scripts/gitea-create-admin.sh
Executable file
14
Divers/gitea-mirror/scripts/gitea-create-admin.sh
Executable file
@@ -0,0 +1,14 @@
|
||||
#!/bin/sh
|
||||
# Create admin user for Gitea development instance
|
||||
|
||||
echo "Creating admin user for Gitea..."
|
||||
docker exec -u git gitea gitea admin user create \
|
||||
--username admin \
|
||||
--password admin123 \
|
||||
--email admin@localhost \
|
||||
--admin \
|
||||
--must-change-password=false
|
||||
|
||||
echo "Admin user created!"
|
||||
echo "Username: admin"
|
||||
echo "Password: admin123"
|
||||
32
Divers/gitea-mirror/scripts/gitea-dev-init.sh
Executable file
32
Divers/gitea-mirror/scripts/gitea-dev-init.sh
Executable file
@@ -0,0 +1,32 @@
|
||||
#!/bin/sh
|
||||
# Initialize Gitea for development with pre-configured settings
|
||||
|
||||
# Create necessary directories
|
||||
mkdir -p /data/gitea/conf
|
||||
|
||||
# Copy pre-configured app.ini if it doesn't exist
|
||||
if [ ! -f /data/gitea/conf/app.ini ]; then
|
||||
echo "Initializing Gitea with development configuration..."
|
||||
cp /tmp/app.ini /data/gitea/conf/app.ini
|
||||
chown 1000:1000 /data/gitea/conf/app.ini
|
||||
fi
|
||||
|
||||
# Start Gitea in background
|
||||
/usr/bin/entrypoint "$@" &
|
||||
GITEA_PID=$!
|
||||
|
||||
# Wait for Gitea to be ready
|
||||
echo "Waiting for Gitea to start..."
|
||||
until wget --no-verbose --tries=1 --spider http://localhost:3000/ 2>/dev/null; do
|
||||
sleep 2
|
||||
done
|
||||
|
||||
# Create admin user if it doesn't exist
|
||||
if [ ! -f /data/.admin_created ]; then
|
||||
echo "Creating default admin user..."
|
||||
su git -c "gitea admin user create --username admin --password admin123 --email admin@localhost --admin --must-change-password=false" && \
|
||||
touch /data/.admin_created
|
||||
fi
|
||||
|
||||
# Keep Gitea running in foreground
|
||||
wait $GITEA_PID
|
||||
86
Divers/gitea-mirror/scripts/gitea-mirror-lxc-local.sh
Executable file
86
Divers/gitea-mirror/scripts/gitea-mirror-lxc-local.sh
Executable file
@@ -0,0 +1,86 @@
|
||||
#!/usr/bin/env bash
|
||||
# gitea-mirror-lxc-local.sh (offline, local repo, verbose)
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
CONTAINER="gitea-test"
|
||||
IMAGE="ubuntu:22.04"
|
||||
INSTALL_DIR="/opt/gitea-mirror"
|
||||
PORT=4321
|
||||
BETTER_AUTH_SECRET="$(openssl rand -hex 32)"
|
||||
|
||||
BUN_ZIP="/tmp/bun-linux-x64.zip"
|
||||
BUN_URL="https://github.com/oven-sh/bun/releases/latest/download/bun-linux-x64.zip"
|
||||
|
||||
LOCAL_REPO_DIR="${LOCAL_REPO_DIR:-./gitea-mirror}"
|
||||
REPO_TAR="/tmp/gitea-mirror-local.tar.gz"
|
||||
|
||||
need() { command -v "$1" >/dev/null || { echo "Missing $1"; exit 1; }; }
|
||||
need curl; need lxc; need tar; need unzip
|
||||
|
||||
# ── build host artefacts ────────────────────────────────────────────────
|
||||
[[ -d $LOCAL_REPO_DIR ]] || { echo "❌ LOCAL_REPO_DIR not found"; exit 1; }
|
||||
[[ -f $LOCAL_REPO_DIR/package.json ]] || { echo "❌ package.json missing"; exit 1; }
|
||||
[[ -f $BUN_ZIP ]] || curl -L --retry 5 --retry-delay 5 -o "$BUN_ZIP" "$BUN_URL"
|
||||
tar -czf "$REPO_TAR" -C "$(dirname "$LOCAL_REPO_DIR")" "$(basename "$LOCAL_REPO_DIR")"
|
||||
|
||||
# ── ensure container exists ─────────────────────────────────────────────
|
||||
lxd init --auto >/dev/null 2>&1 || true
|
||||
lxc info "$CONTAINER" >/dev/null 2>&1 || lxc launch "$IMAGE" "$CONTAINER"
|
||||
|
||||
echo "🔧 installing base packages…"
|
||||
sudo lxc exec "$CONTAINER" -- bash -c 'set -ex; apt update; apt install -y unzip tar openssl sqlite3'
|
||||
|
||||
echo "⬆️ pushing artefacts…"
|
||||
sudo lxc file push "$BUN_ZIP" "$CONTAINER/opt/"
|
||||
sudo lxc file push "$REPO_TAR" "$CONTAINER/opt/"
|
||||
|
||||
echo "📦 unpacking Bun + repo…"
|
||||
sudo lxc exec "$CONTAINER" -- bash -ex <<'IN'
|
||||
cd /opt
|
||||
# Bun
|
||||
unzip -oq bun-linux-x64.zip -d bun
|
||||
BIN=$(find /opt/bun -type f -name bun -perm -111 | head -n1)
|
||||
ln -sf "$BIN" /usr/local/bin/bun # bun
|
||||
ln -sf "$BIN" /usr/local/bin/bunx # bunx shim
|
||||
# Repo
|
||||
rm -rf /opt/gitea-mirror
|
||||
mkdir -p /opt/gitea-mirror
|
||||
tar -xzf gitea-mirror-local.tar.gz --strip-components=1 -C /opt/gitea-mirror
|
||||
IN
|
||||
|
||||
echo "🏗️ bun install / build…"
|
||||
sudo lxc exec "$CONTAINER" -- bash -ex <<'IN'
|
||||
cd /opt/gitea-mirror
|
||||
bun install
|
||||
bun run build
|
||||
bun run manage-db init
|
||||
IN
|
||||
|
||||
echo "📝 systemd unit…"
|
||||
sudo lxc exec "$CONTAINER" -- bash -ex <<IN
|
||||
cat >/etc/systemd/system/gitea-mirror.service <<SERVICE
|
||||
[Unit]
|
||||
Description=Gitea Mirror
|
||||
After=network.target
|
||||
[Service]
|
||||
Type=simple
|
||||
WorkingDirectory=$INSTALL_DIR
|
||||
ExecStart=/usr/local/bin/bun dist/server/entry.mjs
|
||||
Restart=on-failure
|
||||
RestartSec=10
|
||||
Environment=NODE_ENV=production
|
||||
Environment=HOST=0.0.0.0
|
||||
Environment=PORT=$PORT
|
||||
Environment=DATABASE_URL=file:data/gitea-mirror.db
|
||||
Environment=BETTER_AUTH_SECRET=$BETTER_AUTH_SECRET
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
SERVICE
|
||||
systemctl daemon-reload
|
||||
systemctl enable gitea-mirror
|
||||
systemctl restart gitea-mirror
|
||||
IN
|
||||
|
||||
echo -e "\n✅ finished; service status:"
|
||||
sudo lxc exec "$CONTAINER" -- systemctl status gitea-mirror --no-pager
|
||||
178
Divers/gitea-mirror/scripts/investigate-repo.ts
Normal file
178
Divers/gitea-mirror/scripts/investigate-repo.ts
Normal file
@@ -0,0 +1,178 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
/**
|
||||
* Script to investigate a specific repository's mirroring status
|
||||
* Usage: bun scripts/investigate-repo.ts [repository-name]
|
||||
*/
|
||||
|
||||
import { db, repositories, mirrorJobs, configs } from "@/lib/db";
|
||||
import { eq, desc, and } from "drizzle-orm";
|
||||
|
||||
const repoName = process.argv[2] || "EruditionPaper";
|
||||
|
||||
async function investigateRepository() {
|
||||
console.log(`🔍 Investigating repository: ${repoName}`);
|
||||
console.log("=" .repeat(50));
|
||||
|
||||
try {
|
||||
// Find the repository in the database
|
||||
const repos = await db
|
||||
.select()
|
||||
.from(repositories)
|
||||
.where(eq(repositories.name, repoName));
|
||||
|
||||
if (repos.length === 0) {
|
||||
console.log(`❌ Repository "${repoName}" not found in database`);
|
||||
return;
|
||||
}
|
||||
|
||||
const repo = repos[0];
|
||||
console.log(`✅ Found repository: ${repo.name}`);
|
||||
console.log(` ID: ${repo.id}`);
|
||||
console.log(` Full Name: ${repo.fullName}`);
|
||||
console.log(` Owner: ${repo.owner}`);
|
||||
console.log(` Organization: ${repo.organization || "None"}`);
|
||||
console.log(` Status: ${repo.status}`);
|
||||
console.log(` Is Private: ${repo.isPrivate}`);
|
||||
console.log(` Is Forked: ${repo.isForked}`);
|
||||
console.log(` Mirrored Location: ${repo.mirroredLocation || "Not set"}`);
|
||||
console.log(` Last Mirrored: ${repo.lastMirrored ? new Date(repo.lastMirrored).toISOString() : "Never"}`);
|
||||
console.log(` Error Message: ${repo.errorMessage || "None"}`);
|
||||
console.log(` Created At: ${new Date(repo.createdAt).toISOString()}`);
|
||||
console.log(` Updated At: ${new Date(repo.updatedAt).toISOString()}`);
|
||||
|
||||
console.log("\n📋 Recent Mirror Jobs:");
|
||||
console.log("-".repeat(30));
|
||||
|
||||
// Find recent mirror jobs for this repository
|
||||
const jobs = await db
|
||||
.select()
|
||||
.from(mirrorJobs)
|
||||
.where(eq(mirrorJobs.repositoryId, repo.id))
|
||||
.orderBy(desc(mirrorJobs.timestamp))
|
||||
.limit(10);
|
||||
|
||||
if (jobs.length === 0) {
|
||||
console.log(" No mirror jobs found for this repository");
|
||||
} else {
|
||||
jobs.forEach((job, index) => {
|
||||
console.log(` ${index + 1}. ${new Date(job.timestamp).toISOString()}`);
|
||||
console.log(` Status: ${job.status}`);
|
||||
console.log(` Message: ${job.message}`);
|
||||
if (job.details) {
|
||||
console.log(` Details: ${job.details}`);
|
||||
}
|
||||
console.log("");
|
||||
});
|
||||
}
|
||||
|
||||
// Get user configuration
|
||||
console.log("⚙️ User Configuration:");
|
||||
console.log("-".repeat(20));
|
||||
|
||||
const config = await db
|
||||
.select()
|
||||
.from(configs)
|
||||
.where(eq(configs.id, repo.configId))
|
||||
.limit(1);
|
||||
|
||||
if (config.length > 0) {
|
||||
const userConfig = config[0];
|
||||
console.log(` User ID: ${userConfig.userId}`);
|
||||
console.log(` GitHub Owner: ${userConfig.githubConfig?.owner || "Not set"}`);
|
||||
console.log(` Gitea URL: ${userConfig.giteaConfig?.url || "Not set"}`);
|
||||
console.log(` Gitea Default Owner: ${userConfig.giteaConfig?.defaultOwner || "Not set"}`);
|
||||
console.log(` Mirror Strategy: ${userConfig.githubConfig?.mirrorStrategy || "preserve"}`);
|
||||
console.log(` Include Starred: ${userConfig.githubConfig?.includeStarred || false}`);
|
||||
}
|
||||
|
||||
// Check for any active jobs
|
||||
console.log("\n🔄 Active Jobs:");
|
||||
console.log("-".repeat(15));
|
||||
|
||||
const activeJobs = await db
|
||||
.select()
|
||||
.from(mirrorJobs)
|
||||
.where(
|
||||
and(
|
||||
eq(mirrorJobs.repositoryId, repo.id),
|
||||
eq(mirrorJobs.inProgress, true)
|
||||
)
|
||||
);
|
||||
|
||||
if (activeJobs.length === 0) {
|
||||
console.log(" No active jobs found");
|
||||
} else {
|
||||
activeJobs.forEach((job, index) => {
|
||||
console.log(` ${index + 1}. Job ID: ${job.id}`);
|
||||
console.log(` Type: ${job.jobType || "mirror"}`);
|
||||
console.log(` Batch ID: ${job.batchId || "None"}`);
|
||||
console.log(` Started: ${job.startedAt ? new Date(job.startedAt).toISOString() : "Unknown"}`);
|
||||
console.log(` Last Checkpoint: ${job.lastCheckpoint ? new Date(job.lastCheckpoint).toISOString() : "None"}`);
|
||||
console.log(` Progress: ${job.completedItems || 0}/${job.totalItems || 0}`);
|
||||
console.log("");
|
||||
});
|
||||
}
|
||||
|
||||
// Check if repository exists in Gitea
|
||||
if (config.length > 0) {
|
||||
const userConfig = config[0];
|
||||
console.log("\n🔗 Gitea Repository Check:");
|
||||
console.log("-".repeat(25));
|
||||
|
||||
try {
|
||||
const giteaUrl = userConfig.giteaConfig?.url;
|
||||
const giteaToken = userConfig.giteaConfig?.token;
|
||||
const giteaUsername = userConfig.giteaConfig?.defaultOwner;
|
||||
|
||||
if (giteaUrl && giteaToken && giteaUsername) {
|
||||
const checkUrl = `${giteaUrl}/api/v1/repos/${giteaUsername}/${repo.name}`;
|
||||
console.log(` Checking: ${checkUrl}`);
|
||||
|
||||
const response = await fetch(checkUrl, {
|
||||
headers: {
|
||||
Authorization: `token ${giteaToken}`,
|
||||
},
|
||||
});
|
||||
|
||||
console.log(` Response Status: ${response.status} ${response.statusText}`);
|
||||
|
||||
if (response.ok) {
|
||||
const repoData = await response.json();
|
||||
console.log(` ✅ Repository exists in Gitea`);
|
||||
console.log(` Name: ${repoData.name}`);
|
||||
console.log(` Full Name: ${repoData.full_name}`);
|
||||
console.log(` Private: ${repoData.private}`);
|
||||
console.log(` Mirror: ${repoData.mirror}`);
|
||||
console.log(` Clone URL: ${repoData.clone_url}`);
|
||||
console.log(` Created: ${new Date(repoData.created_at).toISOString()}`);
|
||||
console.log(` Updated: ${new Date(repoData.updated_at).toISOString()}`);
|
||||
if (repoData.mirror_updated) {
|
||||
console.log(` Mirror Updated: ${new Date(repoData.mirror_updated).toISOString()}`);
|
||||
}
|
||||
} else {
|
||||
console.log(` ❌ Repository not found in Gitea`);
|
||||
const errorText = await response.text();
|
||||
console.log(` Error: ${errorText}`);
|
||||
}
|
||||
} else {
|
||||
console.log(" ⚠️ Missing Gitea configuration");
|
||||
}
|
||||
} catch (error) {
|
||||
console.log(` ❌ Error checking Gitea: ${error instanceof Error ? error.message : String(error)}`);
|
||||
}
|
||||
}
|
||||
|
||||
} catch (error) {
|
||||
console.error("❌ Error investigating repository:", error);
|
||||
}
|
||||
}
|
||||
|
||||
// Run the investigation
|
||||
investigateRepository().then(() => {
|
||||
console.log("Investigation complete.");
|
||||
process.exit(0);
|
||||
}).catch((error) => {
|
||||
console.error("Fatal error:", error);
|
||||
process.exit(1);
|
||||
});
|
||||
239
Divers/gitea-mirror/scripts/manage-db.ts
Normal file
239
Divers/gitea-mirror/scripts/manage-db.ts
Normal file
@@ -0,0 +1,239 @@
|
||||
import fs from "fs";
|
||||
import path from "path";
|
||||
import { Database } from "bun:sqlite";
|
||||
import { drizzle } from "drizzle-orm/bun-sqlite";
|
||||
import { migrate } from "drizzle-orm/bun-sqlite/migrator";
|
||||
import { v4 as uuidv4 } from "uuid";
|
||||
import { users, configs, repositories, organizations, mirrorJobs, events } from "../src/lib/db/schema";
|
||||
import bcrypt from "bcryptjs";
|
||||
import { eq } from "drizzle-orm";
|
||||
|
||||
// Command line arguments
|
||||
const args = process.argv.slice(2);
|
||||
const command = args[0] || "check";
|
||||
|
||||
// Ensure data directory exists
|
||||
const dataDir = path.join(process.cwd(), "data");
|
||||
if (!fs.existsSync(dataDir)) {
|
||||
fs.mkdirSync(dataDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Database path - ensure we use absolute path
|
||||
const dbPath = path.join(dataDir, "gitea-mirror.db");
|
||||
|
||||
/**
|
||||
* Initialize database with migrations
|
||||
*/
|
||||
async function initDatabase() {
|
||||
console.log("📦 Initializing database...");
|
||||
|
||||
// Create an empty database file if it doesn't exist
|
||||
if (!fs.existsSync(dbPath)) {
|
||||
fs.writeFileSync(dbPath, "");
|
||||
}
|
||||
|
||||
// Create SQLite instance
|
||||
const sqlite = new Database(dbPath);
|
||||
const db = drizzle({ client: sqlite });
|
||||
|
||||
// Run migrations
|
||||
console.log("🔄 Running migrations...");
|
||||
try {
|
||||
migrate(db, { migrationsFolder: "./drizzle" });
|
||||
console.log("✅ Migrations completed successfully");
|
||||
} catch (error) {
|
||||
console.error("❌ Error running migrations:", error);
|
||||
throw error;
|
||||
}
|
||||
|
||||
sqlite.close();
|
||||
console.log("✅ Database initialized successfully");
|
||||
}
|
||||
|
||||
/**
|
||||
* Check database status
|
||||
*/
|
||||
async function checkDatabase() {
|
||||
console.log("🔍 Checking database status...");
|
||||
|
||||
if (!fs.existsSync(dbPath)) {
|
||||
console.log("❌ Database does not exist at:", dbPath);
|
||||
console.log("💡 Run 'bun run init-db' to create the database");
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const sqlite = new Database(dbPath);
|
||||
const db = drizzle({ client: sqlite });
|
||||
|
||||
try {
|
||||
// Check tables
|
||||
const tables = sqlite.query(
|
||||
"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
|
||||
).all() as Array<{name: string}>;
|
||||
|
||||
console.log("\n📊 Tables found:");
|
||||
for (const table of tables) {
|
||||
const count = sqlite.query(`SELECT COUNT(*) as count FROM ${table.name}`).get() as {count: number};
|
||||
      console.log(`  - ${table.name}: ${count.count} records`);
    }

    // Check migrations
    const migrations = sqlite.query(
      "SELECT * FROM __drizzle_migrations ORDER BY created_at DESC LIMIT 5"
    ).all() as Array<{hash: string, created_at: number}>;

    if (migrations.length > 0) {
      console.log("\n📋 Recent migrations:");
      for (const migration of migrations) {
        const date = new Date(migration.created_at);
        console.log(`  - ${migration.hash} (${date.toLocaleString()})`);
      }
    }

    sqlite.close();
    console.log("\n✅ Database check complete");
  } catch (error) {
    console.error("❌ Error checking database:", error);
    sqlite.close();
    process.exit(1);
  }
}

/**
 * Reset user accounts (development only)
 */
async function resetUsers() {
  console.log("🗑️ Resetting all user accounts...");

  if (!fs.existsSync(dbPath)) {
    console.log("❌ Database does not exist");
    process.exit(1);
  }

  const sqlite = new Database(dbPath);
  const db = drizzle({ client: sqlite });

  try {
    // Delete all data in order of foreign key dependencies
    await db.delete(events);
    await db.delete(mirrorJobs);
    await db.delete(repositories);
    await db.delete(organizations);
    await db.delete(configs);
    await db.delete(users);

    console.log("✅ All user accounts and related data have been removed");

    sqlite.close();
  } catch (error) {
    console.error("❌ Error resetting users:", error);
    sqlite.close();
    process.exit(1);
  }
}

/**
 * Clean up database files
 */
async function cleanupDatabase() {
  console.log("🧹 Cleaning up database files...");

  const filesToRemove = [
    dbPath,
    path.join(dataDir, "gitea-mirror-dev.db"),
    path.join(process.cwd(), "gitea-mirror.db"),
    path.join(process.cwd(), "gitea-mirror-dev.db"),
  ];

  for (const file of filesToRemove) {
    if (fs.existsSync(file)) {
      fs.unlinkSync(file);
      console.log(`  - Removed: ${file}`);
    }
  }

  console.log("✅ Database cleanup complete");
}

/**
 * Fix database location issues
 */
async function fixDatabase() {
  console.log("🔧 Fixing database location issues...");

  // Legacy database paths
  const rootDbFile = path.join(process.cwd(), "gitea-mirror.db");
  const rootDevDbFile = path.join(process.cwd(), "gitea-mirror-dev.db");
  const dataDevDbFile = path.join(dataDir, "gitea-mirror-dev.db");

  // Check for databases in wrong locations
  if (fs.existsSync(rootDbFile)) {
    console.log("📁 Found database in root directory");
    if (!fs.existsSync(dbPath)) {
      console.log("  → Moving to data directory...");
      fs.renameSync(rootDbFile, dbPath);
      console.log("✅ Database moved successfully");
    } else {
      console.log("  ⚠️ Database already exists in data directory");
      console.log("  → Keeping existing data directory database");
      fs.unlinkSync(rootDbFile);
      console.log("  → Removed root directory database");
    }
  }

  // Clean up dev databases
  if (fs.existsSync(rootDevDbFile)) {
    fs.unlinkSync(rootDevDbFile);
    console.log("  → Removed root dev database");
  }
  if (fs.existsSync(dataDevDbFile)) {
    fs.unlinkSync(dataDevDbFile);
    console.log("  → Removed data dev database");
  }

  console.log("✅ Database location fixed");
}

/**
 * Auto mode - check and initialize if needed
 */
async function autoMode() {
  if (!fs.existsSync(dbPath)) {
    console.log("📦 Database not found, initializing...");
    await initDatabase();
  } else {
    console.log("✅ Database already exists");
    await checkDatabase();
  }
}

// Execute command
switch (command) {
  case "init":
    await initDatabase();
    break;
  case "check":
    await checkDatabase();
    break;
  case "fix":
    await fixDatabase();
    break;
  case "reset-users":
    await resetUsers();
    break;
  case "cleanup":
    await cleanupDatabase();
    break;
  case "auto":
    await autoMode();
    break;
  default:
    console.log("Available commands:");
    console.log("  init        - Initialize database with migrations");
    console.log("  check       - Check database status");
    console.log("  fix         - Fix database location issues");
    console.log("  reset-users - Remove all users and related data");
    console.log("  cleanup     - Remove all database files");
    console.log("  auto        - Auto initialize if needed");
    process.exit(1);
}
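Note: the `command` variable dispatched on in the switch above is parsed earlier in this script, outside the portion shown in this hunk. For orientation only, it is presumably a plain `process.argv` read along these lines; the exact expression and the `auto` fallback are assumptions.

```typescript
// Hypothetical sketch of the argument parsing defined earlier in the script
// (not visible in this hunk); the "auto" fallback is an assumption.
const command = process.argv[2] ?? "auto";
```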
44
Divers/gitea-mirror/scripts/remove-duplicate-events.ts
Normal file
@@ -0,0 +1,44 @@
#!/usr/bin/env bun
/**
 * Script to remove duplicate events from the database
 * This script identifies and removes events with duplicate deduplication keys
 *
 * Usage:
 *   bun scripts/remove-duplicate-events.ts [userId]
 *
 * Where [userId] is optional - if provided, only removes duplicates for that user
 */

import { removeDuplicateEvents } from "../src/lib/events";

// Parse command line arguments
const args = process.argv.slice(2);
const userId = args.length > 0 ? args[0] : undefined;

async function runDuplicateRemoval() {
  try {
    if (userId) {
      console.log(`Starting duplicate event removal for user: ${userId}...`);
    } else {
      console.log("Starting duplicate event removal for all users...");
    }

    // Call the removeDuplicateEvents function
    const result = await removeDuplicateEvents(userId);

    console.log(`Duplicate removal summary:`);
    console.log(`- Duplicate events removed: ${result.duplicatesRemoved}`);

    if (result.duplicatesRemoved > 0) {
      console.log("Duplicate event removal completed successfully");
    } else {
      console.log("No duplicate events found");
    }
  } catch (error) {
    console.error("Error running duplicate event removal:", error);
    process.exit(1);
  }
}

// Run the duplicate removal
runDuplicateRemoval();
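Note: `removeDuplicateEvents` is implemented in `src/lib/events` and is not included in this diff. As a rough, non-authoritative sketch of the approach the header comment describes (keep one row per deduplication key, delete the rest, optionally scoped to one user), something like the following would work against the SQLite file these scripts use; the `events` column names `user_id` and `deduplication_key` are assumptions.

```typescript
// Minimal sketch only; not the actual src/lib/events implementation.
// Assumes an `events` table with `user_id` and `deduplication_key` columns.
import { Database } from "bun:sqlite";

export function removeDuplicateEventsSketch(dbPath: string, userId?: string): number {
  const sqlite = new Database(dbPath);
  try {
    // Keep the first-inserted row (lowest rowid) per deduplication key, delete the rest.
    const stmt = sqlite.query(`
      DELETE FROM events
      WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM events GROUP BY deduplication_key
      ) ${userId ? "AND user_id = $userId" : ""}
    `);
    if (userId) {
      stmt.run({ $userId: userId });
    } else {
      stmt.run();
    }
    // Read the number of rows removed by the DELETE above.
    const row = sqlite.query("SELECT changes() AS removed").get() as { removed: number };
    return row.removed;
  } finally {
    sqlite.close();
  }
}
```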
279
Divers/gitea-mirror/scripts/repair-mirrored-repos.ts
Normal file
@@ -0,0 +1,279 @@
#!/usr/bin/env bun

/**
 * Script to repair repositories that exist in Gitea but have incorrect status in the database
 * This fixes the issue where repositories show as "imported" but are actually mirrored in Gitea
 *
 * Usage: bun scripts/repair-mirrored-repos.ts [--dry-run] [--repo-name=<name>]
 */

import { db, repositories, configs } from "@/lib/db";
import { eq, and, or } from "drizzle-orm";
import { createMirrorJob } from "@/lib/helpers";
import { repoStatusEnum } from "@/types/Repository";

const isDryRun = process.argv.includes("--dry-run");
const specificRepo = process.argv.find(arg => arg.startsWith("--repo-name="))?.split("=")[1];
const isStartupMode = process.argv.includes("--startup");

async function checkRepoInGitea(config: any, owner: string, repoName: string): Promise<boolean> {
  try {
    if (!config.giteaConfig?.url || !config.giteaConfig?.token) {
      return false;
    }

    const response = await fetch(
      `${config.giteaConfig.url}/api/v1/repos/${owner}/${repoName}`,
      {
        headers: {
          Authorization: `token ${config.giteaConfig.token}`,
        },
      }
    );

    return response.ok;
  } catch (error) {
    console.error(`Error checking repo ${owner}/${repoName} in Gitea:`, error);
    return false;
  }
}

async function getRepoDetailsFromGitea(config: any, owner: string, repoName: string): Promise<any> {
  try {
    if (!config.giteaConfig?.url || !config.giteaConfig?.token) {
      return null;
    }

    const response = await fetch(
      `${config.giteaConfig.url}/api/v1/repos/${owner}/${repoName}`,
      {
        headers: {
          Authorization: `token ${config.giteaConfig.token}`,
        },
      }
    );

    if (response.ok) {
      return await response.json();
    }
    return null;
  } catch (error) {
    console.error(`Error getting repo details for ${owner}/${repoName}:`, error);
    return null;
  }
}

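Note: the two helpers above call the same Gitea endpoint, `GET /api/v1/repos/{owner}/{repo}`, and differ only in what they return. If this ever gets refactored, a single fetch helper could serve both callers, roughly as sketched below; this is a suggestion, not part of the committed script.

```typescript
// Sketch of a combined helper (not in the committed script): one request,
// callers use the returned object as an existence check and as the source
// of mirror details; null means "not found or unreachable".
async function fetchGiteaRepo(config: any, owner: string, repoName: string): Promise<any | null> {
  if (!config.giteaConfig?.url || !config.giteaConfig?.token) return null;
  try {
    const response = await fetch(
      `${config.giteaConfig.url}/api/v1/repos/${owner}/${repoName}`,
      { headers: { Authorization: `token ${config.giteaConfig.token}` } }
    );
    return response.ok ? await response.json() : null;
  } catch (error) {
    console.error(`Error fetching ${owner}/${repoName} from Gitea:`, error);
    return null;
  }
}
```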
async function repairMirroredRepositories() {
  if (!isStartupMode) {
    console.log("🔧 Repairing mirrored repositories database status");
    console.log("=".repeat(60));

    if (isDryRun) {
      console.log("🔍 DRY RUN MODE - No changes will be made");
      console.log("");
    }

    if (specificRepo) {
      console.log(`🎯 Targeting specific repository: ${specificRepo}`);
      console.log("");
    }
  }

  try {
    // Find repositories that might need repair
    const whereConditions = specificRepo
      ? and(
          or(
            eq(repositories.status, "imported"),
            eq(repositories.status, "failed")
          ),
          eq(repositories.name, specificRepo)
        )
      : or(
          eq(repositories.status, "imported"),
          eq(repositories.status, "failed")
        );

    const repos = await db
      .select()
      .from(repositories)
      .where(whereConditions);

    if (repos.length === 0) {
      if (!isStartupMode) {
        console.log("✅ No repositories found that need repair");
      }
      return;
    }

    if (!isStartupMode) {
      console.log(`📋 Found ${repos.length} repositories to check:`);
      console.log("");
    }

    let repairedCount = 0;
    let skippedCount = 0;
    let errorCount = 0;

    for (const repo of repos) {
      if (!isStartupMode) {
        console.log(`🔍 Checking repository: ${repo.name}`);
        console.log(`   Current status: ${repo.status}`);
        console.log(`   Mirrored location: ${repo.mirroredLocation || "Not set"}`);
      }

      try {
        // Get user configuration
        const config = await db
          .select()
          .from(configs)
          .where(eq(configs.id, repo.configId))
          .limit(1);

        if (config.length === 0) {
          if (!isStartupMode) {
            console.log(`   ❌ No configuration found for repository`);
          }
          errorCount++;
          continue;
        }

        const userConfig = config[0];
        const giteaUsername = userConfig.giteaConfig?.defaultOwner;

        if (!giteaUsername) {
          if (!isStartupMode) {
            console.log(`   ❌ No Gitea username in configuration`);
          }
          errorCount++;
          continue;
        }

        // Check if repository exists in Gitea (try both user and organization)
        let existsInGitea = false;
        let actualOwner = giteaUsername;
        let giteaRepoDetails = null;

        // First check user location
        existsInGitea = await checkRepoInGitea(userConfig, giteaUsername, repo.name);
        if (existsInGitea) {
          giteaRepoDetails = await getRepoDetailsFromGitea(userConfig, giteaUsername, repo.name);
        }

        // If not found in user location and repo has organization, check organization
        if (!existsInGitea && repo.organization) {
          existsInGitea = await checkRepoInGitea(userConfig, repo.organization, repo.name);
          if (existsInGitea) {
            actualOwner = repo.organization;
            giteaRepoDetails = await getRepoDetailsFromGitea(userConfig, repo.organization, repo.name);
          }
        }

        if (!existsInGitea) {
          if (!isStartupMode) {
            console.log(`   ⏭️ Repository not found in Gitea - skipping`);
          }
          skippedCount++;
          continue;
        }

        if (!isStartupMode) {
          console.log(`   ✅ Repository found in Gitea at: ${actualOwner}/${repo.name}`);

          if (giteaRepoDetails) {
            console.log(`   📊 Gitea details:`);
            console.log(`      Mirror: ${giteaRepoDetails.mirror}`);
            console.log(`      Created: ${new Date(giteaRepoDetails.created_at).toISOString()}`);
            console.log(`      Updated: ${new Date(giteaRepoDetails.updated_at).toISOString()}`);
            if (giteaRepoDetails.mirror_updated) {
              console.log(`      Mirror Updated: ${new Date(giteaRepoDetails.mirror_updated).toISOString()}`);
            }
          }
        } else if (repairedCount === 0) {
          // In startup mode, only log the first repair to indicate activity
          console.log(`Repairing repository status inconsistencies...`);
        }

        if (!isDryRun) {
          // Update repository status in database
          const mirrorUpdated = giteaRepoDetails?.mirror_updated
            ? new Date(giteaRepoDetails.mirror_updated)
            : new Date();

          await db
            .update(repositories)
            .set({
              status: repoStatusEnum.parse("mirrored"),
              updatedAt: new Date(),
              lastMirrored: mirrorUpdated,
              errorMessage: null,
              mirroredLocation: `${actualOwner}/${repo.name}`,
            })
            .where(eq(repositories.id, repo.id!));

          // Create a mirror job log entry
          await createMirrorJob({
            userId: userConfig.userId || "",
            repositoryId: repo.id,
            repositoryName: repo.name,
            message: `Repository status repaired - found existing mirror in Gitea`,
            details: `Repository ${repo.name} was found to already exist in Gitea at ${actualOwner}/${repo.name} and database status was updated from ${repo.status} to mirrored.`,
            status: "mirrored",
          });

          if (!isStartupMode) {
            console.log(`   🔧 Repaired: Updated status to 'mirrored'`);
          }
        } else {
          if (!isStartupMode) {
            console.log(`   🔧 Would repair: Update status from '${repo.status}' to 'mirrored'`);
          }
        }

        repairedCount++;

      } catch (error) {
        if (!isStartupMode) {
          console.log(`   ❌ Error processing repository: ${error instanceof Error ? error.message : String(error)}`);
        }
        errorCount++;
      }

      if (!isStartupMode) {
        console.log("");
      }
    }

    if (isStartupMode) {
      // In startup mode, only log if there were repairs or errors
      if (repairedCount > 0) {
        console.log(`Repaired ${repairedCount} repository status inconsistencies`);
      }
      if (errorCount > 0) {
        console.log(`Warning: ${errorCount} repositories had errors during repair`);
      }
    } else {
      console.log("📊 Repair Summary:");
      console.log(`   Repaired: ${repairedCount}`);
      console.log(`   Skipped: ${skippedCount}`);
      console.log(`   Errors: ${errorCount}`);

      if (isDryRun && repairedCount > 0) {
        console.log("");
        console.log("💡 To apply these changes, run the script without --dry-run");
      }
    }

  } catch (error) {
    console.error("❌ Error during repair process:", error);
  }
}

// Run the repair
repairMirroredRepositories().then(() => {
  console.log("Repair process complete.");
  process.exit(0);
}).catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});
31
Divers/gitea-mirror/scripts/run-migration.ts
Normal file
@@ -0,0 +1,31 @@
import { Database } from "bun:sqlite";
import { readFileSync } from "fs";
import path from "path";

const dbPath = path.join(process.cwd(), "data/gitea-mirror.db");
const db = new Database(dbPath);

// Read the migration file
const migrationPath = path.join(process.cwd(), "drizzle/0001_polite_exodus.sql");
const migration = readFileSync(migrationPath, "utf-8");

// Split by statement-breakpoint and execute each statement
const statements = migration.split("--> statement-breakpoint").map(s => s.trim()).filter(s => s);

try {
  db.run("BEGIN TRANSACTION");

  for (const statement of statements) {
    console.log(`Executing: ${statement.substring(0, 50)}...`);
    db.run(statement);
  }

  db.run("COMMIT");
  console.log("Migration completed successfully!");
} catch (error) {
  db.run("ROLLBACK");
  console.error("Migration failed:", error);
  process.exit(1);
} finally {
  db.close();
}
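Note: this one-off runner applies a single hard-coded migration file and keeps no record of its own, so running it twice re-executes the same statements (and fails if they are not idempotent). If that ever matters, a small guard against the `__drizzle_migrations` journal that the database-management script above inspects could be added; the hashing scheme below is an assumption, not necessarily what drizzle-kit records.

```typescript
// Hypothetical guard, not part of the committed script: skip the run if this
// migration's hash is already recorded in the __drizzle_migrations journal.
import { Database } from "bun:sqlite";
import { createHash } from "node:crypto";

function migrationAlreadyApplied(db: Database, migrationSql: string): boolean {
  // Assumption: the journal stores a SHA-256 hash of the migration file contents.
  const hash = createHash("sha256").update(migrationSql).digest("hex");
  const row = db
    .query("SELECT 1 FROM __drizzle_migrations WHERE hash = ?")
    .get(hash);
  return row !== null;
}
```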
184
Divers/gitea-mirror/scripts/setup-authentik-test.sh
Executable file
@@ -0,0 +1,184 @@
#!/bin/bash

# Setup script for testing Authentik SSO with Gitea Mirror
# This script helps configure Authentik for testing SSO integration

set -e

echo "======================================"
echo "Authentik SSO Test Environment Setup"
echo "======================================"
echo ""

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Check if docker and docker-compose are installed
if ! command -v docker &> /dev/null; then
    echo -e "${RED}Docker is not installed. Please install Docker first.${NC}"
    exit 1
fi

if ! command -v docker-compose &> /dev/null && ! docker compose version &> /dev/null; then
    echo -e "${RED}Docker Compose is not installed. Please install Docker Compose first.${NC}"
    exit 1
fi

# Function to generate random secret
generate_secret() {
    openssl rand -base64 32 | tr -d '\n' | tr -d '=' | tr -d '/' | tr -d '+'
}

# Function to wait for service
wait_for_service() {
    local service=$1
    local port=$2
    local max_attempts=30
    local attempt=1

    echo -n "Waiting for $service to be ready"
    while ! nc -z localhost $port 2>/dev/null; do
        if [ $attempt -eq $max_attempts ]; then
            echo -e "\n${RED}Timeout waiting for $service${NC}"
            return 1
        fi
        echo -n "."
        sleep 2
        ((attempt++))
    done
    echo -e " ${GREEN}Ready!${NC}"
    return 0
}

# Parse command line arguments
ACTION=${1:-start}

case $ACTION in
    start)
        echo "Starting Authentik test environment..."
        echo ""

        # Check if .env.authentik exists, if not create it
        if [ ! -f .env.authentik ]; then
            echo "Creating .env.authentik with secure defaults..."
            cat > .env.authentik << EOF
# Authentik Configuration
AUTHENTIK_SECRET_KEY=$(generate_secret)
AUTHENTIK_DB_PASSWORD=$(generate_secret)
AUTHENTIK_BOOTSTRAP_PASSWORD=admin-password
AUTHENTIK_BOOTSTRAP_EMAIL=admin@example.com

# Gitea Mirror Configuration
BETTER_AUTH_SECRET=$(generate_secret)
BETTER_AUTH_URL=http://localhost:4321
BETTER_AUTH_TRUSTED_ORIGINS=http://localhost:4321,http://localhost:9000

# URLs for testing
AUTHENTIK_URL=http://localhost:9000
GITEA_MIRROR_URL=http://localhost:4321
EOF
            echo -e "${GREEN}Created .env.authentik with secure secrets${NC}"
            echo ""
        fi

        # Load environment variables
        source .env.authentik

        # Start Authentik services
        echo "Starting Authentik services..."
        docker-compose -f docker-compose.authentik.yml --env-file .env.authentik up -d

        # Wait for Authentik to be ready
        echo ""
        wait_for_service "Authentik" 9000

        # Wait a bit more for initialization
        echo "Waiting for Authentik to initialize..."
        sleep 10

        echo ""
        echo -e "${GREEN}✓ Authentik is running!${NC}"
        echo ""
        echo "======================================"
        echo "Authentik Access Information:"
        echo "======================================"
        echo "URL: http://localhost:9000"
        echo "Admin Username: akadmin"
        echo "Admin Password: admin-password"
        echo ""
        echo "======================================"
        echo "Next Steps:"
        echo "======================================"
        echo "1. Access Authentik at http://localhost:9000"
        echo "2. Login with akadmin / admin-password"
        echo "3. Create an Authentik OIDC Provider for Gitea Mirror:"
        echo "   - Name: gitea-mirror"
        echo "   - Redirect URI:"
        echo "     http://localhost:4321/api/auth/sso/callback/authentik"
        echo "   - Scopes: openid, profile, email"
        echo ""
        echo "4. Create Application:"
        echo "   - Name: Gitea Mirror"
        echo "   - Slug: gitea-mirror"
        echo "   - Provider: gitea-mirror (created above)"
        echo ""
        echo "5. Start Gitea Mirror with:"
        echo "   bun run dev"
        echo ""
        echo "6. Configure SSO in Gitea Mirror:"
        echo "   - Go to Settings → Authentication & SSO"
        echo "   - Add provider with:"
        echo "     - Provider ID: authentik"
        echo "     - Issuer URL: http://localhost:9000/application/o/gitea-mirror/"
        echo "     - Click Discover to pull Authentik endpoints"
        echo "     - Client ID: (from Authentik provider)"
        echo "     - Client Secret: (from Authentik provider)"
        echo ""
        echo "If you previously registered this provider on a version earlier than v3.8.10, delete it and re-add it after upgrading to avoid missing endpoint data."
        echo ""
        ;;

    stop)
        echo "Stopping Authentik test environment..."
        docker-compose -f docker-compose.authentik.yml down
        echo -e "${GREEN}✓ Authentik stopped${NC}"
        ;;

    clean)
        echo "Cleaning up Authentik test environment..."
        docker-compose -f docker-compose.authentik.yml down -v
        echo -e "${GREEN}✓ Authentik data cleaned${NC}"

        read -p "Remove .env.authentik file? (y/N) " -n 1 -r
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            rm -f .env.authentik
            echo -e "${GREEN}✓ Configuration file removed${NC}"
        fi
        ;;

    logs)
        docker-compose -f docker-compose.authentik.yml logs -f
        ;;

    status)
        echo "Authentik Service Status:"
        echo "========================="
        docker-compose -f docker-compose.authentik.yml ps
        ;;

    *)
        echo "Usage: $0 {start|stop|clean|logs|status}"
        echo ""
        echo "Commands:"
        echo "  start  - Start Authentik test environment"
        echo "  stop   - Stop Authentik services"
        echo "  clean  - Stop and remove all data"
        echo "  logs   - Show Authentik logs"
        echo "  status - Show service status"
        exit 1
        ;;
esac
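Note: the "Discover" step in item 6 resolves the issuer against the standard OIDC discovery document. To sanity-check the Authentik issuer from the command line before wiring it into Gitea Mirror, the equivalent lookup is a plain fetch of `/.well-known/openid-configuration`, as sketched below; the fields shown are standard OIDC discovery fields, and this snippet is not part of the committed script.

```typescript
// Quick sanity check of the Authentik issuer used in step 6 (standard OIDC
// discovery; not part of the committed setup script). Run with: bun <file>.ts
const issuer = "http://localhost:9000/application/o/gitea-mirror/";

const res = await fetch(new URL(".well-known/openid-configuration", issuer));
if (!res.ok) throw new Error(`Discovery failed: ${res.status}`);

const discovery = await res.json();
console.log("authorization_endpoint:", discovery.authorization_endpoint);
console.log("token_endpoint:", discovery.token_endpoint);
console.log("userinfo_endpoint:", discovery.userinfo_endpoint);
```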
52
Divers/gitea-mirror/scripts/startup-env-config.ts
Normal file
@@ -0,0 +1,52 @@
#!/usr/bin/env bun
/**
 * Startup environment configuration script
 * This script loads configuration from environment variables before the application starts
 * It ensures that Docker environment variables are properly populated in the database
 *
 * Usage:
 *   bun scripts/startup-env-config.ts
 */

import { initializeConfigFromEnv } from "../src/lib/env-config-loader";

async function runEnvConfigInitialization() {
  console.log('=== Gitea Mirror Environment Configuration ===');
  console.log('Loading configuration from environment variables...');
  console.log('');

  const startTime = Date.now();

  try {
    await initializeConfigFromEnv();

    const endTime = Date.now();
    const duration = endTime - startTime;

    console.log(`✅ Environment configuration loaded successfully in ${duration}ms`);
    process.exit(0);
  } catch (error) {
    const endTime = Date.now();
    const duration = endTime - startTime;

    console.error(`❌ Failed to load environment configuration after ${duration}ms:`, error);
    console.error('Application will start anyway, but environment configuration was not loaded.');

    // Exit with error code but allow startup to continue
    process.exit(1);
  }
}

// Handle process signals gracefully
process.on('SIGINT', () => {
  console.log('\n⚠️ Configuration loading interrupted by SIGINT');
  process.exit(130);
});

process.on('SIGTERM', () => {
  console.log('\n⚠️ Configuration loading interrupted by SIGTERM');
  process.exit(143);
});

// Run the environment configuration initialization
runEnvConfigInitialization();
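Note: `initializeConfigFromEnv` lives in `src/lib/env-config-loader` and its implementation is not part of this diff. As a hedged illustration of the pattern the header comment describes (read Docker-provided environment variables and push them into the existing `configs` row), it might look roughly like this; the variable names `GITEA_URL` and `GITEA_TOKEN` and the exact column shapes are assumptions.

```typescript
// Illustration only; not the actual env-config-loader implementation.
// Env var names are assumed; writes into the existing `configs` row for a user.
import { db, configs } from "../src/lib/db";
import { eq } from "drizzle-orm";

export async function initializeConfigFromEnvSketch(userId: string) {
  const giteaUrl = process.env.GITEA_URL;     // assumption
  const giteaToken = process.env.GITEA_TOKEN; // assumption
  if (!giteaUrl && !giteaToken) return; // nothing to populate

  const existing = await db
    .select()
    .from(configs)
    .where(eq(configs.userId, userId))
    .limit(1);
  if (existing.length === 0) return; // the config row is created elsewhere

  await db
    .update(configs)
    .set({
      giteaConfig: {
        ...existing[0].giteaConfig,
        ...(giteaUrl ? { url: giteaUrl } : {}),
        ...(giteaToken ? { token: giteaToken } : {}),
      },
    })
    .where(eq(configs.id, existing[0].id));
}
```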
113
Divers/gitea-mirror/scripts/startup-recovery.ts
Normal file
@@ -0,0 +1,113 @@
#!/usr/bin/env bun
/**
 * Startup recovery script
 * This script runs job recovery before the application starts serving requests
 * It ensures that any interrupted jobs from previous runs are properly handled
 *
 * Usage:
 *   bun scripts/startup-recovery.ts [--force] [--timeout=30000]
 *
 * Options:
 *   --force: Force recovery even if a recent attempt was made
 *   --timeout: Maximum time to wait for recovery (in milliseconds, default: 30000)
 */

import { initializeRecovery, hasJobsNeedingRecovery, getRecoveryStatus } from "../src/lib/recovery";

// Parse command line arguments
const args = process.argv.slice(2);
const forceRecovery = args.includes('--force');
const timeoutArg = args.find(arg => arg.startsWith('--timeout='));
const timeout = timeoutArg ? parseInt(timeoutArg.split('=')[1], 10) : 30000;

if (isNaN(timeout) || timeout < 1000) {
  console.error("Error: Timeout must be at least 1000ms");
  process.exit(1);
}

async function runStartupRecovery() {
  console.log('=== Gitea Mirror Startup Recovery ===');
  console.log(`Timeout: ${timeout}ms`);
  console.log(`Force recovery: ${forceRecovery}`);
  console.log('');

  const startTime = Date.now();

  try {
    // Set up timeout
    const timeoutPromise = new Promise<never>((_, reject) => {
      setTimeout(() => {
        reject(new Error(`Recovery timeout after ${timeout}ms`));
      }, timeout);
    });

    // Check if recovery is needed first
    console.log('Checking if recovery is needed...');
    const needsRecovery = await hasJobsNeedingRecovery();

    if (!needsRecovery) {
      console.log('✅ No jobs need recovery. Startup can proceed.');
      process.exit(0);
    }

    console.log('⚠️ Jobs found that need recovery. Starting recovery process...');

    // Run recovery with timeout
    const recoveryPromise = initializeRecovery({
      skipIfRecentAttempt: !forceRecovery,
      maxRetries: 3,
      retryDelay: 5000,
    });

    const recoveryResult = await Promise.race([recoveryPromise, timeoutPromise]);

    const endTime = Date.now();
    const duration = endTime - startTime;

    if (recoveryResult) {
      console.log(`✅ Recovery completed successfully in ${duration}ms`);
      console.log('Application startup can proceed.');
      process.exit(0);
    } else {
      console.log(`⚠️ Recovery completed with some failures in ${duration}ms`);
      console.log('Application startup can proceed, but some jobs may have failed.');
      process.exit(0);
    }

  } catch (error) {
    const endTime = Date.now();
    const duration = endTime - startTime;

    if (error instanceof Error && error.message.includes('timeout')) {
      console.error(`❌ Recovery timed out after ${duration}ms`);
      console.error('Application will start anyway, but some jobs may remain interrupted.');

      // Get current recovery status
      const status = getRecoveryStatus();
      console.log('Recovery status:', status);

      // Exit with warning code but allow startup to continue
      process.exit(1);
    } else {
      console.error(`❌ Recovery failed after ${duration}ms:`, error);
      console.error('Application will start anyway, but recovery was unsuccessful.');

      // Exit with error code but allow startup to continue
      process.exit(1);
    }
  }
}

// Handle process signals gracefully
process.on('SIGINT', () => {
  console.log('\n⚠️ Recovery interrupted by SIGINT');
  process.exit(130);
});

process.on('SIGTERM', () => {
  console.log('\n⚠️ Recovery interrupted by SIGTERM');
  process.exit(143);
});

// Run the startup recovery
runStartupRecovery();
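Note: the recovery helpers imported above (`hasJobsNeedingRecovery`, `initializeRecovery`, `getRecoveryStatus`) are defined in `src/lib/recovery`, which is not shown in this diff. Judging by how `test-recovery.ts` later in this commit simulates interrupted work (jobs left with `inProgress: true` and a stale `lastCheckpoint`), a detection query along these lines is a plausible sketch; the ten-minute staleness threshold is an assumption.

```typescript
// Plausible sketch of interrupted-job detection; not the actual
// src/lib/recovery implementation. Mirrors what test-recovery.ts simulates:
// jobs still flagged inProgress whose last checkpoint is stale.
import { db, mirrorJobs } from "../src/lib/db";
import { and, eq, lt } from "drizzle-orm";

const STALE_AFTER_MS = 10 * 60 * 1000; // assumption: 10 minutes

export async function hasJobsNeedingRecoverySketch(): Promise<boolean> {
  const cutoff = new Date(Date.now() - STALE_AFTER_MS);
  const stale = await db
    .select()
    .from(mirrorJobs)
    .where(and(eq(mirrorJobs.inProgress, true), lt(mirrorJobs.lastCheckpoint, cutoff)))
    .limit(1);
  return stale.length > 0;
}
```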
237
Divers/gitea-mirror/scripts/test-graceful-shutdown.ts
Normal file
@@ -0,0 +1,237 @@
#!/usr/bin/env bun
/**
 * Integration test for graceful shutdown functionality
 *
 * This script tests the complete graceful shutdown flow:
 * 1. Starts a mock job
 * 2. Initiates shutdown
 * 3. Verifies job state is saved correctly
 * 4. Tests recovery after restart
 *
 * Usage:
 *   bun scripts/test-graceful-shutdown.ts [--cleanup]
 */

import { db, mirrorJobs } from "../src/lib/db";
import { eq } from "drizzle-orm";
import {
  initializeShutdownManager,
  registerActiveJob,
  unregisterActiveJob,
  gracefulShutdown,
  getShutdownStatus,
  registerShutdownCallback
} from "../src/lib/shutdown-manager";
import { setupSignalHandlers, removeSignalHandlers } from "../src/lib/signal-handlers";
import { createMirrorJob } from "../src/lib/helpers";

// Test configuration
const TEST_USER_ID = "test-user-shutdown";
const TEST_JOB_PREFIX = "test-shutdown-job";

// Parse command line arguments
const args = process.argv.slice(2);
const shouldCleanup = args.includes('--cleanup');

/**
 * Create a test job for shutdown testing
 */
async function createTestJob(): Promise<string> {
  console.log('📝 Creating test job...');

  const jobId = await createMirrorJob({
    userId: TEST_USER_ID,
    message: 'Test job for graceful shutdown testing',
    details: 'This job simulates a long-running mirroring operation',
    status: "mirroring",
    jobType: "mirror",
    totalItems: 10,
    itemIds: ['item-1', 'item-2', 'item-3', 'item-4', 'item-5'],
    inProgress: true,
  });

  console.log(`✅ Created test job: ${jobId}`);
  return jobId;
}

/**
 * Verify that job state was saved correctly during shutdown
 */
async function verifyJobState(jobId: string): Promise<boolean> {
  console.log(`🔍 Verifying job state for ${jobId}...`);

  const jobs = await db
    .select()
    .from(mirrorJobs)
    .where(eq(mirrorJobs.id, jobId));

  if (jobs.length === 0) {
    console.error(`❌ Job ${jobId} not found in database`);
    return false;
  }

  const job = jobs[0];

  // Check that the job was marked as interrupted
  if (job.inProgress) {
    console.error(`❌ Job ${jobId} is still marked as in progress`);
    return false;
  }

  if (!job.message?.includes('interrupted by application shutdown')) {
    console.error(`❌ Job ${jobId} does not have shutdown message. Message: ${job.message}`);
    return false;
  }

  if (!job.lastCheckpoint) {
    console.error(`❌ Job ${jobId} does not have a checkpoint timestamp`);
    return false;
  }

  console.log(`✅ Job ${jobId} state verified correctly`);
  console.log(`   - In Progress: ${job.inProgress}`);
  console.log(`   - Message: ${job.message}`);
  console.log(`   - Last Checkpoint: ${job.lastCheckpoint}`);

  return true;
}

/**
 * Test the graceful shutdown process
 */
async function testGracefulShutdown(): Promise<void> {
  console.log('\n🧪 Testing Graceful Shutdown Process');
  console.log('=====================================\n');

  try {
    // Step 1: Initialize shutdown manager
    console.log('Step 1: Initializing shutdown manager...');
    initializeShutdownManager();
    setupSignalHandlers();

    // Step 2: Create and register a test job
    console.log('\nStep 2: Creating and registering test job...');
    const jobId = await createTestJob();
    registerActiveJob(jobId);

    // Step 3: Register a test shutdown callback
    console.log('\nStep 3: Registering shutdown callback...');
    let callbackExecuted = false;
    registerShutdownCallback(async () => {
      console.log('🔧 Test shutdown callback executed');
      callbackExecuted = true;
    });

    // Step 4: Check initial status
    console.log('\nStep 4: Checking initial status...');
    const initialStatus = getShutdownStatus();
    console.log(`   - Active jobs: ${initialStatus.activeJobs.length}`);
    console.log(`   - Registered callbacks: ${initialStatus.registeredCallbacks}`);
    console.log(`   - Shutdown in progress: ${initialStatus.inProgress}`);

    // Step 5: Simulate graceful shutdown
    console.log('\nStep 5: Simulating graceful shutdown...');

    // Override process.exit to prevent actual exit during test
    const originalExit = process.exit;
    let exitCode: number | undefined;
    process.exit = ((code?: number) => {
      exitCode = code;
      console.log(`🚪 Process.exit called with code: ${code}`);
      // Don't actually exit during test
    }) as any;

    try {
      // This should save job state and execute callbacks
      await gracefulShutdown('TEST_SIGNAL');
    } catch (error) {
      // Expected since we're not actually exiting
      console.log(`⚠️ Graceful shutdown completed (exit intercepted)`);
    }

    // Restore original process.exit
    process.exit = originalExit;

    // Step 6: Verify job state was saved
    console.log('\nStep 6: Verifying job state was saved...');
    const jobStateValid = await verifyJobState(jobId);

    // Step 7: Verify callback was executed
    console.log('\nStep 7: Verifying callback execution...');
    if (callbackExecuted) {
      console.log('✅ Shutdown callback was executed');
    } else {
      console.error('❌ Shutdown callback was not executed');
    }

    // Step 8: Test results
    console.log('\n📊 Test Results:');
    console.log(`   - Job state saved correctly: ${jobStateValid ? '✅' : '❌'}`);
    console.log(`   - Shutdown callback executed: ${callbackExecuted ? '✅' : '❌'}`);
    console.log(`   - Exit code: ${exitCode}`);

    if (jobStateValid && callbackExecuted) {
      console.log('\n🎉 All tests passed! Graceful shutdown is working correctly.');
    } else {
      console.error('\n❌ Some tests failed. Please check the implementation.');
      process.exit(1);
    }

  } catch (error) {
    console.error('\n💥 Test failed with error:', error);
    process.exit(1);
  } finally {
    // Clean up signal handlers
    removeSignalHandlers();
  }
}

/**
 * Clean up test data
 */
async function cleanupTestData(): Promise<void> {
  console.log('🧹 Cleaning up test data...');

  const result = await db
    .delete(mirrorJobs)
    .where(eq(mirrorJobs.userId, TEST_USER_ID));

  console.log('✅ Test data cleaned up');
}

/**
 * Main test runner
 */
async function runTest(): Promise<void> {
  console.log('🧪 Graceful Shutdown Integration Test');
  console.log('====================================\n');

  if (shouldCleanup) {
    await cleanupTestData();
    console.log('✅ Cleanup completed');
    return;
  }

  try {
    await testGracefulShutdown();
  } finally {
    // Always clean up test data
    await cleanupTestData();
  }
}

// Handle process signals gracefully during testing
process.on('SIGINT', async () => {
  console.log('\n⚠️ Test interrupted by SIGINT');
  await cleanupTestData();
  process.exit(130);
});

process.on('SIGTERM', async () => {
  console.log('\n⚠️ Test interrupted by SIGTERM');
  await cleanupTestData();
  process.exit(143);
});

// Run the test
runTest();
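Note: the shutdown-manager module this test exercises is not included in the diff, but the assertions in `verifyJobState` pin down its contract: on shutdown, each registered active job is flipped to `inProgress: false`, given a message containing "interrupted by application shutdown", and stamped with a `lastCheckpoint`, and registered callbacks run before exit. A minimal sketch consistent with that contract (not the real implementation) could look like this.

```typescript
// Minimal sketch consistent with the behaviour the test checks; not the
// actual src/lib/shutdown-manager implementation.
import { db, mirrorJobs } from "../src/lib/db";
import { eq } from "drizzle-orm";

const activeJobs = new Set<string>();
const callbacks: Array<() => Promise<void>> = [];

export const registerActiveJobSketch = (id: string) => activeJobs.add(id);
export const registerShutdownCallbackSketch = (cb: () => Promise<void>) => callbacks.push(cb);

export async function gracefulShutdownSketch(signal: string): Promise<void> {
  console.log(`Received ${signal}, saving state for ${activeJobs.size} active job(s)...`);
  // Mark every registered job as interrupted and checkpoint it.
  for (const jobId of activeJobs) {
    await db
      .update(mirrorJobs)
      .set({
        inProgress: false,
        message: "Job interrupted by application shutdown",
        lastCheckpoint: new Date(),
      })
      .where(eq(mirrorJobs.id, jobId));
  }
  // Run any registered cleanup callbacks, then exit.
  for (const cb of callbacks) await cb();
  process.exit(0);
}
```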
183
Divers/gitea-mirror/scripts/test-recovery.ts
Normal file
@@ -0,0 +1,183 @@
#!/usr/bin/env bun
/**
 * Test script for the recovery system
 * This script creates test jobs and verifies that the recovery system can handle them
 *
 * Usage:
 *   bun scripts/test-recovery.ts [--cleanup]
 *
 * Options:
 *   --cleanup: Clean up test jobs after testing
 */

import { db, mirrorJobs } from "../src/lib/db";
import { createMirrorJob } from "../src/lib/helpers";
import { initializeRecovery, hasJobsNeedingRecovery, getRecoveryStatus } from "../src/lib/recovery";
import { eq } from "drizzle-orm";
import { v4 as uuidv4 } from "uuid";

// Parse command line arguments
const args = process.argv.slice(2);
const cleanup = args.includes('--cleanup');

// Test configuration
const TEST_USER_ID = "test-user-recovery";
const TEST_BATCH_ID = "test-batch-recovery";

async function runRecoveryTest() {
  console.log('=== Recovery System Test ===');
  console.log(`Cleanup mode: ${cleanup}`);
  console.log('');

  try {
    if (cleanup) {
      await cleanupTestJobs();
      return;
    }

    // Step 1: Create test jobs that simulate interrupted state
    console.log('Step 1: Creating test interrupted jobs...');
    await createTestInterruptedJobs();

    // Step 2: Check if recovery system detects them
    console.log('Step 2: Checking if recovery system detects interrupted jobs...');
    const needsRecovery = await hasJobsNeedingRecovery();
    console.log(`Jobs needing recovery: ${needsRecovery}`);

    if (!needsRecovery) {
      console.log('❌ Recovery system did not detect interrupted jobs');
      return;
    }

    // Step 3: Get recovery status
    console.log('Step 3: Getting recovery status...');
    const status = getRecoveryStatus();
    console.log('Recovery status:', status);

    // Step 4: Run recovery
    console.log('Step 4: Running recovery...');
    const recoveryResult = await initializeRecovery({
      skipIfRecentAttempt: false,
      maxRetries: 2,
      retryDelay: 2000,
    });

    console.log(`Recovery result: ${recoveryResult}`);

    // Step 5: Verify recovery completed
    console.log('Step 5: Verifying recovery completed...');
    const stillNeedsRecovery = await hasJobsNeedingRecovery();
    console.log(`Jobs still needing recovery: ${stillNeedsRecovery}`);

    // Step 6: Check final job states
    console.log('Step 6: Checking final job states...');
    await checkTestJobStates();

    console.log('');
    console.log('✅ Recovery test completed successfully!');
    console.log('Run with --cleanup to remove test jobs');

  } catch (error) {
    console.error('❌ Recovery test failed:', error);
    process.exit(1);
  }
}

/**
 * Create test jobs that simulate interrupted state
 */
async function createTestInterruptedJobs() {
  const testJobs = [
    {
      repositoryId: uuidv4(),
      repositoryName: "test-repo-1",
      message: "Test mirror job 1",
      status: "mirroring" as const,
      jobType: "mirror" as const,
    },
    {
      repositoryId: uuidv4(),
      repositoryName: "test-repo-2",
      message: "Test sync job 2",
      status: "syncing" as const,
      jobType: "sync" as const,
    },
  ];

  for (const job of testJobs) {
    const jobId = await createMirrorJob({
      userId: TEST_USER_ID,
      repositoryId: job.repositoryId,
      repositoryName: job.repositoryName,
      message: job.message,
      status: job.status,
      jobType: job.jobType,
      batchId: TEST_BATCH_ID,
      totalItems: 5,
      itemIds: [job.repositoryId, uuidv4(), uuidv4(), uuidv4(), uuidv4()],
      inProgress: true,
      skipDuplicateEvent: true,
    });

    // Manually set the job to look interrupted (old timestamp)
    const oldTimestamp = new Date();
    oldTimestamp.setMinutes(oldTimestamp.getMinutes() - 15); // 15 minutes ago

    await db
      .update(mirrorJobs)
      .set({
        startedAt: oldTimestamp,
        lastCheckpoint: oldTimestamp,
      })
      .where(eq(mirrorJobs.id, jobId));

    console.log(`Created test job: ${jobId} (${job.repositoryName})`);
  }
}

/**
 * Check the final states of test jobs
 */
async function checkTestJobStates() {
  const testJobs = await db
    .select()
    .from(mirrorJobs)
    .where(eq(mirrorJobs.userId, TEST_USER_ID));

  console.log(`Found ${testJobs.length} test jobs:`);

  for (const job of testJobs) {
    console.log(`- Job ${job.id}: ${job.status} (inProgress: ${job.inProgress})`);
    console.log(`  Message: ${job.message}`);
    console.log(`  Started: ${job.startedAt ? new Date(job.startedAt).toISOString() : 'never'}`);
    console.log(`  Completed: ${job.completedAt ? new Date(job.completedAt).toISOString() : 'never'}`);
    console.log('');
  }
}

/**
 * Clean up test jobs
 */
async function cleanupTestJobs() {
  console.log('Cleaning up test jobs...');

  const result = await db
    .delete(mirrorJobs)
    .where(eq(mirrorJobs.userId, TEST_USER_ID));

  console.log('✅ Test jobs cleaned up successfully');
}

// Handle process signals gracefully
process.on('SIGINT', () => {
  console.log('\n⚠️ Test interrupted by SIGINT');
  process.exit(130);
});

process.on('SIGTERM', () => {
  console.log('\n⚠️ Test interrupted by SIGTERM');
  process.exit(143);
});

// Run the test
runRecoveryTest();