feat: add Docker deployment support and fix /users/me endpoint

- Add docker/ directory with Dockerfile for backend and frontend
- Backend: pytorch/pytorch CUDA base image with all qwen_tts deps
- Frontend: multi-stage nginx build with /api/ proxy to backend
- docker-compose.yml (CPU) + docker-compose.gpu.yml (GPU overlay)
- Fix /users/me returning 404 due to missing route (was caught by /{user_id})
- Update .gitignore to exclude docker/models, docker/data, docker/.env
- Update README and README.zh.md with Docker deployment instructions

Images: bdim404/qwen3-tts-backend, bdim404/qwen3-tts-frontend

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Commit 38e00fd38c (parent 964ebb824c), 2026-03-06 15:15:27 +08:00
13 changed files with 201 additions and 0 deletions

.gitignore

@@ -22,6 +22,9 @@ wheels/
venv/
env/
Qwen/
docker/models/
docker/data/
docker/.env
qwen3-tts-frontend/node_modules/
qwen3-tts-frontend/dist/
qwen3-tts-frontend/.env

README.md

@@ -42,6 +42,32 @@
**Frontend**: React 19 + TypeScript + Vite + Tailwind + Shadcn/ui

## Docker Deployment

Pre-built images are available on Docker Hub: [bdim404/qwen3-tts-backend](https://hub.docker.com/r/bdim404/qwen3-tts-backend), [bdim404/qwen3-tts-frontend](https://hub.docker.com/r/bdim404/qwen3-tts-frontend)

**Prerequisites**: Docker and Docker Compose; for GPU acceleration, an NVIDIA GPU plus the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
```bash
git clone https://github.com/bdim404/Qwen3-TTS-WebUI.git
cd Qwen3-TTS-WebUI
# Download models to docker/models/ (see Installation > Download Models below)
mkdir -p docker/models docker/data
# Configure
cp docker/.env.example docker/.env
# Edit docker/.env and set SECRET_KEY
# Start (CPU only)
docker compose -f docker/docker-compose.yml up -d
# Start (with GPU)
docker compose -f docker/docker-compose.yml -f docker/docker-compose.gpu.yml up -d
```
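`SECRET_KEY` should be a long random value rather than the example placeholder. One way to generate one (a sketch using Python's standard library; any equivalent random-hex generator works):

```python
import secrets

# 32 random bytes -> 64 hex characters, suitable as a signing key
print(secrets.token_hex(32))
```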
Access the application at `http://localhost`. Default credentials: `admin` / `admin123456`
## Installation
### Prerequisites

README.zh.md

@@ -42,6 +42,32 @@
**前端**: React 19 + TypeScript + Vite + Tailwind + Shadcn/ui

## Docker 部署

预构建镜像已发布至 Docker Hub：[bdim404/qwen3-tts-backend](https://hub.docker.com/r/bdim404/qwen3-tts-backend)、[bdim404/qwen3-tts-frontend](https://hub.docker.com/r/bdim404/qwen3-tts-frontend)

**前置要求**：Docker、Docker Compose；GPU 加速需 NVIDIA GPU 及 [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
```bash
git clone https://github.com/bdim404/Qwen3-TTS-WebUI.git
cd Qwen3-TTS-WebUI
# 下载模型到 docker/models/（参见下方"安装部署 > 下载模型"）
mkdir -p docker/models docker/data
# 配置
cp docker/.env.example docker/.env
# 编辑 docker/.env，设置 SECRET_KEY
# 启动（仅 CPU）
docker compose -f docker/docker-compose.yml up -d
# 启动（GPU 加速）
docker compose -f docker/docker-compose.yml -f docker/docker-compose.gpu.yml up -d
```
访问 `http://localhost`,默认账号:`admin` / `admin123456`
## 安装部署
### 环境要求

docker/.env.example

@@ -0,0 +1,2 @@
SECRET_KEY=change-this-to-a-strong-random-key
MODEL_DEVICE=cuda:0

docker/backend/.dockerignore

@@ -0,0 +1,16 @@
.git
docs
images
**/__pycache__
**/*.pyc
**/*.pyo
qwen3-tts-backend/Qwen
qwen3-tts-backend/outputs
qwen3-tts-backend/voice_cache
qwen3-tts-backend/*.db
qwen3-tts-backend/.env
qwen3-tts-frontend/node_modules
qwen3-tts-frontend/dist
qwen3-tts-frontend/.env
models
data

docker/backend/Dockerfile

@@ -0,0 +1,22 @@
FROM pytorch/pytorch:2.5.1-cuda12.1-cudnn9-runtime
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
        libsndfile1 \
    && rm -rf /var/lib/apt/lists/*
COPY qwen3-tts-backend/requirements.txt .
COPY docker/backend/requirements.qwen_tts.txt .
RUN pip install --no-cache-dir -r requirements.txt -r requirements.qwen_tts.txt
COPY qwen_tts ./qwen_tts
COPY qwen3-tts-backend .
RUN mkdir -p /app/Qwen /app/data /app/voice_cache /app/outputs
ENV PYTHONPATH=/app
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

docker/backend/requirements.qwen_tts.txt

@@ -0,0 +1,6 @@
librosa
einops
transformers>=4.40.0,<5.0.0
accelerate
onnxruntime
sox

docker/docker-compose.gpu.yml

@@ -0,0 +1,9 @@
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
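Without this overlay the container sees no GPU, so a `MODEL_DEVICE` of `cuda:0` would fail at model load time. A minimal sketch of a device-selection fallback (`resolve_device` and its `cuda_available` flag are hypothetical illustrations, not the project's actual code):

```python
import os

def resolve_device(cuda_available: bool) -> str:
    # Honor MODEL_DEVICE (defaults to cuda:0, as in docker/.env.example),
    # but fall back to CPU when no GPU is visible in the container.
    requested = os.environ.get("MODEL_DEVICE", "cuda:0")
    return requested if cuda_available or requested == "cpu" else "cpu"

print(resolve_device(cuda_available=False))  # → cpu
```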

docker/docker-compose.yml

@@ -0,0 +1,28 @@
services:
  backend:
    build:
      context: ..
      dockerfile: docker/backend/Dockerfile
    environment:
      SECRET_KEY: ${SECRET_KEY:?Set SECRET_KEY in docker/.env}
      MODEL_DEVICE: ${MODEL_DEVICE:-cuda:0}
      MODEL_BASE_PATH: /app/Qwen
      DATABASE_URL: sqlite:////app/data/qwen_tts.db
      CACHE_DIR: /app/voice_cache
      OUTPUT_DIR: /app/outputs
    volumes:
      - ./models:/app/Qwen
      - ./data/db:/app/data
      - ./data/cache:/app/voice_cache
      - ./data/outputs:/app/outputs
    restart: unless-stopped
  frontend:
    build:
      context: ..
      dockerfile: docker/frontend/Dockerfile
    ports:
      - "80:80"
    depends_on:
      - backend
    restart: unless-stopped

docker/frontend/.dockerignore

@@ -0,0 +1,12 @@
.git
docs
images
**/__pycache__
**/*.pyc
qwen3-tts-backend
qwen_tts
qwen3-tts-frontend/node_modules
qwen3-tts-frontend/dist
qwen3-tts-frontend/.env
models
data

docker/frontend/Dockerfile

@@ -0,0 +1,18 @@
FROM node:20-alpine AS builder
WORKDIR /app
COPY qwen3-tts-frontend/package*.json ./
RUN npm ci
COPY qwen3-tts-frontend/ .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY docker/frontend/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

docker/frontend/nginx.conf

@@ -0,0 +1,25 @@
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript;

    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        client_max_body_size 50m;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
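Because `proxy_pass` ends with a trailing slash, nginx replaces the matched `/api/` prefix when forwarding, so the backend sees unprefixed routes like `/users/me`. A toy sketch of the resulting path mapping (plain Python for illustration, not nginx internals):

```python
def proxy_path(request_path: str) -> str:
    # location /api/ { proxy_pass http://backend:8000/; }
    # The matched prefix /api/ is replaced by the URI part of proxy_pass ("/").
    prefix = "/api/"
    if request_path.startswith(prefix):
        return "/" + request_path[len(prefix):]
    return request_path  # other paths: static files / SPA fallback

print(proxy_path("/api/users/me"))  # → /users/me
```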


@@ -78,6 +78,14 @@ async def create_user(
    return user


@router.get("/me", response_model=User)
@limiter.limit("30/minute")
async def get_current_user_info(
    request: Request,
    current_user: User = Depends(get_current_user)
):
    return current_user


@router.get("/{user_id}", response_model=User)
@limiter.limit("30/minute")
async def get_user(
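Routes are matched in registration order, which is why `/me` must be declared before `/{user_id}`: once the parameterized route matches, a later literal route is never reached, and `me` gets treated as a user ID (hence the 404). A toy first-match router illustrating the point (a simplified stand-in, not FastAPI's actual matcher):

```python
import re

def first_match(routes: list[str], path: str) -> str:
    # Turn templates like "/{user_id}" into regexes and try them in order.
    for template in routes:
        pattern = "^" + re.sub(r"\{[^/]+\}", "[^/]+", template) + "$"
        if re.match(pattern, path):
            return template
    return "404"

# Wrong order: the parameterized route swallows /me (the original bug).
print(first_match(["/{user_id}", "/me"], "/me"))  # → /{user_id}
# Fixed order: the literal route wins.
print(first_match(["/me", "/{user_id}"], "/me"))  # → /me
```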