Compare commits: 4df6522952...master (84 commits)
Commits (author/date columns lost in extraction): e3a65d8052, f45ca77b79, a5c31ef8a9, 3662a5ab9e, 137251fe37, 8101734c63, 6cf8a6d5e3, c5317cb639, 5f66493558, 5cc4cde223, 5b89481104, 33338ac100, 66eef2c82e, 6f1a4e19d3, dd69470ee4, 23722a771b, 4cf7fdc1c2, 91271bd83c, 49c7f7abd7, 978ad1cdb0, e1a885dcc9, e849548e9e, d92d629fbc, aa5e3c28aa, 64138b07e4, 6732576d06, a3a421c0ac, c6c93766c7, e101b44fa5, 666b1f2753, a7c3f1f92e, 187cd064fb, 4cc0e155bd, 175d0bb0bb, 311fdd4006, 373e1c7c0a, d2412da057, 8bdd5e2277, 26343b08f8, 6d350940b9, ba6abad602, c0da075f03, 6bfb148e39, 45c05b5720, bd498b9bcb, 20e80ac28e, ad3a73f061, c0fb8f69de, 8b8e8a8d8c, 05d2d77515, 8b701745bf, a79ee1b228, d8e39c66be, 055937fc3d, 1338f6bace, 995f5b1339, 22bee4f32c, 5f1421f4bd, f8e8e85cb0, c550790287, d8dd047020, c4d2e48f73, 8ed8da2d90, ce92b43205, 7244349e97, 6aa86b6666, e7bd66ffdf, b25ae95626, 5949ffc63b, 29e654cabc, 9d792249c9, 50df852416, d88afbfe2e, 0637504974, 712cf1deb9, 89036ba62d, 87f94fcc51, 46333853d2, 0e51b7c0f1, 4ca8690a89, d80d4e9741, b35054f23e, abcf872d2d, e805028d46
.gitignore (vendored, 1 line changed)

```
@@ -1,2 +1,3 @@
transcode/
.worktrees/
.superpowers/
```
```
@@ -70,4 +70,5 @@ docker compose -f compose.yml -f compose.traefik.yml up -d
| `NEXT_PUBLIC_API_URL` | frontend build arg | Baked in at build time — must point to the public backend URL |
| `API_URL` | frontend runtime env | Server-side only (Next.js API routes). Set in compose. |
| `DATABASE_URL` | backend | `sqlite:///app/data/k-tv.db` or postgres DSN |
| `SESSION_SECRET` | backend | Change in production |
| `JWT_SECRET` | backend | JWT signing key — change in production (min 32 chars) |
| `COOKIE_SECRET` | backend | OIDC state cookie encryption key — change in production (min 64 chars) |
```
docs/superpowers/plans/2026-03-19-library-management.md (new file, 2886 lines) — file diff suppressed because it is too large.
docs/superpowers/specs/2026-03-19-library-management-design.md (new file, 255 lines)

@@ -0,0 +1,255 @@
# Library Management — Design Spec

**Date:** 2026-03-19
**Status:** Approved

## Context

K-TV currently has ephemeral library browsing: metadata is always fetched live from providers (Jellyfin, local files) on demand, and it is only accessible through the block editor's filter UI. There is no persistent library, no cross-provider browsing, and no way to schedule media directly while browsing it.

This feature introduces an in-house library that syncs and stores media metadata from all providers into K-TV's own DB, then surfaces it through a first-class `/library` page where users can browse, filter, multi-select, and schedule media directly onto channels.

---

## Data Model

### Migration `20260319000002_add_library_tables.sql`

**`library_items` table**
| Column | Type | Notes |
|---|---|---|
| `id` | TEXT PK | `"{provider_id}::{raw_item_id}"` — double colon, matching the existing registry prefix format |
| `provider_id` | TEXT | `"jellyfin"`, `"local"`, etc. |
| `external_id` | TEXT | Raw ID from the provider (for re-fetching) |
| `title` | TEXT | |
| `content_type` | TEXT | `"movie"` \| `"episode"` \| `"short"` |
| `duration_secs` | INTEGER | |
| `series_name` | TEXT | NULL for movies |
| `season_number` | INTEGER | NULL for movies |
| `episode_number` | INTEGER | NULL for movies |
| `year` | INTEGER | |
| `genres` | TEXT | JSON array |
| `tags` | TEXT | JSON array |
| `collection_id` | TEXT | Provider-specific collection ID |
| `collection_name` | TEXT | Human-readable name (synced from the provider) |
| `collection_type` | TEXT | e.g. `"movies"`, `"tvshows"` |
| `thumbnail_url` | TEXT | Provider-served image URL; re-fetched on every sync |
| `synced_at` | TEXT | ISO8601 timestamp |

`thumbnail_url` is refreshed on every full sync. The frontend must handle broken image URLs gracefully (show a placeholder on load error), since URLs may break if the provider URL or API key changes between syncs.
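The `"{provider_id}::{raw_item_id}"` key format above can be sketched as a pair of helpers (the function names are illustrative, not from the codebase):

```rust
// Hypothetical helpers illustrating the "{provider_id}::{raw_item_id}" PK format.
fn make_library_id(provider_id: &str, raw_item_id: &str) -> String {
    format!("{provider_id}::{raw_item_id}")
}

/// Split on the FIRST "::" so a raw item ID may itself contain "::";
/// provider IDs ("jellyfin", "local", ...) never do.
fn split_library_id(id: &str) -> Option<(&str, &str)> {
    id.split_once("::")
}

fn main() {
    let id = make_library_id("jellyfin", "abc123");
    assert_eq!(id, "jellyfin::abc123");
    assert_eq!(split_library_id(&id), Some(("jellyfin", "abc123")));
}
```

Splitting at the first separator keeps the format unambiguous even for provider-native IDs that happen to contain colons.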

**`library_sync_log` table**

| Column | Type | Notes |
|---|---|---|
| `id` | INTEGER PK AUTOINCREMENT | |
| `provider_id` | TEXT | |
| `started_at` | TEXT | ISO8601 |
| `finished_at` | TEXT | ISO8601; NULL while running |
| `items_found` | INTEGER | |
| `status` | TEXT | `"running"` \| `"done"` \| `"error"` |
| `error_msg` | TEXT | NULL on success |
### Migration `20260319000003_add_app_settings.sql`

**`app_settings` table** — a general-purpose key-value store for admin-configurable settings. It co-exists with the existing `transcode_settings` singleton table (which is not modified). Seeded with: `INSERT OR IGNORE INTO app_settings(key, value) VALUES ('library_sync_interval_hours', '6')`.

| Column | Type | Notes |
|---|---|---|
| `key` | TEXT PK | |
| `value` | TEXT | Bare JSON scalar stored as text (e.g. `6`, not `"6"`) |

`GET /admin/settings` returns parsed values — `{ "library_sync_interval_hours": 6 }` (a number, not a string). The backend parses with `serde_json::Value` on read; the frontend receives typed JSON.
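The read-side contract above (bare scalar text in, typed value out) can be sketched std-only — the real backend uses `serde_json::Value`; this approximation and its type names are assumptions for illustration:

```rust
// Std-only sketch of parsing a bare JSON scalar stored as TEXT.
// The real implementation goes through serde_json::Value.
#[derive(Debug, PartialEq)]
enum SettingValue {
    Int(i64),
    Bool(bool),
    Text(String),
}

fn parse_setting(raw: &str) -> SettingValue {
    // Stored as `6` (bare scalar), returned to the client as a JSON number.
    if let Ok(n) = raw.parse::<i64>() {
        return SettingValue::Int(n);
    }
    match raw {
        "true" => SettingValue::Bool(true),
        "false" => SettingValue::Bool(false),
        // Anything else is treated as a string; quotes (if present) stripped.
        _ => SettingValue::Text(raw.trim_matches('"').to_string()),
    }
}

fn main() {
    // `library_sync_interval_hours` is seeded as the bare scalar "6".
    assert_eq!(parse_setting("6"), SettingValue::Int(6));
    assert_eq!(parse_setting("true"), SettingValue::Bool(true));
}
```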

---

## Backend Architecture

### Sync Engine

**Layer placement:**

- `LibraryItem`, `LibrarySyncResult`, the `LibrarySyncAdapter` trait, and the `ILibraryRepository` trait live in **`domain/src/library.rs`**
- `FullSyncAdapter` (impl) and `SqliteLibraryRepository` (impl) live in **`infra/src/library/`**

The `LibrarySyncAdapter` domain trait does **not** take a DB pool — DB writes are an infra concern handled entirely inside the impl:

```rust
// domain/src/library.rs
#[async_trait]
pub trait LibrarySyncAdapter: Send + Sync {
    async fn sync_provider(
        &self,
        provider: &dyn IMediaProvider,
        provider_id: &str,
    ) -> LibrarySyncResult;
}

#[async_trait]
pub trait ILibraryRepository: Send + Sync {
    async fn search(&self, filter: LibrarySearchFilter) -> Vec<LibraryItem>;
    async fn get_by_id(&self, id: &str) -> Option<LibraryItem>;
    async fn list_collections(&self, provider_id: Option<&str>) -> Vec<LibraryCollection>;
    async fn list_series(&self, provider_id: Option<&str>) -> Vec<String>;
    async fn list_genres(&self, content_type: Option<ContentType>, provider_id: Option<&str>) -> Vec<String>;
    async fn upsert_items(&self, provider_id: &str, items: Vec<LibraryItem>) -> DomainResult<()>;
    async fn clear_provider(&self, provider_id: &str) -> DomainResult<()>;
    async fn log_sync_start(&self, provider_id: &str) -> i64; // returns the log row id
    async fn log_sync_finish(&self, log_id: i64, result: &LibrarySyncResult);
    async fn latest_sync_status(&self) -> Vec<LibrarySyncLogEntry>;
    async fn is_sync_running(&self, provider_id: &str) -> bool;
}
```

`FullSyncAdapter` in infra holds an `Arc<dyn ILibraryRepository>` and calls repo methods internally — no DB pool leaks into the domain.

```
infra/src/library/
  mod.rs
  full_sync.rs   -- FullSyncAdapter impl: calls list_collections for names/types,
                    fetch_items(&MediaFilter::default()), repo.clear_provider + repo.upsert_items
  repository.rs  -- SqliteLibraryRepository impl of ILibraryRepository
  scheduler.rs   -- tokio interval task; 10s startup delay (hardcoded); reads the interval from
                    app_settings on each tick via AppSettingsRepository
```
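Because the scheduler re-reads the interval from `app_settings` on each tick, the per-tick decision reduces to a pure function — a std-only sketch, with illustrative names not taken from the codebase:

```rust
use std::time::{Duration, SystemTime};

// Sketch of the scheduler's per-tick "is a sync due?" check. Re-reading the
// interval each tick means an admin change takes effect without a restart.
fn sync_due(last_finished: Option<SystemTime>, interval_hours: u64, now: SystemTime) -> bool {
    match last_finished {
        None => true, // never synced: due immediately (after the 10s startup delay)
        Some(t) => now
            .duration_since(t)
            .map(|elapsed| elapsed >= Duration::from_secs(interval_hours * 3600))
            .unwrap_or(false), // clock went backwards: not due
    }
}

fn main() {
    let now = SystemTime::now();
    assert!(sync_due(None, 6, now));
    assert!(!sync_due(Some(now - Duration::from_secs(3600)), 6, now));
    assert!(sync_due(Some(now - Duration::from_secs(7 * 3600)), 6, now));
}
```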

**AppState** gains:

```rust
library_sync_adapter: Arc<dyn LibrarySyncAdapter>,
library_repo: Arc<dyn ILibraryRepository>,
```

### Sync Concurrency Guard

Before starting a sync for a provider, the scheduler and the `POST /library/sync` handler both call `repo.is_sync_running(provider_id)`. If `true`, the scheduler skips that provider for the tick, and the HTTP endpoint returns **409 Conflict** with body `{ "error": "sync already running for provider" }`. This prevents the truncate-then-insert race.
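The try-start semantics of the guard can be sketched in-process (the spec's actual check goes through the sync-log table; the `SyncGuard` type here is an assumption for illustration):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// In-process sketch of the concurrency guard. The real check is DB-backed
// (library_sync_log "running" rows); a Mutex<HashSet> shows the same contract.
struct SyncGuard {
    running: Mutex<HashSet<String>>,
}

impl SyncGuard {
    fn new() -> Self {
        Self { running: Mutex::new(HashSet::new()) }
    }

    /// Returns false (→ 409 Conflict, or a scheduler skip) if already running.
    fn try_start(&self, provider_id: &str) -> bool {
        self.running.lock().unwrap().insert(provider_id.to_string())
    }

    fn finish(&self, provider_id: &str) {
        self.running.lock().unwrap().remove(provider_id);
    }
}

fn main() {
    let guard = SyncGuard::new();
    assert!(guard.try_start("jellyfin"));  // first sync starts
    assert!(!guard.try_start("jellyfin")); // concurrent request → 409 Conflict
    guard.finish("jellyfin");
    assert!(guard.try_start("jellyfin")); // allowed again after finishing
}
```

The important property is that check-and-mark is a single atomic step; checking and then separately marking would reintroduce the race.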

### Admin Settings

- `GET /admin/settings` — returns the `app_settings` rows as a parsed JSON object. Requires `is_admin = true` (`AdminUser` extractor).
- `PUT /admin/settings` — partial update (only the provided keys are updated). Requires `is_admin = true`. The scheduler reads the new value on its next tick.

### Library API Routes (all require an authenticated user)

| Endpoint | Notes |
|---|---|
| `GET /library/items?type=&series[]=&collection=&genre=&decade=&min_duration=&max_duration=&search=&provider=&offset=0&limit=50` | DB-backed; returns `{ items: LibraryItemResponse[], total: u32 }` |
| `GET /library/items/:id` | Single item |
| `GET /library/collections?provider=` | `{ id, name, collection_type }[]` from the DB |
| `GET /library/series?provider=` | `String[]` from the DB |
| `GET /library/genres?type=&provider=` | `String[]` from the DB |
| `GET /library/sync/status` | `LibrarySyncLogEntry[]` (latest per provider) |
| `POST /library/sync` | Fires a sync; 409 if one is already running; requires `is_admin = true` |
| `GET /admin/settings` | `{ key: value }` map (parsed); requires `is_admin = true` |
| `PUT /admin/settings` | Partial update; requires `is_admin = true` |

**The existing library route API contract is unchanged** for all params except the new `offset`/`limit`. The frontend's `use-library.ts` hooks keep working without modification.
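The `offset`/`limit` contract of `GET /library/items` — a page of results plus the total match count — can be sketched as (the helper name is illustrative; the real handler pushes this into SQL `LIMIT`/`OFFSET` plus a `COUNT`):

```rust
// Sketch of the pagination contract: return the requested page and the total
// number of matches, so the UI can render page controls.
fn paginate<T: Clone>(items: &[T], offset: usize, limit: usize) -> (Vec<T>, usize) {
    let total = items.len();
    let page: Vec<T> = items.iter().skip(offset).take(limit).cloned().collect();
    (page, total)
}

fn main() {
    let ids: Vec<u32> = (0..23).collect();
    let (page, total) = paginate(&ids, 0, 10);
    assert_eq!((page.len(), total), (10, 23)); // first page of a 23-item result
    let (last, _) = paginate(&ids, 20, 10);
    assert_eq!(last, vec![20, 21, 22]); // final short page
}
```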

---

## Frontend Architecture

### New route: `/library`

Added to the main nav alongside Dashboard and TV.

```
app/(main)/library/
  page.tsx -- layout; search/filter, pagination, and multi-select state
  components/
    library-sidebar.tsx -- provider picker, type, genre chips, series picker, decade, duration range
    library-grid.tsx -- paginated grid of LibraryItemCard
    library-item-card.tsx -- thumbnail (with broken-image fallback placeholder), title,
                             duration badge, content type, checkbox
    schedule-from-library-dialog.tsx -- modal (see flow below)
    add-to-block-dialog.tsx -- modal (see flow below)
    sync-status-bar.tsx -- "Last synced 2h ago · Jellyfin" strip at the top
```

### New hooks

```
hooks/use-library-search.ts -- useLibrarySearch(filter, page): wraps GET /library/items with
                               offset/limit pagination. Query key: ["library", "search", filter, page].
hooks/use-library-sync.ts -- useLibrarySyncStatus() → ["library", "sync"];
                             useTriggerSync() → POST /library/sync; on success invalidates
                             ["library", "search"] and ["library", "sync"]
hooks/use-admin-settings.ts -- useAdminSettings(), useUpdateAdminSettings()
```

The existing `use-library.ts` and its four hooks (`useCollections`, `useSeries`, `useGenres`, `useLibraryItems`) are **unchanged** — they are still used by `AlgorithmicFilterEditor` in the block editor.

### Schedule From Library Flow

1. The user selects one or more items → a floating action bar appears at the bottom
2. "Schedule on channel" → `ScheduleFromLibraryDialog` modal
3. Modal fields, in order (time/days/strategy are disabled until a channel is selected):
   - **Channel** picker (required; enables the remaining fields once selected)
   - **Days**: Mon–Sun checkboxes
   - **Time**: a `NaiveTime` input interpreted in the selected channel's timezone, with the timezone label displayed inline (e.g. "20:00 Europe/Warsaw")
   - **Duration**: for a single item, defaults to `ceil(duration_secs / 60)` minutes; for multiple items, the user sets it manually. Rounding to the nearest minute is shown explicitly (e.g. "1h 35m (rounded from 1h 34m 47s)")
   - **Fill strategy**: Sequential (default for episodic) | Random | Best Fit
4. Preview: *"3 blocks will be created on [Channel] — Mon/Wed/Fri at 20:00 [Europe/Warsaw], Sequential"*
5. Confirm → `PUT /channels/:id` merging new `ProgrammingBlock` entries into `schedule_config.day_blocks`:
   - Series / episodic: an **Algorithmic** block with `series_names: [series]`
   - Specific item(s): a **Manual** block with those item IDs
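The duration default and rounding note in step 3 can be sketched as (the formatting helpers are illustrative; the real code is frontend TypeScript, shown here in Rust to match the spec's other snippets):

```rust
// Default duration: ceil(duration_secs / 60) minutes, with the rounding
// made explicit in the label shown to the user.
fn default_minutes(duration_secs: u32) -> u32 {
    (duration_secs + 59) / 60 // ceiling division
}

fn hm(total_secs: u32) -> String {
    let (h, rem) = (total_secs / 3600, total_secs % 3600);
    let (m, s) = (rem / 60, rem % 60);
    if s == 0 { format!("{h}h {m}m") } else { format!("{h}h {m}m {s}s") }
}

fn rounding_label(duration_secs: u32) -> String {
    let mins = default_minutes(duration_secs);
    if duration_secs % 60 == 0 {
        hm(duration_secs) // already whole minutes: no note needed
    } else {
        format!("{} (rounded from {})", hm(mins * 60), hm(duration_secs))
    }
}

fn main() {
    // 1h 34m 47s = 5687s → defaults to 95 minutes, with the rounding note.
    assert_eq!(default_minutes(5687), 95);
    assert_eq!(rounding_label(5687), "1h 35m (rounded from 1h 34m 47s)");
}
```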

### Add To Block Flow

1. The user selects items → "Add to block" from the action bar
2. `AddToBlockDialog`:
   - Pick a channel
   - Pick an existing **manual** block: populated from `useChannel(id)` by collecting all blocks across all days with `content.type === "manual"`, **deduplicated by block `id`** (the same block appearing Mon + Wed is shown once)
3. Confirm → appends the item IDs to that block. Since the same block object (by `id`) may appear under multiple days in `schedule_config.day_blocks`, the PUT updates **every day entry that contains that block id** — the block is mutated consistently wherever it appears.
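The picker-population step above can be sketched as (the `Block` type is an illustrative stand-in for the real `schedule_config.day_blocks` structures, and the real code is frontend TypeScript):

```rust
use std::collections::HashSet;

// Collect manual blocks across all days, deduplicated by block id.
#[derive(Clone)]
struct Block {
    id: String,
    content_type: &'static str, // "manual" | "algorithmic"
}

fn manual_blocks_dedup(day_blocks: &[Vec<Block>]) -> Vec<Block> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for day in day_blocks {
        for b in day {
            // HashSet::insert returns false on duplicates, so the same
            // block scheduled on several days is pushed only once.
            if b.content_type == "manual" && seen.insert(b.id.clone()) {
                out.push(b.clone());
            }
        }
    }
    out
}

fn main() {
    let movie_night = Block { id: "b1".into(), content_type: "manual" };
    let algo = Block { id: "b2".into(), content_type: "algorithmic" };
    // The same manual block appears Mon and Wed; the algorithmic one is skipped.
    let days = vec![vec![movie_night.clone(), algo], vec![], vec![movie_night]];
    assert_eq!(manual_blocks_dedup(&days).len(), 1);
}
```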

### Admin Settings UI

The settings panel (cog icon in the dashboard header, alongside the existing transcode settings) gains a "Library sync" section:

- Number input: "Sync interval (hours)"
- "Sync now" button (visible to admin users only; calls `POST /library/sync`; disabled with a spinner while a sync is running)
- Status: "Last synced: [time] · [N] items" per provider, from `GET /library/sync/status`

---

## Key Files Modified

**Backend:**

- `domain/src/lib.rs` — add the `library` module
- `domain/src/library.rs` — new: `LibraryItem`, `LibraryCollection`, `LibrarySyncResult`, the `LibrarySyncAdapter` trait, the `ILibraryRepository` trait, `LibrarySearchFilter`, `LibrarySyncLogEntry`
- `infra/src/library/full_sync.rs` — `FullSyncAdapter` impl
- `infra/src/library/repository.rs` — `SqliteLibraryRepository` impl
- `infra/src/library/scheduler.rs` — tokio interval task, 10s startup delay
- `api/src/routes/library.rs` — DB-backed handlers + sync/admin routes
- `api/src/routes/mod.rs` — wire the admin settings routes
- `api/src/main.rs` — start the sync scheduler task
- `api/src/state.rs` — add `library_sync_adapter: Arc<dyn LibrarySyncAdapter>`, `library_repo: Arc<dyn ILibraryRepository>`
- `migrations_sqlite/20260319000002_add_library_tables.sql`
- `migrations_sqlite/20260319000003_add_app_settings.sql`

**Frontend:**

- `lib/types.ts` — add `LibraryItem`, `LibraryCollection`, `SyncLogEntry`, `AdminSettings`
- `lib/api.ts` — add `api.library.items(filter, page)`, `api.library.syncStatus()`, `api.library.triggerSync()`, `api.admin.getSettings()`, `api.admin.updateSettings(partial)`
- `app/(main)/layout.tsx` — add the Library nav link
- New files per the structure above
---

## Verification

1. **Sync**: `POST /library/sync` → 200. `GET /library/sync/status` shows `done` with an item count. `library_items` rows in the DB have `collection_name` and `thumbnail_url` populated.
2. **Sync dedup**: A second `POST /library/sync` while the first is running → 409 Conflict.
3. **Library API pagination**: `GET /library/items?offset=0&limit=10` returns 10 items + `total`; `?offset=10&limit=10` returns the next page.
4. **Provider filter**: `GET /library/items?provider=jellyfin` returns only Jellyfin items.
5. **Collections**: `GET /library/collections` returns `{ id, name, collection_type }` objects.
6. **Admin guard**: `POST /library/sync` and `PUT /admin/settings` with a non-admin user → 403.
7. **Admin settings**: `PUT /admin/settings { "library_sync_interval_hours": 2 }` → `GET /admin/settings` returns `{ "library_sync_interval_hours": 2 }` (a number). The scheduler uses the new interval.
8. **Library UI**: the `/library` page loads, sidebar filters update the grid, and pagination controls work. `sync-status-bar` shows the last sync time.
9. **Broken thumbnail**: An item with a broken `thumbnail_url` shows the fallback placeholder in `library-item-card`.
10. **Multi-select action bar**: Selecting 3 items makes the action bar appear with "Schedule on channel" and "Add to block".
11. **Schedule flow — time gating**: The time input is disabled until a channel is selected; the timezone is shown next to the input afterwards.
12. **Schedule flow — rounding**: A single-item selection shows the rounded duration with a note in the dialog.
13. **Schedule flow — confirm**: Scheduling a series → the Dashboard shows Algorithmic blocks on the correct days with a `series_names` filter.
14. **Add to block — dedup**: A block appearing on Mon+Wed is shown once in the picker; confirming updates both days.
15. **Cache invalidation**: After `useTriggerSync()` resolves, the `["library", "search"]` and `["library", "sync"]` query keys are invalidated and the grid refreshes.
16. **Block editor unchanged**: `AlgorithmicFilterEditor` works; `useLibraryItems` in `use-library.ts` is unchanged.
17. **Regression**: `cargo test` passes.
k-tv-backend/Cargo.lock (generated, 5 lines changed) — dependency-list updates in three hunks (`@@ -86,14 +86,15 @@`, `@@ -701,6 +702,7 @@`, `@@ -3546,6 +3548,7 @@`).
@@ -11,7 +11,7 @@ The backend is a Cargo workspace with three crates following Hexagonal (Ports &

```diff
 k-tv-backend/
 ├── domain/ # Pure business logic — no I/O, no frameworks
-├── infra/  # Adapters: SQLite/Postgres repositories, Jellyfin HTTP client
+├── infra/  # Adapters: SQLite/Postgres repositories, Jellyfin HTTP client, local files
 └── api/    # Axum HTTP server — routes, DTOs, startup wiring
```
@@ -79,11 +79,20 @@ OIDC state (CSRF token, PKCE verifier, nonce) is stored in a short-lived encrypt

If the Jellyfin variables are not set, the server starts normally, but schedule-generation endpoints return an error. Channel CRUD and auth still work.

### Local Files (optional — requires the `local-files` feature)

| Variable | Default | Description |
|----------|---------|-------------|
| `LOCAL_FILES_DIR` | — | Absolute path to the local video library root. Enables the local-files provider when set. |
| `TRANSCODE_DIR` | — | Directory for the FFmpeg HLS transcode cache. Enables transcoding when set. |
| `TRANSCODE_CLEANUP_TTL_HOURS` | `24` | Hours after last access before a transcode cache entry is deleted. |

### CORS & Production

| Variable | Default | Description |
|----------|---------|-------------|
| `CORS_ALLOWED_ORIGINS` | `http://localhost:5173` | Comma-separated allowed origins |
| `BASE_URL` | `http://localhost:3000` | Public base URL used to build stream URLs for local files |
| `PRODUCTION` | `false` | Enforces minimum secret lengths when `true` |
## Feature Flags

@@ -100,6 +109,7 @@ default = ["sqlite", "auth-jwt", "jellyfin"]

| `auth-jwt` | JWT Bearer token authentication |
| `auth-oidc` | OpenID Connect integration |
| `jellyfin` | Jellyfin media provider adapter |
| `local-files` | Local filesystem media provider with optional FFmpeg transcoding |
## API Reference

@@ -137,11 +147,49 @@ All endpoints are under `/api/v1/`. Endpoints marked **Bearer** require an `Auth

| `GET` | `/channels/:id/epg?from=&until=` | Bearer | EPG slots overlapping a time window (RFC3339 datetimes) |
| `GET` | `/channels/:id/stream` | Bearer | `307` redirect to the current item's stream URL — `204` if no-signal |

### Library

All endpoints require Bearer auth and return `501 Not Implemented` if the active provider lacks the relevant capability.

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/library/collections` | Bearer | List media collections/libraries |
| `GET` | `/library/series` | Bearer | List TV series (supports `?collection=`, `?provider=`) |
| `GET` | `/library/genres` | Bearer | List genres (supports `?type=`, `?provider=`) |
| `GET` | `/library/items` | Bearer | Search/filter media items (supports `?q=`, `?type=`, `?series[]=`, `?collection=`, `?limit=`, `?strategy=`, `?provider=`) |

### Files (local-files feature only)

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/files/stream/:id` | — | Range-header video streaming for local files |
| `POST` | `/files/rescan` | Bearer | Trigger a library rescan; returns `{ items_found }` |
| `GET` | `/files/transcode/:id/playlist.m3u8` | — | HLS transcode playlist |
| `GET` | `/files/transcode/:id/:segment` | — | HLS transcode segment |
| `GET` | `/files/transcode-settings` | Bearer | Get transcode settings (`cleanup_ttl_hours`) |
| `PUT` | `/files/transcode-settings` | Bearer | Update transcode settings |
| `GET` | `/files/transcode-stats` | Bearer | Cache stats `{ cache_size_bytes, item_count }` |
| `DELETE` | `/files/transcode-cache` | Bearer | Clear the transcode cache |

### IPTV

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/iptv/playlist.m3u` | `?token=` | M3U playlist of all channels |
| `GET` | `/iptv/epg.xml` | `?token=` | XMLTV EPG for all channels |

### Admin

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/admin/logs` | `?token=` | SSE stream of live server log lines (`{ level, target, message, timestamp }`) |
| `GET` | `/admin/activity` | Bearer | Recent 50 in-app activity events |

### Config

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/config` | — | Server configuration flags and provider capabilities |
## Examples

@@ -267,6 +315,21 @@ curl -s -I http://localhost:3000/api/v1/channels/<id>/stream \

### Channel

A named broadcast channel owned by a user. It holds a `schedule_config` (the programming template) and a `recycle_policy`.

Channel fields:

| Field | Description |
|-------|-------------|
| `access_mode` | `public` / `password_protected` / `account_required` / `owner_only` |
| `access_password` | Hashed password when `access_mode` is `password_protected` |
| `logo` | URL or inline SVG for the watermark overlay |
| `logo_position` | `top_right` (default) / `top_left` / `bottom_left` / `bottom_right` |
| `logo_opacity` | 0.0–1.0, default 1.0 |
| `auto_schedule` | When `true`, the server auto-regenerates the schedule when it expires |
| `webhook_url` | HTTP endpoint called on domain events |
| `webhook_poll_interval_secs` | Polling interval for webhook delivery |
| `webhook_body_template` | Handlebars template for the webhook POST body |
| `webhook_headers` | JSON object of extra HTTP headers sent with webhooks |

### ScheduleConfig

The shareable programming template: an ordered list of `ProgrammingBlock`s. Channels do not need to cover all 24 hours — gaps are valid and produce a no-signal state.
@@ -286,6 +349,8 @@ Provider-agnostic filter used by algorithmic blocks:

| `tags` | Provider tag strings |
| `min_duration_secs` / `max_duration_secs` | Duration bounds for item selection |
| `collections` | Abstract groupings (Jellyfin library IDs, Plex sections, folder paths, etc.) |
| `series_names` | List of TV series names (OR-combined) |
| `search_term` | Free-text search term for library browsing |

### FillStrategy

How an algorithmic block fills its time budget:

@@ -305,6 +370,22 @@ Controls when previously aired items become eligible again:

| `cooldown_generations` | Don't replay within this many schedule generations |
| `min_available_ratio` | Always keep at least this fraction (0.0–1.0) of the matching pool selectable, even if cooldowns haven't expired. Prevents small libraries from running dry. |
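The `min_available_ratio` floor described above can be sketched as a pure function (a std-only illustration; the function name and the exact rounding rule are assumptions, not taken from the docs):

```rust
// Given a matching pool, how many items must remain selectable after
// cooldowns are applied? At least ratio * pool_size, rounded up, so a
// small library can never run completely dry.
fn selectable_count(pool_size: usize, off_cooldown: usize, min_available_ratio: f64) -> usize {
    let floor = (pool_size as f64 * min_available_ratio).ceil() as usize;
    off_cooldown.max(floor.min(pool_size))
}

fn main() {
    // 10-item pool, only 1 off cooldown, ratio 0.3 → keep 3 selectable anyway.
    assert_eq!(selectable_count(10, 1, 0.3), 3);
    // Plenty off cooldown → the ratio never reduces availability.
    assert_eq!(selectable_count(10, 8, 0.3), 8);
}
```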

### ProviderCapabilities

`GET /config` returns `providers[]` with per-provider capabilities. Library endpoints return `501` if the active provider lacks the relevant capability.

| Capability | Description |
|------------|-------------|
| `collections` | Provider can list/filter by collections |
| `series` | Provider exposes TV series groupings |
| `genres` | Provider exposes genre metadata |
| `tags` | Provider supports tag filtering |
| `decade` | Provider supports decade filtering |
| `search` | Provider supports free-text search |
| `streaming_protocol` | `hls` or `direct_file` |
| `rescan` | Provider supports triggering a library rescan |
| `transcode` | FFmpeg transcoding is available (local-files only) |

### No-signal state

`GET /channels/:id/now` and `GET /channels/:id/stream` return `204 No Content` when the current time falls in a gap between blocks. The frontend should display static/noise in this case — matching the broadcast-TV experience.
@@ -338,6 +419,9 @@ cargo build -F sqlite,auth-jwt,auth-oidc,jellyfin

```sh
# PostgreSQL variant
cargo build --no-default-features -F postgres,auth-jwt,jellyfin

# With local files + transcoding
cargo build -F sqlite,auth-jwt,jellyfin,local-files
```

### Docker
```diff
@@ -357,7 +441,8 @@ k-tv-backend/
 │ │                      # ScheduledSlot, MediaItem, PlaybackRecord, User, ...
 │ ├── value_objects.rs   # MediaFilter, FillStrategy, RecyclePolicy,
 │ │                      # MediaItemId, ContentType, Email, ...
-│ ├── ports.rs           # IMediaProvider trait
+│ ├── ports.rs           # IMediaProvider trait, ProviderCapabilities
 │ ├── events.rs          # Domain event types
 │ ├── repositories.rs    # ChannelRepository, ScheduleRepository, UserRepository
 │ ├── services.rs        # ChannelService, ScheduleEngineService, UserService
 │ └── errors.rs          # DomainError
@@ -366,7 +451,9 @@ k-tv-backend/
 │ ├── channel_repository.rs    # SQLite + Postgres ChannelRepository adapters
 │ ├── schedule_repository.rs   # SQLite + Postgres ScheduleRepository adapters
 │ ├── user_repository.rs       # SQLite + Postgres UserRepository adapters
 │ ├── activity_log_repository/ # Activity log persistence
 │ ├── jellyfin.rs              # Jellyfin IMediaProvider adapter
 │ ├── local_files/             # Local filesystem provider + FFmpeg transcoder
 │ ├── auth/
 │ │   ├── jwt.rs               # JWT create + validate
 │ │   └── oidc.rs              # OIDC flow (stateless cookie state)
@@ -376,13 +463,22 @@ k-tv-backend/
 ├── api/src/
 │ ├── routes/
 │ │   ├── auth.rs       # /auth/* endpoints
-│ │   ├── channels.rs   # /channels/* endpoints (CRUD, EPG, broadcast)
-│ │   └── config.rs     # /config endpoint
+│ │   ├── channels/     # /channels/* endpoints (CRUD, EPG, broadcast)
+│ │   ├── admin.rs      # /admin/logs (SSE), /admin/activity
+│ │   ├── config.rs     # /config endpoint
+│ │   ├── files.rs      # /files/* endpoints (local-files feature)
+│ │   ├── iptv.rs       # /iptv/playlist.m3u, /iptv/epg.xml
+│ │   └── library.rs    # /library/* endpoints
 │ ├── config.rs         # Config::from_env()
 │ ├── state.rs          # AppState
 │ ├── extractors.rs     # CurrentUser (JWT Bearer extractor)
 │ ├── error.rs          # ApiError → HTTP status mapping
 │ ├── dto.rs            # All request + response types
 │ ├── events.rs         # SSE event broadcasting
 │ ├── log_layer.rs      # Tracing layer → SSE log stream
 │ ├── poller.rs         # Webhook polling task
 │ ├── scheduler.rs      # Auto-schedule renewal task
 │ ├── webhook.rs        # Webhook delivery
 │ └── main.rs           # Startup wiring
 │
 ├── migrations_sqlite/
```
```diff
@@ -11,7 +11,7 @@ postgres = ["infra/postgres"]
 auth-oidc = ["infra/auth-oidc"]
 auth-jwt = ["infra/auth-jwt"]
 jellyfin = ["infra/jellyfin"]
-local-files = ["infra/local-files", "dep:tokio-util", "dep:sqlx"]
+local-files = ["infra/local-files", "dep:tokio-util"]

 [profile.release]
 strip = true
@@ -56,6 +56,8 @@ uuid = { version = "1.19.0", features = ["v4", "serde"] }

 # Logging
 tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
 tokio-stream = { version = "0.1", features = ["sync"] }

 reqwest = { version = "0.12", features = ["json"] }
 handlebars = "6"
@@ -63,4 +65,3 @@ async-trait = "0.1"
 dotenvy = "0.15.7"
 time = "0.3"
 tokio-util = { version = "0.7", features = ["io"], optional = true }
-sqlx = { version = "0.8.6", features = ["runtime-tokio", "sqlite"], optional = true }
```
@@ -5,9 +5,16 @@
use std::env;
use std::path::PathBuf;

#[derive(Debug, Clone, PartialEq)]
pub enum ConfigSource {
    Env,
    Db,
}

/// Application configuration loaded from environment variables
#[derive(Debug, Clone)]
pub struct Config {
    pub config_source: ConfigSource,
    pub database_url: String,
    pub cookie_secret: String,
    pub cors_allowed_origins: Vec<String>,
@@ -29,6 +36,7 @@ pub struct Config {
    pub jwt_issuer: Option<String>,
    pub jwt_audience: Option<String>,
    pub jwt_expiry_hours: u64,
    pub jwt_refresh_expiry_days: u64,

    /// Whether the application is running in production mode
    pub is_production: bool,
@@ -110,6 +118,11 @@ impl Config {
            .and_then(|s| s.parse().ok())
            .unwrap_or(24);

        let jwt_refresh_expiry_days = env::var("JWT_REFRESH_EXPIRY_DAYS")
            .ok()
            .and_then(|s| s.parse().ok())
            .unwrap_or(30);

        let is_production = env::var("PRODUCTION")
            .or_else(|_| env::var("RUST_ENV"))
            .map(|v| v.to_lowercase() == "production" || v == "1" || v == "true")
@@ -134,7 +147,13 @@ impl Config {
        let base_url = env::var("BASE_URL")
            .unwrap_or_else(|_| format!("http://localhost:{}", port));

        let config_source = match env::var("CONFIG_SOURCE").as_deref() {
            Ok("db") | Ok("DB") => ConfigSource::Db,
            _ => ConfigSource::Env,
        };

        Self {
            config_source,
            host,
            port,
            database_url,
@@ -152,6 +171,7 @@ impl Config {
            jwt_issuer,
            jwt_audience,
            jwt_expiry_hours,
            jwt_refresh_expiry_days,
            is_production,
            allow_registration,
            jellyfin_base_url,
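The numeric env-var handling above (`JWT_EXPIRY_HOURS`, `JWT_REFRESH_EXPIRY_DAYS`) follows one pattern: read the variable, try to parse it, and fall back to a default. A std-only sketch of that chain; the `parse_or` helper is hypothetical and not part of the diff:

```rust
use std::env;

/// Fall back to `default` when the value is absent or fails to parse,
/// mirroring the `.ok().and_then(|s| s.parse().ok()).unwrap_or(default)` chain above.
fn parse_or(raw: Option<String>, default: u64) -> u64 {
    raw.and_then(|s| s.parse().ok()).unwrap_or(default)
}

fn main() {
    // A set, valid value wins.
    assert_eq!(parse_or(Some("48".to_string()), 24), 48);
    // An unset variable falls back to the default.
    assert_eq!(parse_or(env::var("SURELY_UNSET_VAR_12345").ok(), 30), 30);
    // A garbage value also falls back.
    assert_eq!(parse_or(Some("not-a-number".to_string()), 24), 24);
    println!("ok");
}
```

The same shape covers every optional numeric setting in `Config::from_env`, which is why invalid values silently become defaults rather than startup errors.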
36
k-tv-backend/api/src/database.rs
Normal file
@@ -0,0 +1,36 @@
use std::sync::Arc;
use std::time::Duration as StdDuration;

use crate::config::Config;
use infra::run_migrations;
use k_core::db::DatabasePool;

pub async fn init_database(config: &Config) -> anyhow::Result<Arc<DatabasePool>> {
    tracing::info!("Connecting to database: {}", config.database_url);

    #[cfg(all(feature = "sqlite", not(feature = "postgres")))]
    let db_type = k_core::db::DbType::Sqlite;

    #[cfg(all(feature = "postgres", not(feature = "sqlite")))]
    let db_type = k_core::db::DbType::Postgres;

    // Both features enabled: fall back to URL inspection at runtime
    #[cfg(all(feature = "sqlite", feature = "postgres"))]
    let db_type = if config.database_url.starts_with("postgres") {
        k_core::db::DbType::Postgres
    } else {
        k_core::db::DbType::Sqlite
    };

    let db_config = k_core::db::DatabaseConfig {
        db_type,
        url: config.database_url.clone(),
        max_connections: config.db_max_connections,
        min_connections: config.db_min_connections,
        acquire_timeout: StdDuration::from_secs(30),
    };

    let pool = k_core::db::connect(&db_config).await?;
    run_migrations(&pool).await?;
    Ok(Arc::new(pool))
}
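When both database features are compiled in, `init_database` picks the backend by inspecting the URL scheme. The decision rule in isolation, with the return type simplified to a string (`DbType` is the real enum in `k_core`):

```rust
/// Simplified stand-in for the feature-gated DbType selection above:
/// any URL starting with "postgres" (covers both postgres:// and
/// postgresql://) selects Postgres; everything else is treated as SQLite.
fn db_type_from_url(url: &str) -> &'static str {
    if url.starts_with("postgres") { "postgres" } else { "sqlite" }
}

fn main() {
    assert_eq!(db_type_from_url("postgres://user:pw@host/db"), "postgres");
    assert_eq!(db_type_from_url("postgresql://host/db"), "postgres");
    assert_eq!(db_type_from_url("sqlite://data.db"), "sqlite");
    println!("ok");
}
```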
@@ -15,6 +15,15 @@ pub struct LoginRequest {
    pub email: Email,
    /// Password is validated on deserialization (min 8 chars)
    pub password: Password,
    /// When true, a refresh token is also issued for persistent sessions
    #[serde(default)]
    pub remember_me: bool,
}

/// Refresh token request
#[derive(Debug, Deserialize)]
pub struct RefreshRequest {
    pub refresh_token: String,
}

/// Register request with validated email and password newtypes
@@ -32,6 +41,7 @@ pub struct UserResponse {
    pub id: Uuid,
    pub email: String,
    pub created_at: DateTime<Utc>,
    pub is_admin: bool,
}

/// JWT token response
@@ -40,6 +50,9 @@ pub struct TokenResponse {
    pub access_token: String,
    pub token_type: String,
    pub expires_in: u64,
    /// Only present when remember_me was true at login, or on token refresh
    #[serde(skip_serializing_if = "Option::is_none")]
    pub refresh_token: Option<String>,
}

/// Per-provider info returned by `GET /config`.
@@ -57,6 +70,34 @@ pub struct ConfigResponse {
    pub providers: Vec<ProviderInfo>,
    /// Capabilities of the primary provider — kept for backward compatibility.
    pub provider_capabilities: domain::ProviderCapabilities,
    /// Provider type strings supported by this build (feature-gated).
    pub available_provider_types: Vec<String>,
}

// ============================================================================
// Admin DTOs
// ============================================================================

/// An activity log entry returned by GET /admin/activity.
#[derive(Debug, Serialize)]
pub struct ActivityEventResponse {
    pub id: Uuid,
    pub timestamp: DateTime<Utc>,
    pub event_type: String,
    pub detail: String,
    pub channel_id: Option<Uuid>,
}

impl From<domain::ActivityEvent> for ActivityEventResponse {
    fn from(e: domain::ActivityEvent) -> Self {
        Self {
            id: e.id,
            timestamp: e.timestamp,
            event_type: e.event_type,
            detail: e.detail,
            channel_id: e.channel_id,
        }
    }
}

// ============================================================================
@@ -85,7 +126,7 @@ pub struct UpdateChannelRequest {
    pub description: Option<String>,
    pub timezone: Option<String>,
    /// Replace the entire schedule config (template import/edit)
    pub schedule_config: Option<domain::ScheduleConfig>,
    pub schedule_config: Option<domain::ScheduleConfigCompat>,
    pub recycle_policy: Option<domain::RecyclePolicy>,
    pub auto_schedule: Option<bool>,
    pub access_mode: Option<domain::AccessMode>,
@@ -151,6 +192,34 @@ impl From<domain::Channel> for ChannelResponse {
    }
}

// ============================================================================
// Config history DTOs
// ============================================================================

#[derive(Debug, Serialize)]
pub struct ConfigSnapshotResponse {
    pub id: Uuid,
    pub version_num: i64,
    pub label: Option<String>,
    pub created_at: DateTime<Utc>,
}

impl From<domain::ChannelConfigSnapshot> for ConfigSnapshotResponse {
    fn from(s: domain::ChannelConfigSnapshot) -> Self {
        Self {
            id: s.id,
            version_num: s.version_num,
            label: s.label,
            created_at: s.created_at,
        }
    }
}

#[derive(Debug, Deserialize)]
pub struct PatchSnapshotRequest {
    pub label: Option<String>,
}

// ============================================================================
// EPG / playback DTOs
// ============================================================================
@@ -216,8 +285,7 @@ impl ScheduledSlotResponse {
    pub fn with_block_access(slot: domain::ScheduledSlot, channel: &domain::Channel) -> Self {
        let block_access_mode = channel
            .schedule_config
            .blocks
            .iter()
            .all_blocks()
            .find(|b| b.id == slot.source_block_id)
            .map(|b| b.access_mode.clone())
            .unwrap_or_default();
@@ -258,22 +326,46 @@ pub struct ScheduleResponse {
// Transcode DTOs
// ============================================================================

#[cfg(feature = "local-files")]
#[derive(Debug, Serialize)]
pub struct TranscodeSettingsResponse {
    pub cleanup_ttl_hours: u32,
}

#[cfg(feature = "local-files")]
#[derive(Debug, Deserialize)]
pub struct UpdateTranscodeSettingsRequest {
    pub cleanup_ttl_hours: u32,
}

#[cfg(feature = "local-files")]
#[derive(Debug, Serialize)]
pub struct TranscodeStatsResponse {
    pub cache_size_bytes: u64,
    pub item_count: usize,
}

#[derive(Debug, Serialize)]
pub struct ScheduleHistoryEntry {
    pub id: Uuid,
    pub generation: u32,
    pub valid_from: DateTime<Utc>,
    pub valid_until: DateTime<Utc>,
    pub slot_count: usize,
}

impl From<domain::GeneratedSchedule> for ScheduleHistoryEntry {
    fn from(s: domain::GeneratedSchedule) -> Self {
        Self {
            id: s.id,
            generation: s.generation,
            valid_from: s.valid_from,
            valid_until: s.valid_until,
            slot_count: s.slots.len(),
        }
    }
}

impl From<domain::GeneratedSchedule> for ScheduleResponse {
    fn from(s: domain::GeneratedSchedule) -> Self {
        Self {
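The DTO layer above leans on thin `From` impls (e.g. `ActivityEventResponse`, `ConfigSnapshotResponse`) so domain types never leak into the wire format. A minimal std-only sketch of the pattern, with hypothetical stand-in types rather than the real serde-annotated ones:

```rust
// Hypothetical domain type and its response DTO, mirroring the From impls above.
struct ActivityEvent {
    id: u64,
    detail: String,
}

struct ActivityEventResponse {
    id: u64,
    detail: String,
}

impl From<ActivityEvent> for ActivityEventResponse {
    // Field-by-field move; the DTO owns its data and can evolve independently.
    fn from(e: ActivityEvent) -> Self {
        Self { id: e.id, detail: e.detail }
    }
}

fn main() {
    let event = ActivityEvent { id: 7, detail: "channel created".into() };
    let resp: ActivityEventResponse = event.into();
    assert_eq!(resp.id, 7);
    assert_eq!(resp.detail, "channel created");
    println!("ok");
}
```

Because the conversion is by value, handlers can write `event.into()` at the response boundary without cloning.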
@@ -36,11 +36,15 @@ pub enum ApiError {
    #[error("auth_required")]
    AuthRequired,

    #[allow(dead_code)]
    #[error("Not found: {0}")]
    NotFound(String),

    #[error("Not implemented: {0}")]
    NotImplemented(String),

    #[error("Conflict: {0}")]
    Conflict(String),
}

/// Error response body
@@ -154,6 +158,14 @@ impl IntoResponse for ApiError {
                    details: Some(msg.clone()),
                },
            ),

            ApiError::Conflict(msg) => (
                StatusCode::CONFLICT,
                ErrorResponse {
                    error: "Conflict".to_string(),
                    details: Some(msg.clone()),
                },
            ),
        };

        (status, Json(error_response)).into_response()
@@ -173,10 +185,12 @@ impl ApiError {
        Self::NotFound(msg.into())
    }

    pub fn conflict(msg: impl Into<String>) -> Self {
        Self::Conflict(msg.into())
    }

    pub fn not_implemented(msg: impl Into<String>) -> Self {
        Self::NotImplemented(msg.into())
    }
}

/// Result type alias for API handlers
pub type ApiResult<T> = Result<T, ApiError>;
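The new `Conflict` variant maps to HTTP 409 via the `IntoResponse` match above. A trimmed-down sketch of that variant-to-status mapping; the diff only shows `StatusCode::CONFLICT`, so the other codes here are the conventional mappings, and the `status` helper is hypothetical (the real code derives `thiserror::Error` and converts through axum):

```rust
/// Trimmed-down stand-in for ApiError.
#[derive(Debug)]
enum ApiError {
    NotFound(String),
    NotImplemented(String),
    Conflict(String),
}

impl ApiError {
    // Mirrors the `conflict(msg)` constructor added in the diff.
    fn conflict(msg: impl Into<String>) -> Self {
        Self::Conflict(msg.into())
    }

    /// HTTP status each variant maps to (conventional codes; only 409 is in the diff).
    fn status(&self) -> u16 {
        match self {
            ApiError::NotFound(_) => 404,
            ApiError::NotImplemented(_) => 501,
            ApiError::Conflict(_) => 409,
        }
    }
}

fn main() {
    let err = ApiError::conflict("channel name already taken");
    assert_eq!(err.status(), 409);
    assert_eq!(ApiError::NotFound("x".into()).status(), 404);
    println!("ok");
}
```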
@@ -67,7 +67,7 @@ impl FromRequestParts<AppState> for OptionalCurrentUser {
            let user = validate_jwt_token(&token, state).await.ok();
            return Ok(OptionalCurrentUser(user));
        }
        return Ok(OptionalCurrentUser(None));
        Ok(OptionalCurrentUser(None))
    }

    #[cfg(not(feature = "auth-jwt"))]
@@ -78,6 +78,21 @@ impl FromRequestParts<AppState> for OptionalCurrentUser {
    }
}

/// Extracted admin user — returns 403 if user is not an admin.
pub struct AdminUser(pub User);

impl FromRequestParts<AppState> for AdminUser {
    type Rejection = ApiError;

    async fn from_request_parts(parts: &mut Parts, state: &AppState) -> Result<Self, Self::Rejection> {
        let CurrentUser(user) = CurrentUser::from_request_parts(parts, state).await?;
        if !user.is_admin {
            return Err(ApiError::Forbidden("Admin access required".to_string()));
        }
        Ok(AdminUser(user))
    }
}

/// Authenticate using JWT Bearer token from the `Authorization` header.
#[cfg(feature = "auth-jwt")]
async fn try_jwt_auth(parts: &mut Parts, state: &AppState) -> Result<User, ApiError> {
@@ -107,7 +122,7 @@ pub(crate) async fn validate_jwt_token(token: &str, state: &AppState) -> Result<
        .as_ref()
        .ok_or_else(|| ApiError::Internal("JWT validator not configured".to_string()))?;

    let claims = validator.validate_token(token).map_err(|e| {
    let claims = validator.validate_access_token(token).map_err(|e| {
        tracing::debug!("JWT validation failed: {:?}", e);
        match e {
            infra::auth::jwt::JwtError::Expired => {
64
k-tv-backend/api/src/library_scheduler.rs
Normal file
@@ -0,0 +1,64 @@
//! Background library sync task.
//! Fires 10 seconds after startup, then every N hours (read from app_settings).

use std::sync::Arc;
use std::time::Duration;

use domain::IProviderRegistry;

const STARTUP_DELAY_SECS: u64 = 10;
const DEFAULT_INTERVAL_HOURS: u64 = 6;

pub async fn run_library_sync(
    sync_adapter: Arc<dyn domain::LibrarySyncAdapter>,
    registry: Arc<tokio::sync::RwLock<Arc<infra::ProviderRegistry>>>,
    app_settings_repo: Arc<dyn domain::IAppSettingsRepository>,
) {
    tokio::time::sleep(Duration::from_secs(STARTUP_DELAY_SECS)).await;

    loop {
        tick(&sync_adapter, &registry).await;

        let interval_hours = load_interval_hours(&app_settings_repo).await;
        tokio::time::sleep(Duration::from_secs(interval_hours * 3600)).await;
    }
}

async fn load_interval_hours(repo: &Arc<dyn domain::IAppSettingsRepository>) -> u64 {
    repo.get("library_sync_interval_hours")
        .await
        .ok()
        .flatten()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(DEFAULT_INTERVAL_HOURS)
}

async fn tick(
    sync_adapter: &Arc<dyn domain::LibrarySyncAdapter>,
    registry: &Arc<tokio::sync::RwLock<Arc<infra::ProviderRegistry>>>,
) {
    let reg = registry.read().await;
    let provider_ids: Vec<String> = reg.provider_ids();
    drop(reg);

    for provider_id in provider_ids {
        let reg = registry.read().await;
        let provider = match reg.get_provider(&provider_id) {
            Some(p) => p,
            None => continue,
        };
        drop(reg);

        tracing::info!("library-sync: syncing provider '{}'", provider_id);
        let result = sync_adapter.sync_provider(provider.as_ref(), &provider_id).await;

        if let Some(ref err) = result.error {
            tracing::warn!("library-sync: provider '{}' failed: {}", provider_id, err);
        } else {
            tracing::info!(
                "library-sync: provider '{}' done — {} items in {}ms",
                provider_id, result.items_found, result.duration_ms
            );
        }
    }
}
72
k-tv-backend/api/src/log_layer.rs
Normal file
@@ -0,0 +1,72 @@
//! Custom tracing layer that captures log events and broadcasts them to SSE clients.

use chrono::Utc;
use serde::Serialize;
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
use tracing::Event;
use tracing_subscriber::Layer;

/// A single structured log line sent to SSE clients.
#[derive(Debug, Clone, Serialize)]
pub struct LogLine {
    pub level: String,
    pub target: String,
    pub message: String,
    pub timestamp: String,
}

/// Tracing layer that fans log events out to a broadcast channel + ring buffer.
pub struct AppLogLayer {
    tx: broadcast::Sender<LogLine>,
    history: Arc<Mutex<VecDeque<LogLine>>>,
}

impl AppLogLayer {
    pub fn new(
        tx: broadcast::Sender<LogLine>,
        history: Arc<Mutex<VecDeque<LogLine>>>,
    ) -> Self {
        Self { tx, history }
    }
}

impl<S: tracing::Subscriber> Layer<S> for AppLogLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: tracing_subscriber::layer::Context<'_, S>) {
        let mut visitor = MsgVisitor(String::new());
        event.record(&mut visitor);

        let line = LogLine {
            level: event.metadata().level().to_string(),
            target: event.metadata().target().to_string(),
            message: visitor.0,
            timestamp: Utc::now().to_rfc3339(),
        };

        if let Ok(mut history) = self.history.lock() {
            if history.len() >= 200 {
                history.pop_front();
            }
            history.push_back(line.clone());
        }

        let _ = self.tx.send(line);
    }
}

struct MsgVisitor(String);

impl tracing::field::Visit for MsgVisitor {
    fn record_str(&mut self, field: &tracing::field::Field, value: &str) {
        if field.name() == "message" {
            self.0 = value.to_owned();
        }
    }

    fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {
        if field.name() == "message" {
            self.0 = format!("{value:?}");
        }
    }
}
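The bounded history in `AppLogLayer` is a simple ring buffer over `VecDeque`: once the buffer holds 200 lines, the oldest is evicted before the new one is pushed. The eviction rule in isolation (capacity lowered to 3 for brevity; `push_capped` is a hypothetical helper):

```rust
use std::collections::VecDeque;

/// Push a line, evicting the oldest entry once the buffer is at capacity.
/// Mirrors the `len() >= 200 { pop_front() }` guard in AppLogLayer::on_event.
fn push_capped(history: &mut VecDeque<String>, line: String, cap: usize) {
    if history.len() >= cap {
        history.pop_front();
    }
    history.push_back(line);
}

fn main() {
    let mut h = VecDeque::new();
    for i in 0..5 {
        push_capped(&mut h, format!("line {i}"), 3);
    }
    // Only the 3 most recent lines survive.
    let got: Vec<&str> = h.iter().map(|s| s.as_str()).collect();
    assert_eq!(got, vec!["line 2", "line 3", "line 4"]);
    println!("ok");
}
```

New SSE clients can then be primed with this history before receiving live lines from the broadcast channel.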
@@ -2,31 +2,31 @@
//!
//! Configures and starts the HTTP server with JWT-based authentication.

use std::net::SocketAddr;
use std::time::Duration as StdDuration;

use axum::Router;
use axum::http::{HeaderName, HeaderValue};
use std::sync::Arc;
use tower_http::cors::{AllowHeaders, AllowMethods, AllowOrigin, CorsLayer};

use domain::{ChannelService, IMediaProvider, IProviderRegistry, ProviderCapabilities, ScheduleEngineService, StreamingProtocol, UserService};
use infra::factory::{build_channel_repository, build_schedule_repository, build_user_repository};
use infra::run_migrations;
use k_core::http::server::{ServerConfig, apply_standard_middleware};
use k_core::logging;
use tokio::net::TcpListener;
use tracing::info;

use domain::{ChannelService, IProviderRegistry, ScheduleEngineService, UserService};
use infra::factory::{build_activity_log_repository, build_app_settings_repository, build_channel_repository, build_library_repository, build_provider_config_repository, build_schedule_repository, build_user_repository};
#[cfg(feature = "local-files")]
use infra::factory::build_transcode_settings_repository;

mod config;
mod database;
mod library_scheduler;
mod provider_registry;
mod dto;
mod error;
mod events;
mod extractors;
mod log_layer;
mod poller;
mod routes;
mod scheduler;
mod server;
mod startup;
mod state;
mod telemetry;
mod webhook;

use crate::config::Config;
@@ -34,124 +34,33 @@ use crate::state::AppState;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    logging::init("api");
    let handles = telemetry::init_tracing();

    let config = Config::from_env();

    info!("Starting server on {}:{}", config.host, config.port);

    // Setup database
    tracing::info!("Connecting to database: {}", config.database_url);

    #[cfg(all(feature = "sqlite", not(feature = "postgres")))]
    let db_type = k_core::db::DbType::Sqlite;

    #[cfg(all(feature = "postgres", not(feature = "sqlite")))]
    let db_type = k_core::db::DbType::Postgres;

    // Both features enabled: fall back to URL inspection at runtime
    #[cfg(all(feature = "sqlite", feature = "postgres"))]
    let db_type = if config.database_url.starts_with("postgres") {
        k_core::db::DbType::Postgres
    } else {
        k_core::db::DbType::Sqlite
    };

    let db_config = k_core::db::DatabaseConfig {
        db_type,
        url: config.database_url.clone(),
        max_connections: config.db_max_connections,
        min_connections: config.db_min_connections,
        acquire_timeout: StdDuration::from_secs(30),
    };

    let db_pool = k_core::db::connect(&db_config).await?;
    run_migrations(&db_pool).await?;
    let db_pool = database::init_database(&config).await?;

    let user_repo = build_user_repository(&db_pool).await?;
    let channel_repo = build_channel_repository(&db_pool).await?;
    let schedule_repo = build_schedule_repository(&db_pool).await?;
    let activity_log_repo = build_activity_log_repository(&db_pool).await?;

    let user_service = UserService::new(user_repo);
    let channel_service = ChannelService::new(channel_repo.clone());

    // Build provider registry — all configured providers are registered simultaneously.
    #[cfg(feature = "local-files")]
    let mut local_index: Option<Arc<infra::LocalIndex>> = None;
    #[cfg(feature = "local-files")]
    let mut transcode_manager: Option<Arc<infra::TranscodeManager>> = None;
    #[cfg(feature = "local-files")]
    let mut sqlite_pool_for_state: Option<sqlx::SqlitePool> = None;
    let provider_config_repo = build_provider_config_repository(&db_pool).await?;

    let mut registry = infra::ProviderRegistry::new();
    let bundle = provider_registry::build_provider_registry(
        &config, &db_pool, &provider_config_repo,
    ).await?;

    #[cfg(feature = "jellyfin")]
    if let (Some(base_url), Some(api_key), Some(user_id)) = (
        &config.jellyfin_base_url,
        &config.jellyfin_api_key,
        &config.jellyfin_user_id,
    ) {
        tracing::info!("Media provider: Jellyfin at {}", base_url);
        registry.register("jellyfin", Arc::new(infra::JellyfinMediaProvider::new(infra::JellyfinConfig {
            base_url: base_url.clone(),
            api_key: api_key.clone(),
            user_id: user_id.clone(),
        })));
    }

    #[cfg(feature = "local-files")]
    if let Some(dir) = &config.local_files_dir {
        if let k_core::db::DatabasePool::Sqlite(ref sqlite_pool) = db_pool {
            tracing::info!("Media provider: local files at {:?}", dir);
            let lf_cfg = infra::LocalFilesConfig {
                root_dir: dir.clone(),
                base_url: config.base_url.clone(),
                transcode_dir: config.transcode_dir.clone(),
                cleanup_ttl_hours: config.transcode_cleanup_ttl_hours,
            };
            let idx = Arc::new(infra::LocalIndex::new(&lf_cfg, sqlite_pool.clone()).await);
            local_index = Some(Arc::clone(&idx));
            let scan_idx = Arc::clone(&idx);
            tokio::spawn(async move { scan_idx.rescan().await; });

            // Build TranscodeManager if TRANSCODE_DIR is set.
            let tm = config.transcode_dir.as_ref().map(|td| {
                std::fs::create_dir_all(td).ok();
                tracing::info!("Transcoding enabled; cache dir: {:?}", td);
                let tm = infra::TranscodeManager::new(td.clone(), config.transcode_cleanup_ttl_hours);
                // Load persisted TTL from DB.
                let tm_clone = Arc::clone(&tm);
                let pool_clone = sqlite_pool.clone();
                tokio::spawn(async move {
                    if let Ok(row) = sqlx::query_as::<_, (i64,)>(
                        "SELECT cleanup_ttl_hours FROM transcode_settings WHERE id = 1",
                    )
                    .fetch_one(&pool_clone)
                    .await
                    {
                        tm_clone.set_cleanup_ttl(row.0 as u32);
                    }
                });
                tm
            });

            registry.register(
                "local",
                Arc::new(infra::LocalFilesProvider::new(idx, lf_cfg, tm.clone())),
            );
            transcode_manager = tm;
            sqlite_pool_for_state = Some(sqlite_pool.clone());
        } else {
            tracing::warn!("local-files requires SQLite; ignoring LOCAL_FILES_DIR");
        }
    }

    if registry.is_empty() {
        tracing::warn!("No media provider configured. Set JELLYFIN_BASE_URL / LOCAL_FILES_DIR.");
        registry.register("noop", Arc::new(NoopMediaProvider));
    }

    let registry = Arc::new(registry);
    let registry_arc = bundle.registry;
    let provider_registry: Arc<tokio::sync::RwLock<Arc<infra::ProviderRegistry>>> =
        Arc::new(tokio::sync::RwLock::new(Arc::clone(&registry_arc)));

    let (event_tx, event_rx) = tokio::sync::broadcast::channel::<domain::DomainEvent>(64);

@@ -164,133 +73,60 @@ async fn main() -> anyhow::Result<()> {
    ));

    let schedule_engine = ScheduleEngineService::new(
        Arc::clone(&registry) as Arc<dyn IProviderRegistry>,
        Arc::clone(&registry_arc) as Arc<dyn IProviderRegistry>,
        channel_repo,
        schedule_repo,
    );

    #[cfg(feature = "local-files")]
    let transcode_settings_repo = build_transcode_settings_repository(&db_pool).await.ok();

    let library_repo = build_library_repository(&db_pool).await?;
    let app_settings_repo = build_app_settings_repository(&db_pool).await?;
    let library_sync_adapter: Arc<dyn domain::LibrarySyncAdapter> =
        Arc::new(infra::FullSyncAdapter::new(Arc::clone(&library_repo)));

    #[allow(unused_mut)]
    let mut state = AppState::new(
        user_service,
        channel_service,
        schedule_engine,
        registry,
        provider_registry,
        provider_config_repo,
        config.clone(),
        event_tx.clone(),
        handles.log_tx,
        handles.log_history,
        activity_log_repo,
        db_pool,
        library_repo,
        library_sync_adapter,
        app_settings_repo,
        #[cfg(feature = "local-files")]
        transcode_settings_repo,
    )
    .await?;

    #[cfg(feature = "local-files")]
    {
        state.local_index = local_index;
        state.transcode_manager = transcode_manager;
        state.sqlite_pool = sqlite_pool_for_state;
    if !bundle.local_index.is_empty() {
        *state.local_index.write().await = bundle.local_index;
    }
    #[cfg(feature = "local-files")]
    if let Some(tm) = bundle.transcode_manager {
        *state.transcode_manager.write().await = Some(tm);
    }

    let server_config = ServerConfig {
        cors_origins: config.cors_allowed_origins.clone(),
    };

    let bg_channel_repo_poller = bg_channel_repo.clone();
    let bg_schedule_engine = Arc::clone(&state.schedule_engine);
    tokio::spawn(scheduler::run_auto_scheduler(bg_schedule_engine, bg_channel_repo, event_tx.clone()));

    let bg_schedule_engine_poller = Arc::clone(&state.schedule_engine);
    tokio::spawn(poller::run_broadcast_poller(
        bg_schedule_engine_poller,
        bg_channel_repo_poller,
    startup::spawn_background_tasks(
        Arc::clone(&state.schedule_engine),
        bg_channel_repo,
        event_tx,
    );

    tokio::spawn(library_scheduler::run_library_sync(
        Arc::clone(&state.library_sync_adapter),
        Arc::clone(&state.provider_registry),
        Arc::clone(&state.app_settings_repo),
    ));

    let app = Router::new()
        .nest("/api/v1", routes::api_v1_router())
        .with_state(state);

    let app = apply_standard_middleware(app, &server_config);

    // Wrap with an outer CorsLayer that includes the custom password headers.
    // Being outermost it handles OPTIONS preflights before k_core's inner layer.
    let origins: Vec<HeaderValue> = config
        .cors_allowed_origins
        .iter()
        .filter_map(|o| o.parse().ok())
        .collect();
    let cors = CorsLayer::new()
        .allow_origin(AllowOrigin::list(origins))
        .allow_methods(AllowMethods::any())
        .allow_headers(AllowHeaders::list([
            axum::http::header::AUTHORIZATION,
            axum::http::header::CONTENT_TYPE,
            HeaderName::from_static("x-channel-password"),
            HeaderName::from_static("x-block-password"),
        ]));
    let app = app.layer(cors);

    let addr: SocketAddr = format!("{}:{}", config.host, config.port).parse()?;
    let listener = TcpListener::bind(addr).await?;

    tracing::info!("🚀 API server running at http://{}", addr);
    tracing::info!("🔒 Authentication mode: JWT (Bearer token)");

    #[cfg(feature = "auth-jwt")]
    tracing::info!("  ✓ JWT auth enabled");

    #[cfg(feature = "auth-oidc")]
    tracing::info!("  ✓ OIDC integration enabled (stateless cookie state)");

    tracing::info!("📝 API endpoints available at /api/v1/...");

    axum::serve(listener, app).await?;

    Ok(())
}

/// Stand-in provider used when no real media source is configured.
/// Returns a descriptive error for every call so schedule endpoints fail
/// gracefully rather than panicking at startup.
struct NoopMediaProvider;

#[async_trait::async_trait]
impl IMediaProvider for NoopMediaProvider {
    fn capabilities(&self) -> ProviderCapabilities {
        ProviderCapabilities {
            collections: false,
            series: false,
            genres: false,
            tags: false,
            decade: false,
            search: false,
            streaming_protocol: StreamingProtocol::DirectFile,
            rescan: false,
            transcode: false,
        }
    }

    async fn fetch_items(
        &self,
        _: &domain::MediaFilter,
    ) -> domain::DomainResult<Vec<domain::MediaItem>> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured. Set JELLYFIN_BASE_URL or LOCAL_FILES_DIR.".into(),
        ))
    }

    async fn fetch_by_id(
        &self,
        _: &domain::MediaItemId,
    ) -> domain::DomainResult<Option<domain::MediaItem>> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }

    async fn get_stream_url(
        &self,
        _: &domain::MediaItemId,
        _: &domain::StreamQuality,
    ) -> domain::DomainResult<String> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }
    server::build_and_serve(state, &config).await
}

@@ -17,7 +17,7 @@ use domain::{ChannelRepository, DomainError, DomainEvent, ScheduleEngineService}

/// Per-channel poller state.
#[derive(Debug)]
struct ChannelPollState {
pub struct ChannelPollState {
    /// ID of the last slot we saw as current (None = no signal).
    last_slot_id: Option<Uuid>,
    /// Wall-clock instant of the last poll for this channel.
@@ -80,13 +80,11 @@ pub(crate) async fn poll_tick(

    // Find the current slot
    let current_slot_id = match schedule_engine.get_active_schedule(channel.id, now).await {
        Ok(Some(schedule)) => {
            schedule
                .slots
                .iter()
                .find(|s| s.start_at <= now && now < s.end_at)
                .map(|s| s.id)
        }
        Ok(Some(schedule)) => schedule
            .slots
            .iter()
            .find(|s| s.start_at <= now && now < s.end_at)
            .map(|s| s.id),
        Ok(None) => None,
        Err(DomainError::NoActiveSchedule(_)) => None,
        Err(DomainError::ChannelNotFound(_)) => {
@@ -109,7 +107,9 @@ pub(crate) async fn poll_tick(
    // State changed — emit appropriate event
    match &current_slot_id {
        Some(slot_id) => {
            if let Ok(Some(schedule)) = schedule_engine.get_active_schedule(channel.id, now).await {
            if let Ok(Some(schedule)) =
                schedule_engine.get_active_schedule(channel.id, now).await
            {
                if let Some(slot) = schedule.slots.iter().find(|s| s.id == *slot_id).cloned() {
                    let _ = event_tx.send(DomainEvent::BroadcastTransition {
                        channel_id: channel.id,
@@ -137,12 +137,12 @@ mod tests {

    use async_trait::async_trait;
    use chrono::{DateTime, Duration, Utc};
    use domain::{
        Channel, ChannelRepository, Collection, DomainResult, GeneratedSchedule, IProviderRegistry,
        MediaFilter, MediaItem, MediaItemId, PlaybackRecord, ProviderCapabilities,
        ScheduleEngineService, ScheduleRepository, SeriesSummary, StreamQuality, StreamingProtocol,
    };
    use domain::value_objects::{ChannelId, ContentType, UserId};
    use domain::{
        BlockId, Channel, ChannelRepository, Collection, DomainResult, GeneratedSchedule,
        IProviderRegistry, MediaFilter, MediaItem, MediaItemId, PlaybackRecord, ProviderCapabilities,
        ScheduleEngineService, ScheduleRepository, SeriesSummary, StreamQuality,
    };
    use tokio::sync::broadcast;
    use uuid::Uuid;

@@ -172,6 +172,10 @@ mod tests {
        async fn delete(&self, _id: ChannelId) -> DomainResult<()> {
            unimplemented!()
        }
        async fn save_config_snapshot(&self, _channel_id: ChannelId, _config: &domain::ScheduleConfig, _label: Option<String>) -> DomainResult<domain::ChannelConfigSnapshot> { unimplemented!() }
        async fn list_config_snapshots(&self, _channel_id: ChannelId) -> DomainResult<Vec<domain::ChannelConfigSnapshot>> { unimplemented!() }
        async fn get_config_snapshot(&self, _channel_id: ChannelId, _snapshot_id: Uuid) -> DomainResult<Option<domain::ChannelConfigSnapshot>> { unimplemented!() }
        async fn patch_config_snapshot_label(&self, _channel_id: ChannelId, _snapshot_id: Uuid, _label: Option<String>) -> DomainResult<Option<domain::ChannelConfigSnapshot>> { unimplemented!() }
    }

    struct MockScheduleRepo {
@@ -188,32 +192,55 @@ mod tests {
        ) -> DomainResult<Option<GeneratedSchedule>> {
            Ok(self.active.clone())
        }
        async fn find_latest(&self, _channel_id: ChannelId) -> DomainResult<Option<GeneratedSchedule>> {
        async fn find_latest(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<Option<GeneratedSchedule>> {
            Ok(self.active.clone())
        }
        async fn save(&self, schedule: &GeneratedSchedule) -> DomainResult<()> {
            self.saved.lock().unwrap().push(schedule.clone());
            Ok(())
        }
        async fn find_playback_history(&self, _channel_id: ChannelId) -> DomainResult<Vec<PlaybackRecord>> {
        async fn find_playback_history(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<Vec<PlaybackRecord>> {
            Ok(vec![])
        }
        async fn save_playback_record(&self, _record: &PlaybackRecord) -> DomainResult<()> {
            Ok(())
        }
        async fn find_last_slot_per_block(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<HashMap<BlockId, MediaItemId>> {
            Ok(HashMap::new())
        }
        async fn list_schedule_history(&self, _channel_id: ChannelId) -> DomainResult<Vec<GeneratedSchedule>> { unimplemented!() }
        async fn get_schedule_by_id(&self, _channel_id: ChannelId, _schedule_id: Uuid) -> DomainResult<Option<GeneratedSchedule>> { unimplemented!() }
        async fn delete_schedules_after(&self, _channel_id: ChannelId, _target_generation: u32) -> DomainResult<()> { unimplemented!() }
    }

    struct MockRegistry;

    #[async_trait]
    impl IProviderRegistry for MockRegistry {
        async fn fetch_items(&self, _provider_id: &str, _filter: &MediaFilter) -> DomainResult<Vec<MediaItem>> {
        async fn fetch_items(
            &self,
            _provider_id: &str,
            _filter: &MediaFilter,
        ) -> DomainResult<Vec<MediaItem>> {
            Ok(vec![])
        }
||||
async fn fetch_by_id(&self, _item_id: &MediaItemId) -> DomainResult<Option<MediaItem>> {
|
||||
Ok(None)
|
||||
}
|
||||
async fn get_stream_url(&self, _item_id: &MediaItemId, _quality: &StreamQuality) -> DomainResult<String> {
|
||||
async fn get_stream_url(
|
||||
&self,
|
||||
_item_id: &MediaItemId,
|
||||
_quality: &StreamQuality,
|
||||
) -> DomainResult<String> {
|
||||
unimplemented!()
|
||||
}
|
||||
fn provider_ids(&self) -> Vec<String> {
|
||||
@@ -228,10 +255,18 @@ mod tests {
|
||||
async fn list_collections(&self, _provider_id: &str) -> DomainResult<Vec<Collection>> {
|
||||
unimplemented!()
|
||||
}
|
||||
async fn list_series(&self, _provider_id: &str, _collection_id: Option<&str>) -> DomainResult<Vec<SeriesSummary>> {
|
||||
async fn list_series(
|
||||
&self,
|
||||
_provider_id: &str,
|
||||
_collection_id: Option<&str>,
|
||||
) -> DomainResult<Vec<SeriesSummary>> {
|
||||
unimplemented!()
|
||||
}
|
||||
async fn list_genres(&self, _provider_id: &str, _content_type: Option<&ContentType>) -> DomainResult<Vec<String>> {
|
||||
async fn list_genres(
|
||||
&self,
|
||||
_provider_id: &str,
|
||||
_content_type: Option<&ContentType>,
|
||||
) -> DomainResult<Vec<String>> {
|
||||
unimplemented!()
|
||||
}
|
||||
}
|
||||
@@ -246,7 +281,7 @@ mod tests {
|
||||
ch
|
||||
}
|
||||
|
||||
fn make_slot(channel_id: Uuid, slot_id: Uuid) -> domain::ScheduledSlot {
|
||||
fn make_slot(_channel_id: Uuid, slot_id: Uuid) -> domain::ScheduledSlot {
|
||||
use domain::entities::MediaItem;
|
||||
let now = Utc::now();
|
||||
domain::ScheduledSlot {
|
||||
@@ -265,6 +300,8 @@ mod tests {
|
||||
series_name: None,
|
||||
season_number: None,
|
||||
episode_number: None,
|
||||
thumbnail_url: None,
|
||||
collection_id: None,
|
||||
},
|
||||
source_block_id: Uuid::new_v4(),
|
||||
}
|
||||
@@ -318,11 +355,14 @@ mod tests {
|
||||
|
||||
let event = event_rx.try_recv().expect("expected an event");
|
||||
match event {
|
||||
DomainEvent::BroadcastTransition { channel_id: cid, slot: s } => {
|
||||
DomainEvent::BroadcastTransition {
|
||||
channel_id: cid,
|
||||
slot: s,
|
||||
} => {
|
||||
assert_eq!(cid, channel_id);
|
||||
assert_eq!(s.id, slot_id);
|
||||
}
|
||||
other => panic!("expected BroadcastTransition, got something else"),
|
||||
_other => panic!("expected BroadcastTransition, got something else"),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -388,11 +428,27 @@ mod tests {
|
||||
async fn find_latest(&self, _: ChannelId) -> DomainResult<Option<GeneratedSchedule>> {
|
||||
Ok(None)
|
||||
}
|
||||
async fn save(&self, _: &GeneratedSchedule) -> DomainResult<()> { Ok(()) }
|
||||
async fn find_playback_history(&self, _: ChannelId) -> DomainResult<Vec<PlaybackRecord>> {
|
||||
async fn save(&self, _: &GeneratedSchedule) -> DomainResult<()> {
|
||||
Ok(())
|
||||
}
|
||||
async fn find_playback_history(
|
||||
&self,
|
||||
_: ChannelId,
|
||||
) -> DomainResult<Vec<PlaybackRecord>> {
|
||||
Ok(vec![])
|
||||
}
|
||||
async fn save_playback_record(&self, _: &PlaybackRecord) -> DomainResult<()> { Ok(()) }
|
||||
async fn save_playback_record(&self, _: &PlaybackRecord) -> DomainResult<()> {
|
||||
Ok(())
|
||||
}
|
||||
async fn find_last_slot_per_block(
|
||||
&self,
|
||||
_: ChannelId,
|
||||
) -> DomainResult<HashMap<BlockId, MediaItemId>> {
|
||||
Ok(HashMap::new())
|
||||
}
|
||||
async fn list_schedule_history(&self, _: ChannelId) -> DomainResult<Vec<GeneratedSchedule>> { unimplemented!() }
|
||||
async fn get_schedule_by_id(&self, _: ChannelId, _: Uuid) -> DomainResult<Option<GeneratedSchedule>> { unimplemented!() }
|
||||
async fn delete_schedules_after(&self, _: ChannelId, _: u32) -> DomainResult<()> { unimplemented!() }
|
||||
}
|
||||
|
||||
let now = Utc::now();
|
||||
|
||||
k-tv-backend/api/src/provider_registry.rs — new file, 209 lines
@@ -0,0 +1,209 @@
use std::sync::Arc;

use domain::{
    DomainError, IMediaProvider, ProviderCapabilities, ProviderConfigRepository,
    StreamingProtocol, StreamQuality,
};
use k_core::db::DatabasePool;

use crate::config::{Config, ConfigSource};

#[cfg(feature = "local-files")]
use infra::factory::build_transcode_settings_repository;

pub struct ProviderBundle {
    pub registry: Arc<infra::ProviderRegistry>,
    #[cfg(feature = "local-files")]
    pub local_index: std::collections::HashMap<String, Arc<infra::LocalIndex>>,
    #[cfg(feature = "local-files")]
    pub transcode_manager: Option<Arc<infra::TranscodeManager>>,
}

pub async fn build_provider_registry(
    config: &Config,
    #[cfg_attr(not(feature = "local-files"), allow(unused_variables))]
    db_pool: &Arc<DatabasePool>,
    provider_config_repo: &Arc<dyn ProviderConfigRepository>,
) -> anyhow::Result<ProviderBundle> {
    #[cfg(feature = "local-files")]
    let mut local_index: std::collections::HashMap<String, Arc<infra::LocalIndex>> = std::collections::HashMap::new();
    #[cfg(feature = "local-files")]
    let mut transcode_manager: Option<Arc<infra::TranscodeManager>> = None;

    let mut registry = infra::ProviderRegistry::new();

    if config.config_source == ConfigSource::Db {
        tracing::info!("CONFIG_SOURCE=db: loading provider configs from database");
        let rows = provider_config_repo.get_all().await?;
        for row in &rows {
            if !row.enabled { continue; }
            match row.provider_type.as_str() {
                #[cfg(feature = "jellyfin")]
                "jellyfin" => {
                    if let Ok(cfg) = serde_json::from_str::<infra::JellyfinConfig>(&row.config_json) {
                        tracing::info!("Loading Jellyfin provider [{}] from DB config", row.id);
                        registry.register(&row.id, Arc::new(infra::JellyfinMediaProvider::new(cfg)));
                    }
                }
                #[cfg(feature = "local-files")]
                "local_files" => {
                    if let Ok(cfg_map) = serde_json::from_str::<std::collections::HashMap<String, String>>(&row.config_json)
                        && let Some(files_dir) = cfg_map.get("files_dir")
                    {
                        let transcode_dir = cfg_map.get("transcode_dir")
                            .filter(|s| !s.is_empty())
                            .map(std::path::PathBuf::from);
                        let cleanup_ttl_hours: u32 = cfg_map.get("cleanup_ttl_hours")
                            .and_then(|s| s.parse().ok())
                            .unwrap_or(24);
                        tracing::info!("Loading local-files provider [{}] from DB config at {:?}", row.id, files_dir);
                        match infra::factory::build_local_files_bundle(
                            db_pool,
                            std::path::PathBuf::from(files_dir),
                            transcode_dir,
                            cleanup_ttl_hours,
                            config.base_url.clone(),
                            &row.id,
                        ).await {
                            Ok(bundle) => {
                                let scan_idx = Arc::clone(&bundle.local_index);
                                tokio::spawn(async move { scan_idx.rescan().await; });
                                if let Some(ref tm) = bundle.transcode_manager {
                                    tracing::info!("Transcoding enabled for [{}]", row.id);
                                    // Load persisted TTL override from transcode_settings table.
                                    let tm_clone = Arc::clone(tm);
                                    let repo = build_transcode_settings_repository(db_pool).await.ok();
                                    tokio::spawn(async move {
                                        if let Some(r) = repo
                                            && let Ok(Some(ttl)) = r.load_cleanup_ttl().await
                                        {
                                            tm_clone.set_cleanup_ttl(ttl);
                                        }
                                    });
                                }
                                registry.register(&row.id, bundle.provider);
                                if transcode_manager.is_none() {
                                    transcode_manager = bundle.transcode_manager;
                                }
                                local_index.insert(row.id.clone(), bundle.local_index);
                            }
                            Err(e) => tracing::warn!("Failed to build local-files provider [{}]: {}", row.id, e),
                        }
                    }
                }
                _ => {}
            }
        }
    } else {
        #[cfg(feature = "jellyfin")]
        if let (Some(base_url), Some(api_key), Some(user_id)) = (
            &config.jellyfin_base_url,
            &config.jellyfin_api_key,
            &config.jellyfin_user_id,
        ) {
            tracing::info!("Media provider: Jellyfin at {}", base_url);
            registry.register("jellyfin", Arc::new(infra::JellyfinMediaProvider::new(infra::JellyfinConfig {
                base_url: base_url.clone(),
                api_key: api_key.clone(),
                user_id: user_id.clone(),
            })));
        }

        #[cfg(feature = "local-files")]
        if let Some(dir) = &config.local_files_dir {
            tracing::info!("Media provider: local files at {:?}", dir);
            match infra::factory::build_local_files_bundle(
                db_pool,
                dir.clone(),
                config.transcode_dir.clone(),
                config.transcode_cleanup_ttl_hours,
                config.base_url.clone(),
                "local",
            ).await {
                Ok(bundle) => {
                    let scan_idx = Arc::clone(&bundle.local_index);
                    tokio::spawn(async move { scan_idx.rescan().await; });
                    if let Some(ref tm) = bundle.transcode_manager {
                        tracing::info!("Transcoding enabled; cache dir: {:?}", config.transcode_dir);
                        let tm_clone = Arc::clone(tm);
                        let repo = build_transcode_settings_repository(db_pool).await.ok();
                        tokio::spawn(async move {
                            if let Some(r) = repo
                                && let Ok(Some(ttl)) = r.load_cleanup_ttl().await
                            {
                                tm_clone.set_cleanup_ttl(ttl);
                            }
                        });
                    }
                    registry.register("local", bundle.provider);
                    transcode_manager = bundle.transcode_manager;
                    local_index.insert("local".to_string(), bundle.local_index);
                }
                Err(e) => tracing::warn!("local-files requires SQLite; ignoring LOCAL_FILES_DIR: {}", e),
            }
        }
    }

    if registry.is_empty() {
        tracing::warn!("No media provider configured. Set JELLYFIN_BASE_URL / LOCAL_FILES_DIR.");
        registry.register("noop", Arc::new(NoopMediaProvider));
    }

    Ok(ProviderBundle {
        registry: Arc::new(registry),
        #[cfg(feature = "local-files")]
        local_index,
        #[cfg(feature = "local-files")]
        transcode_manager,
    })
}

/// Stand-in provider used when no real media source is configured.
/// Returns a descriptive error for every call so schedule endpoints fail
/// gracefully rather than panicking at startup.
struct NoopMediaProvider;

#[async_trait::async_trait]
impl IMediaProvider for NoopMediaProvider {
    fn capabilities(&self) -> ProviderCapabilities {
        ProviderCapabilities {
            collections: false,
            series: false,
            genres: false,
            tags: false,
            decade: false,
            search: false,
            streaming_protocol: StreamingProtocol::DirectFile,
            rescan: false,
            transcode: false,
        }
    }

    async fn fetch_items(
        &self,
        _: &domain::MediaFilter,
    ) -> domain::DomainResult<Vec<domain::MediaItem>> {
        Err(DomainError::InfrastructureError(
            "No media provider configured. Set JELLYFIN_BASE_URL or LOCAL_FILES_DIR.".into(),
        ))
    }

    async fn fetch_by_id(
        &self,
        _: &domain::MediaItemId,
    ) -> domain::DomainResult<Option<domain::MediaItem>> {
        Err(DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }

    async fn get_stream_url(
        &self,
        _: &domain::MediaItemId,
        _: &StreamQuality,
    ) -> domain::DomainResult<String> {
        Err(DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }
}
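The fallback at the end of `build_provider_registry` guarantees the registry is never empty: if no provider was configured, a stand-in is registered so later lookups fail with a descriptive error instead of a missing-key panic. A minimal sketch of that invariant, using simplified stand-in types (`Registry`, `Provider`, `Noop` here are illustrative, not the real `infra::ProviderRegistry` API):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified stand-in for a media provider; the real trait is async.
trait Provider {
    fn id(&self) -> &str;
}

struct Noop;
impl Provider for Noop {
    fn id(&self) -> &str { "noop" }
}

#[derive(Default)]
struct Registry {
    providers: HashMap<String, Arc<dyn Provider>>,
}

impl Registry {
    fn register(&mut self, id: &str, p: Arc<dyn Provider>) {
        self.providers.insert(id.to_string(), p);
    }
    fn is_empty(&self) -> bool {
        self.providers.is_empty()
    }
}

// Mirrors the tail of build_provider_registry: an empty registry
// gets a "noop" stand-in as a last resort.
fn finalize(mut registry: Registry) -> Registry {
    if registry.is_empty() {
        registry.register("noop", Arc::new(Noop));
    }
    registry
}

fn main() {
    let reg = finalize(Registry::default());
    assert!(reg.providers.contains_key("noop"));
}
```

The design choice keeps every downstream consumer total over provider ids: callers can always resolve *some* provider, and the noop one surfaces a configuration hint rather than an internal error.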
k-tv-backend/api/src/routes/admin.rs — new file, 95 lines
@@ -0,0 +1,95 @@
//! Admin routes: SSE log stream + activity log.

use axum::{
    Json,
    extract::State,
    response::{
        IntoResponse,
        sse::{Event, KeepAlive, Sse},
    },
};
use tokio_stream::{StreamExt, wrappers::BroadcastStream};

use crate::{
    dto::ActivityEventResponse,
    error::ApiError,
    extractors::OptionalCurrentUser,
    state::AppState,
};

use axum::Router;
use axum::routing::get;

pub fn router() -> Router<AppState> {
    Router::new()
        .route("/logs", get(stream_logs))
        .route("/activity", get(list_activity))
}

/// Stream server log lines as Server-Sent Events.
///
/// Auth: requires a valid JWT passed as `?token=<jwt>` (EventSource cannot set headers).
/// On connect: flushes the recent history ring buffer, then streams live events.
pub async fn stream_logs(
    State(state): State<AppState>,
    OptionalCurrentUser(user): OptionalCurrentUser,
) -> Result<impl IntoResponse, ApiError> {
    if user.is_none() {
        return Err(ApiError::Unauthorized(
            "Authentication required for log stream".to_string(),
        ));
    }

    // Snapshot history and subscribe before releasing the lock so we don't miss events.
    let rx = state.log_tx.subscribe();
    let history: Vec<_> = state
        .log_history
        .lock()
        .map(|h| h.iter().cloned().collect())
        .unwrap_or_default();

    let history_stream = tokio_stream::iter(history).map(|line| {
        let data = serde_json::to_string(&line).unwrap_or_default();
        Ok::<Event, String>(Event::default().data(data))
    });

    let live_stream = BroadcastStream::new(rx).filter_map(|result| match result {
        Ok(line) => {
            let data = serde_json::to_string(&line).unwrap_or_default();
            Some(Ok::<Event, String>(Event::default().data(data)))
        }
        Err(tokio_stream::wrappers::errors::BroadcastStreamRecvError::Lagged(n)) => {
            let data = format!(
                r#"{{"level":"WARN","target":"sse","message":"[{n} log lines dropped — buffer overrun]","timestamp":""}}"#
            );
            Some(Ok(Event::default().data(data)))
        }
    });

    let combined = history_stream.chain(live_stream);

    Ok(Sse::new(combined).keep_alive(KeepAlive::default()))
}

/// Return recent activity log entries.
///
/// Auth: requires a valid JWT (Authorization: Bearer or ?token=).
pub async fn list_activity(
    State(state): State<AppState>,
    OptionalCurrentUser(user): OptionalCurrentUser,
) -> Result<impl IntoResponse, ApiError> {
    if user.is_none() {
        return Err(ApiError::Unauthorized(
            "Authentication required".to_string(),
        ));
    }

    let events = state
        .activity_log_repo
        .recent(50)
        .await
        .map_err(ApiError::from)?;

    let response: Vec<ActivityEventResponse> = events.into_iter().map(Into::into).collect();
    Ok(Json(response))
}
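`stream_logs` relies on a bounded history buffer plus a history-then-live chain: a client that connects late first replays the recent ring buffer, then receives live broadcast events. A synchronous sketch of that ordering, with `LogHistory` as a simplified stand-in for `state.log_history` (the real stream is async and backed by `tokio::sync::broadcast`):

```rust
use std::collections::VecDeque;

// Bounded history ring buffer like the one stream_logs snapshots on connect:
// the oldest line is evicted once capacity is reached.
struct LogHistory {
    cap: usize,
    buf: VecDeque<String>,
}

impl LogHistory {
    fn new(cap: usize) -> Self {
        Self { cap, buf: VecDeque::with_capacity(cap) }
    }
    fn push(&mut self, line: String) {
        if self.buf.len() == self.cap {
            self.buf.pop_front(); // drop the oldest line
        }
        self.buf.push_back(line);
    }
    // Snapshot taken at subscription time, as the handler does under the lock.
    fn snapshot(&self) -> Vec<String> {
        self.buf.iter().cloned().collect()
    }
}

fn main() {
    let mut hist = LogHistory::new(3);
    for i in 1..=5 {
        hist.push(format!("line {i}"));
    }
    // Only the 3 most recent lines survive eviction.
    assert_eq!(hist.snapshot(), vec!["line 3", "line 4", "line 5"]);

    // A connecting client sees the snapshot first, then live lines,
    // mirroring history_stream.chain(live_stream).
    let live = vec!["line 6".to_string()];
    let seen: Vec<String> = hist.snapshot().into_iter().chain(live).collect();
    assert_eq!(seen.first().unwrap(), "line 3");
    assert_eq!(seen.last().unwrap(), "line 6");
}
```

Subscribing before snapshotting (as the handler's comment notes) is what closes the gap: a line logged between snapshot and subscribe would otherwise be lost to the new client.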
k-tv-backend/api/src/routes/admin_providers.rs — new file, 513 lines
@@ -0,0 +1,513 @@
|
||||
//! Admin provider management routes.
|
||||
//!
|
||||
//! All routes require an admin user. Allows listing, creating, updating, deleting, and
|
||||
//! testing media provider configs stored in the DB. Only available when
|
||||
//! CONFIG_SOURCE=db.
|
||||
|
||||
use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
|
||||
use axum::Router;
|
||||
use axum::extract::{Path, State};
|
||||
use axum::http::StatusCode;
|
||||
use axum::response::IntoResponse;
|
||||
use axum::routing::{get, post, put};
|
||||
use axum::Json;
|
||||
use domain::errors::DomainResult;
|
||||
use domain::ProviderConfigRow;
|
||||
use serde::{Deserialize, Serialize};
|
||||
|
||||
use crate::config::ConfigSource;
|
||||
use crate::error::ApiError;
|
||||
use crate::extractors::AdminUser;
|
||||
use crate::state::AppState;
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// DTOs
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/// Validate that an instance id is a safe slug (alphanumeric + hyphens, 1-40 chars).
|
||||
fn is_valid_instance_id(id: &str) -> bool {
|
||||
!id.is_empty()
|
||||
&& id.len() <= 40
|
||||
&& id.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize)]
|
||||
pub struct CreateProviderRequest {
|
||||
pub id: String,
|
||||
pub provider_type: String,
|
||||
pub config_json: HashMap<String, String>,
|
||||
pub enabled: bool,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize)]
|
||||
pub struct UpdateProviderRequest {
|
||||
pub config_json: HashMap<String, String>,
|
||||
pub enabled: bool,
|
||||
}
|
||||
|
||||
#[derive(Debug, Deserialize)]
|
||||
pub struct TestProviderRequest {
|
||||
pub provider_type: String,
|
||||
pub config_json: HashMap<String, String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize)]
|
||||
pub struct ProviderConfigResponse {
|
||||
pub id: String,
|
||||
pub provider_type: String,
|
||||
pub config_json: HashMap<String, serde_json::Value>,
|
||||
pub enabled: bool,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize)]
|
||||
pub struct TestResult {
|
||||
pub ok: bool,
|
||||
pub message: String,
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Router
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
pub fn router() -> Router<AppState> {
|
||||
Router::new()
|
||||
.route("/", get(list_providers).post(create_provider))
|
||||
.route("/{id}", put(update_provider).delete(delete_provider))
|
||||
.route("/test", post(test_provider))
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
fn mask_config(raw: &str) -> HashMap<String, serde_json::Value> {
|
||||
let parsed: HashMap<String, serde_json::Value> =
|
||||
serde_json::from_str(raw).unwrap_or_default();
|
||||
parsed
|
||||
.into_iter()
|
||||
.map(|(k, v)| {
|
||||
let secret_key = ["key", "password", "secret", "token"]
|
||||
.iter()
|
||||
.any(|kw| k.to_lowercase().contains(kw));
|
||||
let masked = if secret_key {
|
||||
match &v {
|
||||
serde_json::Value::String(s) if !s.is_empty() => {
|
||||
serde_json::Value::String("***".to_string())
|
||||
}
|
||||
_ => v,
|
||||
}
|
||||
} else {
|
||||
v
|
||||
};
|
||||
(k, masked)
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn conflict_response() -> impl IntoResponse {
|
||||
(
|
||||
StatusCode::CONFLICT,
|
||||
Json(serde_json::json!({
|
||||
"error": "UI config disabled — set CONFIG_SOURCE=db on the server"
|
||||
})),
|
||||
)
|
||||
}
|
||||
|
||||
async fn rebuild_registry(state: &AppState) -> DomainResult<()> {
|
||||
let rows = state.provider_config_repo.get_all().await?;
|
||||
let mut new_registry = infra::ProviderRegistry::new();
|
||||
|
||||
#[cfg(feature = "local-files")]
|
||||
let mut new_local_index: std::collections::HashMap<String, Arc<infra::LocalIndex>> =
|
||||
std::collections::HashMap::new();
|
||||
#[cfg(feature = "local-files")]
|
||||
let mut first_transcode_manager: Option<Arc<infra::TranscodeManager>> = None;
|
||||
|
||||
for row in &rows {
|
||||
if !row.enabled {
|
||||
continue;
|
||||
}
|
||||
match row.provider_type.as_str() {
|
||||
#[cfg(feature = "jellyfin")]
|
||||
"jellyfin" => {
|
||||
if let Ok(cfg) =
|
||||
serde_json::from_str::<infra::JellyfinConfig>(&row.config_json)
|
||||
{
|
||||
new_registry.register(
|
||||
&row.id,
|
||||
Arc::new(infra::JellyfinMediaProvider::new(cfg)),
|
||||
);
|
||||
}
|
||||
}
|
||||
#[cfg(feature = "local-files")]
|
||||
"local_files" => {
|
||||
let config: std::collections::HashMap<String, String> =
|
||||
match serde_json::from_str(&row.config_json) {
|
||||
Ok(c) => c,
|
||||
Err(_) => continue,
|
||||
};
|
||||
|
||||
let files_dir = match config.get("files_dir") {
|
||||
Some(d) => std::path::PathBuf::from(d),
|
||||
None => continue,
|
||||
};
|
||||
|
||||
let transcode_dir = config
|
||||
.get("transcode_dir")
|
||||
.filter(|s| !s.is_empty())
|
||||
.map(std::path::PathBuf::from);
|
||||
|
||||
let cleanup_ttl_hours: u32 = config
|
||||
.get("cleanup_ttl_hours")
|
||||
.and_then(|s| s.parse().ok())
|
||||
.unwrap_or(24);
|
||||
|
||||
let base_url = state.config.base_url.clone();
|
||||
|
||||
match infra::factory::build_local_files_bundle(
|
||||
&state.db_pool,
|
||||
files_dir,
|
||||
transcode_dir,
|
||||
cleanup_ttl_hours,
|
||||
base_url,
|
||||
&row.id,
|
||||
).await {
|
||||
Ok(bundle) => {
|
||||
let scan_idx = Arc::clone(&bundle.local_index);
|
||||
tokio::spawn(async move { scan_idx.rescan().await; });
|
||||
new_registry.register(&row.id, bundle.provider);
|
||||
new_local_index.insert(row.id.clone(), bundle.local_index);
|
||||
if first_transcode_manager.is_none() {
|
||||
first_transcode_manager = bundle.transcode_manager;
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
tracing::warn!("local_files provider [{}] requires SQLite; skipping: {}", row.id, e);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
|
||||
if new_registry.is_empty() {
|
||||
new_registry.register("noop", Arc::new(NoopMediaProvider));
|
||||
}
|
||||
|
||||
*state.provider_registry.write().await = Arc::new(new_registry);
|
||||
#[cfg(feature = "local-files")]
|
||||
{
|
||||
*state.local_index.write().await = new_local_index;
|
||||
*state.transcode_manager.write().await = first_transcode_manager;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Handlers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
pub async fn list_providers(
|
||||
State(state): State<AppState>,
|
||||
AdminUser(_user): AdminUser,
|
||||
) -> Result<impl IntoResponse, ApiError> {
|
||||
let rows = state
|
||||
.provider_config_repo
|
||||
.get_all()
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
let response: Vec<ProviderConfigResponse> = rows
|
||||
.iter()
|
||||
.map(|row| ProviderConfigResponse {
|
||||
id: row.id.clone(),
|
||||
provider_type: row.provider_type.clone(),
|
||||
config_json: mask_config(&row.config_json),
|
||||
enabled: row.enabled,
|
||||
})
|
||||
.collect();
|
||||
|
||||
Ok(Json(response))
|
||||
}
|
||||
|
||||
pub async fn create_provider(
|
||||
State(state): State<AppState>,
|
||||
AdminUser(_user): AdminUser,
|
||||
Json(payload): Json<CreateProviderRequest>,
|
||||
) -> Result<impl IntoResponse, ApiError> {
|
||||
if state.config.config_source != ConfigSource::Db {
|
||||
return Ok(conflict_response().into_response());
|
||||
}
|
||||
|
||||
if !is_valid_instance_id(&payload.id) {
|
||||
return Err(ApiError::Validation(
|
||||
"Instance id must be 1-40 alphanumeric+hyphen characters".to_string(),
|
||||
));
|
||||
}
|
||||
|
||||
let known = matches!(payload.provider_type.as_str(), "jellyfin" | "local_files");
|
||||
if !known {
|
||||
return Err(ApiError::Validation(format!(
|
||||
"Unknown provider type: {}",
|
||||
payload.provider_type
|
||||
)));
|
||||
}
|
||||
|
||||
// Check for uniqueness
|
||||
if state
|
||||
.provider_config_repo
|
||||
.get_by_id(&payload.id)
|
||||
.await
|
||||
.map_err(ApiError::from)?
|
||||
.is_some()
|
||||
{
|
||||
return Ok((
|
||||
StatusCode::CONFLICT,
|
||||
Json(serde_json::json!({ "error": format!("Provider instance '{}' already exists", payload.id) })),
|
||||
).into_response());
|
||||
}
|
||||
|
||||
let config_json = serde_json::to_string(&payload.config_json)
|
||||
.map_err(|e| ApiError::Internal(format!("Failed to serialize config: {}", e)))?;
|
||||
|
||||
let row = ProviderConfigRow {
|
||||
id: payload.id.clone(),
|
||||
provider_type: payload.provider_type.clone(),
|
||||
config_json: config_json.clone(),
|
||||
enabled: payload.enabled,
|
||||
updated_at: chrono::Utc::now().to_rfc3339(),
|
||||
};
|
||||
|
||||
state
|
||||
.provider_config_repo
|
||||
.upsert(&row)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
rebuild_registry(&state)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
let response = ProviderConfigResponse {
|
||||
id: payload.id,
|
||||
provider_type: payload.provider_type,
|
||||
config_json: mask_config(&config_json),
|
||||
enabled: payload.enabled,
|
||||
};
|
||||
|
||||
Ok((StatusCode::CREATED, Json(response)).into_response())
|
||||
}
|
||||
|
||||
pub async fn update_provider(
|
||||
State(state): State<AppState>,
|
||||
AdminUser(_user): AdminUser,
|
||||
Path(instance_id): Path<String>,
|
||||
Json(payload): Json<UpdateProviderRequest>,
|
||||
) -> Result<impl IntoResponse, ApiError> {
|
||||
if state.config.config_source != ConfigSource::Db {
|
||||
return Ok(conflict_response().into_response());
|
||||
}
|
||||
|
||||
let existing = state
|
||||
.provider_config_repo
|
||||
.get_by_id(&instance_id)
|
||||
.await
|
||||
.map_err(ApiError::from)?
|
||||
.ok_or_else(|| ApiError::NotFound(format!("Provider instance '{}' not found", instance_id)))?;
|
||||
|
||||
let config_json = serde_json::to_string(&payload.config_json)
|
||||
.map_err(|e| ApiError::Internal(format!("Failed to serialize config: {}", e)))?;
|
||||
|
||||
let row = ProviderConfigRow {
|
||||
id: existing.id.clone(),
|
||||
provider_type: existing.provider_type.clone(),
|
||||
config_json: config_json.clone(),
|
||||
enabled: payload.enabled,
|
||||
updated_at: chrono::Utc::now().to_rfc3339(),
|
||||
};
|
||||
|
||||
state
|
||||
.provider_config_repo
|
||||
.upsert(&row)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
rebuild_registry(&state)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
let response = ProviderConfigResponse {
|
||||
id: existing.id,
|
||||
provider_type: existing.provider_type,
|
||||
config_json: mask_config(&config_json),
|
||||
enabled: payload.enabled,
|
||||
};
|
||||
|
||||
Ok(Json(response).into_response())
|
||||
}
|
||||
|
||||
pub async fn delete_provider(
|
||||
State(state): State<AppState>,
|
||||
AdminUser(_user): AdminUser,
|
||||
Path(instance_id): Path<String>,
|
||||
) -> Result<impl IntoResponse, ApiError> {
|
||||
if state.config.config_source != ConfigSource::Db {
|
||||
return Ok(conflict_response().into_response());
|
||||
}
|
||||
|
||||
state
|
||||
.provider_config_repo
|
||||
.delete(&instance_id)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
rebuild_registry(&state)
|
||||
.await
|
||||
.map_err(ApiError::from)?;
|
||||
|
||||
Ok(StatusCode::NO_CONTENT.into_response())
|
||||
}
|
||||
|
||||
pub async fn test_provider(
|
||||
State(_state): State<AppState>,
|
||||
AdminUser(_user): AdminUser,
|
||||
Json(payload): Json<TestProviderRequest>,
|
||||
) -> Result<impl IntoResponse, ApiError> {
|
||||
let result = match payload.provider_type.as_str() {
|
||||
"jellyfin" => test_jellyfin(&payload.config_json).await,
|
||||
"local_files" => test_local_files(&payload.config_json),
|
||||
_ => TestResult {
|
||||
ok: false,
|
||||
message: "Unknown provider type".to_string(),
|
||||
},
|
||||
};
|
||||
|
||||
Ok(Json(result))
|
||||
}
|
||||
|
||||
async fn test_jellyfin(config: &HashMap<String, String>) -> TestResult {
|
||||
let base_url = match config.get("base_url") {
|
||||
Some(u) => u.trim_end_matches('/').to_string(),
|
||||
None => {
|
||||
return TestResult {
|
||||
ok: false,
|
||||
message: "Missing field: base_url".to_string(),
|
||||
}
|
||||
}
|
||||
};
|
||||
let api_key = match config.get("api_key") {
|
||||
Some(k) => k.clone(),
|
||||
None => {
|
||||
return TestResult {
|
||||
ok: false,
|
||||
message: "Missing field: api_key".to_string(),
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
let url = format!("{}/System/Info", base_url);
|
||||
let client = reqwest::Client::new();
|
||||
match client
|
||||
.get(&url)
|
||||
.header("X-Emby-Token", &api_key)
|
||||
.send()
|
||||
.await
|
||||
{
|
||||
Ok(resp) => {
|
||||
            let status = resp.status();
            if status.is_success() {
                TestResult {
                    ok: true,
                    message: format!("Connected successfully (HTTP {})", status.as_u16()),
                }
            } else {
                TestResult {
                    ok: false,
                    message: format!("Jellyfin returned HTTP {}", status.as_u16()),
                }
            }
        }
        Err(e) => TestResult {
            ok: false,
            message: format!("Connection failed: {}", e),
        },
    }
}

fn test_local_files(config: &HashMap<String, String>) -> TestResult {
    let path = match config.get("files_dir") {
        Some(p) => p.clone(),
        None => {
            return TestResult {
                ok: false,
                message: "Missing field: files_dir".to_string(),
            }
        }
    };
    let p = std::path::Path::new(&path);
    if p.exists() && p.is_dir() {
        TestResult {
            ok: true,
            message: format!("Directory exists: {}", path),
        }
    } else {
        TestResult {
            ok: false,
            message: format!("Path does not exist or is not a directory: {}", path),
        }
    }
}

// ---------------------------------------------------------------------------
// NoopMediaProvider (local copy — avoids pub-ing it from main.rs)
// ---------------------------------------------------------------------------

struct NoopMediaProvider;

#[async_trait::async_trait]
impl domain::IMediaProvider for NoopMediaProvider {
    fn capabilities(&self) -> domain::ProviderCapabilities {
        domain::ProviderCapabilities {
            collections: false,
            series: false,
            genres: false,
            tags: false,
            decade: false,
            search: false,
            streaming_protocol: domain::StreamingProtocol::DirectFile,
            rescan: false,
            transcode: false,
        }
    }

    async fn fetch_items(
        &self,
        _: &domain::MediaFilter,
    ) -> domain::DomainResult<Vec<domain::MediaItem>> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }

    async fn fetch_by_id(
        &self,
        _: &domain::MediaItemId,
    ) -> domain::DomainResult<Option<domain::MediaItem>> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }

    async fn get_stream_url(
        &self,
        _: &domain::MediaItemId,
        _: &domain::StreamQuality,
    ) -> domain::DomainResult<String> {
        Err(domain::DomainError::InfrastructureError(
            "No media provider configured.".into(),
        ))
    }
}
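The provider connectivity checks above are plain functions over a config map, so they can be exercised without a running server. A minimal standalone harness; the `TestResult` struct is an assumption reproduced from context, the `test_local_files` logic is taken from the code above:

```rust
use std::collections::HashMap;

// Assumed from context: the shape returned by the provider test helpers.
struct TestResult {
    ok: bool,
    message: String,
}

// Logic of test_local_files above: a missing `files_dir` key or a
// non-directory path both fail fast with a human-readable message.
fn test_local_files(config: &HashMap<String, String>) -> TestResult {
    let path = match config.get("files_dir") {
        Some(p) => p.clone(),
        None => {
            return TestResult {
                ok: false,
                message: "Missing field: files_dir".to_string(),
            }
        }
    };
    let p = std::path::Path::new(&path);
    if p.exists() && p.is_dir() {
        TestResult { ok: true, message: format!("Directory exists: {}", path) }
    } else {
        TestResult {
            ok: false,
            message: format!("Path does not exist or is not a directory: {}", path),
        }
    }
}

fn main() {
    // No files_dir configured: immediate failure.
    let r = test_local_files(&HashMap::new());
    assert!(!r.ok);
    println!("{}", r.message);

    // "/" is a directory on any Unix-like system.
    let mut cfg = HashMap::new();
    cfg.insert("files_dir".to_string(), "/".to_string());
    assert!(test_local_files(&cfg).ok);
}
```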
@@ -6,13 +6,13 @@ use axum::{
};

use crate::{
    dto::{LoginRequest, RegisterRequest, TokenResponse, UserResponse},
    dto::{LoginRequest, RefreshRequest, RegisterRequest, TokenResponse, UserResponse},
    error::ApiError,
    extractors::CurrentUser,
    state::AppState,
};

use super::create_jwt;
use super::{create_jwt, create_refresh_jwt};

/// Login with email + password → JWT token
pub(super) async fn login(
@@ -35,6 +35,12 @@ pub(super) async fn login(
    }

    let token = create_jwt(&user, &state)?;
    let refresh_token = if payload.remember_me {
        Some(create_refresh_jwt(&user, &state)?)
    } else {
        None
    };
    let _ = state.activity_log_repo.log("user_login", user.email.as_ref(), None).await;

    Ok((
        StatusCode::OK,
@@ -42,6 +48,7 @@ pub(super) async fn login(
            access_token: token,
            token_type: "Bearer".to_string(),
            expires_in: state.config.jwt_expiry_hours * 3600,
            refresh_token,
        }),
    ))
}
@@ -70,6 +77,7 @@ pub(super) async fn register(
            access_token: token,
            token_type: "Bearer".to_string(),
            expires_in: state.config.jwt_expiry_hours * 3600,
            refresh_token: None,
        }),
    ))
}
@@ -85,6 +93,47 @@ pub(super) async fn me(CurrentUser(user): CurrentUser) -> Result<impl IntoRespon
        id: user.id,
        email: user.email.into_inner(),
        created_at: user.created_at,
        is_admin: user.is_admin,
    }))
}

/// Exchange a valid refresh token for a new access + refresh token pair
#[cfg(feature = "auth-jwt")]
pub(super) async fn refresh_token(
    State(state): State<AppState>,
    Json(payload): Json<RefreshRequest>,
) -> Result<impl IntoResponse, ApiError> {
    let validator = state
        .jwt_validator
        .as_ref()
        .ok_or_else(|| ApiError::Internal("JWT not configured".to_string()))?;

    let claims = validator
        .validate_refresh_token(&payload.refresh_token)
        .map_err(|e| {
            tracing::debug!("Refresh token validation failed: {:?}", e);
            ApiError::Unauthorized("Invalid or expired refresh token".to_string())
        })?;

    let user_id: uuid::Uuid = claims
        .sub
        .parse()
        .map_err(|_| ApiError::Unauthorized("Invalid user ID in token".to_string()))?;

    let user = state
        .user_service
        .find_by_id(user_id)
        .await
        .map_err(|e| ApiError::Internal(format!("Failed to fetch user: {}", e)))?;

    let access_token = create_jwt(&user, &state)?;
    let new_refresh_token = create_refresh_jwt(&user, &state)?;

    Ok(Json(TokenResponse {
        access_token,
        token_type: "Bearer".to_string(),
        expires_in: state.config.jwt_expiry_hours * 3600,
        refresh_token: Some(new_refresh_token),
    }))
}

@@ -100,5 +149,6 @@ pub(super) async fn get_token(
        access_token: token,
        token_type: "Bearer".to_string(),
        expires_in: state.config.jwt_expiry_hours * 3600,
        refresh_token: None,
    }))
}
@@ -18,7 +18,9 @@ pub fn router() -> Router<AppState> {
        .route("/me", get(local::me));

    #[cfg(feature = "auth-jwt")]
    let r = r.route("/token", post(local::get_token));
    let r = r
        .route("/token", post(local::get_token))
        .route("/refresh", post(local::refresh_token));

    #[cfg(feature = "auth-oidc")]
    let r = r
@@ -28,7 +30,7 @@ pub fn router() -> Router<AppState> {
    r
}

/// Helper: create JWT for a user
/// Helper: create access JWT for a user
#[cfg(feature = "auth-jwt")]
pub(super) fn create_jwt(user: &domain::User, state: &AppState) -> Result<String, ApiError> {
    let validator = state
@@ -45,3 +47,21 @@ pub(super) fn create_jwt(user: &domain::User, state: &AppState) -> Result<String
pub(super) fn create_jwt(_user: &domain::User, _state: &AppState) -> Result<String, ApiError> {
    Err(ApiError::Internal("JWT feature not enabled".to_string()))
}

/// Helper: create refresh JWT for a user
#[cfg(feature = "auth-jwt")]
pub(super) fn create_refresh_jwt(user: &domain::User, state: &AppState) -> Result<String, ApiError> {
    let validator = state
        .jwt_validator
        .as_ref()
        .ok_or_else(|| ApiError::Internal("JWT not configured".to_string()))?;

    validator
        .create_refresh_token(user)
        .map_err(|e| ApiError::Internal(format!("Failed to create refresh token: {}", e)))
}

#[cfg(not(feature = "auth-jwt"))]
pub(super) fn create_refresh_jwt(_user: &domain::User, _state: &AppState) -> Result<String, ApiError> {
    Err(ApiError::Internal("JWT feature not enabled".to_string()))
}
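In the login change above a refresh token is only minted when the client sets `remember_me`; registration and the plain `/token` endpoint always return `refresh_token: None`. A tiny sketch of that branching, with illustrative stand-ins rather than the real `create_jwt` / `create_refresh_jwt` helpers:

```rust
// Illustrative only: the strings stand in for create_jwt / create_refresh_jwt.
fn issue_tokens(remember_me: bool) -> (String, Option<String>) {
    let access = String::from("access-jwt");
    let refresh = if remember_me {
        Some(String::from("refresh-jwt")) // long-lived token, only on opt-in
    } else {
        None // session-only login: no refresh token issued
    };
    (access, refresh)
}

fn main() {
    assert_eq!(issue_tokens(false).1, None);
    assert!(issue_tokens(true).1.is_some());
}
```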
@@ -61,8 +61,7 @@ pub(super) async fn get_current_broadcast(
        Some(broadcast) => {
            let block_access_mode = channel
                .schedule_config
                .blocks
                .iter()
                .all_blocks()
                .find(|b| b.id == broadcast.slot.source_block_id)
                .map(|b| b.access_mode.clone())
                .unwrap_or_default();
@@ -168,8 +167,7 @@ pub(super) async fn get_stream(
    // Block-level access check
    if let Some(block) = channel
        .schedule_config
        .blocks
        .iter()
        .all_blocks()
        .find(|b| b.id == broadcast.slot.source_block_id)
    {
        check_access(
72
k-tv-backend/api/src/routes/channels/config_history.rs
Normal file
@@ -0,0 +1,72 @@
use axum::{
    Json,
    extract::{Path, State},
    http::StatusCode,
    response::IntoResponse,
};
use uuid::Uuid;

use crate::{
    dto::{ChannelResponse, ConfigSnapshotResponse, PatchSnapshotRequest},
    error::ApiError,
    extractors::CurrentUser,
    state::AppState,
};

use super::require_owner;

pub(super) async fn list_config_history(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path(channel_id): Path<Uuid>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let snapshots = state.channel_service.list_config_snapshots(channel_id).await?;
    let response: Vec<ConfigSnapshotResponse> = snapshots.into_iter().map(Into::into).collect();
    Ok(Json(response))
}

pub(super) async fn patch_config_snapshot(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path((channel_id, snap_id)): Path<(Uuid, Uuid)>,
    Json(payload): Json<PatchSnapshotRequest>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let updated = state
        .channel_service
        .patch_config_snapshot_label(channel_id, snap_id, payload.label)
        .await?
        .ok_or_else(|| ApiError::NotFound("Snapshot not found".into()))?;

    Ok(Json(ConfigSnapshotResponse::from(updated)))
}

pub(super) async fn restore_config_snapshot(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path((channel_id, snap_id)): Path<(Uuid, Uuid)>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let updated = state
        .channel_service
        .restore_config_snapshot(channel_id, snap_id)
        .await
        .map_err(|e| match e {
            domain::DomainError::ChannelNotFound(_) => ApiError::NotFound("Snapshot not found".into()),
            other => ApiError::from(other),
        })?;

    let _ = state
        .activity_log_repo
        .log("config_restored", &snap_id.to_string(), Some(channel_id))
        .await;

    Ok((StatusCode::OK, Json(ChannelResponse::from(updated))))
}
@@ -69,6 +69,7 @@ pub(super) async fn create_channel(
    }

    let _ = state.event_tx.send(domain::DomainEvent::ChannelCreated { channel: channel.clone() });
    let _ = state.activity_log_repo.log("channel_created", &channel.name, Some(channel.id)).await;
    Ok((StatusCode::CREATED, Json(ChannelResponse::from(channel))))
}

@@ -101,7 +102,7 @@ pub(super) async fn update_channel(
        channel.timezone = tz;
    }
    if let Some(sc) = payload.schedule_config {
        channel.schedule_config = sc;
        channel.schedule_config = domain::ScheduleConfig::from(sc);
    }
    if let Some(rp) = payload.recycle_policy {
        channel.recycle_policy = rp;
@@ -144,6 +145,7 @@ pub(super) async fn update_channel(

    let channel = state.channel_service.update(channel).await?;
    let _ = state.event_tx.send(domain::DomainEvent::ChannelUpdated { channel: channel.clone() });
    let _ = state.activity_log_repo.log("channel_updated", &channel.name, Some(channel.id)).await;
    Ok(Json(ChannelResponse::from(channel)))
}

@@ -155,5 +157,6 @@ pub(super) async fn delete_channel(
    // ChannelService::delete enforces ownership internally
    state.channel_service.delete(channel_id, user.id).await?;
    let _ = state.event_tx.send(domain::DomainEvent::ChannelDeleted { channel_id });
    let _ = state.activity_log_repo.log("channel_deleted", &channel_id.to_string(), Some(channel_id)).await;
    Ok(StatusCode::NO_CONTENT)
}
@@ -13,6 +13,7 @@ use domain::{AccessMode, User};
use crate::{error::ApiError, state::AppState};

mod broadcast;
mod config_history;
mod crud;
mod schedule;

@@ -27,9 +28,30 @@ pub fn router() -> Router<AppState> {
            "/{id}/schedule",
            post(schedule::generate_schedule).get(schedule::get_active_schedule),
        )
        .route("/{id}/schedule/history", get(schedule::list_schedule_history))
        .route(
            "/{id}/schedule/history/{gen_id}",
            get(schedule::get_schedule_history_entry),
        )
        .route(
            "/{id}/schedule/history/{gen_id}/rollback",
            post(schedule::rollback_schedule),
        )
        .route("/{id}/now", get(broadcast::get_current_broadcast))
        .route("/{id}/epg", get(broadcast::get_epg))
        .route("/{id}/stream", get(broadcast::get_stream))
        .route(
            "/{id}/config/history",
            get(config_history::list_config_history),
        )
        .route(
            "/{id}/config/history/{snap_id}",
            axum::routing::patch(config_history::patch_config_snapshot),
        )
        .route(
            "/{id}/config/history/{snap_id}/restore",
            post(config_history::restore_config_snapshot),
        )
}

// ============================================================================
@@ -10,7 +10,7 @@ use uuid::Uuid;
use domain::{self, DomainError};

use crate::{
    dto::ScheduleResponse,
    dto::{ScheduleHistoryEntry, ScheduleResponse},
    error::ApiError,
    extractors::CurrentUser,
    state::AppState,
@@ -18,7 +18,7 @@ use crate::{

use super::require_owner;

/// Trigger 48-hour schedule generation for a channel, starting from now.
/// Trigger 7-day schedule generation for a channel, starting from now.
/// Replaces any existing schedule for the same window.
pub(super) async fn generate_schedule(
    State(state): State<AppState>,
@@ -37,10 +37,12 @@ pub(super) async fn generate_schedule(
        channel_id,
        schedule: schedule.clone(),
    });
    let detail = format!("{} slots", schedule.slots.len());
    let _ = state.activity_log_repo.log("schedule_generated", &detail, Some(channel_id)).await;
    Ok((StatusCode::CREATED, Json(ScheduleResponse::from(schedule))))
}

/// Return the currently active 48-hour schedule for a channel.
/// Return the currently active 7-day schedule for a channel.
/// 404 if no schedule has been generated yet — call POST /:id/schedule first.
pub(super) async fn get_active_schedule(
    State(state): State<AppState>,
@@ -58,3 +60,75 @@ pub(super) async fn get_active_schedule(

    Ok(Json(ScheduleResponse::from(schedule)))
}

/// List all schedule generations for a channel, newest first.
/// Returns lightweight entries (no slots).
pub(super) async fn list_schedule_history(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path(channel_id): Path<Uuid>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let history = state.schedule_engine.list_schedule_history(channel_id).await?;
    let entries: Vec<ScheduleHistoryEntry> = history.into_iter().map(Into::into).collect();
    Ok(Json(entries))
}

/// Fetch a single historical schedule with all its slots.
pub(super) async fn get_schedule_history_entry(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path((channel_id, gen_id)): Path<(Uuid, Uuid)>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let schedule = state
        .schedule_engine
        .get_schedule_by_id(channel_id, gen_id)
        .await?
        .ok_or_else(|| ApiError::NotFound(format!("Schedule {} not found", gen_id)))?;

    Ok(Json(ScheduleResponse::from(schedule)))
}

/// Roll back to a previous schedule generation.
///
/// Deletes all generations after `gen_id`'s generation, then generates a fresh
/// schedule from now (inheriting the rolled-back generation as the base for
/// recycle-policy history).
pub(super) async fn rollback_schedule(
    State(state): State<AppState>,
    CurrentUser(user): CurrentUser,
    Path((channel_id, gen_id)): Path<(Uuid, Uuid)>,
) -> Result<impl IntoResponse, ApiError> {
    let channel = state.channel_service.find_by_id(channel_id).await?;
    require_owner(&channel, user.id)?;

    let target = state
        .schedule_engine
        .get_schedule_by_id(channel_id, gen_id)
        .await?
        .ok_or_else(|| ApiError::NotFound(format!("Schedule {} not found", gen_id)))?;

    state
        .schedule_engine
        .delete_schedules_after(channel_id, target.generation)
        .await?;

    let schedule = state
        .schedule_engine
        .generate_schedule(channel_id, Utc::now())
        .await?;

    let _ = state.event_tx.send(domain::DomainEvent::ScheduleGenerated {
        channel_id,
        schedule: schedule.clone(),
    });
    let detail = format!("rollback to gen {}; {} slots", target.generation, schedule.slots.len());
    let _ = state.activity_log_repo.log("schedule_rollback", &detail, Some(channel_id)).await;

    Ok(Json(ScheduleResponse::from(schedule)))
}
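The rollback handler above deletes every generation after the target and then generates a fresh schedule from now. The generation bookkeeping can be sketched on plain integers; this is a hypothetical model, not the real `schedule_engine` API:

```rust
// Hypothetical model of rollback_schedule's generation handling:
// keep generations up to and including the target, then append a new one.
fn rollback(generations: &mut Vec<u32>, target: u32) {
    // Drop everything newer than the rollback target.
    generations.retain(|&g| g <= target);
    // A fresh generation is produced after the rollback.
    let next = generations.iter().max().copied().unwrap_or(0) + 1;
    generations.push(next);
}

fn main() {
    let mut gens = vec![1, 2, 3, 4, 5];
    rollback(&mut gens, 3);
    // Generations 4 and 5 are dropped; a fresh generation 4 is appended.
    assert_eq!(gens, vec![1, 2, 3, 4]);
}
```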
@@ -8,22 +8,23 @@ pub fn router() -> Router<AppState> {
    Router::new().route("/", get(get_config))
}

#[allow(clippy::vec_init_then_push)]
async fn get_config(State(state): State<AppState>) -> Json<ConfigResponse> {
    let providers: Vec<ProviderInfo> = state
        .provider_registry
    let registry = state.provider_registry.read().await;

    let providers: Vec<ProviderInfo> = registry
        .provider_ids()
        .into_iter()
        .filter_map(|id| {
            state.provider_registry.capabilities(&id).map(|caps| ProviderInfo {
            registry.capabilities(&id).map(|caps| ProviderInfo {
                id: id.clone(),
                capabilities: caps,
            })
        })
        .collect();

    let primary_capabilities = state
        .provider_registry
        .capabilities(state.provider_registry.primary_id())
    let primary_capabilities = registry
        .capabilities(registry.primary_id())
        .unwrap_or(ProviderCapabilities {
            collections: false,
            series: false,
@@ -36,9 +37,16 @@ async fn get_config(State(state): State<AppState>) -> Json<ConfigResponse> {
            transcode: false,
        });

    let mut available_provider_types = Vec::new();
    #[cfg(feature = "jellyfin")]
    available_provider_types.push("jellyfin".to_string());
    #[cfg(feature = "local-files")]
    available_provider_types.push("local_files".to_string());

    Json(ConfigResponse {
        allow_registration: state.config.allow_registration,
        providers,
        provider_capabilities: primary_capabilities,
        available_provider_types,
    })
}
@@ -22,6 +22,7 @@ use crate::{error::ApiError, state::AppState};
#[cfg(feature = "local-files")]
use axum::{
    Json,
    extract::Query,
    http::StatusCode,
    routing::{delete, post},
};
@@ -53,6 +54,7 @@ pub fn router() -> Router<AppState> {
// Direct streaming
// ============================================================================

#[cfg_attr(not(feature = "local-files"), allow(unused_variables))]
async fn stream_file(
    State(state): State<AppState>,
    Path(encoded_id): Path<String>,
@@ -131,7 +133,7 @@ async fn stream_file(
        );
    }

    return builder.body(body).map_err(|e| ApiError::internal(e.to_string()));
    builder.body(body).map_err(|e| ApiError::internal(e.to_string()))
}

#[cfg(not(feature = "local-files"))]
@@ -142,15 +144,25 @@ async fn stream_file(
// Rescan
// ============================================================================

#[cfg(feature = "local-files")]
#[derive(Deserialize)]
struct RescanQuery {
    provider: Option<String>,
}

#[cfg(feature = "local-files")]
async fn trigger_rescan(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Query(query): Query<RescanQuery>,
) -> Result<Json<serde_json::Value>, ApiError> {
    let index = state
        .local_index
        .as_ref()
        .ok_or_else(|| ApiError::not_implemented("no local files provider active"))?;
    let map = state.local_index.read().await.clone();
    let index = if let Some(id) = &query.provider {
        map.get(id).cloned()
    } else {
        map.values().next().cloned()
    };
    let index = index.ok_or_else(|| ApiError::not_implemented("no local files provider active"))?;
    let count = index.rescan().await;
    Ok(Json(serde_json::json!({ "items_found": count })))
}
@@ -164,9 +176,7 @@ async fn transcode_playlist(
    State(state): State<AppState>,
    Path(id): Path<String>,
) -> Result<Response, ApiError> {
    let tm = state
        .transcode_manager
        .as_ref()
    let tm = state.transcode_manager.read().await.clone()
        .ok_or_else(|| ApiError::not_implemented("TRANSCODE_DIR not configured"))?;

    let root = state.config.local_files_dir.as_ref().ok_or_else(|| {
@@ -219,9 +229,7 @@ async fn transcode_segment(
        return Err(ApiError::Forbidden("invalid segment path".into()));
    }

    let tm = state
        .transcode_manager
        .as_ref()
    let tm = state.transcode_manager.read().await.clone()
        .ok_or_else(|| ApiError::not_implemented("TRANSCODE_DIR not configured"))?;

    let file_path = tm.transcode_dir.join(&id).join(&segment);
@@ -262,19 +270,10 @@ async fn get_transcode_settings(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
) -> Result<Json<TranscodeSettingsResponse>, ApiError> {
    let pool = state
        .sqlite_pool
        .as_ref()
        .ok_or_else(|| ApiError::not_implemented("sqlite not available"))?;

    let (ttl,): (i64,) =
        sqlx::query_as("SELECT cleanup_ttl_hours FROM transcode_settings WHERE id = 1")
            .fetch_one(pool)
            .await
            .map_err(|e| ApiError::internal(e.to_string()))?;

    let tm = state.transcode_manager.read().await.clone()
        .ok_or_else(|| ApiError::not_implemented("TRANSCODE_DIR not configured"))?;
    Ok(Json(TranscodeSettingsResponse {
        cleanup_ttl_hours: ttl as u32,
        cleanup_ttl_hours: tm.get_cleanup_ttl(),
    }))
}

@@ -284,19 +283,14 @@ async fn update_transcode_settings(
    CurrentUser(_user): CurrentUser,
    Json(req): Json<UpdateTranscodeSettingsRequest>,
) -> Result<Json<TranscodeSettingsResponse>, ApiError> {
    let pool = state
        .sqlite_pool
        .as_ref()
        .ok_or_else(|| ApiError::not_implemented("sqlite not available"))?;
    if let Some(repo) = &state.transcode_settings_repo {
        repo.save_cleanup_ttl(req.cleanup_ttl_hours)
            .await
            .map_err(|e| ApiError::internal(e.to_string()))?;
    }

    let ttl = req.cleanup_ttl_hours as i64;
    sqlx::query("UPDATE transcode_settings SET cleanup_ttl_hours = ? WHERE id = 1")
        .bind(ttl)
        .execute(pool)
        .await
        .map_err(|e| ApiError::internal(e.to_string()))?;

    if let Some(tm) = &state.transcode_manager {
    let tm_opt = state.transcode_manager.read().await.clone();
    if let Some(tm) = tm_opt {
        tm.set_cleanup_ttl(req.cleanup_ttl_hours);
    }

@@ -310,9 +304,7 @@ async fn get_transcode_stats(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
) -> Result<Json<TranscodeStatsResponse>, ApiError> {
    let tm = state
        .transcode_manager
        .as_ref()
    let tm = state.transcode_manager.read().await.clone()
        .ok_or_else(|| ApiError::not_implemented("TRANSCODE_DIR not configured"))?;
    let (cache_size_bytes, item_count) = tm.cache_stats().await;
    Ok(Json(TranscodeStatsResponse {
@@ -326,9 +318,7 @@ async fn clear_transcode_cache(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
) -> Result<StatusCode, ApiError> {
    let tm = state
        .transcode_manager
        .as_ref()
    let tm = state.transcode_manager.read().await.clone()
        .ok_or_else(|| ApiError::not_implemented("TRANSCODE_DIR not configured"))?;
    tm.clear_cache()
        .await
@@ -340,6 +330,7 @@ async fn clear_transcode_cache(
// Helpers
// ============================================================================

#[cfg(feature = "local-files")]
fn content_type_for_ext(ext: &str) -> &'static str {
    match ext {
        "mp4" | "m4v" => "video/mp4",
@@ -351,6 +342,7 @@ fn content_type_for_ext(ext: &str) -> &'static str {
    }
}

#[cfg(feature = "local-files")]
fn parse_range(range: &str, file_size: u64) -> Option<(u64, u64)> {
    let range = range.strip_prefix("bytes=")?;
    let (start_str, end_str) = range.split_once('-')?;
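The `parse_range` hunk above ends after the `split_once('-')` line, so the rest of the function is not shown. The conventional completion of a `bytes=start-end` parser looks like the following; only the first two lines match the file, the remainder is an assumed sketch that clamps the end to the file size and rejects inverted or out-of-range requests:

```rust
// Assumed completion of a "bytes=start-end" parser: only the strip_prefix and
// split_once lines come from the diff above, the rest is a conventional sketch.
fn parse_range(range: &str, file_size: u64) -> Option<(u64, u64)> {
    let range = range.strip_prefix("bytes=")?;
    let (start_str, end_str) = range.split_once('-')?;

    let start: u64 = start_str.parse().ok()?;
    let end: u64 = if end_str.is_empty() {
        file_size.saturating_sub(1) // "bytes=N-" means "from N to EOF"
    } else {
        end_str.parse().ok()?
    };
    if start > end || start >= file_size {
        return None; // unsatisfiable range
    }
    Some((start, end.min(file_size - 1))) // clamp end to the last byte
}

fn main() {
    assert_eq!(parse_range("bytes=0-99", 1000), Some((0, 99)));
    assert_eq!(parse_range("bytes=500-", 1000), Some((500, 999)));
    assert_eq!(parse_range("bytes=900-2000", 1000), Some((900, 999)));
    assert_eq!(parse_range("bytes=5-2", 1000), None);
}
```

Note this sketch does not accept the suffix form `bytes=-500`; the real implementation may differ.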
@@ -1,25 +1,39 @@
//! Library browsing routes
//! Library routes — DB-backed.
//!
//! These endpoints expose the media provider's library to the dashboard so
//! users can discover what's available without knowing provider-internal IDs.
//! All routes require authentication.
//! GET /library/collections — collections derived from synced items
//! GET /library/series — series names
//! GET /library/genres — genres
//! GET /library/items — search / browse
//! GET /library/items/:id — single item
//! GET /library/sync/status — latest sync log per provider
//! POST /library/sync — trigger an ad-hoc sync (auth)
//!
//! GET /library/collections — top-level libraries (Jellyfin views, Plex sections)
//! GET /library/series — TV series, optionally scoped to a collection
//! GET /library/genres — available genres, optionally filtered by content type
//! GET /library/items — search / browse items (used for block filter preview)
//! Admin (nested under /admin/library):
//! GET /admin/library/settings — app_settings key/value
//! PUT /admin/library/settings — update app_settings

use std::collections::HashMap;
use std::sync::Arc;

use axum::{
    Json, Router,
    extract::{Query, RawQuery, State},
    routing::get,
    extract::{Path, Query, RawQuery, State},
    http::StatusCode,
    response::IntoResponse,
    routing::{get, post, put},
};
use domain::IProviderRegistry as _;
use domain::{ContentType, ILibraryRepository, LibrarySearchFilter, LibrarySyncAdapter};
use serde::{Deserialize, Serialize};

use domain::{Collection, ContentType, MediaFilter, SeriesSummary};
use crate::{
    error::ApiError,
    extractors::{AdminUser, CurrentUser},
    state::AppState,
};

use crate::{error::ApiError, extractors::CurrentUser, state::AppState};
// ============================================================================
// Routers
// ============================================================================

pub fn router() -> Router<AppState> {
    Router::new()
@@ -27,6 +41,15 @@ pub fn router() -> Router<AppState> {
        .route("/series", get(list_series))
        .route("/genres", get(list_genres))
        .route("/items", get(search_items))
        .route("/items/{id}", get(get_item))
        .route("/shows", get(list_shows))
        .route("/shows/{name}/seasons", get(list_seasons))
        .route("/sync/status", get(sync_status))
        .route("/sync", post(trigger_sync))
}

pub fn admin_router() -> Router<AppState> {
    Router::new().route("/settings", get(get_settings).put(update_settings))
}

// ============================================================================
@@ -41,38 +64,6 @@ struct CollectionResponse {
    collection_type: Option<String>,
}

impl From<Collection> for CollectionResponse {
    fn from(c: Collection) -> Self {
        Self {
            id: c.id,
            name: c.name,
            collection_type: c.collection_type,
        }
    }
}

#[derive(Debug, Serialize)]
struct SeriesResponse {
    id: String,
    name: String,
    episode_count: u32,
    genres: Vec<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    year: Option<u16>,
}

impl From<SeriesSummary> for SeriesResponse {
    fn from(s: SeriesSummary) -> Self {
        Self {
            id: s.id,
            name: s.name,
            episode_count: s.episode_count,
            genres: s.genres,
            year: s.year,
        }
    }
}

#[derive(Debug, Serialize)]
struct LibraryItemResponse {
    id: String,
@@ -88,6 +79,45 @@ struct LibraryItemResponse {
    #[serde(skip_serializing_if = "Option::is_none")]
    year: Option<u16>,
    genres: Vec<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    thumbnail_url: Option<String>,
}

#[derive(Debug, Serialize)]
struct PagedResponse<T: Serialize> {
    items: Vec<T>,
    total: u32,
}

#[derive(Debug, Serialize)]
struct ShowSummaryResponse {
    series_name: String,
    episode_count: u32,
    season_count: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    thumbnail_url: Option<String>,
    genres: Vec<String>,
}

#[derive(Debug, Serialize)]
struct SeasonSummaryResponse {
    season_number: u32,
    episode_count: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    thumbnail_url: Option<String>,
}

#[derive(Debug, Serialize)]
struct SyncLogResponse {
    id: i64,
    provider_id: String,
    started_at: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    finished_at: Option<String>,
    items_found: u32,
    status: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    error_msg: Option<String>,
}

// ============================================================================
@@ -96,47 +126,47 @@ struct LibraryItemResponse {

#[derive(Debug, Deserialize)]
struct CollectionsQuery {
    /// Provider key to query (default: primary).
    provider: Option<String>,
}

#[derive(Debug, Deserialize)]
struct SeriesQuery {
    /// Scope results to a specific collection (provider library ID).
    collection: Option<String>,
    /// Provider key to query (default: primary).
    provider: Option<String>,
}

#[derive(Debug, Deserialize)]
struct GenresQuery {
    /// Limit genres to a content type: "movie", "episode", or "short".
    #[serde(rename = "type")]
    content_type: Option<String>,
    /// Provider key to query (default: primary).
    provider: Option<String>,
}

#[derive(Debug, Default, Deserialize)]
struct ItemsQuery {
    /// Free-text search.
    q: Option<String>,
    /// Content type filter: "movie", "episode", or "short".
    #[serde(rename = "type")]
    content_type: Option<String>,
    /// Filter episodes by series name. Repeat the param for multiple series:
    /// `?series[]=iCarly&series[]=Victorious`
    #[serde(default)]
    series: Vec<String>,
    /// Scope to a provider collection ID.
    #[serde(default)]
    genres: Vec<String>,
    collection: Option<String>,
    /// Maximum number of results (default: 50, max: 200).
    limit: Option<usize>,
    /// Fill strategy to simulate: "random" | "sequential" | "best_fit".
    /// Applies the same ordering the schedule engine would use so the preview
    /// reflects what will actually be scheduled.
    strategy: Option<String>,
    /// Provider key to query (default: primary).
    limit: Option<u32>,
    offset: Option<u32>,
    provider: Option<String>,
    season: Option<u32>,
}

#[derive(Debug, Default, Deserialize)]
struct ShowsQuery {
    q: Option<String>,
    provider: Option<String>,
    #[serde(default)]
    genres: Vec<String>,
}

#[derive(Debug, Deserialize)]
struct SeasonsQuery {
    provider: Option<String>,
}
@@ -144,127 +174,275 @@ struct ItemsQuery {
// Handlers
// ============================================================================

/// List top-level collections (Jellyfin virtual libraries, Plex sections, etc.)
async fn list_collections(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Query(params): Query<CollectionsQuery>,
) -> Result<Json<Vec<CollectionResponse>>, ApiError> {
    let provider_id = params.provider.as_deref().unwrap_or("");
    let caps = state.provider_registry.capabilities(provider_id).ok_or_else(|| {
        ApiError::validation(format!("Unknown provider '{}'", provider_id))
    })?;
    if !caps.collections {
        return Err(ApiError::not_implemented("collections not supported by this provider"));
    }
    let collections = state.provider_registry.list_collections(provider_id).await?;
    Ok(Json(collections.into_iter().map(Into::into).collect()))
    let cols = state
        .library_repo
        .list_collections(params.provider.as_deref())
        .await?;
    let resp = cols
        .into_iter()
        .map(|c| CollectionResponse {
            id: c.id,
            name: c.name,
            collection_type: c.collection_type,
        })
        .collect();
    Ok(Json(resp))
}

/// List TV series, optionally scoped to a collection.
async fn list_series(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Query(params): Query<SeriesQuery>,
) -> Result<Json<Vec<SeriesResponse>>, ApiError> {
    let provider_id = params.provider.as_deref().unwrap_or("");
    let caps = state.provider_registry.capabilities(provider_id).ok_or_else(|| {
        ApiError::validation(format!("Unknown provider '{}'", provider_id))
    })?;
    if !caps.series {
        return Err(ApiError::not_implemented("series not supported by this provider"));
    }
) -> Result<Json<Vec<String>>, ApiError> {
    let series = state
        .provider_registry
        .list_series(provider_id, params.collection.as_deref())
        .library_repo
        .list_series(params.provider.as_deref())
        .await?;
    Ok(Json(series.into_iter().map(Into::into).collect()))
    Ok(Json(series))
}

/// List available genres, optionally filtered to a content type.
async fn list_genres(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Query(params): Query<GenresQuery>,
) -> Result<Json<Vec<String>>, ApiError> {
    let provider_id = params.provider.as_deref().unwrap_or("");
    let caps = state.provider_registry.capabilities(provider_id).ok_or_else(|| {
        ApiError::validation(format!("Unknown provider '{}'", provider_id))
    })?;
    if !caps.genres {
        return Err(ApiError::not_implemented("genres not supported by this provider"));
    }
    let ct = parse_content_type(params.content_type.as_deref())?;
    let genres = state.provider_registry.list_genres(provider_id, ct.as_ref()).await?;
    let genres = state
        .library_repo
        .list_genres(ct.as_ref(), params.provider.as_deref())
        .await?;
    Ok(Json(genres))
}

/// Search / browse library items. Used by the block editor to preview what a
/// filter matches before saving a channel config.
async fn search_items(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    RawQuery(raw_query): RawQuery,
) -> Result<Json<Vec<LibraryItemResponse>>, ApiError> {
    let qs_config = serde_qs::Config::new(2, false); // non-strict: accept encoded brackets
) -> Result<Json<PagedResponse<LibraryItemResponse>>, ApiError> {
    let qs_config = serde_qs::Config::new(2, false);
    let params: ItemsQuery = raw_query
        .as_deref()
        .map(|q| qs_config.deserialize_str::<ItemsQuery>(q))
        .transpose()
        .map_err(|e| ApiError::validation(e.to_string()))?
        .unwrap_or_default();

    let limit = params.limit.unwrap_or(50).min(200);
    let offset = params.offset.unwrap_or(0);

    let provider_id = params.provider.as_deref().unwrap_or("");

    let filter = MediaFilter {
    let filter = LibrarySearchFilter {
        provider_id: params.provider,
        content_type: parse_content_type(params.content_type.as_deref())?,
        search_term: params.q,
        series_names: params.series,
        collections: params
            .collection
            .map(|c| vec![c])
            .unwrap_or_default(),
        collection_id: params.collection,
        genres: params.genres,
        search_term: params.q,
        season_number: params.season,
        offset,
        limit,
        ..Default::default()
    };

    let mut items = state.provider_registry.fetch_items(provider_id, &filter).await?;
    let (items, total) = state.library_repo.search(&filter).await?;
    let resp = items.into_iter().map(library_item_to_response).collect();
    Ok(Json(PagedResponse { items: resp, total }))
}

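The `search_items` handler above deserializes the raw query string with `serde_qs` instead of axum's `Query` extractor so that repeated bracketed keys like `series[]=iCarly&series[]=Victorious` collect into a `Vec<String>`. A minimal stdlib sketch of that collection step (the `collect_series` helper is hypothetical, and real code should keep using `serde_qs`, which also handles nesting and percent-decoding):

```rust
// Sketch: gather repeated `series[]=...` pairs from a raw query string into a
// Vec, the way the handler relies on serde_qs to do. Hypothetical helper for
// illustration only.
fn collect_series(raw: &str) -> Vec<String> {
    raw.split('&')
        .filter_map(|pair| pair.split_once('='))
        .filter(|(k, _)| *k == "series[]")
        .map(|(_, v)| v.to_string())
        .collect()
}

fn main() {
    let q = "series[]=iCarly&series[]=Victorious&limit=100";
    let series = collect_series(q);
    assert_eq!(series, vec!["iCarly".to_string(), "Victorious".to_string()]);
}
```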
    // Apply the same ordering the schedule engine uses so the preview reflects
    // what will actually be scheduled rather than raw provider order.
    match params.strategy.as_deref() {
        Some("random") => {
            use rand::seq::SliceRandom;
            items.shuffle(&mut rand::thread_rng());
        }
        Some("best_fit") => {
            // Mirror the greedy bin-packing: longest items first.
            items.sort_by(|a, b| b.duration_secs.cmp(&a.duration_secs));
        }
        _ => {} // "sequential" / unset: keep provider order (episode order per series)
    }

async fn get_item(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Path(id): Path<String>,
) -> Result<Json<LibraryItemResponse>, ApiError> {
    let item = state
        .library_repo
        .get_by_id(&id)
        .await?
        .ok_or_else(|| ApiError::NotFound(format!("Library item '{}' not found", id)))?;
    Ok(Json(library_item_to_response(item)))
}

    let response: Vec<LibraryItemResponse> = items

async fn sync_status(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
) -> Result<Json<Vec<SyncLogResponse>>, ApiError> {
    let entries = state.library_repo.latest_sync_status().await?;
    let resp = entries
        .into_iter()
        .take(limit)
        .map(|item| LibraryItemResponse {
            id: item.id.into_inner(),
            title: item.title,
            content_type: match item.content_type {
                domain::ContentType::Movie => "movie".into(),
                domain::ContentType::Episode => "episode".into(),
                domain::ContentType::Short => "short".into(),
            },
            duration_secs: item.duration_secs,
            series_name: item.series_name,
            season_number: item.season_number,
            episode_number: item.episode_number,
            year: item.year,
            genres: item.genres,
        .map(|e| SyncLogResponse {
            id: e.id,
            provider_id: e.provider_id,
            started_at: e.started_at,
            finished_at: e.finished_at,
            items_found: e.items_found,
            status: e.status,
            error_msg: e.error_msg,
        })
        .collect();
    Ok(Json(resp))
}

    Ok(Json(response))

async fn trigger_sync(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
) -> Result<impl IntoResponse, ApiError> {
    use domain::IProviderRegistry as _;
    let provider_ids: Vec<String> = {
        let reg = state.provider_registry.read().await;
        reg.provider_ids()
    };

    // 409 if any provider is already syncing
    for pid in &provider_ids {
        let running = state.library_repo.is_sync_running(pid).await?;
        if running {
            return Ok((
                StatusCode::CONFLICT,
                Json(serde_json::json!({
                    "error": format!("Sync already running for provider '{}'", pid)
                })),
            )
                .into_response());
        }
    }

    // Spawn background sync
    let sync_adapter: Arc<dyn LibrarySyncAdapter> = Arc::clone(&state.library_sync_adapter);
    let registry = Arc::clone(&state.provider_registry);
    tokio::spawn(async move {
        let providers: Vec<(String, Arc<dyn domain::IMediaProvider>)> = {
            let reg = registry.read().await;
            provider_ids
                .iter()
                .filter_map(|id| reg.get_provider(id).map(|p| (id.clone(), p)))
                .collect()
        };

        for (pid, provider) in providers {
            let result = sync_adapter.sync_provider(provider.as_ref(), &pid).await;
            if let Some(ref err) = result.error {
                tracing::warn!("manual sync: provider '{}' failed: {}", pid, err);
            } else {
                tracing::info!(
                    "manual sync: provider '{}' done — {} items in {}ms",
                    pid,
                    result.items_found,
                    result.duration_ms
                );
            }
        }
    });

    Ok((
        StatusCode::ACCEPTED,
        Json(serde_json::json!({ "message": "Sync started" })),
    )
        .into_response())
}

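The removed preview-ordering code above sorts longest-first for `"best_fit"` to mirror the schedule engine's greedy bin-packing. A self-contained sketch of that ordering (the `Item` struct is a stand-in for the real response type, and the `"random"` arm is omitted to stay stdlib-only):

```rust
// Sketch of the "best_fit" preview ordering: greedy bin-packing considers the
// longest items first, so the preview sorts by duration descending.
#[derive(Debug)]
struct Item {
    title: &'static str,
    duration_secs: u32,
}

fn order_best_fit(items: &mut Vec<Item>) {
    // Matches items.sort_by(|a, b| b.duration_secs.cmp(&a.duration_secs))
    items.sort_by(|a, b| b.duration_secs.cmp(&a.duration_secs));
}

fn main() {
    let mut items = vec![
        Item { title: "short", duration_secs: 300 },
        Item { title: "movie", duration_secs: 5400 },
        Item { title: "episode", duration_secs: 1320 },
    ];
    order_best_fit(&mut items);
    assert_eq!(items[0].title, "movie");
    assert_eq!(items[2].title, "short");
}
```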
async fn list_shows(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Query(params): Query<ShowsQuery>,
) -> Result<Json<Vec<ShowSummaryResponse>>, ApiError> {
    let shows = state
        .library_repo
        .list_shows(
            params.provider.as_deref(),
            params.q.as_deref(),
            &params.genres,
        )
        .await?;
    let resp = shows
        .into_iter()
        .map(|s| ShowSummaryResponse {
            series_name: s.series_name,
            episode_count: s.episode_count,
            season_count: s.season_count,
            thumbnail_url: s.thumbnail_url,
            genres: s.genres,
        })
        .collect();
    Ok(Json(resp))
}

async fn list_seasons(
    State(state): State<AppState>,
    CurrentUser(_user): CurrentUser,
    Path(name): Path<String>,
    Query(params): Query<SeasonsQuery>,
) -> Result<Json<Vec<SeasonSummaryResponse>>, ApiError> {
    let seasons = state
        .library_repo
        .list_seasons(&name, params.provider.as_deref())
        .await?;
    let resp = seasons
        .into_iter()
        .map(|s| SeasonSummaryResponse {
            season_number: s.season_number,
            episode_count: s.episode_count,
            thumbnail_url: s.thumbnail_url,
        })
        .collect();
    Ok(Json(resp))
}

async fn get_settings(
    State(state): State<AppState>,
    AdminUser(_user): AdminUser,
) -> Result<Json<HashMap<String, serde_json::Value>>, ApiError> {
    let pairs = state.app_settings_repo.get_all().await?;
    let map: HashMap<String, serde_json::Value> = pairs
        .into_iter()
        .map(|(k, v)| {
            // Try to parse as number first, then bool, then keep as string
            let val = if let Ok(n) = v.parse::<i64>() {
                serde_json::Value::Number(n.into())
            } else if let Ok(b) = v.parse::<bool>() {
                serde_json::Value::Bool(b)
            } else {
                serde_json::Value::String(v)
            };
            (k, val)
        })
        .collect();
    Ok(Json(map))
}

async fn update_settings(
    State(state): State<AppState>,
    AdminUser(_user): AdminUser,
    Json(body): Json<HashMap<String, serde_json::Value>>,
) -> Result<Json<HashMap<String, serde_json::Value>>, ApiError> {
    for (key, val) in &body {
        let val_str = match val {
            serde_json::Value::String(s) => s.clone(),
            serde_json::Value::Number(n) => n.to_string(),
            serde_json::Value::Bool(b) => b.to_string(),
            other => other.to_string(),
        };
        state.app_settings_repo.set(key, &val_str).await?;
    }
    // Return the updated state
    let pairs = state.app_settings_repo.get_all().await?;
    let map: HashMap<String, serde_json::Value> = pairs
        .into_iter()
        .map(|(k, v)| {
            let val = if let Ok(n) = v.parse::<i64>() {
                serde_json::Value::Number(n.into())
            } else if let Ok(b) = v.parse::<bool>() {
                serde_json::Value::Bool(b)
            } else {
                serde_json::Value::String(v)
            };
            (k, val)
        })
        .collect();
    Ok(Json(map))
}

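The settings handlers above store every value as a string and coerce it back to the "most specific" JSON type on read: integer first, then boolean, then plain string. A std-only sketch of that chain (`Val` is a stand-in for `serde_json::Value` so the example has no crate dependency):

```rust
// Sketch of the settings value coercion in get_settings/update_settings.
#[derive(Debug, PartialEq)]
enum Val {
    Num(i64),
    Bool(bool),
    Str(String),
}

fn coerce(v: &str) -> Val {
    // Try number first, then bool, then fall back to a plain string,
    // mirroring the handler's if/else-if chain.
    if let Ok(n) = v.parse::<i64>() {
        Val::Num(n)
    } else if let Ok(b) = v.parse::<bool>() {
        Val::Bool(b)
    } else {
        Val::Str(v.to_string())
    }
}

fn main() {
    assert_eq!(coerce("42"), Val::Num(42));
    assert_eq!(coerce("true"), Val::Bool(true));
    assert_eq!(coerce("hls"), Val::Str("hls".to_string()));
}
```

Note that only `i64` is tried, so a stored `"3.5"` survives as a string rather than a float, which matches the handler's behavior.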
// ============================================================================
@@ -283,3 +461,22 @@ fn parse_content_type(s: Option<&str>) -> Result<Option<ContentType>, ApiError>
        ))),
    }
}

fn library_item_to_response(item: domain::LibraryItem) -> LibraryItemResponse {
    LibraryItemResponse {
        id: item.id,
        title: item.title,
        content_type: match item.content_type {
            ContentType::Movie => "movie".into(),
            ContentType::Episode => "episode".into(),
            ContentType::Short => "short".into(),
        },
        duration_secs: item.duration_secs,
        series_name: item.series_name,
        season_number: item.season_number,
        episode_number: item.episode_number,
        year: item.year,
        genres: item.genres,
        thumbnail_url: item.thumbnail_url,
    }
}

@@ -5,6 +5,8 @@
use crate::state::AppState;
use axum::Router;

pub mod admin;
pub mod admin_providers;
pub mod auth;
pub mod channels;
pub mod config;
@@ -15,10 +17,13 @@ pub mod library;
/// Construct the API v1 router
pub fn api_v1_router() -> Router<AppState> {
    Router::new()
        .nest("/admin", admin::router())
        .nest("/admin/providers", admin_providers::router())
        .nest("/auth", auth::router())
        .nest("/channels", channels::router())
        .nest("/config", config::router())
        .nest("/files", files::router())
        .nest("/iptv", iptv::router())
        .nest("/library", library::router())
        .nest("/admin", library::admin_router())
}

@@ -92,12 +92,13 @@ mod tests {

    use async_trait::async_trait;
    use chrono::{DateTime, Duration, Utc};
    use domain::{
        Channel, ChannelRepository, Collection, DomainResult, GeneratedSchedule, IProviderRegistry,
        MediaFilter, MediaItem, MediaItemId, PlaybackRecord, ProviderCapabilities, ScheduleEngineService,
        ScheduleRepository, SeriesSummary, StreamQuality, StreamingProtocol,
    };
    use domain::value_objects::{ChannelId, ContentType, UserId};
    use std::collections::HashMap;
    use domain::{
        BlockId, Channel, ChannelRepository, Collection, DomainResult, GeneratedSchedule,
        IProviderRegistry, MediaFilter, MediaItem, MediaItemId, PlaybackRecord, ProviderCapabilities,
        ScheduleEngineService, ScheduleRepository, SeriesSummary, StreamQuality,
    };
    use uuid::Uuid;

    // ── Mocks ─────────────────────────────────────────────────────────────────
@@ -126,6 +127,10 @@ mod tests {
        async fn delete(&self, _id: ChannelId) -> DomainResult<()> {
            unimplemented!()
        }
        async fn save_config_snapshot(&self, _channel_id: ChannelId, _config: &domain::ScheduleConfig, _label: Option<String>) -> DomainResult<domain::ChannelConfigSnapshot> { unimplemented!() }
        async fn list_config_snapshots(&self, _channel_id: ChannelId) -> DomainResult<Vec<domain::ChannelConfigSnapshot>> { unimplemented!() }
        async fn get_config_snapshot(&self, _channel_id: ChannelId, _snapshot_id: Uuid) -> DomainResult<Option<domain::ChannelConfigSnapshot>> { unimplemented!() }
        async fn patch_config_snapshot_label(&self, _channel_id: ChannelId, _snapshot_id: Uuid, _label: Option<String>) -> DomainResult<Option<domain::ChannelConfigSnapshot>> { unimplemented!() }
    }

    struct MockScheduleRepo {
@@ -142,32 +147,55 @@ mod tests {
        ) -> DomainResult<Option<GeneratedSchedule>> {
            Ok(None)
        }
        async fn find_latest(&self, _channel_id: ChannelId) -> DomainResult<Option<GeneratedSchedule>> {
        async fn find_latest(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<Option<GeneratedSchedule>> {
            Ok(self.latest.clone())
        }
        async fn save(&self, schedule: &GeneratedSchedule) -> DomainResult<()> {
            self.saved.lock().unwrap().push(schedule.clone());
            Ok(())
        }
        async fn find_playback_history(&self, _channel_id: ChannelId) -> DomainResult<Vec<PlaybackRecord>> {
        async fn find_playback_history(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<Vec<PlaybackRecord>> {
            Ok(vec![])
        }
        async fn save_playback_record(&self, _record: &PlaybackRecord) -> DomainResult<()> {
            Ok(())
        }
        async fn find_last_slot_per_block(
            &self,
            _channel_id: ChannelId,
        ) -> DomainResult<HashMap<BlockId, MediaItemId>> {
            Ok(HashMap::new())
        }
        async fn list_schedule_history(&self, _channel_id: ChannelId) -> DomainResult<Vec<GeneratedSchedule>> { unimplemented!() }
        async fn get_schedule_by_id(&self, _channel_id: ChannelId, _schedule_id: Uuid) -> DomainResult<Option<GeneratedSchedule>> { unimplemented!() }
        async fn delete_schedules_after(&self, _channel_id: ChannelId, _target_generation: u32) -> DomainResult<()> { unimplemented!() }
    }

    struct MockRegistry;

    #[async_trait]
    impl IProviderRegistry for MockRegistry {
        async fn fetch_items(&self, _provider_id: &str, _filter: &MediaFilter) -> DomainResult<Vec<MediaItem>> {
        async fn fetch_items(
            &self,
            _provider_id: &str,
            _filter: &MediaFilter,
        ) -> DomainResult<Vec<MediaItem>> {
            Ok(vec![])
        }
        async fn fetch_by_id(&self, _item_id: &MediaItemId) -> DomainResult<Option<MediaItem>> {
            Ok(None)
        }
        async fn get_stream_url(&self, _item_id: &MediaItemId, _quality: &StreamQuality) -> DomainResult<String> {
        async fn get_stream_url(
            &self,
            _item_id: &MediaItemId,
            _quality: &StreamQuality,
        ) -> DomainResult<String> {
            unimplemented!()
        }
        fn provider_ids(&self) -> Vec<String> {
@@ -182,10 +210,18 @@ mod tests {
        async fn list_collections(&self, _provider_id: &str) -> DomainResult<Vec<Collection>> {
            unimplemented!()
        }
        async fn list_series(&self, _provider_id: &str, _collection_id: Option<&str>) -> DomainResult<Vec<SeriesSummary>> {
        async fn list_series(
            &self,
            _provider_id: &str,
            _collection_id: Option<&str>,
        ) -> DomainResult<Vec<SeriesSummary>> {
            unimplemented!()
        }
        async fn list_genres(&self, _provider_id: &str, _content_type: Option<&ContentType>) -> DomainResult<Vec<String>> {
        async fn list_genres(
            &self,
            _provider_id: &str,
            _content_type: Option<&ContentType>,
        ) -> DomainResult<Vec<String>> {
            unimplemented!()
        }
    }
@@ -226,9 +262,12 @@ mod tests {
    async fn test_no_schedule_generates_from_now() {
        let ch = make_channel();
        let saved = Arc::new(Mutex::new(vec![]));
        let channel_repo: Arc<dyn ChannelRepository> = Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> =
            Arc::new(MockScheduleRepo { latest: None, saved: saved.clone() });
        let channel_repo: Arc<dyn ChannelRepository> =
            Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> = Arc::new(MockScheduleRepo {
            latest: None,
            saved: saved.clone(),
        });
        let engine = make_engine(channel_repo.clone(), schedule_repo);

        let (event_tx, _) = tokio::sync::broadcast::channel(8);
@@ -246,9 +285,12 @@ mod tests {
        let valid_until = Utc::now() + Duration::hours(25);
        let schedule = make_schedule(ch.id, valid_until);
        let saved = Arc::new(Mutex::new(vec![]));
        let channel_repo: Arc<dyn ChannelRepository> = Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> =
            Arc::new(MockScheduleRepo { latest: Some(schedule), saved: saved.clone() });
        let channel_repo: Arc<dyn ChannelRepository> =
            Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> = Arc::new(MockScheduleRepo {
            latest: Some(schedule),
            saved: saved.clone(),
        });
        let engine = make_engine(channel_repo.clone(), schedule_repo);

        let (event_tx, _) = tokio::sync::broadcast::channel(8);
@@ -263,9 +305,12 @@ mod tests {
        let valid_until = Utc::now() + Duration::hours(20);
        let schedule = make_schedule(ch.id, valid_until);
        let saved = Arc::new(Mutex::new(vec![]));
        let channel_repo: Arc<dyn ChannelRepository> = Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> =
            Arc::new(MockScheduleRepo { latest: Some(schedule), saved: saved.clone() });
        let channel_repo: Arc<dyn ChannelRepository> =
            Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> = Arc::new(MockScheduleRepo {
            latest: Some(schedule),
            saved: saved.clone(),
        });
        let engine = make_engine(channel_repo.clone(), schedule_repo);

        let (event_tx, _) = tokio::sync::broadcast::channel(8);
@@ -282,9 +327,12 @@ mod tests {
        let valid_until = Utc::now() - Duration::hours(1);
        let schedule = make_schedule(ch.id, valid_until);
        let saved = Arc::new(Mutex::new(vec![]));
        let channel_repo: Arc<dyn ChannelRepository> = Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> =
            Arc::new(MockScheduleRepo { latest: Some(schedule), saved: saved.clone() });
        let channel_repo: Arc<dyn ChannelRepository> =
            Arc::new(MockChannelRepo { channels: vec![ch] });
        let schedule_repo: Arc<dyn ScheduleRepository> = Arc::new(MockScheduleRepo {
            latest: Some(schedule),
            saved: saved.clone(),
        });
        let engine = make_engine(channel_repo.clone(), schedule_repo);

        let (event_tx, _) = tokio::sync::broadcast::channel(8);

k-tv-backend/api/src/server.rs (new file, 59 lines)
@@ -0,0 +1,59 @@
use std::net::SocketAddr;

use axum::Router;
use axum::http::{HeaderName, HeaderValue};
use k_core::http::server::{ServerConfig, apply_standard_middleware};
use tokio::net::TcpListener;
use tower_http::cors::{AllowHeaders, AllowMethods, AllowOrigin, CorsLayer};

use crate::config::Config;
use crate::routes;
use crate::state::AppState;

pub async fn build_and_serve(state: AppState, config: &Config) -> anyhow::Result<()> {
    let server_config = ServerConfig {
        cors_origins: config.cors_allowed_origins.clone(),
    };

    let app = Router::new()
        .nest("/api/v1", routes::api_v1_router())
        .with_state(state);

    let app = apply_standard_middleware(app, &server_config);

    // Wrap with an outer CorsLayer that includes the custom password headers.
    // Being outermost it handles OPTIONS preflights before k_core's inner layer.
    let origins: Vec<HeaderValue> = config
        .cors_allowed_origins
        .iter()
        .filter_map(|o| o.parse().ok())
        .collect();
    let cors = CorsLayer::new()
        .allow_origin(AllowOrigin::list(origins))
        .allow_methods(AllowMethods::any())
        .allow_headers(AllowHeaders::list([
            axum::http::header::AUTHORIZATION,
            axum::http::header::CONTENT_TYPE,
            HeaderName::from_static("x-channel-password"),
            HeaderName::from_static("x-block-password"),
        ]));
    let app = app.layer(cors);

    let addr: SocketAddr = format!("{}:{}", config.host, config.port).parse()?;
    let listener = TcpListener::bind(addr).await?;

    tracing::info!("🚀 API server running at http://{}", addr);
    tracing::info!("🔒 Authentication mode: JWT (Bearer token)");

    #[cfg(feature = "auth-jwt")]
    tracing::info!(" ✓ JWT auth enabled");

    #[cfg(feature = "auth-oidc")]
    tracing::info!(" ✓ OIDC integration enabled (stateless cookie state)");

    tracing::info!("📝 API endpoints available at /api/v1/...");

    axum::serve(listener, app).await?;

    Ok(())
}
k-tv-backend/api/src/startup.rs (new file, 24 lines)
@@ -0,0 +1,24 @@
use std::sync::Arc;

use domain::{ChannelRepository, DomainEvent, ScheduleEngineService};
use tokio::sync::broadcast;

use crate::{poller, scheduler};

pub fn spawn_background_tasks(
    schedule_engine: Arc<ScheduleEngineService>,
    channel_repo: Arc<dyn ChannelRepository>,
    event_tx: broadcast::Sender<DomainEvent>,
) {
    let bg_channel_repo = channel_repo.clone();
    tokio::spawn(scheduler::run_auto_scheduler(
        Arc::clone(&schedule_engine),
        bg_channel_repo,
        event_tx.clone(),
    ));
    tokio::spawn(poller::run_broadcast_poller(
        schedule_engine,
        channel_repo,
        event_tx,
    ));
}
@@ -8,18 +8,27 @@ use axum_extra::extract::cookie::Key;
use infra::auth::jwt::{JwtConfig, JwtValidator};
#[cfg(feature = "auth-oidc")]
use infra::auth::oidc::OidcService;
use std::sync::Arc;
use std::collections::VecDeque;
#[cfg(feature = "local-files")]
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;

use crate::config::Config;
use crate::events::EventBus;
use domain::{ChannelService, ScheduleEngineService, UserService};
use crate::log_layer::LogLine;
use domain::{ActivityLogRepository, ChannelService, IAppSettingsRepository, ILibraryRepository, LibrarySyncAdapter, ProviderConfigRepository, ScheduleEngineService, UserService};
#[cfg(feature = "local-files")]
use domain::TranscodeSettingsRepository;
use k_core::db::DatabasePool;

#[derive(Clone)]
pub struct AppState {
    pub user_service: Arc<UserService>,
    pub channel_service: Arc<ChannelService>,
    pub schedule_engine: Arc<ScheduleEngineService>,
    pub provider_registry: Arc<infra::ProviderRegistry>,
    pub provider_registry: Arc<tokio::sync::RwLock<Arc<infra::ProviderRegistry>>>,
    pub provider_config_repo: Arc<dyn ProviderConfigRepository>,
    pub cookie_key: Key,
    #[cfg(feature = "auth-oidc")]
    pub oidc_service: Option<Arc<OidcService>>,
@@ -27,25 +36,47 @@ pub struct AppState {
    pub jwt_validator: Option<Arc<JwtValidator>>,
    pub config: Arc<Config>,
    pub event_tx: EventBus,
    /// Index for the local-files provider, used by the rescan route.
    /// Broadcast channel for streaming log lines to SSE clients.
    pub log_tx: broadcast::Sender<LogLine>,
    /// Ring buffer of recent log lines sent to new SSE clients on connect.
    pub log_history: Arc<Mutex<VecDeque<LogLine>>>,
    /// Repository for persisted in-app activity events.
    pub activity_log_repo: Arc<dyn ActivityLogRepository>,
    /// Indexes for local-files provider instances, keyed by provider instance id.
    #[cfg(feature = "local-files")]
    pub local_index: Option<Arc<infra::LocalIndex>>,
    pub local_index: Arc<tokio::sync::RwLock<HashMap<String, Arc<infra::LocalIndex>>>>,
    /// TranscodeManager for FFmpeg HLS transcoding (requires TRANSCODE_DIR).
    #[cfg(feature = "local-files")]
    pub transcode_manager: Option<Arc<infra::TranscodeManager>>,
    /// SQLite pool for transcode settings CRUD.
    pub transcode_manager: Arc<tokio::sync::RwLock<Option<Arc<infra::TranscodeManager>>>>,
    /// Repository for transcode settings persistence.
    #[cfg(feature = "local-files")]
    pub sqlite_pool: Option<sqlx::SqlitePool>,
    pub transcode_settings_repo: Option<Arc<dyn TranscodeSettingsRepository>>,
    /// Database pool — used by infra factory functions for hot-reload.
    pub db_pool: Arc<DatabasePool>,
    pub library_repo: Arc<dyn ILibraryRepository>,
    pub library_sync_adapter: Arc<dyn LibrarySyncAdapter>,
    pub app_settings_repo: Arc<dyn IAppSettingsRepository>,
}

impl AppState {
    #[allow(clippy::too_many_arguments)]
    pub async fn new(
        user_service: UserService,
        channel_service: ChannelService,
        schedule_engine: ScheduleEngineService,
        provider_registry: Arc<infra::ProviderRegistry>,
        provider_registry: Arc<tokio::sync::RwLock<Arc<infra::ProviderRegistry>>>,
        provider_config_repo: Arc<dyn ProviderConfigRepository>,
        config: Config,
        event_tx: EventBus,
        log_tx: broadcast::Sender<LogLine>,
        log_history: Arc<Mutex<VecDeque<LogLine>>>,
        activity_log_repo: Arc<dyn ActivityLogRepository>,
        db_pool: Arc<DatabasePool>,
        library_repo: Arc<dyn ILibraryRepository>,
        library_sync_adapter: Arc<dyn LibrarySyncAdapter>,
        app_settings_repo: Arc<dyn IAppSettingsRepository>,
        #[cfg(feature = "local-files")]
        transcode_settings_repo: Option<Arc<dyn TranscodeSettingsRepository>>,
    ) -> anyhow::Result<Self> {
        let cookie_key = Key::derive_from(config.cookie_secret.as_bytes());

@@ -101,6 +132,7 @@ impl AppState {
            config.jwt_issuer.clone(),
            config.jwt_audience.clone(),
            Some(config.jwt_expiry_hours),
            Some(config.jwt_refresh_expiry_days),
            config.is_production,
        )?;
        Some(Arc::new(JwtValidator::new(jwt_config)))
@@ -111,6 +143,7 @@ impl AppState {
            channel_service: Arc::new(channel_service),
            schedule_engine: Arc::new(schedule_engine),
            provider_registry,
            provider_config_repo,
            cookie_key,
            #[cfg(feature = "auth-oidc")]
            oidc_service,
@@ -118,12 +151,19 @@ impl AppState {
            jwt_validator,
            config: Arc::new(config),
            event_tx,
            log_tx,
            log_history,
            activity_log_repo,
            #[cfg(feature = "local-files")]
            local_index: None,
            local_index: Arc::new(tokio::sync::RwLock::new(HashMap::new())),
            #[cfg(feature = "local-files")]
            transcode_manager: None,
            transcode_manager: Arc::new(tokio::sync::RwLock::new(None)),
            #[cfg(feature = "local-files")]
            sqlite_pool: None,
            transcode_settings_repo,
            db_pool,
            library_repo,
            library_sync_adapter,
            app_settings_repo,
        })
    }
}

k-tv-backend/api/src/telemetry.rs (new file, 25 lines)
@@ -0,0 +1,25 @@
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

use tokio::sync::broadcast;
use tracing_subscriber::{EnvFilter, fmt, layer::SubscriberExt, util::SubscriberInitExt};

use crate::log_layer::{AppLogLayer, LogLine};

pub struct LoggingHandles {
    pub log_tx: broadcast::Sender<LogLine>,
    pub log_history: Arc<Mutex<VecDeque<LogLine>>>,
}

pub fn init_tracing() -> LoggingHandles {
    let (log_tx, _) = broadcast::channel::<LogLine>(512);
    let log_history = Arc::new(Mutex::new(VecDeque::<LogLine>::new()));

    tracing_subscriber::registry()
        .with(EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")))
        .with(fmt::layer())
        .with(AppLogLayer::new(log_tx.clone(), Arc::clone(&log_history)))
        .init();

    LoggingHandles { log_tx, log_history }
}
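The `log_history` handle above is a shared `VecDeque` behind a `Mutex`; the layer that fills it presumably caps its length. A std-only sketch of such a bounded ring buffer (the cap of 512 and the `push_line` helper are assumptions for illustration, not code from this PR):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Assumed cap; the real limit would live inside AppLogLayer.
const HISTORY_CAP: usize = 512;

fn push_line(history: &Arc<Mutex<VecDeque<String>>>, line: String) {
    let mut h = history.lock().unwrap();
    if h.len() == HISTORY_CAP {
        h.pop_front(); // drop the oldest entry to keep the buffer bounded
    }
    h.push_back(line);
}

fn main() {
    let history = Arc::new(Mutex::new(VecDeque::new()));
    for i in 0..600 {
        push_line(&history, format!("line {i}"));
    }
    let h = history.lock().unwrap();
    assert_eq!(h.len(), 512);
    assert_eq!(h.front().map(String::as_str), Some("line 88"));
    println!("{} entries retained", h.len());
}
```

The broadcast channel handles live subscribers; the capped deque is what lets a freshly connected admin view replay recent lines.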
@@ -182,15 +182,15 @@ async fn post_webhook(
    let mut req = client.post(url).body(body);
    let mut has_content_type = false;

    if let Some(h) = headers_json {
        if let Ok(map) = serde_json::from_str::<serde_json::Map<String, Value>>(h) {
            for (k, v) in &map {
                if k.to_lowercase() == "content-type" {
                    has_content_type = true;
                }
                if let Some(v_str) = v.as_str() {
                    req = req.header(k.as_str(), v_str);
                }
    if let Some(h) = headers_json
        && let Ok(map) = serde_json::from_str::<serde_json::Map<String, Value>>(h)
    {
        for (k, v) in &map {
            if k.to_lowercase() == "content-type" {
                has_content_type = true;
            }
            if let Some(v_str) = v.as_str() {
                req = req.header(k.as_str(), v_str);
            }
        }
    }
@@ -16,3 +16,4 @@ uuid = { version = "1.19.0", features = ["v4", "serde"] }

[dev-dependencies]
tokio = { version = "1", features = ["rt", "macros"] }
serde_json = "1"
@@ -6,11 +6,12 @@
pub use crate::value_objects::{Email, UserId};
use chrono::{DateTime, NaiveTime, Timelike, Utc};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use uuid::Uuid;

use crate::value_objects::{
    AccessMode, BlockId, ChannelId, ContentType, FillStrategy, LogoPosition, MediaFilter,
    MediaItemId, RecyclePolicy, SlotId,
    MediaItemId, RecyclePolicy, SlotId, Weekday,
};

/// A user in the system.
@@ -22,6 +23,7 @@ pub struct User {
    pub subject: String,
    pub email: Email,
    pub password_hash: Option<String>,
    pub is_admin: bool,
    pub created_at: DateTime<Utc>,
}

@@ -32,6 +34,7 @@ impl User {
            subject: subject.into(),
            email,
            password_hash: None,
            is_admin: false,
            created_at: Utc::now(),
        }
    }
@@ -41,6 +44,7 @@ impl User {
        subject: impl Into<String>,
        email: Email,
        password_hash: Option<String>,
        is_admin: bool,
        created_at: DateTime<Utc>,
    ) -> Self {
        Self {
@@ -48,6 +52,7 @@ impl User {
            subject: subject.into(),
            email,
            password_hash,
            is_admin,
            created_at,
        }
    }
@@ -58,6 +63,7 @@ impl User {
            subject: format!("local|{}", Uuid::new_v4()),
            email,
            password_hash: Some(password_hash.into()),
            is_admin: false,
            created_at: Utc::now(),
        }
    }
@@ -127,40 +133,77 @@ impl Channel {
    }
}

/// The user-designed programming template.
/// The user-designed programming template (V2: day-keyed weekly grid).
///
/// This is the shareable/exportable part of a channel. It contains an ordered
/// list of `ProgrammingBlock`s but makes no assumptions about the media source.
/// A channel does not need to cover all 24 hours — gaps are valid and render
/// as a no-signal state on the client.
/// Each day of the week has its own independent list of `ProgrammingBlock`s.
/// A day with an empty vec (or absent key) produces no slots — valid, not an error.
/// A channel does not need to cover all 24 hours — gaps render as no-signal.
///
/// `deny_unknown_fields` is required so the `#[serde(untagged)]` compat enum
/// correctly rejects V1 `{"blocks":[...]}` payloads and falls through to `OldScheduleConfig`.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ScheduleConfig {
    pub day_blocks: HashMap<Weekday, Vec<ProgrammingBlock>>,
}

/// V1 on-disk shape — kept for transparent migration only.
/// Never construct directly; use `ScheduleConfigCompat` for deserialization.
/// `deny_unknown_fields` ensures V2 payloads don't accidentally match here.
#[derive(Debug, Clone, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct OldScheduleConfig {
    pub blocks: Vec<ProgrammingBlock>,
}

/// Deserializes either V2 (`day_blocks`) or V1 (`blocks`) from the DB.
/// V1 is automatically promoted: all blocks are copied to all 7 days.
#[derive(Debug, Clone, Deserialize)]
#[serde(untagged)]
pub enum ScheduleConfigCompat {
    V2(ScheduleConfig),
    V1(OldScheduleConfig),
}

impl From<ScheduleConfigCompat> for ScheduleConfig {
    fn from(c: ScheduleConfigCompat) -> Self {
        match c {
            ScheduleConfigCompat::V2(cfg) => cfg,
            ScheduleConfigCompat::V1(old) => {
                let day_blocks = Weekday::all()
                    .into_iter()
                    .map(|d| (d, old.blocks.clone()))
                    .collect();
                ScheduleConfig { day_blocks }
            }
        }
    }
}

impl ScheduleConfig {
    /// Return the block whose time window contains `time`, if any.
    ///
    /// Handles blocks that span midnight (e.g. start 23:00, duration 180 min).
    pub fn find_block_at(&self, time: NaiveTime) -> Option<&ProgrammingBlock> {
    /// Blocks for a given day. Returns empty slice if the day has no blocks.
    pub fn blocks_for(&self, day: Weekday) -> &[ProgrammingBlock] {
        self.day_blocks.get(&day).map(|v| v.as_slice()).unwrap_or(&[])
    }

    /// The block whose window contains `time` on `day`, if any.
    pub fn find_block_at(&self, day: Weekday, time: NaiveTime) -> Option<&ProgrammingBlock> {
        let secs = time.num_seconds_from_midnight();
        self.blocks.iter().find(|block| {
        self.blocks_for(day).iter().find(|block| {
            let start = block.start_time.num_seconds_from_midnight();
            let end = start + block.duration_mins * 60;
            if end <= 86_400 {
                secs >= start && secs < end
            } else {
                // Block crosses midnight: active from `start` to `end % 86400` next day
                secs >= start || secs < (end % 86_400)
            }
        })
    }

    /// Return the start time of the next block that begins strictly after `time`,
    /// within the same calendar day.
    pub fn next_block_start_after(&self, time: NaiveTime) -> Option<NaiveTime> {
    /// The start time of the next block beginning strictly after `time` on `day`.
    pub fn next_block_start_after(&self, day: Weekday, time: NaiveTime) -> Option<NaiveTime> {
        let secs = time.num_seconds_from_midnight();
        self.blocks
        self.blocks_for(day)
            .iter()
            .map(|b| b.start_time.num_seconds_from_midnight())
            .filter(|&s| s > secs)
@@ -168,9 +211,15 @@ impl ScheduleConfig {
            .and_then(|s| NaiveTime::from_num_seconds_from_midnight_opt(s, 0))
    }

    /// The earliest block start time across all blocks (used for next-day rollover).
    /// Earliest block start time across ALL days (used by background scheduler).
    /// Returns `None` if every day is empty.
    pub fn earliest_block_start(&self) -> Option<NaiveTime> {
        self.blocks.iter().map(|b| b.start_time).min()
        self.day_blocks.values().flatten().map(|b| b.start_time).min()
    }

    /// Iterator over all blocks across all days (for block-ID lookups that are day-agnostic).
    pub fn all_blocks(&self) -> impl Iterator<Item = &ProgrammingBlock> {
        self.day_blocks.values().flatten()
    }
}
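The midnight-wrap test inside `find_block_at` can be exercised in isolation. This std-only sketch (the `in_window` helper is hypothetical) mirrors the seconds-since-midnight arithmetic above:

```rust
const DAY: u32 = 86_400;

/// True if `now` (secs since midnight) falls inside a block starting at
/// `start` secs and running `duration_mins` minutes, wrapping past midnight
/// exactly as the diff's `find_block_at` closure does.
fn in_window(now: u32, start: u32, duration_mins: u32) -> bool {
    let end = start + duration_mins * 60;
    if end <= DAY {
        now >= start && now < end
    } else {
        // Wraps: active from `start` until `end % DAY` on the next day.
        now >= start || now < end % DAY
    }
}

fn main() {
    // A block at 23:00 lasting 180 min is active 23:00–02:00.
    assert!(in_window(23 * 3600 + 1800, 23 * 3600, 180)); // 23:30
    assert!(in_window(3600, 23 * 3600, 180));             // 01:00
    assert!(!in_window(3 * 3600, 23 * 3600, 180));        // 03:00
    println!("midnight wrap checks pass");
}
```

Note the wrap branch matches times on the *next* day's early hours, which is why the day-keyed `blocks_for(day)` lookup still returns the block for the day it *starts* on.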
@@ -297,9 +346,13 @@ pub struct MediaItem {
    pub season_number: Option<u32>,
    /// For episodes: episode number within the season (1-based).
    pub episode_number: Option<u32>,
    /// Provider-served thumbnail image URL, populated if available.
    pub thumbnail_url: Option<String>,
    /// Provider-specific collection this item belongs to.
    pub collection_id: Option<String>,
}

/// A fully resolved 48-hour broadcast program for one channel.
/// A fully resolved 7-day broadcast program for one channel.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GeneratedSchedule {
    pub id: Uuid,
@@ -351,6 +404,18 @@ pub struct PlaybackRecord {
    pub generation: u32,
}

/// A point-in-time snapshot of a channel's `ScheduleConfig`.
/// Auto-created on every config save; users can pin with a label.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChannelConfigSnapshot {
    pub id: Uuid,
    pub channel_id: ChannelId,
    pub config: ScheduleConfig,
    pub version_num: i64,
    pub label: Option<String>,
    pub created_at: DateTime<Utc>,
}

impl PlaybackRecord {
    pub fn new(channel_id: ChannelId, item_id: MediaItemId, generation: u32) -> Self {
        Self {
@@ -362,3 +427,74 @@ impl PlaybackRecord {
        }
    }
}

#[cfg(test)]
mod schedule_config_tests {
    use super::*;
    use chrono::NaiveTime;

    fn t(h: u32, m: u32) -> NaiveTime {
        NaiveTime::from_hms_opt(h, m, 0).unwrap()
    }

    fn make_block(start: NaiveTime, duration_mins: u32) -> ProgrammingBlock {
        ProgrammingBlock::new_algorithmic(
            "test", start, duration_mins,
            Default::default(), FillStrategy::Random,
        )
    }

    fn cfg_with_monday_block(start: NaiveTime, dur: u32) -> ScheduleConfig {
        let mut cfg = ScheduleConfig::default();
        cfg.day_blocks.insert(Weekday::Monday, vec![make_block(start, dur)]);
        cfg
    }

    #[test]
    fn find_block_at_finds_active_block() {
        let cfg = cfg_with_monday_block(t(8, 0), 60);
        assert!(cfg.find_block_at(Weekday::Monday, t(8, 30)).is_some());
        assert!(cfg.find_block_at(Weekday::Monday, t(9, 0)).is_none());
    }

    #[test]
    fn find_block_at_wrong_day_returns_none() {
        let cfg = cfg_with_monday_block(t(8, 0), 60);
        assert!(cfg.find_block_at(Weekday::Tuesday, t(8, 30)).is_none());
    }

    #[test]
    fn v1_compat_copies_blocks_to_all_days() {
        let json = r#"{"blocks": []}"#;
        let compat: ScheduleConfigCompat = serde_json::from_str(json).unwrap();
        let cfg: ScheduleConfig = compat.into();
        assert_eq!(cfg.day_blocks.len(), 7);
    }

    #[test]
    fn v2_payload_with_unknown_blocks_key_fails() {
        let json = r#"{"blocks": [], "day_blocks": {}}"#;
        let result: Result<ScheduleConfigCompat, _> = serde_json::from_str(json);
        match result {
            Ok(ScheduleConfigCompat::V2(cfg)) => {
                let _ = cfg;
            }
            Ok(ScheduleConfigCompat::V1(_)) => { /* acceptable */ }
            Err(_) => { /* acceptable — ambiguous payload rejected */ }
        }
    }

    #[test]
    fn earliest_block_start_across_days() {
        let mut cfg = ScheduleConfig::default();
        cfg.day_blocks.insert(Weekday::Monday, vec![make_block(t(10, 0), 60)]);
        cfg.day_blocks.insert(Weekday::Friday, vec![make_block(t(7, 0), 60)]);
        assert_eq!(cfg.earliest_block_start(), Some(t(7, 0)));
    }

    #[test]
    fn empty_config_earliest_block_start_is_none() {
        let cfg = ScheduleConfig::default();
        assert!(cfg.earliest_block_start().is_none());
    }
}
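The V1→V2 promotion in `From<ScheduleConfigCompat>` copies one flat block list to all seven days. A std-only sketch of that promotion, with a stand-in `Weekday` enum and `String` blocks in place of the crate's types:

```rust
use std::collections::HashMap;

// Stand-in for the crate's `Weekday` value object (assumption for illustration).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Weekday { Mon, Tue, Wed, Thu, Fri, Sat, Sun }

impl Weekday {
    fn all() -> [Weekday; 7] {
        use Weekday::*;
        [Mon, Tue, Wed, Thu, Fri, Sat, Sun]
    }
}

/// V1 configs had a single flat block list; promotion clones it onto every day,
/// mirroring the `Weekday::all().into_iter().map(...).collect()` in the diff.
fn promote_v1(blocks: Vec<String>) -> HashMap<Weekday, Vec<String>> {
    Weekday::all()
        .into_iter()
        .map(|d| (d, blocks.clone()))
        .collect()
}

fn main() {
    let day_blocks = promote_v1(vec!["morning".into(), "prime".into()]);
    assert_eq!(day_blocks.len(), 7);
    assert_eq!(day_blocks[&Weekday::Sun], vec!["morning", "prime"]);
    println!("promoted to {} days", day_blocks.len());
}
```

Cloning per day is cheap here because promotion runs once, at deserialization of a legacy row; afterwards only the V2 shape is persisted.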
@@ -58,6 +58,8 @@ mod tests {
            series_name: None,
            season_number: None,
            episode_number: None,
            thumbnail_url: None,
            collection_id: None,
        },
        source_block_id: Uuid::new_v4(),
    }
@@ -6,6 +6,7 @@
pub mod entities;
pub mod errors;
pub mod iptv;
pub mod library;
pub mod ports;
pub mod repositories;
pub mod services;
@@ -19,5 +20,10 @@ pub use events::DomainEvent;
pub use ports::{Collection, IMediaProvider, IProviderRegistry, ProviderCapabilities, SeriesSummary, StreamingProtocol, StreamQuality};
pub use repositories::*;
pub use iptv::{generate_m3u, generate_xmltv};
pub use library::{
    ILibraryRepository, LibraryCollection, LibraryItem, LibrarySearchFilter,
    LibrarySyncAdapter, LibrarySyncLogEntry, LibrarySyncResult,
    SeasonSummary, ShowSummary,
};
pub use services::{ChannelService, ScheduleEngineService, UserService};
pub use value_objects::*;
k-tv-backend/domain/src/library.rs (new file, 187 lines)
@@ -0,0 +1,187 @@
//! Library domain types and ports.

use async_trait::async_trait;

use crate::{ContentType, DomainResult, IMediaProvider};

/// A media item stored in the local library cache.
#[derive(Debug, Clone)]
pub struct LibraryItem {
    pub id: String,
    pub provider_id: String,
    pub external_id: String,
    pub title: String,
    pub content_type: ContentType,
    pub duration_secs: u32,
    pub series_name: Option<String>,
    pub season_number: Option<u32>,
    pub episode_number: Option<u32>,
    pub year: Option<u16>,
    pub genres: Vec<String>,
    pub tags: Vec<String>,
    pub collection_id: Option<String>,
    pub collection_name: Option<String>,
    pub collection_type: Option<String>,
    pub thumbnail_url: Option<String>,
    pub synced_at: String,
}

/// A collection summary derived from synced library items.
#[derive(Debug, Clone)]
pub struct LibraryCollection {
    pub id: String,
    pub name: String,
    pub collection_type: Option<String>,
}

/// Result of a single provider sync run.
#[derive(Debug, Clone)]
pub struct LibrarySyncResult {
    pub provider_id: String,
    pub items_found: u32,
    pub duration_ms: u64,
    pub error: Option<String>,
}

/// Log entry from library_sync_log table.
#[derive(Debug, Clone)]
pub struct LibrarySyncLogEntry {
    pub id: i64,
    pub provider_id: String,
    pub started_at: String,
    pub finished_at: Option<String>,
    pub items_found: u32,
    pub status: String,
    pub error_msg: Option<String>,
}

/// Filter for searching the local library.
#[derive(Debug, Clone)]
pub struct LibrarySearchFilter {
    pub provider_id: Option<String>,
    pub content_type: Option<ContentType>,
    pub series_names: Vec<String>,
    pub collection_id: Option<String>,
    pub genres: Vec<String>,
    pub decade: Option<u16>,
    pub min_duration_secs: Option<u32>,
    pub max_duration_secs: Option<u32>,
    pub search_term: Option<String>,
    pub season_number: Option<u32>,
    pub offset: u32,
    pub limit: u32,
}

impl Default for LibrarySearchFilter {
    fn default() -> Self {
        Self {
            provider_id: None,
            content_type: None,
            series_names: vec![],
            collection_id: None,
            genres: vec![],
            decade: None,
            min_duration_secs: None,
            max_duration_secs: None,
            search_term: None,
            season_number: None,
            offset: 0,
            limit: 50,
        }
    }
}

/// Aggregated summary of a TV show derived from synced episodes.
#[derive(Debug, Clone)]
pub struct ShowSummary {
    pub series_name: String,
    pub episode_count: u32,
    pub season_count: u32,
    pub thumbnail_url: Option<String>,
    pub genres: Vec<String>,
}

/// Aggregated summary of one season of a TV show.
#[derive(Debug, Clone)]
pub struct SeasonSummary {
    pub season_number: u32,
    pub episode_count: u32,
    pub thumbnail_url: Option<String>,
}

/// Port: sync one provider's items into the library repo.
/// DB writes are handled entirely inside implementations — no pool in the trait.
#[async_trait]
pub trait LibrarySyncAdapter: Send + Sync {
    async fn sync_provider(
        &self,
        provider: &dyn IMediaProvider,
        provider_id: &str,
    ) -> LibrarySyncResult;
}

/// Port: read/write access to the persisted library.
#[async_trait]
pub trait ILibraryRepository: Send + Sync {
    async fn search(&self, filter: &LibrarySearchFilter) -> DomainResult<(Vec<LibraryItem>, u32)>;
    async fn get_by_id(&self, id: &str) -> DomainResult<Option<LibraryItem>>;
    async fn list_collections(&self, provider_id: Option<&str>) -> DomainResult<Vec<LibraryCollection>>;
    async fn list_series(&self, provider_id: Option<&str>) -> DomainResult<Vec<String>>;
    async fn list_genres(&self, content_type: Option<&ContentType>, provider_id: Option<&str>) -> DomainResult<Vec<String>>;
    async fn upsert_items(&self, provider_id: &str, items: Vec<LibraryItem>) -> DomainResult<()>;
    async fn clear_provider(&self, provider_id: &str) -> DomainResult<()>;
    async fn log_sync_start(&self, provider_id: &str) -> DomainResult<i64>;
    async fn log_sync_finish(&self, log_id: i64, result: &LibrarySyncResult) -> DomainResult<()>;
    async fn latest_sync_status(&self) -> DomainResult<Vec<LibrarySyncLogEntry>>;
    async fn is_sync_running(&self, provider_id: &str) -> DomainResult<bool>;
    async fn list_shows(
        &self,
        provider_id: Option<&str>,
        search_term: Option<&str>,
        genres: &[String],
    ) -> DomainResult<Vec<ShowSummary>>;
    async fn list_seasons(
        &self,
        series_name: &str,
        provider_id: Option<&str>,
    ) -> DomainResult<Vec<SeasonSummary>>;
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn library_item_id_uses_double_colon_separator() {
        let item = LibraryItem {
            id: "jellyfin::abc123".to_string(),
            provider_id: "jellyfin".to_string(),
            external_id: "abc123".to_string(),
            title: "Test Movie".to_string(),
            content_type: crate::ContentType::Movie,
            duration_secs: 7200,
            series_name: None,
            season_number: None,
            episode_number: None,
            year: Some(2020),
            genres: vec!["Action".to_string()],
            tags: vec![],
            collection_id: None,
            collection_name: None,
            collection_type: None,
            thumbnail_url: None,
            synced_at: "2026-03-19T00:00:00Z".to_string(),
        };
        assert!(item.id.contains("::"));
        assert_eq!(item.provider_id, "jellyfin");
    }

    #[test]
    fn library_search_filter_defaults_are_empty() {
        let f = LibrarySearchFilter::default();
        assert!(f.genres.is_empty());
        assert!(f.series_names.is_empty());
        assert_eq!(f.offset, 0);
        assert_eq!(f.limit, 50);
    }
}
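The test above pins the `provider::external_id` key convention for library items. A minimal std-only sketch of joining and splitting such composite ids (`make_id` and `split_id` are hypothetical helpers for illustration, not functions in this PR):

```rust
/// Build a library item id from a provider id and the provider's external id,
/// using the double-colon separator asserted by the library tests.
fn make_id(provider: &str, external: &str) -> String {
    format!("{provider}::{external}")
}

/// Recover the (provider, external) halves; splits on the FIRST "::" so an
/// external id containing "::" would survive in the second half.
fn split_id(id: &str) -> Option<(&str, &str)> {
    id.split_once("::")
}

fn main() {
    let id = make_id("jellyfin", "abc123");
    assert_eq!(id, "jellyfin::abc123");
    assert_eq!(split_id(&id), Some(("jellyfin", "abc123")));
    println!("{id}");
}
```

A single-character separator would be ambiguous against provider ids that contain it; the double colon keeps the split unambiguous as long as provider ids never contain `::`.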
@@ -3,14 +3,26 @@
//! These traits define the interface for data persistence.
//! Implementations live in the infra layer.

use std::collections::HashMap;

use async_trait::async_trait;
use chrono::DateTime;
use chrono::Utc;
use uuid::Uuid;

use crate::entities::{Channel, GeneratedSchedule, PlaybackRecord, User};
use crate::entities::{Channel, ChannelConfigSnapshot, GeneratedSchedule, PlaybackRecord, ScheduleConfig, User};
use crate::errors::DomainResult;
use crate::value_objects::{ChannelId, UserId};
use crate::value_objects::{BlockId, ChannelId, MediaItemId, UserId};

/// An in-app activity event stored in the database for the admin log view.
#[derive(Debug, Clone)]
pub struct ActivityEvent {
    pub id: Uuid,
    pub timestamp: DateTime<Utc>,
    pub event_type: String,
    pub detail: String,
    pub channel_id: Option<Uuid>,
}

/// Repository port for User persistence
#[async_trait]
@@ -29,6 +41,26 @@ pub trait UserRepository: Send + Sync {

    /// Delete a user by their ID
    async fn delete(&self, id: Uuid) -> DomainResult<()>;

    /// Count total number of users (used for first-user admin promotion)
    async fn count_users(&self) -> DomainResult<u64>;
}

#[derive(Debug, Clone)]
pub struct ProviderConfigRow {
    pub id: String,
    pub provider_type: String,
    pub config_json: String,
    pub enabled: bool,
    pub updated_at: String,
}

#[async_trait]
pub trait ProviderConfigRepository: Send + Sync {
    async fn get_all(&self) -> DomainResult<Vec<ProviderConfigRow>>;
    async fn get_by_id(&self, id: &str) -> DomainResult<Option<ProviderConfigRow>>;
    async fn upsert(&self, row: &ProviderConfigRow) -> DomainResult<()>;
    async fn delete(&self, id: &str) -> DomainResult<()>;
}

/// Repository port for `Channel` persistence.
@@ -41,6 +73,33 @@ pub trait ChannelRepository: Send + Sync {
    /// Insert or update a channel.
    async fn save(&self, channel: &Channel) -> DomainResult<()>;
    async fn delete(&self, id: ChannelId) -> DomainResult<()>;

    /// Snapshot the current config before saving a new one.
    /// version_num is computed by the infra layer as MAX(version_num)+1 inside a transaction.
    async fn save_config_snapshot(
        &self,
        channel_id: ChannelId,
        config: &ScheduleConfig,
        label: Option<String>,
    ) -> DomainResult<ChannelConfigSnapshot>;

    async fn list_config_snapshots(
        &self,
        channel_id: ChannelId,
    ) -> DomainResult<Vec<ChannelConfigSnapshot>>;

    async fn get_config_snapshot(
        &self,
        channel_id: ChannelId,
        snapshot_id: Uuid,
    ) -> DomainResult<Option<ChannelConfigSnapshot>>;

    async fn patch_config_snapshot_label(
        &self,
        channel_id: ChannelId,
        snapshot_id: Uuid,
        label: Option<String>,
    ) -> DomainResult<Option<ChannelConfigSnapshot>>;
}

/// Repository port for `GeneratedSchedule` and `PlaybackRecord` persistence.
@@ -70,4 +129,65 @@ pub trait ScheduleRepository: Send + Sync {
    ) -> DomainResult<Vec<PlaybackRecord>>;

    async fn save_playback_record(&self, record: &PlaybackRecord) -> DomainResult<()>;

    /// Return the most recent slot per block_id across ALL schedules for a channel.
    /// Resilient to any single generation having empty slots for a block.
    async fn find_last_slot_per_block(
        &self,
        channel_id: ChannelId,
    ) -> DomainResult<HashMap<BlockId, MediaItemId>>;

    /// List all generated schedule headers for a channel, newest first.
    async fn list_schedule_history(
        &self,
        channel_id: ChannelId,
    ) -> DomainResult<Vec<GeneratedSchedule>>;

    /// Fetch a specific schedule with its slots, verifying channel ownership.
    async fn get_schedule_by_id(
        &self,
        channel_id: ChannelId,
        schedule_id: Uuid,
    ) -> DomainResult<Option<GeneratedSchedule>>;

    /// Delete all schedules with generation > target_generation for this channel.
    /// Also deletes matching playback_records (no DB cascade between those tables).
    /// scheduled_slots cascade via FK from generated_schedules.
    async fn delete_schedules_after(
        &self,
        channel_id: ChannelId,
        target_generation: u32,
    ) -> DomainResult<()>;
}

/// Repository port for activity log persistence.
#[async_trait]
pub trait ActivityLogRepository: Send + Sync {
    async fn log(
        &self,
        event_type: &str,
        detail: &str,
        channel_id: Option<Uuid>,
    ) -> DomainResult<()>;
    async fn recent(&self, limit: u32) -> DomainResult<Vec<ActivityEvent>>;
}

/// Repository port for transcode settings persistence.
#[async_trait]
pub trait TranscodeSettingsRepository: Send + Sync {
    /// Load the persisted cleanup TTL. Returns None if no row exists yet.
    async fn load_cleanup_ttl(&self) -> DomainResult<Option<u32>>;
    /// Persist the cleanup TTL (upsert — always row id=1).
    async fn save_cleanup_ttl(&self, hours: u32) -> DomainResult<()>;
}

/// Repository port for general admin settings (app_settings table).
#[async_trait]
pub trait IAppSettingsRepository: Send + Sync {
    /// Get a setting value by key. Returns None if not set.
    async fn get(&self, key: &str) -> DomainResult<Option<String>>;
    /// Set a setting value (upsert).
    async fn set(&self, key: &str, value: &str) -> DomainResult<()>;
    /// Get all settings as (key, value) pairs.
    async fn get_all(&self) -> DomainResult<Vec<(String, String)>>;
}
@@ -1,6 +1,8 @@
use std::sync::Arc;

use crate::entities::Channel;
use uuid::Uuid;

use crate::entities::{Channel, ChannelConfigSnapshot, ScheduleConfig};
use crate::errors::{DomainError, DomainResult};
use crate::repositories::ChannelRepository;
use crate::value_objects::{ChannelId, UserId};
@@ -42,10 +44,75 @@ impl ChannelService {
    }

    pub async fn update(&self, channel: Channel) -> DomainResult<Channel> {
        // Auto-snapshot the existing config before overwriting
        if let Some(existing) = self.channel_repo.find_by_id(channel.id).await? {
            self.channel_repo
                .save_config_snapshot(channel.id, &existing.schedule_config, None)
                .await?;
        }
        self.channel_repo.save(&channel).await?;
        Ok(channel)
    }

    pub async fn list_config_snapshots(
        &self,
        channel_id: ChannelId,
    ) -> DomainResult<Vec<ChannelConfigSnapshot>> {
        self.channel_repo.list_config_snapshots(channel_id).await
    }

    pub async fn get_config_snapshot(
        &self,
        channel_id: ChannelId,
        snapshot_id: Uuid,
    ) -> DomainResult<Option<ChannelConfigSnapshot>> {
        self.channel_repo.get_config_snapshot(channel_id, snapshot_id).await
    }

    pub async fn patch_config_snapshot_label(
        &self,
        channel_id: ChannelId,
        snapshot_id: Uuid,
        label: Option<String>,
    ) -> DomainResult<Option<ChannelConfigSnapshot>> {
        self.channel_repo.patch_config_snapshot_label(channel_id, snapshot_id, label).await
    }

    /// Restore a snapshot: auto-snapshot current config, then apply the snapshot's config.
    pub async fn restore_config_snapshot(
        &self,
        channel_id: ChannelId,
        snapshot_id: Uuid,
    ) -> DomainResult<Channel> {
        let snapshot = self
            .channel_repo
            .get_config_snapshot(channel_id, snapshot_id)
            .await?
            .ok_or(DomainError::ChannelNotFound(channel_id))?;
        let mut channel = self
            .channel_repo
            .find_by_id(channel_id)
            .await?
            .ok_or(DomainError::ChannelNotFound(channel_id))?;
        // Snapshot current config before overwriting
        self.channel_repo
            .save_config_snapshot(channel_id, &channel.schedule_config, None)
            .await?;
        channel.schedule_config = snapshot.config;
        channel.updated_at = chrono::Utc::now();
        self.channel_repo.save(&channel).await?;
        Ok(channel)
    }

    pub async fn save_config_snapshot(
        &self,
        channel_id: ChannelId,
        config: &ScheduleConfig,
        label: Option<String>,
    ) -> DomainResult<ChannelConfigSnapshot> {
        self.channel_repo.save_config_snapshot(channel_id, config, label).await
    }

    /// Delete a channel, enforcing that `requester_id` is the owner.
    pub async fn delete(&self, id: ChannelId, requester_id: UserId) -> DomainResult<()> {
        let channel = self.find_by_id(id).await?;
@@ -1,6 +1,8 @@
use std::collections::HashSet;

use rand::rngs::StdRng;
use rand::seq::SliceRandom;
use rand::SeedableRng;

use crate::entities::MediaItem;
use crate::value_objects::{FillStrategy, MediaItemId};
@@ -20,7 +22,7 @@ pub(super) fn fill_block<'a>(
    }
    FillStrategy::Random => {
        let mut indices: Vec<usize> = (0..pool.len()).collect();
        indices.shuffle(&mut rand::thread_rng());
        indices.shuffle(&mut StdRng::from_entropy());
        let mut remaining = target_secs;
        let mut result = Vec::new();
        for i in indices {
@@ -127,12 +129,22 @@ pub(super) fn fill_sequential<'a>(
    };

    // Greedily fill the block's time budget in episode order.
    // Stop at the first episode that doesn't fit — skipping would break ordering.
    let mut remaining = target_secs;
    let mut result = Vec::new();
    for item in ordered {
    for item in &ordered {
        if item.duration_secs <= remaining {
            remaining -= item.duration_secs;
            result.push(item);
            result.push(*item);
        } else {
            break;
        }
    }
    // Edge case: if the very first episode is longer than the entire block,
    // still include it — the slot builder clips it to block end via .min(end).
    if result.is_empty() {
        if let Some(&first) = ordered.first() {
            result.push(first);
        }
    }
    result
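The greedy loop in `fill_sequential` (fit episodes in order, stop at the first misfit, keep an oversized first episode anyway) reduces to this std-only sketch over duration lists, with indices standing in for `MediaItem` references:

```rust
/// Greedy sequential fill: take episodes in order while they fit the time
/// budget; stop at the first misfit (skipping would break episode ordering);
/// if nothing fits, keep the first episode anyway — the slot builder clips
/// it to the block end.
fn fill_sequential(durations: &[u32], target_secs: u32) -> Vec<usize> {
    let mut remaining = target_secs;
    let mut result = Vec::new();
    for (i, &d) in durations.iter().enumerate() {
        if d <= remaining {
            remaining -= d;
            result.push(i);
        } else {
            break; // first misfit ends the block
        }
    }
    if result.is_empty() && !durations.is_empty() {
        result.push(0); // oversized first episode still airs, clipped
    }
    result
}

fn main() {
    // Two 20-min episodes fit a 50-min budget; the third does not.
    assert_eq!(fill_sequential(&[1200, 1200, 1200], 3000), vec![0, 1]);
    // A 2h movie in a 1h block is kept and clipped downstream.
    assert_eq!(fill_sequential(&[7200], 3600), vec![0]);
    assert_eq!(fill_sequential(&[], 3600), Vec::<usize>::new());
    println!("greedy fill checks pass");
}
```

Breaking at the first misfit (rather than skipping it, as the Random strategy effectively can) is what preserves episode continuity across generations.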
@@ -1,7 +1,6 @@
|
||||
use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
|
||||
use chrono::{DateTime, Duration, TimeZone, Utc};
|
||||
use chrono::{DateTime, Datelike, Duration, TimeZone, Utc};
|
||||
use chrono_tz::Tz;
|
||||
use uuid::Uuid;
|
||||
|
||||
@@ -21,7 +20,7 @@ mod recycle;
|
||||
|
||||
/// Core scheduling engine.
|
||||
///
|
||||
/// Generates 48-hour broadcast schedules by walking through a channel's
|
||||
/// Generates 7-day broadcast schedules by walking through a channel's
|
||||
/// `ScheduleConfig` day by day, resolving each `ProgrammingBlock` into concrete
|
||||
/// `ScheduledSlot`s via the `IMediaProvider`, and applying the `RecyclePolicy`
|
||||
/// to avoid replaying recently aired items.
|
||||
@@ -48,12 +47,12 @@ impl ScheduleEngineService {
|
||||
// Public API
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
/// Generate and persist a 48-hour schedule for `channel_id` starting at `from`.
|
||||
/// Generate and persist a 7-day schedule for `channel_id` starting at `from`.
|
||||
///
|
||||
/// The algorithm:
|
||||
/// 1. Walk each calendar day in the 48-hour window.
|
||||
/// 1. Walk each calendar day in the 7-day window.
|
||||
/// 2. For each `ProgrammingBlock`, compute its UTC wall-clock interval for that day.
|
||||
/// 3. Clip the interval to `[from, from + 48h)`.
|
||||
/// 3. Clip the interval to `[from, from + 7d)`.
|
||||
/// 4. Resolve the block content via the media provider, applying the recycle policy.
|
||||
/// 5. For `Sequential` blocks, resume from where the previous generation left off
|
||||
/// (series continuity — see `fill::fill_sequential`).
|
||||
@@ -91,21 +90,18 @@ impl ScheduleEngineService {
             .map(|s| s.generation + 1)
            .unwrap_or(1);

-        // Build the initial per-block continuity map from the previous generation's
-        // last slot per block. The map is updated as each block occurrence is resolved
-        // within this generation so that the second day of a 48h schedule continues
-        // from where the first day ended.
-        let mut block_continuity: HashMap<BlockId, MediaItemId> = latest_schedule
-            .iter()
-            .flat_map(|s| &s.slots)
-            .fold(HashMap::new(), |mut map, slot| {
-                // keep only the *last* slot per block (slots are sorted ascending)
-                map.insert(slot.source_block_id, slot.item.id.clone());
-                map
-            });
+        // Build the initial per-block continuity map from the most recent slot per
+        // block across ALL schedules. This is resilient to any single generation
+        // having empty slots for a block (e.g. provider returned nothing transiently).
+        // The map is updated as each block occurrence is resolved within this
+        // generation so each later day of a 7-day schedule continues from here.
+        let mut block_continuity = self
+            .schedule_repo
+            .find_last_slot_per_block(channel_id)
+            .await?;

         let valid_from = from;
-        let valid_until = from + Duration::hours(48);
+        let valid_until = from + Duration::days(7);

         let start_date = from.with_timezone(&tz).date_naive();
         let end_date = valid_until.with_timezone(&tz).date_naive();
@@ -114,7 +110,8 @@ impl ScheduleEngineService {
         let mut current_date = start_date;

         while current_date <= end_date {
-            for block in &channel.schedule_config.blocks {
+            let weekday = crate::value_objects::Weekday::from(current_date.weekday());
+            for block in channel.schedule_config.blocks_for(weekday) {
                 let naive_start = current_date.and_time(block.start_time);

                 // `earliest()` handles DST gaps — if the local time doesn't exist
@@ -127,7 +124,7 @@ impl ScheduleEngineService {
                 let block_end_utc =
                     block_start_utc + Duration::minutes(block.duration_mins as i64);

-                // Clip to the 48-hour window.
+                // Clip to the 7-day window.
                 let slot_start = block_start_utc.max(valid_from);
                 let slot_end = block_end_utc.min(valid_until);
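The clipping step above can be sketched in isolation. This is an illustrative example only, using plain `i64` minute offsets in place of the engine's `DateTime<Utc>` values; the `clip` helper is hypothetical, not part of the codebase:

```rust
/// Clip a block's interval to the generation window.
/// Returns None when the block falls entirely outside the window.
fn clip(block_start: i64, block_end: i64, valid_from: i64, valid_until: i64) -> Option<(i64, i64)> {
    let slot_start = block_start.max(valid_from);
    let slot_end = block_end.min(valid_until);
    if slot_start < slot_end {
        Some((slot_start, slot_end))
    } else {
        None // zero-length or out-of-window slots are dropped
    }
}

fn main() {
    // Block straddles the window start: only the tail is kept.
    assert_eq!(clip(-30, 60, 0, 600), Some((0, 60)));
    // Block starts after the window ends: dropped entirely.
    assert_eq!(clip(700, 760, 0, 600), None);
    println!("ok");
}
```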
@@ -228,12 +225,38 @@ impl ScheduleEngineService {
         self.provider_registry.get_stream_url(item_id, quality).await
     }

+    /// List all generated schedule headers for a channel, newest first.
+    pub async fn list_schedule_history(
+        &self,
+        channel_id: ChannelId,
+    ) -> DomainResult<Vec<GeneratedSchedule>> {
+        self.schedule_repo.list_schedule_history(channel_id).await
+    }
+
+    /// Fetch a specific schedule with its slots.
+    pub async fn get_schedule_by_id(
+        &self,
+        channel_id: ChannelId,
+        schedule_id: uuid::Uuid,
+    ) -> DomainResult<Option<GeneratedSchedule>> {
+        self.schedule_repo.get_schedule_by_id(channel_id, schedule_id).await
+    }
+
+    /// Delete all schedules with generation > target_generation for this channel.
+    pub async fn delete_schedules_after(
+        &self,
+        channel_id: ChannelId,
+        target_generation: u32,
+    ) -> DomainResult<()> {
+        self.schedule_repo.delete_schedules_after(channel_id, target_generation).await
+    }
+
     /// Return all slots that overlap the given time window — the EPG data.
-    pub fn get_epg<'a>(
-        schedule: &'a GeneratedSchedule,
+    pub fn get_epg(
+        schedule: &GeneratedSchedule,
         from: DateTime<Utc>,
         until: DateTime<Utc>,
-    ) -> Vec<&'a ScheduledSlot> {
+    ) -> Vec<&ScheduledSlot> {
         schedule
             .slots
             .iter()
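The overlap test that `get_epg` filters on can be illustrated on its own. A sketch only, assuming half-open `[start, end)` slot intervals and using `i64` stand-ins for the real `DateTime<Utc>` values:

```rust
/// A slot overlaps [from, until) iff it starts before the window ends
/// and ends after the window starts.
fn overlaps(slot_start: i64, slot_end: i64, from: i64, until: i64) -> bool {
    slot_start < until && slot_end > from
}

fn main() {
    assert!(overlaps(10, 20, 15, 30));  // partial overlap is included
    assert!(!overlaps(10, 20, 20, 30)); // merely touching edges is not
    println!("ok");
}
```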
@@ -245,6 +268,7 @@ impl ScheduleEngineService {
     // Block resolution
     // -------------------------------------------------------------------------

+    #[allow(clippy::too_many_arguments)]
     async fn resolve_block(
         &self,
         block: &ProgrammingBlock,
@@ -310,6 +334,7 @@ impl ScheduleEngineService {
     ///
     /// `last_item_id` is the ID of the last item scheduled in this block in the
     /// previous generation. Used only by `Sequential` for series continuity.
+    #[allow(clippy::too_many_arguments)]
     async fn resolve_algorithmic(
         &self,
         provider_id: &str,
@@ -31,7 +31,10 @@ impl UserService {
         }

         let email = Email::try_from(email)?;
-        let user = User::new(subject, email);
+        let mut user = User::new(subject, email);
+        if self.user_repository.count_users().await? == 0 {
+            user.is_admin = true;
+        }
         self.user_repository.save(&user).await?;
         Ok(user)
     }
@@ -53,7 +56,10 @@ impl UserService {
         password_hash: &str,
     ) -> DomainResult<User> {
         let email = Email::try_from(email)?;
-        let mut user = User::new_local(email, password_hash);
+        let mut user = User::new_local(email, password_hash);
+        if self.user_repository.count_users().await? == 0 {
+            user.is_admin = true;
+        }
         self.user_repository.save(&user).await?;
         Ok(user)
     }
@@ -138,3 +138,64 @@ impl Default for RecyclePolicy {
         }
     }
 }
+
+/// Day of week, used as key in weekly schedule configs.
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+pub enum Weekday {
+    Monday,
+    Tuesday,
+    Wednesday,
+    Thursday,
+    Friday,
+    Saturday,
+    Sunday,
+}
+
+impl From<chrono::Weekday> for Weekday {
+    fn from(w: chrono::Weekday) -> Self {
+        match w {
+            chrono::Weekday::Mon => Weekday::Monday,
+            chrono::Weekday::Tue => Weekday::Tuesday,
+            chrono::Weekday::Wed => Weekday::Wednesday,
+            chrono::Weekday::Thu => Weekday::Thursday,
+            chrono::Weekday::Fri => Weekday::Friday,
+            chrono::Weekday::Sat => Weekday::Saturday,
+            chrono::Weekday::Sun => Weekday::Sunday,
+        }
+    }
+}
+
+impl Weekday {
+    pub fn all() -> [Weekday; 7] {
+        // ISO week order: Monday = index 0, Sunday = index 6.
+        // The schedule engine depends on this order when iterating days.
+        [
+            Weekday::Monday, Weekday::Tuesday, Weekday::Wednesday,
+            Weekday::Thursday, Weekday::Friday, Weekday::Saturday, Weekday::Sunday,
+        ]
+    }
+}
+
+#[cfg(test)]
+mod weekday_tests {
+    use super::*;
+
+    #[test]
+    fn from_chrono_weekday_all_variants() {
+        assert_eq!(Weekday::from(chrono::Weekday::Mon), Weekday::Monday);
+        assert_eq!(Weekday::from(chrono::Weekday::Tue), Weekday::Tuesday);
+        assert_eq!(Weekday::from(chrono::Weekday::Wed), Weekday::Wednesday);
+        assert_eq!(Weekday::from(chrono::Weekday::Thu), Weekday::Thursday);
+        assert_eq!(Weekday::from(chrono::Weekday::Fri), Weekday::Friday);
+        assert_eq!(Weekday::from(chrono::Weekday::Sat), Weekday::Saturday);
+        assert_eq!(Weekday::from(chrono::Weekday::Sun), Weekday::Sunday);
+    }
+
+    #[test]
+    fn all_returns_monday_first_sunday_last() {
+        let days = Weekday::all();
+        assert_eq!(days[0], Weekday::Monday);
+        assert_eq!(days[6], Weekday::Sunday);
+    }
+}
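A 7-day generation starting mid-week walks the ISO-ordered weekdays with wrap-around; each day the engine looks up that weekday's blocks. A standalone sketch of the wrap-around walk, with string day names standing in for the `Weekday` enum and `blocks_for` lookup:

```rust
// ISO order, matching Weekday::all(): Monday = index 0, Sunday = index 6.
const DAYS: [&str; 7] = [
    "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday",
];

/// Walk the 7-day window starting at `start_idx`, wrapping past Sunday.
fn week_from(start_idx: usize) -> Vec<&'static str> {
    (0..7).map(|offset| DAYS[(start_idx + offset) % 7]).collect()
}

fn main() {
    let days = week_from(2); // a generation starting on a Wednesday
    assert_eq!(days[0], "wednesday");
    assert_eq!(days[6], "tuesday"); // wraps around past Sunday
    println!("ok");
}
```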
k-tv-backend/infra/src/activity_log_repository/mod.rs (new file, 5 lines)
@@ -0,0 +1,5 @@
+#[cfg(feature = "sqlite")]
+mod sqlite;
+
+#[cfg(feature = "sqlite")]
+pub use sqlite::SqliteActivityLogRepository;
k-tv-backend/infra/src/activity_log_repository/sqlite.rs (new file, 71 lines)
@@ -0,0 +1,71 @@
+use async_trait::async_trait;
+use chrono::Utc;
+use uuid::Uuid;
+
+use domain::{ActivityEvent, ActivityLogRepository, DomainError, DomainResult};
+
+pub struct SqliteActivityLogRepository {
+    pool: sqlx::SqlitePool,
+}
+
+impl SqliteActivityLogRepository {
+    pub fn new(pool: sqlx::SqlitePool) -> Self {
+        Self { pool }
+    }
+}
+
+#[async_trait]
+impl ActivityLogRepository for SqliteActivityLogRepository {
+    async fn log(
+        &self,
+        event_type: &str,
+        detail: &str,
+        channel_id: Option<Uuid>,
+    ) -> DomainResult<()> {
+        let id = Uuid::new_v4().to_string();
+        let timestamp = Utc::now().to_rfc3339();
+        let channel_id_str = channel_id.map(|id| id.to_string());
+
+        sqlx::query(
+            "INSERT INTO activity_log (id, timestamp, event_type, detail, channel_id) VALUES (?, ?, ?, ?, ?)",
+        )
+        .bind(&id)
+        .bind(&timestamp)
+        .bind(event_type)
+        .bind(detail)
+        .bind(&channel_id_str)
+        .execute(&self.pool)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        Ok(())
+    }
+
+    async fn recent(&self, limit: u32) -> DomainResult<Vec<ActivityEvent>> {
+        let rows: Vec<(String, String, String, String, Option<String>)> = sqlx::query_as(
+            "SELECT id, timestamp, event_type, detail, channel_id FROM activity_log ORDER BY timestamp DESC LIMIT ?",
+        )
+        .bind(limit)
+        .fetch_all(&self.pool)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        let events = rows
+            .into_iter()
+            .filter_map(|(id, timestamp, event_type, detail, channel_id)| {
+                let id = Uuid::parse_str(&id).ok()?;
+                let timestamp = timestamp.parse().ok()?;
+                let channel_id = channel_id.and_then(|s| Uuid::parse_str(&s).ok());
+                Some(ActivityEvent {
+                    id,
+                    timestamp,
+                    event_type,
+                    detail,
+                    channel_id,
+                })
+            })
+            .collect();
+
+        Ok(events)
+    }
+}
k-tv-backend/infra/src/app_settings_repository.rs (new file, 83 lines)
@@ -0,0 +1,83 @@
+//! SQLite implementation of IAppSettingsRepository.
+
+use async_trait::async_trait;
+use sqlx::SqlitePool;
+use domain::{DomainError, DomainResult, IAppSettingsRepository};
+
+pub struct SqliteAppSettingsRepository {
+    pool: SqlitePool,
+}
+
+impl SqliteAppSettingsRepository {
+    pub fn new(pool: SqlitePool) -> Self {
+        Self { pool }
+    }
+}
+
+#[async_trait]
+impl IAppSettingsRepository for SqliteAppSettingsRepository {
+    async fn get(&self, key: &str) -> DomainResult<Option<String>> {
+        sqlx::query_scalar::<_, String>("SELECT value FROM app_settings WHERE key = ?")
+            .bind(key)
+            .fetch_optional(&self.pool)
+            .await
+            .map_err(|e| DomainError::InfrastructureError(e.to_string()))
+    }
+
+    async fn set(&self, key: &str, value: &str) -> DomainResult<()> {
+        sqlx::query("INSERT OR REPLACE INTO app_settings (key, value) VALUES (?, ?)")
+            .bind(key)
+            .bind(value)
+            .execute(&self.pool)
+            .await
+            .map(|_| ())
+            .map_err(|e| DomainError::InfrastructureError(e.to_string()))
+    }
+
+    async fn get_all(&self) -> DomainResult<Vec<(String, String)>> {
+        sqlx::query_as::<_, (String, String)>("SELECT key, value FROM app_settings ORDER BY key")
+            .fetch_all(&self.pool)
+            .await
+            .map_err(|e| DomainError::InfrastructureError(e.to_string()))
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use sqlx::SqlitePool;
+    use domain::IAppSettingsRepository;
+
+    async fn setup() -> SqlitePool {
+        let pool = SqlitePool::connect(":memory:").await.unwrap();
+        sqlx::query(
+            "CREATE TABLE app_settings (key TEXT PRIMARY KEY, value TEXT NOT NULL)"
+        ).execute(&pool).await.unwrap();
+        sqlx::query("INSERT INTO app_settings VALUES ('library_sync_interval_hours', '6')")
+            .execute(&pool).await.unwrap();
+        pool
+    }
+
+    #[tokio::test]
+    async fn get_returns_seeded_value() {
+        let repo = SqliteAppSettingsRepository::new(setup().await);
+        let val = repo.get("library_sync_interval_hours").await.unwrap();
+        assert_eq!(val, Some("6".to_string()));
+    }
+
+    #[tokio::test]
+    async fn set_then_get() {
+        let repo = SqliteAppSettingsRepository::new(setup().await);
+        repo.set("library_sync_interval_hours", "12").await.unwrap();
+        let val = repo.get("library_sync_interval_hours").await.unwrap();
+        assert_eq!(val, Some("12".to_string()));
+    }
+
+    #[tokio::test]
+    async fn get_all_returns_all_keys() {
+        let repo = SqliteAppSettingsRepository::new(setup().await);
+        let all = repo.get_all().await.unwrap();
+        assert!(!all.is_empty());
+        assert!(all.iter().any(|(k, _)| k == "library_sync_interval_hours"));
+    }
+}
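The `set` method relies on SQLite's `INSERT OR REPLACE` to upsert: a second write to the same key overwrites the first. The same contract can be sketched with an in-memory map (a hypothetical `MemSettings` stand-in for the repository, not part of the codebase; `BTreeMap` also mirrors the `ORDER BY key` ordering of `get_all`):

```rust
use std::collections::BTreeMap;

/// In-memory stand-in for IAppSettingsRepository: `set` upserts like
/// INSERT OR REPLACE, and iteration order is sorted by key.
struct MemSettings(BTreeMap<String, String>);

impl MemSettings {
    fn get(&self, key: &str) -> Option<&String> {
        self.0.get(key)
    }
    fn set(&mut self, key: &str, value: &str) {
        // BTreeMap::insert replaces on conflict, like INSERT OR REPLACE.
        self.0.insert(key.to_string(), value.to_string());
    }
}

fn main() {
    let mut s = MemSettings(BTreeMap::new());
    s.set("library_sync_interval_hours", "6");
    s.set("library_sync_interval_hours", "12"); // upsert overwrites
    assert_eq!(s.get("library_sync_interval_hours").map(String::as_str), Some("12"));
    println!("ok");
}
```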
@@ -20,8 +20,10 @@ pub struct JwtConfig {
     pub issuer: Option<String>,
     /// Expected audience (for validation)
     pub audience: Option<String>,
-    /// Token expiry in hours (default: 24)
+    /// Access token expiry in hours (default: 24)
     pub expiry_hours: u64,
+    /// Refresh token expiry in days (default: 30)
+    pub refresh_expiry_days: u64,
 }

 impl JwtConfig {
@@ -33,6 +35,7 @@ impl JwtConfig {
         issuer: Option<String>,
         audience: Option<String>,
         expiry_hours: Option<u64>,
+        refresh_expiry_days: Option<u64>,
         is_production: bool,
     ) -> Result<Self, JwtError> {
         // Validate secret strength in production
@@ -48,6 +51,7 @@ impl JwtConfig {
             issuer,
             audience,
             expiry_hours: expiry_hours.unwrap_or(24),
+            refresh_expiry_days: refresh_expiry_days.unwrap_or(30),
         })
     }

@@ -58,10 +62,15 @@ impl JwtConfig {
             issuer: None,
             audience: None,
             expiry_hours: 24,
+            refresh_expiry_days: 30,
         }
     }
 }

+fn default_token_type() -> String {
+    "access".to_string()
+}
+
 /// JWT claims structure
 #[derive(Debug, Serialize, Deserialize, Clone)]
 pub struct JwtClaims {
@@ -79,6 +88,9 @@ pub struct JwtClaims {
     /// Audience
     #[serde(skip_serializing_if = "Option::is_none")]
     pub aud: Option<String>,
+    /// Token type: "access" or "refresh". Defaults to "access" for backward compat.
+    #[serde(default = "default_token_type")]
+    pub token_type: String,
 }

 /// JWT-related errors
@@ -141,7 +153,7 @@ impl JwtValidator {
         }
     }

-    /// Create a JWT token for the given user
+    /// Create an access JWT token for the given user
     pub fn create_token(&self, user: &User) -> Result<String, JwtError> {
         let now = SystemTime::now()
             .duration_since(UNIX_EPOCH)
@@ -157,6 +169,30 @@ impl JwtValidator {
             iat: now,
             iss: self.config.issuer.clone(),
             aud: self.config.audience.clone(),
+            token_type: "access".to_string(),
         };

         let header = Header::new(Algorithm::HS256);
         encode(&header, &claims, &self.encoding_key).map_err(JwtError::CreationFailed)
     }

+    /// Create a refresh JWT token for the given user (longer-lived)
+    pub fn create_refresh_token(&self, user: &User) -> Result<String, JwtError> {
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .expect("Time went backwards")
+            .as_secs() as usize;
+
+        let expiry = now + (self.config.refresh_expiry_days as usize * 86400);
+
+        let claims = JwtClaims {
+            sub: user.id.to_string(),
+            email: user.email.as_ref().to_string(),
+            exp: expiry,
+            iat: now,
+            iss: self.config.issuer.clone(),
+            aud: self.config.audience.clone(),
+            token_type: "refresh".to_string(),
+        };
+
+        let header = Header::new(Algorithm::HS256);
@@ -176,14 +212,28 @@ impl JwtValidator {
         Ok(token_data.claims)
     }

+    /// Validate an access token — rejects refresh tokens
+    pub fn validate_access_token(&self, token: &str) -> Result<JwtClaims, JwtError> {
+        let claims = self.validate_token(token)?;
+        if claims.token_type != "access" {
+            return Err(JwtError::ValidationFailed("Not an access token".to_string()));
+        }
+        Ok(claims)
+    }
+
+    /// Validate a refresh token — rejects access tokens
+    pub fn validate_refresh_token(&self, token: &str) -> Result<JwtClaims, JwtError> {
+        let claims = self.validate_token(token)?;
+        if claims.token_type != "refresh" {
+            return Err(JwtError::ValidationFailed("Not a refresh token".to_string()));
+        }
+        Ok(claims)
+    }
+
     /// Get the user ID (subject) from a token without full validation
     /// Useful for logging/debugging, but should not be trusted for auth
     pub fn decode_unverified(&self, token: &str) -> Result<JwtClaims, JwtError> {
-        let mut validation = Validation::new(Algorithm::HS256);
-        validation.insecure_disable_signature_validation();
-        validation.validate_exp = false;
-
-        let token_data = decode::<JwtClaims>(token, &self.decoding_key, &validation)
+        let token_data = jsonwebtoken::dangerous::insecure_decode::<JwtClaims>(token)
             .map_err(|_| JwtError::InvalidFormat)?;

         Ok(token_data.claims)
@@ -232,6 +282,7 @@ mod tests {
             None,
             None,
             None,
+            None,
             true, // Production mode
         );

@@ -245,6 +296,7 @@ mod tests {
             None,
             None,
             None,
+            None,
             false, // Development mode
         );
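Two pieces of the access/refresh split are worth isolating: the expiry arithmetic (hours for access, days for refresh) and the `token_type` gate that keeps a refresh token from passing access validation. A self-contained sketch with plain integers and strings in place of `JwtConfig` and `JwtClaims` (the helper names are illustrative, not the library API):

```rust
/// Expiry arithmetic from the config defaults: access tokens live
/// `expiry_hours` hours, refresh tokens `refresh_expiry_days` days.
fn access_exp(now: u64, expiry_hours: u64) -> u64 {
    now + expiry_hours * 3600
}

fn refresh_exp(now: u64, refresh_expiry_days: u64) -> u64 {
    now + refresh_expiry_days * 86400
}

/// Gate a decoded claim set by token_type, mirroring the
/// validate_access_token / validate_refresh_token checks.
fn check_token_type(token_type: &str, expected: &str) -> Result<(), String> {
    if token_type != expected {
        return Err(format!("Not a {expected} token"));
    }
    Ok(())
}

fn main() {
    assert_eq!(access_exp(0, 24), 86_400);     // default: 24 hours
    assert_eq!(refresh_exp(0, 30), 2_592_000); // default: 30 days
    assert!(check_token_type("access", "access").is_ok());
    assert!(check_token_type("refresh", "access").is_err()); // refresh rejected at access endpoints
    println!("ok");
}
```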
@@ -2,7 +2,7 @@ use chrono::{DateTime, Utc};
 use sqlx::FromRow;
 use uuid::Uuid;

-use domain::{AccessMode, Channel, ChannelId, DomainError, LogoPosition, RecyclePolicy, ScheduleConfig, UserId};
+use domain::{AccessMode, Channel, ChannelId, DomainError, LogoPosition, RecyclePolicy, ScheduleConfig, ScheduleConfigCompat, UserId};

 #[derive(Debug, FromRow)]
 pub(super) struct ChannelRow {
@@ -44,10 +44,11 @@ impl TryFrom<ChannelRow> for Channel {
             .map_err(|e| DomainError::RepositoryError(format!("Invalid channel UUID: {}", e)))?;
         let owner_id: UserId = Uuid::parse_str(&row.owner_id)
             .map_err(|e| DomainError::RepositoryError(format!("Invalid owner UUID: {}", e)))?;
-        let schedule_config: ScheduleConfig = serde_json::from_str(&row.schedule_config)
+        let schedule_config: ScheduleConfig = serde_json::from_str::<ScheduleConfigCompat>(&row.schedule_config)
             .map_err(|e| {
                 DomainError::RepositoryError(format!("Invalid schedule_config JSON: {}", e))
-            })?;
+            })
+            .map(ScheduleConfig::from)?;
         let recycle_policy: RecyclePolicy = serde_json::from_str(&row.recycle_policy)
             .map_err(|e| {
                 DomainError::RepositoryError(format!("Invalid recycle_policy JSON: {}", e))
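The pattern here is deserialize-through-a-compat-type: persisted JSON is read as `ScheduleConfigCompat` and converted to the current `ScheduleConfig` via `From`, so old rows keep loading after the weekly-schedule change. A serde-free sketch of the conversion step only; the `LegacyConfig`/`Config` shapes below are invented for illustration and do not reflect the real domain types:

```rust
struct Block {
    name: String,
}

/// Hypothetical old shape: one flat block list.
struct LegacyConfig {
    blocks: Vec<Block>,
}

/// Hypothetical new shape: blocks keyed per weekday (Monday = 0).
struct Config {
    blocks_per_day: [Vec<String>; 7],
}

impl From<LegacyConfig> for Config {
    fn from(old: LegacyConfig) -> Self {
        // A flat legacy list is treated as applying to every weekday.
        let names: Vec<String> = old.blocks.into_iter().map(|b| b.name).collect();
        Config { blocks_per_day: std::array::from_fn(|_| names.clone()) }
    }
}

fn main() {
    let legacy = LegacyConfig { blocks: vec![Block { name: "news".into() }] };
    let cfg = Config::from(legacy);
    assert_eq!(cfg.blocks_per_day[0], vec!["news".to_string()]);
    assert_eq!(cfg.blocks_per_day[6], vec!["news".to_string()]);
    println!("ok");
}
```

With serde, the same idea is usually expressed by deserializing into the compat type and mapping with `From`, exactly as the repository code above does.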
@@ -1,6 +1,9 @@
 use async_trait::async_trait;
+use chrono::{DateTime, Utc};
+use sqlx::Row;
 use uuid::Uuid;

-use domain::{Channel, ChannelId, ChannelRepository, DomainError, DomainResult, UserId};
+use domain::{Channel, ChannelConfigSnapshot, ChannelId, ChannelRepository, DomainError, DomainResult, ScheduleConfig, ScheduleConfigCompat, UserId};

 use super::mapping::{ChannelRow, SELECT_COLS};
@@ -139,4 +142,129 @@ impl ChannelRepository for SqliteChannelRepository {

         Ok(())
     }

+    async fn save_config_snapshot(
+        &self,
+        channel_id: ChannelId,
+        config: &ScheduleConfig,
+        label: Option<String>,
+    ) -> DomainResult<ChannelConfigSnapshot> {
+        let id = Uuid::new_v4();
+        let now = Utc::now();
+        let config_json = serde_json::to_string(config)
+            .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        let mut tx = self.pool.begin().await
+            .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        let version_num: i64 = sqlx::query_scalar(
+            "SELECT COALESCE(MAX(version_num), 0) + 1 FROM channel_config_snapshots WHERE channel_id = ?"
+        )
+        .bind(channel_id.to_string())
+        .fetch_one(&mut *tx)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        sqlx::query(
+            "INSERT INTO channel_config_snapshots (id, channel_id, config_json, version_num, label, created_at)
+             VALUES (?, ?, ?, ?, ?, ?)"
+        )
+        .bind(id.to_string())
+        .bind(channel_id.to_string())
+        .bind(&config_json)
+        .bind(version_num)
+        .bind(&label)
+        .bind(now.to_rfc3339())
+        .execute(&mut *tx)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        tx.commit().await.map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        Ok(ChannelConfigSnapshot { id, channel_id, config: config.clone(), version_num, label, created_at: now })
+    }
+
+    async fn list_config_snapshots(
+        &self,
+        channel_id: ChannelId,
+    ) -> DomainResult<Vec<ChannelConfigSnapshot>> {
+        let rows = sqlx::query(
+            "SELECT id, config_json, version_num, label, created_at
+             FROM channel_config_snapshots WHERE channel_id = ?
+             ORDER BY version_num DESC"
+        )
+        .bind(channel_id.to_string())
+        .fetch_all(&self.pool)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        rows.iter().map(|row| {
+            let id: Uuid = row.get::<String, _>("id").parse()
+                .map_err(|_| DomainError::RepositoryError("bad uuid".into()))?;
+            let config_json: String = row.get("config_json");
+            let config_compat: ScheduleConfigCompat = serde_json::from_str(&config_json)
+                .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+            let config: ScheduleConfig = config_compat.into();
+            let version_num: i64 = row.get("version_num");
+            let label: Option<String> = row.get("label");
+            let created_at_str: String = row.get("created_at");
+            let created_at = created_at_str.parse::<DateTime<Utc>>()
+                .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+            Ok(ChannelConfigSnapshot { id, channel_id, config, version_num, label, created_at })
+        }).collect()
+    }
+
+    async fn get_config_snapshot(
+        &self,
+        channel_id: ChannelId,
+        snapshot_id: Uuid,
+    ) -> DomainResult<Option<ChannelConfigSnapshot>> {
+        let row = sqlx::query(
+            "SELECT id, config_json, version_num, label, created_at
+             FROM channel_config_snapshots WHERE id = ? AND channel_id = ?"
+        )
+        .bind(snapshot_id.to_string())
+        .bind(channel_id.to_string())
+        .fetch_optional(&self.pool)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        match row {
+            None => Ok(None),
+            Some(row) => {
+                let config_json: String = row.get("config_json");
+                let config_compat: ScheduleConfigCompat = serde_json::from_str(&config_json)
+                    .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+                let config: ScheduleConfig = config_compat.into();
+                let version_num: i64 = row.get("version_num");
+                let label: Option<String> = row.get("label");
+                let created_at_str: String = row.get("created_at");
+                let created_at = created_at_str.parse::<DateTime<Utc>>()
+                    .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+                Ok(Some(ChannelConfigSnapshot { id: snapshot_id, channel_id, config, version_num, label, created_at }))
+            }
+        }
+    }
+
+    async fn patch_config_snapshot_label(
+        &self,
+        channel_id: ChannelId,
+        snapshot_id: Uuid,
+        label: Option<String>,
+    ) -> DomainResult<Option<ChannelConfigSnapshot>> {
+        let updated = sqlx::query(
+            "UPDATE channel_config_snapshots SET label = ? WHERE id = ? AND channel_id = ? RETURNING id"
+        )
+        .bind(&label)
+        .bind(snapshot_id.to_string())
+        .bind(channel_id.to_string())
+        .fetch_optional(&self.pool)
+        .await
+        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+        if updated.is_none() {
+            return Ok(None);
+        }
+        self.get_config_snapshot(channel_id, snapshot_id).await
+    }
 }
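Snapshot versions are assigned per channel with `COALESCE(MAX(version_num), 0) + 1` inside a transaction. The numbering rule itself is simple enough to sketch standalone (a hypothetical `next_version` helper over an in-memory list of existing versions):

```rust
/// Next snapshot version for a channel: COALESCE(MAX(version_num), 0) + 1.
fn next_version(existing: &[i64]) -> i64 {
    existing.iter().copied().max().unwrap_or(0) + 1
}

fn main() {
    assert_eq!(next_version(&[]), 1);        // first snapshot for a channel
    assert_eq!(next_version(&[1, 2, 3]), 4); // monotonically increasing
    println!("ok");
}
```

Running the `SELECT` and the `INSERT` in one transaction is what keeps two concurrent saves from computing the same version number.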
@@ -1,7 +1,7 @@
 use std::sync::Arc;

 use crate::db::DatabasePool;
-use domain::{ChannelRepository, ScheduleRepository, UserRepository};
+use domain::{ActivityLogRepository, ChannelRepository, IAppSettingsRepository, ILibraryRepository, ProviderConfigRepository, ScheduleRepository, TranscodeSettingsRepository, UserRepository};

 #[derive(Debug, thiserror::Error)]
 pub enum FactoryError {
@@ -51,6 +51,40 @@ pub async fn build_channel_repository(
     }
 }

+pub async fn build_activity_log_repository(
+    pool: &DatabasePool,
+) -> FactoryResult<Arc<dyn ActivityLogRepository>> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(pool) => Ok(Arc::new(
+            crate::activity_log_repository::SqliteActivityLogRepository::new(pool.clone()),
+        )),
+        #[cfg(feature = "postgres")]
+        DatabasePool::Postgres(_pool) => Err(FactoryError::NotImplemented(
+            "ActivityLogRepository not yet implemented for Postgres".to_string(),
+        )),
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "No database feature enabled".to_string(),
+        )),
+    }
+}
+
+pub async fn build_provider_config_repository(
+    pool: &DatabasePool,
+) -> FactoryResult<Arc<dyn ProviderConfigRepository>> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(pool) => Ok(Arc::new(
+            crate::provider_config_repository::SqliteProviderConfigRepository::new(pool.clone()),
+        )),
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "ProviderConfigRepository not implemented for this database".to_string(),
+        )),
+    }
+}
+
 pub async fn build_schedule_repository(
     pool: &DatabasePool,
 ) -> FactoryResult<Arc<dyn ScheduleRepository>> {
@@ -69,3 +103,88 @@ pub async fn build_schedule_repository(
         )),
     }
 }

+pub async fn build_transcode_settings_repository(
+    pool: &DatabasePool,
+) -> FactoryResult<Arc<dyn TranscodeSettingsRepository>> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(p) => Ok(Arc::new(
+            crate::transcode_settings_repository::SqliteTranscodeSettingsRepository::new(p.clone()),
+        )),
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "TranscodeSettingsRepository not implemented for this database".to_string(),
+        )),
+    }
+}
+
+pub async fn build_library_repository(
+    pool: &DatabasePool,
+) -> FactoryResult<Arc<dyn ILibraryRepository>> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(pool) => Ok(Arc::new(
+            crate::library_repository::SqliteLibraryRepository::new(pool.clone()),
+        )),
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "LibraryRepository not implemented for this database".to_string(),
+        )),
+    }
+}
+
+pub async fn build_app_settings_repository(
+    pool: &DatabasePool,
+) -> FactoryResult<Arc<dyn IAppSettingsRepository>> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(pool) => Ok(Arc::new(
+            crate::app_settings_repository::SqliteAppSettingsRepository::new(pool.clone()),
+        )),
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "AppSettingsRepository not implemented for this database".to_string(),
+        )),
+    }
+}
+
+#[cfg(feature = "local-files")]
+pub struct LocalFilesBundle {
+    pub provider: Arc<crate::LocalFilesProvider>,
+    pub local_index: Arc<crate::LocalIndex>,
+    pub transcode_manager: Option<Arc<crate::TranscodeManager>>,
+}
+
+#[cfg(feature = "local-files")]
+pub async fn build_local_files_bundle(
+    pool: &DatabasePool,
+    root_dir: std::path::PathBuf,
+    transcode_dir: Option<std::path::PathBuf>,
+    cleanup_ttl_hours: u32,
+    base_url: String,
+    provider_id: &str,
+) -> FactoryResult<LocalFilesBundle> {
+    match pool {
+        #[cfg(feature = "sqlite")]
+        DatabasePool::Sqlite(sqlite_pool) => {
+            let cfg = crate::LocalFilesConfig {
+                root_dir,
+                base_url,
+                transcode_dir: transcode_dir.clone(),
+                cleanup_ttl_hours,
+            };
+            let idx = Arc::new(crate::LocalIndex::new(&cfg, sqlite_pool.clone(), provider_id.to_string()).await);
+            let tm = transcode_dir.as_ref().map(|td| {
+                std::fs::create_dir_all(td).ok();
+                crate::TranscodeManager::new(td.clone(), cleanup_ttl_hours)
+            });
+            let provider = Arc::new(crate::LocalFilesProvider::new(Arc::clone(&idx), cfg, tm.clone()));
+            Ok(LocalFilesBundle { provider, local_index: idx, transcode_manager: tm })
+        }
+        #[allow(unreachable_patterns)]
+        _ => Err(FactoryError::NotImplemented(
+            "local-files requires SQLite".to_string(),
+        )),
+    }
+}
@@ -1,5 +1,5 @@
 /// Connection details for a single Jellyfin instance.
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
 pub struct JellyfinConfig {
     /// e.g. `"http://192.168.1.10:8096"` — no trailing slash
     pub base_url: String,
@@ -31,5 +31,8 @@ pub(super) fn map_jellyfin_item(item: JellyfinItem) -> Option<MediaItem> {
         series_name: item.series_name,
         season_number: item.parent_index_number,
         episode_number: item.index_number,
+        // TODO(library-sync): populate thumbnail_url from Jellyfin image API and collection_id from parent_id when FullSyncAdapter is implemented (Task 5)
+        thumbnail_url: None,
+        collection_id: None,
     })
 }
@@ -73,6 +73,10 @@ impl JellyfinMediaProvider {
             // requested — season first, then episode within the season.
             params.push(("SortBy", "ParentIndexNumber,IndexNumber".into()));
             params.push(("SortOrder", "Ascending".into()));
+            // Prevent Jellyfin from returning Season/Series container items.
+            if filter.content_type.is_none() {
+                params.push(("IncludeItemTypes", "Episode".into()));
+            }
         } else {
             // No series filter — scope to the collection (library) if one is set.
             if let Some(parent_id) = filter.collections.first() {
@@ -376,12 +380,11 @@ impl IMediaProvider for JellyfinMediaProvider {
         if resp.status().is_success() {
             let info: JellyfinPlaybackInfoResponse = resp.json().await
                 .map_err(|e| DomainError::InfrastructureError(format!("PlaybackInfo parse failed: {e}")))?;
-            if let Some(src) = info.media_sources.first() {
-                if src.supports_direct_stream {
-                    if let Some(rel_url) = &src.direct_stream_url {
-                        return Ok(format!("{}{}&api_key={}", self.config.base_url, rel_url, self.config.api_key));
-                    }
-                }
+            if let Some(src) = info.media_sources.first()
+                && src.supports_direct_stream
+                && let Some(rel_url) = &src.direct_stream_url
+            {
+                return Ok(format!("{}{}&api_key={}", self.config.base_url, rel_url, self.config.api_key));
             }
         }
         // Fallback: HLS at 8 Mbps
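The let-chain above flattens three nested conditions; on compilers without let-chain support, the same selection can be written with `Option` combinators. A standalone sketch with a pared-down `MediaSource` (only the two fields the chain inspects):

```rust
struct MediaSource {
    supports_direct_stream: bool,
    direct_stream_url: Option<String>,
}

/// Combinator equivalent of the let-chain: take the first source,
/// keep it only if it supports direct streaming, then take its URL.
fn direct_url(sources: &[MediaSource]) -> Option<&String> {
    sources
        .first()
        .filter(|s| s.supports_direct_stream)
        .and_then(|s| s.direct_stream_url.as_ref())
}

fn main() {
    let srcs = vec![MediaSource {
        supports_direct_stream: true,
        direct_stream_url: Some("/Videos/1/stream".into()),
    }];
    assert_eq!(direct_url(&srcs).map(String::as_str), Some("/Videos/1/stream"));
    assert_eq!(direct_url(&[]), None); // no sources: fall back to HLS
    println!("ok");
}
```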
@@ -395,7 +398,7 @@ impl IMediaProvider for JellyfinMediaProvider {
 impl JellyfinMediaProvider {
     fn hls_url(&self, item_id: &MediaItemId, bitrate: u32) -> String {
         format!(
-            "{}/Videos/{}/master.m3u8?videoCodec=h264&audioCodec=aac&VideoBitRate={}&mediaSourceId={}&api_key={}",
+            "{}/Videos/{}/master.m3u8?videoCodec=h264&audioCodec=aac&VideoBitRate={}&mediaSourceId={}&SubtitleMethod=Hls&subtitleCodec=vtt&api_key={}",
             self.config.base_url,
             item_id.as_ref(),
             bitrate,

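The `hls_url` change above appends `SubtitleMethod=Hls&subtitleCodec=vtt` so subtitles are muxed into the HLS playlist as WebVTT. A minimal standalone sketch of the resulting URL shape, with sample values; the `mediaSourceId` argument is assumed here to reuse the item id, since the diff truncates before that binding:

```rust
// Sketch of the updated master.m3u8 URL format. All values are samples,
// not the real JellyfinConfig; mediaSourceId reuse of item_id is an assumption.
fn hls_url(base_url: &str, item_id: &str, bitrate: u32, api_key: &str) -> String {
    format!(
        "{}/Videos/{}/master.m3u8?videoCodec=h264&audioCodec=aac&VideoBitRate={}&mediaSourceId={}&SubtitleMethod=Hls&subtitleCodec=vtt&api_key={}",
        base_url, item_id, bitrate, item_id, api_key
    )
}

fn main() {
    let url = hls_url("http://jellyfin:8096", "abc123", 8_000_000, "key");
    assert!(url.contains("SubtitleMethod=Hls&subtitleCodec=vtt"));
    println!("{url}");
}
```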
@@ -18,8 +18,16 @@ pub mod db;
pub mod factory;
pub mod jellyfin;
pub mod provider_registry;
mod library_sync;
pub use library_sync::FullSyncAdapter;

mod app_settings_repository;
mod activity_log_repository;
mod channel_repository;
mod library_repository;
mod provider_config_repository;
mod schedule_repository;
mod transcode_settings_repository;
mod user_repository;

#[cfg(feature = "local-files")]
@@ -29,12 +37,24 @@ pub mod local_files;
pub use db::run_migrations;
pub use provider_registry::ProviderRegistry;

#[cfg(feature = "sqlite")]
pub use app_settings_repository::SqliteAppSettingsRepository;
#[cfg(feature = "sqlite")]
pub use activity_log_repository::SqliteActivityLogRepository;
#[cfg(feature = "sqlite")]
pub use user_repository::SqliteUserRepository;
#[cfg(feature = "sqlite")]
pub use channel_repository::SqliteChannelRepository;
#[cfg(feature = "sqlite")]
pub use provider_config_repository::SqliteProviderConfigRepository;
#[cfg(feature = "sqlite")]
pub use schedule_repository::SqliteScheduleRepository;
#[cfg(feature = "sqlite")]
pub use transcode_settings_repository::SqliteTranscodeSettingsRepository;
#[cfg(feature = "sqlite")]
pub use library_repository::SqliteLibraryRepository;

pub use domain::TranscodeSettingsRepository;

#[cfg(feature = "jellyfin")]
pub use jellyfin::{JellyfinConfig, JellyfinMediaProvider};

k-tv-backend/infra/src/library_repository.rs (new file, 508 lines)
@@ -0,0 +1,508 @@
//! SQLite implementation of ILibraryRepository.

use async_trait::async_trait;
use sqlx::SqlitePool;

use domain::{
    ContentType, DomainError, DomainResult, ILibraryRepository,
    LibraryCollection, LibraryItem, LibrarySearchFilter, LibrarySyncLogEntry, LibrarySyncResult,
    SeasonSummary, ShowSummary,
};

pub struct SqliteLibraryRepository {
    pool: SqlitePool,
}

impl SqliteLibraryRepository {
    pub fn new(pool: SqlitePool) -> Self {
        Self { pool }
    }
}

fn content_type_str(ct: &ContentType) -> &'static str {
    match ct {
        ContentType::Movie => "movie",
        ContentType::Episode => "episode",
        ContentType::Short => "short",
    }
}

fn parse_content_type(s: &str) -> ContentType {
    match s {
        "episode" => ContentType::Episode,
        "short" => ContentType::Short,
        _ => ContentType::Movie,
    }
}

#[async_trait]
impl ILibraryRepository for SqliteLibraryRepository {
    async fn search(&self, filter: &LibrarySearchFilter) -> DomainResult<(Vec<LibraryItem>, u32)> {
        let mut conditions: Vec<String> = vec![];
        if let Some(ref p) = filter.provider_id {
            conditions.push(format!("provider_id = '{}'", p.replace('\'', "''")));
        }
        if let Some(ref ct) = filter.content_type {
            conditions.push(format!("content_type = '{}'", content_type_str(ct)));
        }
        if let Some(ref st) = filter.search_term {
            conditions.push(format!("title LIKE '%{}%'", st.replace('\'', "''")));
        }
        if let Some(ref cid) = filter.collection_id {
            conditions.push(format!("collection_id = '{}'", cid.replace('\'', "''")));
        }
        if let Some(decade) = filter.decade {
            let end = decade + 10;
            conditions.push(format!("year >= {} AND year < {}", decade, end));
        }
        if let Some(min) = filter.min_duration_secs {
            conditions.push(format!("duration_secs >= {}", min));
        }
        if let Some(max) = filter.max_duration_secs {
            conditions.push(format!("duration_secs <= {}", max));
        }
        if !filter.series_names.is_empty() {
            let quoted: Vec<String> = filter.series_names.iter()
                .map(|s| format!("'{}'", s.replace('\'', "''")))
                .collect();
            conditions.push(format!("series_name IN ({})", quoted.join(",")));
        }
        if !filter.genres.is_empty() {
            let genre_conditions: Vec<String> = filter.genres.iter()
                .map(|g| format!("EXISTS (SELECT 1 FROM json_each(library_items.genres) WHERE value = '{}')", g.replace('\'', "''")))
                .collect();
            conditions.push(format!("({})", genre_conditions.join(" OR ")));
        }
        if let Some(sn) = filter.season_number {
            conditions.push(format!("season_number = {}", sn));
        }

        let where_clause = if conditions.is_empty() {
            String::new()
        } else {
            format!("WHERE {}", conditions.join(" AND "))
        };

        let count_sql = format!("SELECT COUNT(*) FROM library_items {}", where_clause);
        let total: i64 = sqlx::query_scalar(&count_sql)
            .fetch_one(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        let items_sql = format!(
            "SELECT * FROM library_items {} ORDER BY title ASC LIMIT {} OFFSET {}",
            where_clause, filter.limit, filter.offset
        );

        let rows = sqlx::query_as::<_, LibraryItemRow>(&items_sql)
            .fetch_all(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok((rows.into_iter().map(Into::into).collect(), total as u32))
    }

    async fn get_by_id(&self, id: &str) -> DomainResult<Option<LibraryItem>> {
        let row = sqlx::query_as::<_, LibraryItemRow>(
            "SELECT * FROM library_items WHERE id = ?"
        )
        .bind(id)
        .fetch_optional(&self.pool)
        .await
        .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(row.map(Into::into))
    }

    async fn list_collections(&self, provider_id: Option<&str>) -> DomainResult<Vec<LibraryCollection>> {
        let rows: Vec<(String, Option<String>, Option<String>)> = if let Some(p) = provider_id {
            sqlx::query_as::<_, (String, Option<String>, Option<String>)>(
                "SELECT DISTINCT collection_id, collection_name, collection_type
                 FROM library_items WHERE collection_id IS NOT NULL AND provider_id = ?
                 ORDER BY collection_name ASC"
            ).bind(p).fetch_all(&self.pool).await
        } else {
            sqlx::query_as::<_, (String, Option<String>, Option<String>)>(
                "SELECT DISTINCT collection_id, collection_name, collection_type
                 FROM library_items WHERE collection_id IS NOT NULL
                 ORDER BY collection_name ASC"
            ).fetch_all(&self.pool).await
        }.map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok(rows.into_iter().map(|(id, name, ct)| LibraryCollection {
            id,
            name: name.unwrap_or_default(),
            collection_type: ct,
        }).collect())
    }

    async fn list_series(&self, provider_id: Option<&str>) -> DomainResult<Vec<String>> {
        let rows: Vec<(String,)> = if let Some(p) = provider_id {
            sqlx::query_as(
                "SELECT DISTINCT series_name FROM library_items
                 WHERE series_name IS NOT NULL AND provider_id = ? ORDER BY series_name ASC"
            ).bind(p).fetch_all(&self.pool).await
        } else {
            sqlx::query_as(
                "SELECT DISTINCT series_name FROM library_items
                 WHERE series_name IS NOT NULL ORDER BY series_name ASC"
            ).fetch_all(&self.pool).await
        }.map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok(rows.into_iter().map(|(s,)| s).collect())
    }

    async fn list_genres(&self, content_type: Option<&ContentType>, provider_id: Option<&str>) -> DomainResult<Vec<String>> {
        let sql = match (content_type, provider_id) {
            (Some(ct), Some(p)) => format!(
                "SELECT DISTINCT je.value FROM library_items li, json_each(li.genres) je
                 WHERE li.content_type = '{}' AND li.provider_id = '{}' ORDER BY je.value ASC",
                content_type_str(ct), p.replace('\'', "''")
            ),
            (Some(ct), None) => format!(
                "SELECT DISTINCT je.value FROM library_items li, json_each(li.genres) je
                 WHERE li.content_type = '{}' ORDER BY je.value ASC",
                content_type_str(ct)
            ),
            (None, Some(p)) => format!(
                "SELECT DISTINCT je.value FROM library_items li, json_each(li.genres) je
                 WHERE li.provider_id = '{}' ORDER BY je.value ASC",
                p.replace('\'', "''")
            ),
            (None, None) => "SELECT DISTINCT je.value FROM library_items li, json_each(li.genres) je ORDER BY je.value ASC".to_string(),
        };
        let rows: Vec<(String,)> = sqlx::query_as(&sql)
            .fetch_all(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(rows.into_iter().map(|(s,)| s).collect())
    }

    async fn upsert_items(&self, _provider_id: &str, items: Vec<LibraryItem>) -> DomainResult<()> {
        let mut tx = self.pool.begin().await.map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        for item in items {
            sqlx::query(
                "INSERT OR REPLACE INTO library_items
                 (id, provider_id, external_id, title, content_type, duration_secs,
                  series_name, season_number, episode_number, year, genres, tags,
                  collection_id, collection_name, collection_type, thumbnail_url, synced_at)
                 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
            )
            .bind(&item.id).bind(&item.provider_id).bind(&item.external_id)
            .bind(&item.title).bind(content_type_str(&item.content_type))
            .bind(item.duration_secs as i64)
            .bind(&item.series_name).bind(item.season_number.map(|n| n as i64))
            .bind(item.episode_number.map(|n| n as i64))
            .bind(item.year.map(|n| n as i64))
            .bind(serde_json::to_string(&item.genres).unwrap_or_default())
            .bind(serde_json::to_string(&item.tags).unwrap_or_default())
            .bind(&item.collection_id).bind(&item.collection_name)
            .bind(&item.collection_type).bind(&item.thumbnail_url)
            .bind(&item.synced_at)
            .execute(&mut *tx)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        }
        tx.commit().await.map_err(|e| DomainError::InfrastructureError(e.to_string()))
    }

    async fn clear_provider(&self, provider_id: &str) -> DomainResult<()> {
        sqlx::query("DELETE FROM library_items WHERE provider_id = ?")
            .bind(provider_id)
            .execute(&self.pool)
            .await
            .map(|_| ())
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))
    }

    async fn log_sync_start(&self, provider_id: &str) -> DomainResult<i64> {
        let now = chrono::Utc::now().to_rfc3339();
        let id = sqlx::query_scalar::<_, i64>(
            "INSERT INTO library_sync_log (provider_id, started_at, status)
             VALUES (?, ?, 'running') RETURNING id"
        )
        .bind(provider_id).bind(&now)
        .fetch_one(&self.pool)
        .await
        .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(id)
    }

    async fn log_sync_finish(&self, log_id: i64, result: &LibrarySyncResult) -> DomainResult<()> {
        let now = chrono::Utc::now().to_rfc3339();
        let status = if result.error.is_none() { "done" } else { "error" };
        sqlx::query(
            "UPDATE library_sync_log
             SET finished_at = ?, items_found = ?, status = ?, error_msg = ?
             WHERE id = ?"
        )
        .bind(&now).bind(result.items_found as i64)
        .bind(status).bind(&result.error).bind(log_id)
        .execute(&self.pool)
        .await
        .map(|_| ())
        .map_err(|e| DomainError::InfrastructureError(e.to_string()))
    }

    async fn latest_sync_status(&self) -> DomainResult<Vec<LibrarySyncLogEntry>> {
        let rows = sqlx::query_as::<_, SyncLogRow>(
            "SELECT * FROM library_sync_log
             WHERE id IN (
                 SELECT MAX(id) FROM library_sync_log GROUP BY provider_id
             )
             ORDER BY started_at DESC"
        )
        .fetch_all(&self.pool)
        .await
        .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok(rows.into_iter().map(|r| LibrarySyncLogEntry {
            id: r.id, provider_id: r.provider_id, started_at: r.started_at,
            finished_at: r.finished_at, items_found: r.items_found as u32,
            status: r.status, error_msg: r.error_msg,
        }).collect())
    }

    async fn is_sync_running(&self, provider_id: &str) -> DomainResult<bool> {
        let count: i64 = sqlx::query_scalar(
            "SELECT COUNT(*) FROM library_sync_log WHERE provider_id = ? AND status = 'running'"
        )
        .bind(provider_id)
        .fetch_one(&self.pool)
        .await
        .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(count > 0)
    }

    async fn list_shows(
        &self,
        provider_id: Option<&str>,
        search_term: Option<&str>,
        genres: &[String],
    ) -> DomainResult<Vec<ShowSummary>> {
        let mut conditions = vec![
            "content_type = 'episode'".to_string(),
            "series_name IS NOT NULL".to_string(),
        ];
        if let Some(p) = provider_id {
            conditions.push(format!("provider_id = '{}'", p.replace('\'', "''")));
        }
        if let Some(st) = search_term {
            let escaped = st.replace('\'', "''");
            conditions.push(format!(
                "(title LIKE '%{escaped}%' OR series_name LIKE '%{escaped}%')"
            ));
        }
        if !genres.is_empty() {
            let genre_conditions: Vec<String> = genres
                .iter()
                .map(|g| format!(
                    "EXISTS (SELECT 1 FROM json_each(library_items.genres) WHERE value = '{}')",
                    g.replace('\'', "''")
                ))
                .collect();
            conditions.push(format!("({})", genre_conditions.join(" OR ")));
        }
        let where_clause = format!("WHERE {}", conditions.join(" AND "));
        let sql = format!(
            "SELECT series_name, COUNT(*) AS episode_count, COUNT(DISTINCT season_number) AS season_count, MAX(thumbnail_url) AS thumbnail_url, GROUP_CONCAT(genres, ',') AS genres_blob FROM library_items {} GROUP BY series_name ORDER BY series_name ASC",
            where_clause
        );
        let rows = sqlx::query_as::<_, ShowSummaryRow>(&sql)
            .fetch_all(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok(rows
            .into_iter()
            .map(|r| {
                let genres: Vec<String> = r
                    .genres_blob
                    .split("],[")
                    .flat_map(|chunk| {
                        let cleaned = chunk.trim_start_matches('[').trim_end_matches(']');
                        cleaned
                            .split(',')
                            .filter_map(|s| {
                                let s = s.trim().trim_matches('"');
                                if s.is_empty() { None } else { Some(s.to_string()) }
                            })
                            .collect::<Vec<_>>()
                    })
                    .collect::<std::collections::HashSet<_>>()
                    .into_iter()
                    .collect();
                ShowSummary {
                    series_name: r.series_name,
                    episode_count: r.episode_count as u32,
                    season_count: r.season_count as u32,
                    thumbnail_url: r.thumbnail_url,
                    genres,
                }
            })
            .collect())
    }

    async fn list_seasons(
        &self,
        series_name: &str,
        provider_id: Option<&str>,
    ) -> DomainResult<Vec<SeasonSummary>> {
        let mut conditions = vec![
            format!("series_name = '{}'", series_name.replace('\'', "''")),
            "content_type = 'episode'".to_string(),
        ];
        if let Some(p) = provider_id {
            conditions.push(format!("provider_id = '{}'", p.replace('\'', "''")));
        }
        let where_clause = format!("WHERE {}", conditions.join(" AND "));
        let sql = format!(
            "SELECT season_number, COUNT(*) AS episode_count, MAX(thumbnail_url) AS thumbnail_url FROM library_items {} GROUP BY season_number ORDER BY season_number ASC",
            where_clause
        );
        let rows = sqlx::query_as::<_, SeasonSummaryRow>(&sql)
            .fetch_all(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;

        Ok(rows
            .into_iter()
            .map(|r| SeasonSummary {
                season_number: r.season_number as u32,
                episode_count: r.episode_count as u32,
                thumbnail_url: r.thumbnail_url,
            })
            .collect())
    }
}

// ── SQLx row types ─────────────────────────────────────────────────────────

#[derive(sqlx::FromRow)]
struct LibraryItemRow {
    id: String, provider_id: String, external_id: String, title: String,
    content_type: String, duration_secs: i64,
    series_name: Option<String>, season_number: Option<i64>, episode_number: Option<i64>,
    year: Option<i64>, genres: String, tags: String,
    collection_id: Option<String>, collection_name: Option<String>, collection_type: Option<String>,
    thumbnail_url: Option<String>, synced_at: String,
}

impl From<LibraryItemRow> for LibraryItem {
    fn from(r: LibraryItemRow) -> Self {
        Self {
            id: r.id, provider_id: r.provider_id, external_id: r.external_id,
            title: r.title, content_type: parse_content_type(&r.content_type),
            duration_secs: r.duration_secs as u32,
            series_name: r.series_name,
            season_number: r.season_number.map(|n| n as u32),
            episode_number: r.episode_number.map(|n| n as u32),
            year: r.year.map(|n| n as u16),
            genres: serde_json::from_str(&r.genres).unwrap_or_default(),
            tags: serde_json::from_str(&r.tags).unwrap_or_default(),
            collection_id: r.collection_id, collection_name: r.collection_name,
            collection_type: r.collection_type, thumbnail_url: r.thumbnail_url,
            synced_at: r.synced_at,
        }
    }
}

#[derive(sqlx::FromRow)]
struct SyncLogRow {
    id: i64, provider_id: String, started_at: String, finished_at: Option<String>,
    items_found: i64, status: String, error_msg: Option<String>,
}

#[derive(sqlx::FromRow)]
struct ShowSummaryRow {
    series_name: String,
    episode_count: i64,
    season_count: i64,
    thumbnail_url: Option<String>,
    genres_blob: String,
}

#[derive(sqlx::FromRow)]
struct SeasonSummaryRow {
    season_number: i64,
    episode_count: i64,
    thumbnail_url: Option<String>,
}

#[cfg(test)]
mod tests {
    use super::*;
    use sqlx::SqlitePool;
    use domain::{LibraryItem, LibrarySearchFilter, ContentType};

    async fn setup() -> SqlitePool {
        let pool = SqlitePool::connect(":memory:").await.unwrap();
        sqlx::query(
            "CREATE TABLE library_items (
                id TEXT PRIMARY KEY, provider_id TEXT NOT NULL, external_id TEXT NOT NULL,
                title TEXT NOT NULL, content_type TEXT NOT NULL, duration_secs INTEGER NOT NULL DEFAULT 0,
                series_name TEXT, season_number INTEGER, episode_number INTEGER, year INTEGER,
                genres TEXT NOT NULL DEFAULT '[]', tags TEXT NOT NULL DEFAULT '[]',
                collection_id TEXT, collection_name TEXT, collection_type TEXT,
                thumbnail_url TEXT, synced_at TEXT NOT NULL
            )"
        ).execute(&pool).await.unwrap();
        sqlx::query(
            "CREATE TABLE library_sync_log (
                id INTEGER PRIMARY KEY AUTOINCREMENT, provider_id TEXT NOT NULL,
                started_at TEXT NOT NULL, finished_at TEXT, items_found INTEGER NOT NULL DEFAULT 0,
                status TEXT NOT NULL DEFAULT 'running', error_msg TEXT
            )"
        ).execute(&pool).await.unwrap();
        pool
    }

    fn make_item(id: &str, provider: &str, title: &str) -> LibraryItem {
        LibraryItem {
            id: id.to_string(), provider_id: provider.to_string(), external_id: id.to_string(),
            title: title.to_string(), content_type: ContentType::Movie,
            duration_secs: 3600, series_name: None, season_number: None, episode_number: None,
            year: Some(2020), genres: vec!["Action".to_string()], tags: vec![],
            collection_id: None, collection_name: None, collection_type: None,
            thumbnail_url: None, synced_at: "2026-03-19T00:00:00Z".to_string(),
        }
    }

    #[tokio::test]
    async fn upsert_then_search_returns_items() {
        let pool = setup().await;
        let repo = SqliteLibraryRepository::new(pool);
        let items = vec![make_item("jellyfin::1", "jellyfin", "Movie A")];
        repo.upsert_items("jellyfin", items).await.unwrap();

        let (results, total) = repo.search(&LibrarySearchFilter { limit: 50, ..Default::default() }).await.unwrap();
        assert_eq!(total, 1);
        assert_eq!(results[0].title, "Movie A");
    }

    #[tokio::test]
    async fn clear_provider_removes_only_that_provider() {
        let pool = setup().await;
        let repo = SqliteLibraryRepository::new(pool);
        repo.upsert_items("jellyfin", vec![make_item("jellyfin::1", "jellyfin", "Jelly Movie")]).await.unwrap();
        repo.upsert_items("local", vec![make_item("local::1", "local", "Local Movie")]).await.unwrap();
        repo.clear_provider("jellyfin").await.unwrap();

        let (results, _) = repo.search(&LibrarySearchFilter { limit: 50, ..Default::default() }).await.unwrap();
        assert_eq!(results.len(), 1);
        assert_eq!(results[0].provider_id, "local");
    }

    #[tokio::test]
    async fn is_sync_running_reflects_status() {
        let pool = setup().await;
        let repo = SqliteLibraryRepository::new(pool);
        assert!(!repo.is_sync_running("jellyfin").await.unwrap());
        let log_id = repo.log_sync_start("jellyfin").await.unwrap();
        assert!(repo.is_sync_running("jellyfin").await.unwrap());
        let result = domain::LibrarySyncResult {
            provider_id: "jellyfin".to_string(), items_found: 5, duration_ms: 100, error: None,
        };
        repo.log_sync_finish(log_id, &result).await.unwrap();
        assert!(!repo.is_sync_running("jellyfin").await.unwrap());
    }
}
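`list_shows` above collapses the per-row JSON genre arrays that `GROUP_CONCAT` glues together (e.g. `["Action","Drama"],["Drama"]`) into a de-duplicated genre list. That parsing can be exercised in isolation; this sketch sorts the result for determinism, whereas the repository leaves the `HashSet` order unspecified:

```rust
use std::collections::HashSet;

// Mirrors the genres_blob parsing in list_shows: split the concatenated
// JSON arrays on "],[", strip brackets and quotes, and de-duplicate.
fn parse_genres_blob(blob: &str) -> Vec<String> {
    let set: HashSet<String> = blob
        .split("],[")
        .flat_map(|chunk| {
            let cleaned = chunk.trim_start_matches('[').trim_end_matches(']');
            cleaned
                .split(',')
                .filter_map(|s| {
                    let s = s.trim().trim_matches('"');
                    if s.is_empty() { None } else { Some(s.to_string()) }
                })
                .collect::<Vec<_>>()
        })
        .collect();
    let mut genres: Vec<String> = set.into_iter().collect();
    genres.sort();
    genres
}

fn main() {
    let blob = r#"["Action","Drama"],["Drama"],[]"#;
    assert_eq!(parse_genres_blob(blob), vec!["Action", "Drama"]);
}
```

Note that the comma split (inherited from the original parsing) would break a genre name that itself contains a comma.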
k-tv-backend/infra/src/library_sync.rs (new file, 249 lines)
@@ -0,0 +1,249 @@
//! Full-sync library sync adapter: truncate + re-insert all provider items.

use std::collections::HashMap;
use std::sync::Arc;
use std::time::Instant;

use async_trait::async_trait;
use domain::{
    Collection, ILibraryRepository, IMediaProvider, LibraryItem,
    LibrarySyncAdapter, LibrarySyncResult, MediaFilter,
};

pub struct FullSyncAdapter {
    repo: Arc<dyn ILibraryRepository>,
}

impl FullSyncAdapter {
    pub fn new(repo: Arc<dyn ILibraryRepository>) -> Self {
        Self { repo }
    }
}

#[async_trait]
impl LibrarySyncAdapter for FullSyncAdapter {
    async fn sync_provider(
        &self,
        provider: &dyn IMediaProvider,
        provider_id: &str,
    ) -> LibrarySyncResult {
        let start = Instant::now();

        // Check for running sync first
        match self.repo.is_sync_running(provider_id).await {
            Ok(true) => {
                return LibrarySyncResult {
                    provider_id: provider_id.to_string(),
                    items_found: 0,
                    duration_ms: 0,
                    error: Some("sync already running".to_string()),
                };
            }
            Err(e) => {
                return LibrarySyncResult {
                    provider_id: provider_id.to_string(),
                    items_found: 0,
                    duration_ms: 0,
                    error: Some(e.to_string()),
                };
            }
            Ok(false) => {}
        }

        let log_id = match self.repo.log_sync_start(provider_id).await {
            Ok(id) => id,
            Err(e) => {
                return LibrarySyncResult {
                    provider_id: provider_id.to_string(),
                    items_found: 0,
                    duration_ms: start.elapsed().as_millis() as u64,
                    error: Some(e.to_string()),
                };
            }
        };

        // Fetch collections for name/type enrichment — build a lookup map
        let collections: Vec<Collection> = provider.list_collections().await.unwrap_or_default();
        let collection_map: HashMap<String, &Collection> =
            collections.iter().map(|c| (c.id.clone(), c)).collect();

        // Fetch all items
        let media_items = match provider.fetch_items(&MediaFilter::default()).await {
            Ok(items) => items,
            Err(e) => {
                let result = LibrarySyncResult {
                    provider_id: provider_id.to_string(),
                    items_found: 0,
                    duration_ms: start.elapsed().as_millis() as u64,
                    error: Some(e.to_string()),
                };
                let _ = self.repo.log_sync_finish(log_id, &result).await;
                return result;
            }
        };

        let items_found = media_items.len() as u32;
        let now = chrono::Utc::now().to_rfc3339();

        let library_items: Vec<LibraryItem> = media_items
            .into_iter()
            .map(|item| {
                let raw_id = item.id.into_inner();
                let id = format!("{}::{}", provider_id, raw_id);
                // Enrich with collection name/type using the lookup map.
                let (col_name, col_type) = item.collection_id.as_deref()
                    .and_then(|cid| collection_map.get(cid))
                    .map(|c| (Some(c.name.clone()), c.collection_type.clone()))
                    .unwrap_or((None, None));
                LibraryItem {
                    id,
                    provider_id: provider_id.to_string(),
                    external_id: raw_id,
                    title: item.title,
                    content_type: item.content_type,
                    duration_secs: item.duration_secs,
                    series_name: item.series_name,
                    season_number: item.season_number,
                    episode_number: item.episode_number,
                    year: item.year,
                    genres: item.genres,
                    tags: item.tags,
                    collection_id: item.collection_id,
                    collection_name: col_name,
                    collection_type: col_type,
                    thumbnail_url: item.thumbnail_url,
                    synced_at: now.clone(),
                }
            })
            .collect();

        // Truncate + insert
        if let Err(e) = self.repo.clear_provider(provider_id).await {
            let result = LibrarySyncResult {
                provider_id: provider_id.to_string(),
                items_found: 0,
                duration_ms: start.elapsed().as_millis() as u64,
                error: Some(e.to_string()),
            };
            let _ = self.repo.log_sync_finish(log_id, &result).await;
            return result;
        }

        let result = match self.repo.upsert_items(provider_id, library_items).await {
            Ok(()) => LibrarySyncResult {
                provider_id: provider_id.to_string(),
                items_found,
                duration_ms: start.elapsed().as_millis() as u64,
                error: None,
            },
            Err(e) => LibrarySyncResult {
                provider_id: provider_id.to_string(),
                items_found: 0,
                duration_ms: start.elapsed().as_millis() as u64,
                error: Some(e.to_string()),
            },
        };

        let _ = self.repo.log_sync_finish(log_id, &result).await;
        result
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::{Arc, Mutex};
    use async_trait::async_trait;
    use domain::*;

    struct MockProvider {
        items: Vec<MediaItem>,
    }

    #[async_trait]
    impl IMediaProvider for MockProvider {
        fn capabilities(&self) -> ProviderCapabilities {
            ProviderCapabilities {
                collections: true,
                series: false,
                genres: false,
                tags: false,
                decade: false,
                search: false,
                streaming_protocol: StreamingProtocol::Hls,
                rescan: false,
                transcode: false,
            }
        }
        async fn fetch_items(&self, _filter: &MediaFilter) -> DomainResult<Vec<MediaItem>> {
            Ok(self.items.clone())
        }
        async fn fetch_by_id(&self, _id: &MediaItemId) -> DomainResult<Option<MediaItem>> { Ok(None) }
        async fn get_stream_url(&self, _id: &MediaItemId, _q: &StreamQuality) -> DomainResult<String> { Ok(String::new()) }
        async fn list_collections(&self) -> DomainResult<Vec<Collection>> { Ok(vec![]) }
        async fn list_series(&self, _col: Option<&str>) -> DomainResult<Vec<SeriesSummary>> { Ok(vec![]) }
        async fn list_genres(&self, _ct: Option<&ContentType>) -> DomainResult<Vec<String>> { Ok(vec![]) }
    }

    struct SpyRepo {
        upserted: Arc<Mutex<Vec<LibraryItem>>>,
        cleared: Arc<Mutex<Vec<String>>>,
    }

    #[async_trait]
    impl ILibraryRepository for SpyRepo {
        async fn search(&self, _f: &LibrarySearchFilter) -> DomainResult<(Vec<LibraryItem>, u32)> { Ok((vec![], 0)) }
        async fn get_by_id(&self, _id: &str) -> DomainResult<Option<LibraryItem>> { Ok(None) }
        async fn list_collections(&self, _p: Option<&str>) -> DomainResult<Vec<LibraryCollection>> { Ok(vec![]) }
        async fn list_series(&self, _p: Option<&str>) -> DomainResult<Vec<String>> { Ok(vec![]) }
        async fn list_genres(&self, _ct: Option<&ContentType>, _p: Option<&str>) -> DomainResult<Vec<String>> { Ok(vec![]) }
        async fn upsert_items(&self, _pid: &str, items: Vec<LibraryItem>) -> DomainResult<()> {
            self.upserted.lock().unwrap().extend(items);
            Ok(())
        }
        async fn clear_provider(&self, pid: &str) -> DomainResult<()> {
            self.cleared.lock().unwrap().push(pid.to_string());
            Ok(())
        }
        async fn log_sync_start(&self, _pid: &str) -> DomainResult<i64> { Ok(1) }
        async fn log_sync_finish(&self, _id: i64, _r: &LibrarySyncResult) -> DomainResult<()> { Ok(()) }
        async fn latest_sync_status(&self) -> DomainResult<Vec<LibrarySyncLogEntry>> { Ok(vec![]) }
        async fn is_sync_running(&self, _pid: &str) -> DomainResult<bool> { Ok(false) }
        async fn list_shows(&self, _p: Option<&str>, _st: Option<&str>, _g: &[String]) -> DomainResult<Vec<domain::ShowSummary>> { Ok(vec![]) }
        async fn list_seasons(&self, _sn: &str, _p: Option<&str>) -> DomainResult<Vec<domain::SeasonSummary>> { Ok(vec![]) }
    }

    #[tokio::test]
    async fn sync_clears_then_upserts_items() {
        let upserted = Arc::new(Mutex::new(vec![]));
        let cleared = Arc::new(Mutex::new(vec![]));
        let repo: Arc<dyn ILibraryRepository> = Arc::new(SpyRepo {
            upserted: Arc::clone(&upserted),
            cleared: Arc::clone(&cleared),
        });
        let adapter = FullSyncAdapter::new(Arc::clone(&repo));
        let provider = MockProvider {
            items: vec![MediaItem {
                id: MediaItemId::new("abc".to_string()),
                title: "Test Movie".to_string(),
                content_type: ContentType::Movie,
                duration_secs: 3600,
                description: None,
                series_name: None,
                season_number: None,
                episode_number: None,
                year: None,
                genres: vec![],
                tags: vec![],
                thumbnail_url: None,
                collection_id: None,
            }],
        };

        let result = adapter.sync_provider(&provider, "jellyfin").await;
        assert!(result.error.is_none());
        assert_eq!(result.items_found, 1);
        assert_eq!(cleared.lock().unwrap().as_slice(), &["jellyfin"]);
        assert_eq!(upserted.lock().unwrap().len(), 1);
    }
}
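FullSyncAdapter namespaces every stored id as `{provider_id}::{external_id}` so items from different providers cannot collide in the shared `library_items` table. A minimal sketch; the `split_library_id` helper is hypothetical, included only to illustrate the inverse:

```rust
// Builds the provider-scoped id used by FullSyncAdapter.
fn make_library_id(provider_id: &str, external_id: &str) -> String {
    format!("{}::{}", provider_id, external_id)
}

// Hypothetical inverse helper (not part of the diff): splitting on the
// FIRST "::" lets external ids that themselves contain "::" round-trip.
fn split_library_id(id: &str) -> Option<(&str, &str)> {
    id.split_once("::")
}

fn main() {
    let id = make_library_id("jellyfin", "abc");
    assert_eq!(id, "jellyfin::abc");
    assert_eq!(split_library_id(&id), Some(("jellyfin", "abc")));
}
```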
@@ -36,15 +36,17 @@ pub fn decode_id(id: &MediaItemId) -> Option<String> {
pub struct LocalIndex {
    items: Arc<RwLock<HashMap<MediaItemId, LocalFileItem>>>,
    pub root_dir: PathBuf,
+   provider_id: String,
    pool: sqlx::SqlitePool,
}

impl LocalIndex {
    /// Create the index, immediately loading persisted entries from SQLite.
-   pub async fn new(config: &LocalFilesConfig, pool: sqlx::SqlitePool) -> Self {
+   pub async fn new(config: &LocalFilesConfig, pool: sqlx::SqlitePool, provider_id: String) -> Self {
        let idx = Self {
            items: Arc::new(RwLock::new(HashMap::new())),
            root_dir: config.root_dir.clone(),
+           provider_id,
            pool,
        };
        idx.load_from_db().await;
@@ -65,8 +67,10 @@ impl LocalIndex {
        }

        let rows = sqlx::query_as::<_, Row>(
-           "SELECT id, rel_path, title, duration_secs, year, tags, top_dir FROM local_files_index",
+           "SELECT id, rel_path, title, duration_secs, year, tags, top_dir \
+            FROM local_files_index WHERE provider_id = ?",
        )
+       .bind(&self.provider_id)
        .fetch_all(&self.pool)
        .await;

@@ -86,7 +90,7 @@ impl LocalIndex {
                };
                map.insert(MediaItemId::new(row.id), item);
            }
-           info!("Local files index: loaded {} items from DB", map.len());
+           info!("Local files index [{}]: loaded {} items from DB", self.provider_id, map.len());
        }
        Err(e) => {
            // Table might not exist yet on first run — that's fine.
@@ -100,7 +104,7 @@ impl LocalIndex {
    /// Returns the number of items found. Called on startup (background task)
    /// and via `POST /files/rescan`.
    pub async fn rescan(&self) -> u32 {
-       info!("Local files: scanning {:?}", self.root_dir);
+       info!("Local files [{}]: scanning {:?}", self.provider_id, self.root_dir);
        let new_items = scan_dir(&self.root_dir).await;
        let count = new_items.len() as u32;

@@ -119,15 +123,16 @@ impl LocalIndex {
            error!("Failed to persist local files index: {}", e);
        }

-       info!("Local files: indexed {} items", count);
+       info!("Local files [{}]: indexed {} items", self.provider_id, count);
        count
    }

    async fn save_to_db(&self, items: &[LocalFileItem]) -> Result<(), sqlx::Error> {
-       // Rebuild the table in one transaction.
+       // Rebuild the table in one transaction, scoped to this provider.
        let mut tx = self.pool.begin().await?;

-       sqlx::query("DELETE FROM local_files_index")
+       sqlx::query("DELETE FROM local_files_index WHERE provider_id = ?")
+           .bind(&self.provider_id)
            .execute(&mut *tx)
            .await?;

@@ -137,8 +142,8 @@ impl LocalIndex {
        let tags_json = serde_json::to_string(&item.tags).unwrap_or_else(|_| "[]".into());
        sqlx::query(
            "INSERT INTO local_files_index \
-            (id, rel_path, title, duration_secs, year, tags, top_dir, scanned_at) \
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
+            (id, rel_path, title, duration_secs, year, tags, top_dir, scanned_at, provider_id) \
+            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
        )
        .bind(&id)
        .bind(&item.rel_path)
@@ -148,6 +153,7 @@ impl LocalIndex {
        .bind(&tags_json)
        .bind(&item.top_dir)
        .bind(&now)
+       .bind(&self.provider_id)
        .execute(&mut *tx)
        .await?;
    }

@@ -51,6 +51,8 @@ fn to_media_item(id: MediaItemId, item: &LocalFileItem) -> MediaItem {
        series_name: None,
        season_number: None,
        episode_number: None,
+       thumbnail_url: None,
+       collection_id: None,
    }
}

@@ -86,10 +88,8 @@ impl IMediaProvider for LocalFilesProvider {
        } else {
            ContentType::Movie
        };
-       if let Some(ref ct) = filter.content_type {
-           if &content_type != ct {
-               return None;
-           }
+       if let Some(ref ct) = filter.content_type && &content_type != ct {
+           return None;
        }

        // collections: match against top_dir
@@ -117,22 +117,16 @@ impl IMediaProvider for LocalFilesProvider {
        }

        // duration bounds
-       if let Some(min) = filter.min_duration_secs {
-           if item.duration_secs < min {
-               return None;
-           }
+       if let Some(min) = filter.min_duration_secs && item.duration_secs < min {
+           return None;
        }
-       if let Some(max) = filter.max_duration_secs {
-           if item.duration_secs > max {
-               return None;
-           }
+       if let Some(max) = filter.max_duration_secs && item.duration_secs > max {
+           return None;
        }

        // search_term: case-insensitive substring in title
-       if let Some(ref q) = filter.search_term {
-           if !item.title.to_lowercase().contains(&q.to_lowercase()) {
-               return None;
-           }
+       if let Some(ref q) = filter.search_term && !item.title.to_lowercase().contains(&q.to_lowercase()) {
+           return None;
        }

        Some(to_media_item(id, &item))

@@ -171,15 +171,13 @@ impl TranscodeManager {
                continue;
            }
            let playlist = path.join("playlist.m3u8");
-           if let Ok(meta) = tokio::fs::metadata(&playlist).await {
-               if let Ok(modified) = meta.modified() {
-                   if let Ok(age) = now.duration_since(modified) {
-                       if age > ttl {
-                           warn!("cleanup: removing stale transcode {:?}", path);
-                           let _ = tokio::fs::remove_dir_all(&path).await;
-                       }
-                   }
-               }
+           if let Ok(meta) = tokio::fs::metadata(&playlist).await
+               && let Ok(modified) = meta.modified()
+               && let Ok(age) = now.duration_since(modified)
+               && age > ttl
+           {
+               warn!("cleanup: removing stale transcode {:?}", path);
+               let _ = tokio::fs::remove_dir_all(&path).await;
            }
        }
    }
k-tv-backend/infra/src/provider_config_repository/mod.rs (Normal file, 4 lines)
@@ -0,0 +1,4 @@
#[cfg(feature = "sqlite")]
mod sqlite;
#[cfg(feature = "sqlite")]
pub use sqlite::SqliteProviderConfigRepository;
k-tv-backend/infra/src/provider_config_repository/sqlite.rs (Normal file, 84 lines)
@@ -0,0 +1,84 @@
use async_trait::async_trait;
use domain::{DomainError, DomainResult, ProviderConfigRepository, ProviderConfigRow};

#[derive(Clone)]
pub struct SqliteProviderConfigRepository {
    pool: sqlx::SqlitePool,
}

impl SqliteProviderConfigRepository {
    pub fn new(pool: sqlx::SqlitePool) -> Self {
        Self { pool }
    }
}

#[async_trait]
impl ProviderConfigRepository for SqliteProviderConfigRepository {
    async fn get_all(&self) -> DomainResult<Vec<ProviderConfigRow>> {
        let rows: Vec<(String, String, String, i64, String)> = sqlx::query_as(
            "SELECT id, provider_type, config_json, enabled, updated_at FROM provider_configs",
        )
        .fetch_all(&self.pool)
        .await
        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;

        Ok(rows
            .into_iter()
            .map(|(id, provider_type, config_json, enabled, updated_at)| ProviderConfigRow {
                id,
                provider_type,
                config_json,
                enabled: enabled != 0,
                updated_at,
            })
            .collect())
    }

    async fn get_by_id(&self, id: &str) -> DomainResult<Option<ProviderConfigRow>> {
        let row: Option<(String, String, String, i64, String)> = sqlx::query_as(
            "SELECT id, provider_type, config_json, enabled, updated_at FROM provider_configs WHERE id = ?",
        )
        .bind(id)
        .fetch_optional(&self.pool)
        .await
        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;

        Ok(row.map(|(id, provider_type, config_json, enabled, updated_at)| ProviderConfigRow {
            id,
            provider_type,
            config_json,
            enabled: enabled != 0,
            updated_at,
        }))
    }

    async fn upsert(&self, row: &ProviderConfigRow) -> DomainResult<()> {
        sqlx::query(
            r#"INSERT INTO provider_configs (id, provider_type, config_json, enabled, updated_at)
               VALUES (?, ?, ?, ?, ?)
               ON CONFLICT(id) DO UPDATE SET
                   provider_type = excluded.provider_type,
                   config_json = excluded.config_json,
                   enabled = excluded.enabled,
                   updated_at = excluded.updated_at"#,
        )
        .bind(&row.id)
        .bind(&row.provider_type)
        .bind(&row.config_json)
        .bind(row.enabled as i64)
        .bind(&row.updated_at)
        .execute(&self.pool)
        .await
        .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
        Ok(())
    }

    async fn delete(&self, id: &str) -> DomainResult<()> {
        sqlx::query("DELETE FROM provider_configs WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await
            .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
        Ok(())
    }
}
@@ -42,6 +42,14 @@ impl ProviderRegistry {
        self.providers.is_empty()
    }

+   /// Return the provider registered under `id`, if any.
+   pub fn get_provider(&self, id: &str) -> Option<Arc<dyn IMediaProvider>> {
+       self.providers
+           .iter()
+           .find(|(pid, _)| pid == id)
+           .map(|(_, p)| Arc::clone(p))
+   }
+
    // -------------------------------------------------------------------------
    // Internal helpers
    // -------------------------------------------------------------------------
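The new `get_provider` does a linear scan over the registry's `Vec` of `(id, provider)` pairs and clones the `Arc` on a hit, so registration order is preserved and the registry keeps its own reference. A self-contained sketch of that pattern, with a simplified stand-in trait (`Provider`, `Dummy`, and `Registry` here are illustrative, not the repo's actual types):

```rust
use std::sync::Arc;

// Illustrative stand-in for the repo's IMediaProvider trait.
trait Provider {
    fn name(&self) -> &str;
}

struct Dummy(&'static str);
impl Provider for Dummy {
    fn name(&self) -> &str {
        self.0
    }
}

struct Registry {
    providers: Vec<(String, Arc<dyn Provider>)>,
}

impl Registry {
    // First tuple whose id matches wins; the Arc is cloned so the caller
    // gets its own handle while the registry keeps the original.
    fn get_provider(&self, id: &str) -> Option<Arc<dyn Provider>> {
        self.providers
            .iter()
            .find(|(pid, _)| pid == id)
            .map(|(_, p)| Arc::clone(p))
    }
}

fn main() {
    let providers: Vec<(String, Arc<dyn Provider>)> = vec![
        ("jellyfin".to_string(), Arc::new(Dummy("jellyfin"))),
        ("local".to_string(), Arc::new(Dummy("local"))),
    ];
    let reg = Registry { providers };
    assert_eq!(reg.get_provider("local").unwrap().name(), "local");
    assert!(reg.get_provider("missing").is_none());
    println!("ok");
}
```

A `Vec` scan is O(n) per lookup, which is fine for the handful of providers a registry like this holds; a `HashMap` would only pay off with many entries.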
@@ -29,6 +29,12 @@ pub(super) struct SlotRow {
    pub source_block_id: String,
}

+#[derive(Debug, FromRow)]
+pub(super) struct LastSlotRow {
+   pub source_block_id: String,
+   pub item: String,
+}
+
#[derive(Debug, FromRow)]
pub(super) struct PlaybackRecordRow {
    pub id: String,
@@ -1,9 +1,11 @@
use async_trait::async_trait;
use chrono::{DateTime, Utc};

-use domain::{ChannelId, DomainError, DomainResult, GeneratedSchedule, PlaybackRecord, ScheduleRepository};
+use std::collections::HashMap;

-use super::mapping::{map_schedule, PlaybackRecordRow, ScheduleRow, SlotRow};
+use domain::{BlockId, ChannelId, DomainError, DomainResult, GeneratedSchedule, MediaItemId, PlaybackRecord, ScheduleRepository};
+
+use super::mapping::{map_schedule, LastSlotRow, PlaybackRecordRow, ScheduleRow, SlotRow};

pub struct PostgresScheduleRepository {
    pool: sqlx::Pool<sqlx::Postgres>,
@@ -143,6 +145,41 @@ impl ScheduleRepository for PostgresScheduleRepository {
        rows.into_iter().map(PlaybackRecord::try_from).collect()
    }

+   async fn find_last_slot_per_block(
+       &self,
+       channel_id: ChannelId,
+   ) -> DomainResult<HashMap<BlockId, MediaItemId>> {
+       let channel_id_str = channel_id.to_string();
+       let rows: Vec<LastSlotRow> = sqlx::query_as(
+           "SELECT ss.source_block_id, ss.item \
+            FROM scheduled_slots ss \
+            INNER JOIN generated_schedules gs ON gs.id = ss.schedule_id \
+            WHERE gs.channel_id = $1 \
+              AND ss.start_at = ( \
+                  SELECT MAX(ss2.start_at) \
+                  FROM scheduled_slots ss2 \
+                  INNER JOIN generated_schedules gs2 ON gs2.id = ss2.schedule_id \
+                  WHERE ss2.source_block_id = ss.source_block_id \
+                    AND gs2.channel_id = $2 \
+              )",
+       )
+       .bind(&channel_id_str)
+       .bind(&channel_id_str)
+       .fetch_all(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       let mut map = HashMap::new();
+       for row in rows {
+           let block_id = uuid::Uuid::parse_str(&row.source_block_id)
+               .map_err(|e| DomainError::RepositoryError(format!("Invalid block UUID: {}", e)))?;
+           let item: domain::MediaItem = serde_json::from_str(&row.item)
+               .map_err(|e| DomainError::RepositoryError(format!("Invalid slot item JSON: {}", e)))?;
+           map.insert(block_id, item.id);
+       }
+       Ok(map)
+   }
+
    async fn save_playback_record(&self, record: &PlaybackRecord) -> DomainResult<()> {
        sqlx::query(
            r#"
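The correlated subquery in `find_last_slot_per_block` keeps, for each `source_block_id`, only the slot whose `start_at` equals that block's maximum within the channel's schedules. The same fold, done in memory over flattened rows, looks like this (pure-Rust sketch with hypothetical data; `Slot` and epoch-second timestamps are simplifications, not the repo's types):

```rust
use std::collections::HashMap;

// One scheduled slot, flattened: which block produced it, when it airs,
// and which media item fills it.
struct Slot {
    block_id: &'static str,
    start_at: i64, // epoch seconds for the sketch; the repo uses real timestamps
    item_id: &'static str,
}

// Equivalent of the SQL "last slot per block": keep, per block, the item
// of the slot with the greatest start_at.
fn last_slot_per_block(slots: &[Slot]) -> HashMap<&'static str, &'static str> {
    let mut latest: HashMap<&'static str, (i64, &'static str)> = HashMap::new();
    for s in slots {
        match latest.get(s.block_id) {
            // Existing entry is at least as recent: keep it.
            Some((t, _)) if *t >= s.start_at => {}
            _ => {
                latest.insert(s.block_id, (s.start_at, s.item_id));
            }
        }
    }
    latest.into_iter().map(|(b, (_, i))| (b, i)).collect()
}

fn main() {
    let slots = [
        Slot { block_id: "morning", start_at: 100, item_id: "ep1" },
        Slot { block_id: "morning", start_at: 200, item_id: "ep2" },
        Slot { block_id: "night", start_at: 150, item_id: "movie" },
    ];
    let map = last_slot_per_block(&slots);
    assert_eq!(map["morning"], "ep2");
    assert_eq!(map["night"], "movie");
    println!("ok");
}
```

Pushing the fold into SQL instead of doing it in application code avoids loading every historical slot just to discard all but one per block.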
@@ -1,9 +1,12 @@
use async_trait::async_trait;
use chrono::{DateTime, Utc};

-use domain::{ChannelId, DomainError, DomainResult, GeneratedSchedule, PlaybackRecord, ScheduleRepository};
+use std::collections::HashMap;

-use super::mapping::{map_schedule, PlaybackRecordRow, ScheduleRow, SlotRow};
+use domain::{BlockId, ChannelId, DomainError, DomainResult, GeneratedSchedule, MediaItemId, PlaybackRecord, ScheduleRepository};
+use uuid::Uuid;
+
+use super::mapping::{map_schedule, LastSlotRow, PlaybackRecordRow, ScheduleRow, SlotRow};

pub struct SqliteScheduleRepository {
    pool: sqlx::SqlitePool,
@@ -146,6 +149,112 @@ impl ScheduleRepository for SqliteScheduleRepository {
        rows.into_iter().map(PlaybackRecord::try_from).collect()
    }

+   async fn find_last_slot_per_block(
+       &self,
+       channel_id: ChannelId,
+   ) -> DomainResult<HashMap<BlockId, MediaItemId>> {
+       let channel_id_str = channel_id.to_string();
+       let rows: Vec<LastSlotRow> = sqlx::query_as(
+           "SELECT ss.source_block_id, ss.item \
+            FROM scheduled_slots ss \
+            INNER JOIN generated_schedules gs ON gs.id = ss.schedule_id \
+            WHERE gs.channel_id = ? \
+              AND ss.start_at = ( \
+                  SELECT MAX(ss2.start_at) \
+                  FROM scheduled_slots ss2 \
+                  INNER JOIN generated_schedules gs2 ON gs2.id = ss2.schedule_id \
+                  WHERE ss2.source_block_id = ss.source_block_id \
+                    AND gs2.channel_id = ? \
+              )",
+       )
+       .bind(&channel_id_str)
+       .bind(&channel_id_str)
+       .fetch_all(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       let mut map = HashMap::new();
+       for row in rows {
+           let block_id = uuid::Uuid::parse_str(&row.source_block_id)
+               .map_err(|e| DomainError::RepositoryError(format!("Invalid block UUID: {}", e)))?;
+           let item: domain::MediaItem = serde_json::from_str(&row.item)
+               .map_err(|e| DomainError::RepositoryError(format!("Invalid slot item JSON: {}", e)))?;
+           map.insert(block_id, item.id);
+       }
+       Ok(map)
+   }
+
+   async fn list_schedule_history(
+       &self,
+       channel_id: ChannelId,
+   ) -> DomainResult<Vec<GeneratedSchedule>> {
+       let rows: Vec<ScheduleRow> = sqlx::query_as(
+           "SELECT id, channel_id, valid_from, valid_until, generation \
+            FROM generated_schedules WHERE channel_id = ? ORDER BY generation DESC",
+       )
+       .bind(channel_id.to_string())
+       .fetch_all(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       rows.into_iter()
+           .map(|r| map_schedule(r, vec![]))
+           .collect()
+   }
+
+   async fn get_schedule_by_id(
+       &self,
+       channel_id: ChannelId,
+       schedule_id: Uuid,
+   ) -> DomainResult<Option<GeneratedSchedule>> {
+       let row: Option<ScheduleRow> = sqlx::query_as(
+           "SELECT id, channel_id, valid_from, valid_until, generation \
+            FROM generated_schedules WHERE id = ? AND channel_id = ?",
+       )
+       .bind(schedule_id.to_string())
+       .bind(channel_id.to_string())
+       .fetch_optional(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       match row {
+           None => Ok(None),
+           Some(r) => {
+               let slots = self.fetch_slots(&r.id).await?;
+               Some(map_schedule(r, slots)).transpose()
+           }
+       }
+   }
+
+   async fn delete_schedules_after(
+       &self,
+       channel_id: ChannelId,
+       target_generation: u32,
+   ) -> DomainResult<()> {
+       let target_gen = target_generation as i64;
+       let ch = channel_id.to_string();
+
+       sqlx::query(
+           "DELETE FROM playback_records WHERE channel_id = ? AND generation > ?",
+       )
+       .bind(&ch)
+       .bind(target_gen)
+       .execute(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       sqlx::query(
+           "DELETE FROM generated_schedules WHERE channel_id = ? AND generation > ?",
+       )
+       .bind(&ch)
+       .bind(target_gen)
+       .execute(&self.pool)
+       .await
+       .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+
+       Ok(())
+   }
+
    async fn save_playback_record(&self, record: &PlaybackRecord) -> DomainResult<()> {
        sqlx::query(
            r#"
@@ -0,0 +1,4 @@
#[cfg(feature = "sqlite")]
mod sqlite;
#[cfg(feature = "sqlite")]
pub use sqlite::SqliteTranscodeSettingsRepository;
@@ -0,0 +1,34 @@
use async_trait::async_trait;
use domain::{DomainError, DomainResult, TranscodeSettingsRepository};
use sqlx::SqlitePool;

pub struct SqliteTranscodeSettingsRepository {
    pool: SqlitePool,
}

impl SqliteTranscodeSettingsRepository {
    pub fn new(pool: SqlitePool) -> Self {
        Self { pool }
    }
}

#[async_trait]
impl TranscodeSettingsRepository for SqliteTranscodeSettingsRepository {
    async fn load_cleanup_ttl(&self) -> DomainResult<Option<u32>> {
        let row: Option<(i64,)> =
            sqlx::query_as("SELECT cleanup_ttl_hours FROM transcode_settings WHERE id = 1")
                .fetch_optional(&self.pool)
                .await
                .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(row.map(|(h,)| h as u32))
    }

    async fn save_cleanup_ttl(&self, hours: u32) -> DomainResult<()> {
        sqlx::query("UPDATE transcode_settings SET cleanup_ttl_hours = ? WHERE id = 1")
            .bind(hours as i64)
            .execute(&self.pool)
            .await
            .map_err(|e| DomainError::InfrastructureError(e.to_string()))?;
        Ok(())
    }
}
@@ -10,6 +10,7 @@ pub(super) struct UserRow {
    pub subject: String,
    pub email: String,
    pub password_hash: Option<String>,
+   pub is_admin: i64,
    pub created_at: String,
}

@@ -36,6 +37,7 @@ impl TryFrom<UserRow> for User {
        row.subject,
        email,
        row.password_hash,
+       row.is_admin != 0,
        created_at,
    ))
}
@@ -22,7 +22,7 @@ impl UserRepository for PostgresUserRepository {
    async fn find_by_id(&self, id: Uuid) -> DomainResult<Option<User>> {
        let id_str = id.to_string();
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE id = $1",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE id = $1",
        )
        .bind(&id_str)
        .fetch_optional(&self.pool)
@@ -34,7 +34,7 @@ impl UserRepository for PostgresUserRepository {

    async fn find_by_subject(&self, subject: &str) -> DomainResult<Option<User>> {
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE subject = $1",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE subject = $1",
        )
        .bind(subject)
        .fetch_optional(&self.pool)
@@ -46,7 +46,7 @@ impl UserRepository for PostgresUserRepository {

    async fn find_by_email(&self, email: &str) -> DomainResult<Option<User>> {
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE email = $1",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE email = $1",
        )
        .bind(email)
        .fetch_optional(&self.pool)
@@ -62,18 +62,20 @@ impl UserRepository for PostgresUserRepository {

        sqlx::query(
            r#"
-           INSERT INTO users (id, subject, email, password_hash, created_at)
-           VALUES ($1, $2, $3, $4, $5)
+           INSERT INTO users (id, subject, email, password_hash, is_admin, created_at)
+           VALUES ($1, $2, $3, $4, $5, $6)
            ON CONFLICT(id) DO UPDATE SET
                subject = excluded.subject,
                email = excluded.email,
-               password_hash = excluded.password_hash
+               password_hash = excluded.password_hash,
+               is_admin = excluded.is_admin
            "#,
        )
        .bind(&id)
        .bind(&user.subject)
        .bind(user.email.as_ref())
        .bind(&user.password_hash)
+       .bind(user.is_admin)
        .bind(&created_at)
        .execute(&self.pool)
        .await
@@ -99,4 +101,12 @@ impl UserRepository for PostgresUserRepository {

        Ok(())
    }
+
+   async fn count_users(&self) -> DomainResult<u64> {
+       let (count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM users")
+           .fetch_one(&self.pool)
+           .await
+           .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+       Ok(count as u64)
+   }
}
@@ -22,7 +22,7 @@ impl UserRepository for SqliteUserRepository {
    async fn find_by_id(&self, id: Uuid) -> DomainResult<Option<User>> {
        let id_str = id.to_string();
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE id = ?",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE id = ?",
        )
        .bind(&id_str)
        .fetch_optional(&self.pool)
@@ -34,7 +34,7 @@ impl UserRepository for SqliteUserRepository {

    async fn find_by_subject(&self, subject: &str) -> DomainResult<Option<User>> {
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE subject = ?",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE subject = ?",
        )
        .bind(subject)
        .fetch_optional(&self.pool)
@@ -46,7 +46,7 @@ impl UserRepository for SqliteUserRepository {

    async fn find_by_email(&self, email: &str) -> DomainResult<Option<User>> {
        let row: Option<UserRow> = sqlx::query_as(
-           "SELECT id, subject, email, password_hash, created_at FROM users WHERE email = ?",
+           "SELECT id, subject, email, password_hash, is_admin, created_at FROM users WHERE email = ?",
        )
        .bind(email)
        .fetch_optional(&self.pool)
@@ -62,18 +62,20 @@ impl UserRepository for SqliteUserRepository {

        sqlx::query(
            r#"
-           INSERT INTO users (id, subject, email, password_hash, created_at)
-           VALUES (?, ?, ?, ?, ?)
+           INSERT INTO users (id, subject, email, password_hash, is_admin, created_at)
+           VALUES (?, ?, ?, ?, ?, ?)
            ON CONFLICT(id) DO UPDATE SET
                subject = excluded.subject,
                email = excluded.email,
-               password_hash = excluded.password_hash
+               password_hash = excluded.password_hash,
+               is_admin = excluded.is_admin
            "#,
        )
        .bind(&id)
        .bind(&user.subject)
        .bind(user.email.as_ref())
        .bind(&user.password_hash)
+       .bind(user.is_admin as i64)
        .bind(&created_at)
        .execute(&self.pool)
        .await
@@ -100,6 +102,14 @@ impl UserRepository for SqliteUserRepository {

        Ok(())
    }
+
+   async fn count_users(&self) -> DomainResult<u64> {
+       let (count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM users")
+           .fetch_one(&self.pool)
+           .await
+           .map_err(|e| DomainError::RepositoryError(e.to_string()))?;
+       Ok(count as u64)
+   }
}

#[cfg(test)]
@@ -2,7 +2,6 @@
name = "mcp"
version = "0.1.0"
edition = "2024"
-default-run = "mcp"

[features]
default = ["sqlite", "jellyfin"]

@@ -9,5 +9,5 @@ pub fn json_err(e: serde_json::Error) -> String {
}

pub fn ok_json<T: serde::Serialize>(value: &T) -> String {
-   serde_json::to_string(value).unwrap_or_else(|e| json_err(e))
+   serde_json::to_string(value).unwrap_or_else(json_err)
}
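The `ok_json` change is point-free style: `unwrap_or_else` accepts any `FnOnce(E) -> T`, so a function with the matching signature can be passed by name instead of wrapped in the redundant closure `|e| json_err(e)`. A standalone sketch of the same idiom using only std types (no serde here; `describe_err` and `parse_or_fallback` are illustrative names):

```rust
// A fallback function whose signature matches FnOnce(E) -> T for
// Result<i64, ParseIntError>.
fn describe_err(e: std::num::ParseIntError) -> i64 {
    eprintln!("parse failed: {e}");
    -1
}

fn parse_or_fallback(s: &str) -> i64 {
    // The function item is passed directly; behavior is identical to
    // `.unwrap_or_else(|e| describe_err(e))`, minus the wrapper closure.
    s.parse::<i64>().unwrap_or_else(describe_err)
}

fn main() {
    assert_eq!(parse_or_fallback("42"), 42);
    assert_eq!(parse_or_fallback("nope"), -1);
    println!("ok");
}
```

Clippy flags the closure form as `redundant_closure`, which is likely what prompted this cleanup.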
@@ -87,20 +87,19 @@ async fn main() -> anyhow::Result<()> {

    #[cfg(feature = "local-files")]
    if let Some(dir) = std::env::var("LOCAL_FILES_DIR").ok().map(std::path::PathBuf::from) {
-       if let k_core::db::DatabasePool::Sqlite(ref sqlite_pool) = db_pool {
-           let base_url = std::env::var("BASE_URL")
-               .unwrap_or_else(|_| "http://localhost:3000".to_string());
-           let lf_cfg = infra::LocalFilesConfig {
-               root_dir: dir,
-               base_url,
-               transcode_dir: None,
-               cleanup_ttl_hours: 24,
-           };
-           let idx = Arc::new(infra::LocalIndex::new(&lf_cfg, sqlite_pool.clone()).await);
-           let scan_idx = Arc::clone(&idx);
-           tokio::spawn(async move { scan_idx.rescan().await; });
-           registry.register("local", Arc::new(infra::LocalFilesProvider::new(idx, lf_cfg, None)));
-       }
+       let k_core::db::DatabasePool::Sqlite(ref sqlite_pool) = db_pool;
+       let base_url = std::env::var("BASE_URL")
+           .unwrap_or_else(|_| "http://localhost:3000".to_string());
+       let lf_cfg = infra::LocalFilesConfig {
+           root_dir: dir,
+           base_url,
+           transcode_dir: None,
+           cleanup_ttl_hours: 24,
+       };
+       let idx = Arc::new(infra::LocalIndex::new(&lf_cfg, sqlite_pool.clone()).await);
+       let scan_idx = Arc::clone(&idx);
+       tokio::spawn(async move { scan_idx.rescan().await; });
+       registry.register("local", Arc::new(infra::LocalFilesProvider::new(idx, lf_cfg, None)));
    }

    if registry.is_empty() {
@@ -59,14 +59,16 @@ pub struct DeleteChannelParams {
pub struct SetScheduleConfigParams {
    /// Channel UUID
    pub channel_id: String,
-   /// JSON array of ProgrammingBlock objects
-   pub blocks_json: String,
+   /// JSON object of the full ScheduleConfig shape: {"monday": [...], "tuesday": [...], ...}
+   pub day_blocks_json: String,
}

#[derive(Debug, Deserialize, JsonSchema)]
pub struct AddBlockParams {
    /// Channel UUID
    pub channel_id: String,
+   /// Day of week: "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"
+   pub day: String,
    /// ProgrammingBlock serialized as JSON
    pub block_json: String,
}
@@ -163,43 +165,44 @@ impl KTvMcpServer {
    }

    #[tool(
-       description = "Replace a channel's entire schedule config. blocks_json is a JSON array of ProgrammingBlock objects."
+       description = "Replace a channel's entire schedule config. day_blocks_json is a JSON object of the ScheduleConfig shape: {\"monday\": [...], ...}"
    )]
    async fn set_schedule_config(&self, #[tool(aggr)] p: SetScheduleConfigParams) -> String {
        let channel_id = match parse_uuid(&p.channel_id) {
            Ok(id) => id,
            Err(e) => return e,
        };
-       let blocks: Vec<ProgrammingBlock> = match serde_json::from_str(&p.blocks_json) {
-           Ok(b) => b,
+       let config: ScheduleConfig = match serde_json::from_str(&p.day_blocks_json) {
+           Ok(c) => c,
            Err(e) => {
-               return serde_json::json!({"error": format!("invalid blocks_json: {e}")})
+               return serde_json::json!({"error": format!("invalid day_blocks_json: {e}")})
                    .to_string()
            }
        };
-       channels::set_schedule_config(
-           &self.channel_service,
-           channel_id,
-           ScheduleConfig { blocks },
-       )
-       .await
+       channels::set_schedule_config(&self.channel_service, channel_id, config).await
    }

    #[tool(
-       description = "Append a ProgrammingBlock to a channel's schedule. block_json is a serialized ProgrammingBlock."
+       description = "Append a ProgrammingBlock to a channel's schedule for a specific day. day: monday|tuesday|wednesday|thursday|friday|saturday|sunday. block_json is a serialized ProgrammingBlock."
    )]
    async fn add_programming_block(&self, #[tool(aggr)] p: AddBlockParams) -> String {
        let channel_id = match parse_uuid(&p.channel_id) {
            Ok(id) => id,
            Err(e) => return e,
        };
+       let day: domain::Weekday = match serde_json::from_str(&format!("\"{}\"", p.day)) {
+           Ok(d) => d,
+           Err(e) => {
+               return serde_json::json!({"error": format!("invalid day: {e}")}).to_string()
+           }
+       };
        let block: ProgrammingBlock = match serde_json::from_str(&p.block_json) {
            Ok(b) => b,
            Err(e) => {
                return serde_json::json!({"error": format!("invalid block_json: {e}")}).to_string()
            }
        };
-       channels::add_programming_block(&self.channel_service, channel_id, block).await
+       channels::add_programming_block(&self.channel_service, channel_id, day, block).await
    }

    #[tool(description = "Remove a programming block from a channel's schedule by block UUID")]
@@ -95,13 +95,17 @@ pub async fn set_schedule_config(
pub async fn add_programming_block(
    svc: &Arc<ChannelService>,
    channel_id: Uuid,
+   day: domain::Weekday,
    block: domain::ProgrammingBlock,
) -> String {
    let mut channel: Channel = match svc.find_by_id(channel_id).await {
        Ok(c) => c,
        Err(e) => return domain_err(e),
    };
-   channel.schedule_config.blocks.push(block);
+   channel.schedule_config.day_blocks
+       .entry(day)
+       .or_default()
+       .push(block);
    channel.updated_at = chrono::Utc::now();
    match svc.update(channel).await {
        Ok(c) => ok_json(&c),
@@ -118,7 +122,9 @@ pub async fn remove_programming_block(
        Ok(c) => c,
        Err(e) => return domain_err(e),
    };
-   channel.schedule_config.blocks.retain(|b| b.id != block_id);
+   for blocks in channel.schedule_config.day_blocks.values_mut() {
+       blocks.retain(|b| b.id != block_id);
+   }
    channel.updated_at = chrono::Utc::now();
    match svc.update(channel).await {
        Ok(c) => ok_json(&c),
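Moving from a flat `blocks` Vec to a per-day `day_blocks` map means appending must create the day's Vec on first use, which `HashMap::entry(..).or_default()` does in one chain, while removal by id has to sweep every day's Vec since the block could live under any of them. A self-contained sketch of both operations (simplified `Weekday` and `Block` types here are illustrative, not the repo's domain types):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq, Eq, Hash, Clone, Copy)]
enum Weekday { Monday, Tuesday }

#[derive(Debug, Clone, PartialEq)]
struct Block { id: u32, title: &'static str }

// Append: create the Vec for `day` if absent, then push.
fn add_block(day_blocks: &mut HashMap<Weekday, Vec<Block>>, day: Weekday, block: Block) {
    day_blocks.entry(day).or_default().push(block);
}

// Remove by id: the block could live under any day, so sweep all of them.
fn remove_block(day_blocks: &mut HashMap<Weekday, Vec<Block>>, id: u32) {
    for blocks in day_blocks.values_mut() {
        blocks.retain(|b| b.id != id);
    }
}

fn main() {
    let mut day_blocks = HashMap::new();
    add_block(&mut day_blocks, Weekday::Monday, Block { id: 1, title: "news" });
    add_block(&mut day_blocks, Weekday::Monday, Block { id: 2, title: "movie" });
    add_block(&mut day_blocks, Weekday::Tuesday, Block { id: 1, title: "news" });
    assert_eq!(day_blocks[&Weekday::Monday].len(), 2);

    // Removing id 1 clears it from every day it appears under.
    remove_block(&mut day_blocks, 1);
    assert_eq!(day_blocks[&Weekday::Monday], vec![Block { id: 2, title: "movie" }]);
    assert!(day_blocks[&Weekday::Tuesday].is_empty());
    println!("ok");
}
```

`or_default()` requires the value type to implement `Default`, which `Vec` does; the alternative `or_insert_with(Vec::new)` is equivalent.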
@@ -0,0 +1,9 @@
|
||||
CREATE TABLE IF NOT EXISTS activity_log (
|
||||
id TEXT PRIMARY KEY NOT NULL,
|
||||
timestamp TEXT NOT NULL,
|
||||
event_type TEXT NOT NULL,
|
||||
detail TEXT NOT NULL,
|
||||
channel_id TEXT
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_activity_log_timestamp ON activity_log(timestamp DESC);
|
||||
@@ -0,0 +1,8 @@
|
||||
ALTER TABLE users ADD COLUMN is_admin INTEGER NOT NULL DEFAULT 0;
|
||||
|
||||
CREATE TABLE provider_configs (
|
||||
provider_type TEXT PRIMARY KEY,
|
||||
config_json TEXT NOT NULL,
|
||||
enabled INTEGER NOT NULL DEFAULT 1,
|
||||
updated_at TEXT NOT NULL
|
||||
);
|
||||
@@ -0,0 +1,12 @@
|
||||
CREATE TABLE channel_config_snapshots (
|
||||
id TEXT PRIMARY KEY NOT NULL,
|
||||
channel_id TEXT NOT NULL REFERENCES channels(id) ON DELETE CASCADE,
|
||||
config_json TEXT NOT NULL,
|
||||
version_num INTEGER NOT NULL,
|
||||
label TEXT,
|
||||
created_at TEXT NOT NULL,
|
||||
UNIQUE (channel_id, version_num)
|
||||
);
|
||||
|
||||
CREATE INDEX idx_config_snapshots_channel
|
||||
ON channel_config_snapshots(channel_id, version_num DESC);
|
||||
@@ -0,0 +1,17 @@
|
||||
-- Recreate provider_configs with per-instance id as PK
|
||||
CREATE TABLE provider_configs_new (
|
||||
id TEXT PRIMARY KEY,
|
||||
provider_type TEXT NOT NULL,
|
||||
config_json TEXT NOT NULL,
|
||||
enabled INTEGER NOT NULL DEFAULT 1,
|
||||
updated_at TEXT NOT NULL
|
||||
);
|
||||
INSERT INTO provider_configs_new (id, provider_type, config_json, enabled, updated_at)
|
||||
SELECT provider_type, provider_type, config_json, enabled, updated_at
|
||||
FROM provider_configs;
|
||||
DROP TABLE provider_configs;
|
||||
ALTER TABLE provider_configs_new RENAME TO provider_configs;
|
||||
|
||||
-- Scope local_files_index entries by provider instance
|
||||
ALTER TABLE local_files_index ADD COLUMN provider_id TEXT NOT NULL DEFAULT 'local';
|
||||
CREATE INDEX IF NOT EXISTS idx_local_files_provider ON local_files_index(provider_id);
|
||||
@@ -0,0 +1,37 @@
CREATE TABLE IF NOT EXISTS library_items (
    id TEXT PRIMARY KEY,
    provider_id TEXT NOT NULL,
    external_id TEXT NOT NULL,
    title TEXT NOT NULL,
    content_type TEXT NOT NULL,
    duration_secs INTEGER NOT NULL DEFAULT 0,
    series_name TEXT,
    season_number INTEGER,
    episode_number INTEGER,
    year INTEGER,
    genres TEXT NOT NULL DEFAULT '[]',
    tags TEXT NOT NULL DEFAULT '[]',
    collection_id TEXT,
    collection_name TEXT,
    collection_type TEXT,
    thumbnail_url TEXT,
    synced_at TEXT NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_library_items_provider ON library_items(provider_id);
CREATE INDEX IF NOT EXISTS idx_library_items_content_type ON library_items(content_type);
CREATE INDEX IF NOT EXISTS idx_library_items_series ON library_items(series_name);
CREATE UNIQUE INDEX IF NOT EXISTS idx_library_items_provider_external ON library_items(provider_id, external_id);

CREATE TABLE IF NOT EXISTS library_sync_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    provider_id TEXT NOT NULL,
    started_at TEXT NOT NULL,
    finished_at TEXT,
    items_found INTEGER NOT NULL DEFAULT 0,
    status TEXT NOT NULL DEFAULT 'running',
    error_msg TEXT
);

CREATE INDEX IF NOT EXISTS idx_library_sync_log_provider ON library_sync_log(provider_id);
CREATE INDEX IF NOT EXISTS idx_library_sync_log_provider_started ON library_sync_log(provider_id, started_at DESC);
@@ -0,0 +1,6 @@
CREATE TABLE IF NOT EXISTS app_settings (
    key TEXT PRIMARY KEY,
    value TEXT NOT NULL
);

INSERT OR IGNORE INTO app_settings(key, value) VALUES ('library_sync_interval_hours', '6');
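The unique index on `(provider_id, external_id)` in the library migration is what makes repeated syncs idempotent: a sync can insert every item it finds and let the conflict target turn re-seen items into updates. A minimal sketch of that upsert, not k-tv backend code, using a trimmed-down `library_items` schema and SQLite's `ON CONFLICT` clause (requires SQLite ≥ 3.24):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE library_items (
    id TEXT PRIMARY KEY,
    provider_id TEXT NOT NULL,
    external_id TEXT NOT NULL,
    title TEXT NOT NULL,
    content_type TEXT NOT NULL,
    synced_at TEXT NOT NULL
);
CREATE UNIQUE INDEX idx_library_items_provider_external
    ON library_items(provider_id, external_id);
""")

def upsert(item_id, provider_id, external_id, title, synced_at):
    # Insert a freshly synced item; if (provider_id, external_id) already
    # exists, refresh its mutable fields instead of failing.
    conn.execute(
        """
        INSERT INTO library_items (id, provider_id, external_id, title, content_type, synced_at)
        VALUES (?, ?, ?, ?, 'movie', ?)
        ON CONFLICT(provider_id, external_id)
        DO UPDATE SET title = excluded.title, synced_at = excluded.synced_at
        """,
        (item_id, provider_id, external_id, title, synced_at),
    )

upsert("a1", "jellyfin-main", "ext-1", "Old Title", "2024-01-01")
upsert("a2", "jellyfin-main", "ext-1", "New Title", "2024-01-02")  # same item, resynced

rows = conn.execute("SELECT id, title FROM library_items").fetchall()
print(rows)  # [('a1', 'New Title')]
```

The second sync pass updates the existing row (keeping its original `id`) rather than inserting a duplicate.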
@@ -1,36 +1,41 @@
-This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
+# K-TV Frontend

-## Getting Started
+Next.js 16 / React 19 EPG viewer and channel manager for [k-tv](../README.md) — the self-hosted linear TV orchestration backend.

-First, run the development server:
+## Quick Start

 ```bash
-npm run dev
-# or
-yarn dev
-# or
+cp .env.local.example .env.local
+# Edit .env.local — set NEXT_PUBLIC_API_URL to your backend URL
 pnpm dev
-# or
-bun dev
 ```

-Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
+Open `http://localhost:3001` in your browser.

-You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
+## Environment Variables

-This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
+| Variable | Where | Description |
+|----------|-------|-------------|
+| `NEXT_PUBLIC_API_URL` | build arg + runtime | Browser-side API base URL. **Baked in at build time** — must point to the public backend URL. Default: `http://localhost:4000/api/v1` |
+| `API_URL` | runtime only | Server-side API URL for Next.js API routes (e.g. stream redirect resolver). Falls back to `NEXT_PUBLIC_API_URL` if not set. Use this to set a private backend address in Docker. |

-## Learn More
+## Routes

-To learn more about Next.js, take a look at the following resources:
+| Path | Auth | Description |
+|------|------|-------------|
+| `/tv` | public | TV player — EPG grid, channel switching, HLS/direct video stream |
+| `/dashboard` | required | Channel management — create, edit, configure schedule blocks |
+| `/admin` | required | Live server log stream + recent activity log |
+| `/login` | — | Email/password login |
+| `/register` | — | New account registration |

-- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
-- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
+## Architecture

-You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
+- **`lib/api.ts`** — typed fetch client wrapping all backend endpoints (`api.auth.*`, `api.channels.*`, `api.schedule.*`, `api.library.*`)
+- **`hooks/`** — TanStack Query v5 hooks for all data fetching and mutations; components never fetch directly
+- **`context/auth-context.tsx`** — JWT stored in localStorage, `isLoaded` flag prevents flash redirects
+- Components are props-only; all business logic lives in hooks

-## Deploy on Vercel
+## Docker

-The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
-
-Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
+See the [root README](../README.md) for build and deploy instructions.
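The README's two environment variables split cleanly between what the browser sees and what the Next.js server uses. A hypothetical `.env.local` for a Docker deployment might look like this — the `backend` hostname is an illustrative Docker-network service name, not something the repo defines:

```bash
# Public URL the browser calls; baked into the client bundle at build time.
NEXT_PUBLIC_API_URL=http://localhost:4000/api/v1

# Private server-side URL for Next.js API routes (e.g. the stream redirect
# resolver); falls back to NEXT_PUBLIC_API_URL when unset.
API_URL=http://backend:4000/api/v1
```

Because `NEXT_PUBLIC_API_URL` is inlined at build time, changing it requires rebuilding the frontend image, while `API_URL` can vary per container at runtime.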
@@ -8,12 +8,13 @@ import { useConfig } from "@/hooks/use-channels";
 export default function LoginPage() {
   const [email, setEmail] = useState("");
   const [password, setPassword] = useState("");
+  const [rememberMe, setRememberMe] = useState(false);
   const { mutate: login, isPending, error } = useLogin();
   const { data: config } = useConfig();

   const handleSubmit = (e: React.FormEvent) => {
     e.preventDefault();
-    login({ email, password });
+    login({ email, password, rememberMe });
   };

   return (
@@ -54,6 +55,23 @@ export default function LoginPage() {
           />
         </div>

+        <div className="space-y-1">
+          <label className="flex cursor-pointer items-center gap-2">
+            <input
+              type="checkbox"
+              checked={rememberMe}
+              onChange={(e) => setRememberMe(e.target.checked)}
+              className="h-3.5 w-3.5 rounded border-zinc-600 bg-zinc-900 accent-white"
+            />
+            <span className="text-xs text-zinc-400">Remember me</span>
+          </label>
+          {rememberMe && (
+            <p className="pl-5 text-xs text-amber-500/80">
+              A refresh token will be stored locally — don't share it.
+            </p>
+          )}
+        </div>
+
         {error && <p className="text-xs text-red-400">{error.message}</p>}

         <button
||||
@@ -0,0 +1,73 @@
"use client";

import type { ActivityEvent } from "@/lib/types";

const eventColors: Record<string, string> = {
  channel_created: "bg-green-900/40 text-green-400",
  channel_updated: "bg-blue-900/40 text-blue-400",
  channel_deleted: "bg-red-900/40 text-red-400",
  schedule_generated: "bg-violet-900/40 text-violet-400",
  user_login: "bg-zinc-800 text-zinc-400",
};

function fmtTs(ts: string) {
  try {
    const d = new Date(ts);
    return d.toLocaleTimeString(undefined, { hour12: false });
  } catch {
    return ts;
  }
}

interface ActivityLogPanelProps {
  events: ActivityEvent[];
  isLoading: boolean;
}

export function ActivityLogPanel({ events, isLoading }: ActivityLogPanelProps) {
  return (
    <div className="flex h-full flex-col">
      <div className="border-b border-zinc-800 px-4 py-2.5">
        <span className="text-xs font-semibold uppercase tracking-widest text-violet-400">
          Activity
        </span>
      </div>

      <div className="flex-1 overflow-y-auto p-3">
        {isLoading && events.length === 0 ? (
          <p className="mt-8 text-center text-xs text-zinc-600">Loading…</p>
        ) : events.length === 0 ? (
          <p className="mt-8 text-center text-xs text-zinc-600">No activity yet.</p>
        ) : (
          <div className="flex flex-col gap-2">
            {events.map((event) => (
              <div
                key={event.id}
                className="rounded-md border border-zinc-800 bg-zinc-900 p-3"
              >
                <div className="mb-1 flex items-center gap-2">
                  <span
                    className={`rounded px-1.5 py-0.5 text-[10px] font-medium ${
                      eventColors[event.event_type] ?? "bg-zinc-800 text-zinc-400"
                    }`}
                  >
                    {event.event_type.replace(/_/g, " ")}
                  </span>
                  <span className="ml-auto font-mono text-[10px] text-zinc-600">
                    {fmtTs(event.timestamp)}
                  </span>
                </div>
                <p className="text-xs text-zinc-300">{event.detail}</p>
                {event.channel_id && (
                  <p className="mt-0.5 font-mono text-[10px] text-zinc-600">
                    ch: {event.channel_id.slice(0, 8)}…
                  </p>
                )}
              </div>
            ))}
          </div>
        )}
      </div>
    </div>
  );
}
@@ -0,0 +1,438 @@
"use client";

import { useState } from "react";
import {
  useProviderConfigs,
  useCreateProvider,
  useUpdateProvider,
  useDeleteProvider,
  useTestProvider,
} from "@/hooks/use-admin-providers";
import { useConfig } from "@/hooks/use-config";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";
import { Label } from "@/components/ui/label";
import { Switch } from "@/components/ui/switch";
import { Badge } from "@/components/ui/badge";
import {
  Dialog,
  DialogContent,
  DialogHeader,
  DialogTitle,
} from "@/components/ui/dialog";
import {
  Select,
  SelectContent,
  SelectItem,
  SelectTrigger,
  SelectValue,
} from "@/components/ui/select";
import { CheckCircle, XCircle, Loader2, Plus, Trash2 } from "lucide-react";
import { ApiRequestError } from "@/lib/api";
import type { ProviderConfig } from "@/lib/types";

const PROVIDER_FIELDS: Record<
  string,
  Array<{ key: string; label: string; type?: string; required?: boolean }>
> = {
  jellyfin: [
    { key: "base_url", label: "Base URL", required: true },
    { key: "api_key", label: "API Key", type: "password", required: true },
    { key: "user_id", label: "User ID", required: true },
  ],
  local_files: [
    { key: "files_dir", label: "Files Directory", required: true },
    { key: "transcode_dir", label: "Transcode Directory" },
    { key: "cleanup_ttl_hours", label: "Cleanup TTL Hours" },
  ],
};

function isValidInstanceId(id: string): boolean {
  return id.length >= 1 && id.length <= 40 && /^[a-zA-Z0-9-]+$/.test(id);
}

// ---------------------------------------------------------------------------
// Existing instance card
// ---------------------------------------------------------------------------

interface ProviderCardProps {
  config: ProviderConfig;
  existingIds: string[];
}

function ProviderCard({ config }: ProviderCardProps) {
  const fields = PROVIDER_FIELDS[config.provider_type] ?? [];
  const [formValues, setFormValues] = useState<Record<string, string>>(
    () => config.config_json ?? {},
  );
  const [enabled, setEnabled] = useState(config.enabled);
  const [conflictError, setConflictError] = useState(false);
  const [testResult, setTestResult] = useState<{ ok: boolean; message: string } | null>(null);

  const updateProvider = useUpdateProvider();
  const deleteProvider = useDeleteProvider();
  const testProvider = useTestProvider();

  const handleSave = async () => {
    setConflictError(false);
    try {
      await updateProvider.mutateAsync({
        id: config.id,
        payload: { config_json: formValues, enabled },
      });
    } catch (e: unknown) {
      if (e instanceof ApiRequestError && e.status === 409) {
        setConflictError(true);
      }
    }
  };

  const handleTest = async () => {
    setTestResult(null);
    const result = await testProvider.mutateAsync({
      provider_type: config.provider_type,
      config_json: formValues,
    });
    setTestResult(result);
  };

  const handleDelete = async () => {
    if (!confirm(`Delete provider instance "${config.id}"?`)) return;
    await deleteProvider.mutateAsync(config.id);
  };

  return (
    <Card className="border-zinc-800 bg-zinc-900">
      <CardHeader className="flex flex-row items-center justify-between pb-3">
        <div className="flex items-center gap-2">
          <Badge variant="outline" className="font-mono text-xs text-zinc-300 border-zinc-600">
            {config.id}
          </Badge>
          <span className="text-xs text-zinc-500 capitalize">
            {config.provider_type.replace("_", " ")}
          </span>
        </div>
        <div className="flex items-center gap-2">
          <span className="text-xs text-zinc-400">Enabled</span>
          <Switch checked={enabled} onCheckedChange={setEnabled} />
          <Button
            variant="ghost"
            size="icon"
            onClick={handleDelete}
            disabled={deleteProvider.isPending}
            className="h-7 w-7 text-zinc-500 hover:text-red-400"
          >
            {deleteProvider.isPending ? (
              <Loader2 className="h-3.5 w-3.5 animate-spin" />
            ) : (
              <Trash2 className="h-3.5 w-3.5" />
            )}
          </Button>
        </div>
      </CardHeader>
      <CardContent className="space-y-3">
        {conflictError && (
          <div className="rounded border border-yellow-600/40 bg-yellow-950/30 px-3 py-2 text-xs text-yellow-400">
            UI config disabled — set <code>CONFIG_SOURCE=db</code> on the server
          </div>
        )}
        {fields.map((field) => (
          <div key={field.key} className="space-y-1">
            <Label className="text-xs text-zinc-400">
              {field.label}
              {field.required && <span className="ml-1 text-red-400">*</span>}
            </Label>
            <Input
              type={field.type ?? "text"}
              value={formValues[field.key] ?? ""}
              onChange={(e) =>
                setFormValues((prev) => ({ ...prev, [field.key]: e.target.value }))
              }
              placeholder={
                field.type === "password" ? "••••••••" : `Enter ${field.label.toLowerCase()}`
              }
              className="h-8 border-zinc-700 bg-zinc-800 text-xs text-zinc-100"
            />
          </div>
        ))}
        {testResult && (
          <div
            className={`flex items-center gap-2 rounded px-3 py-2 text-xs ${
              testResult.ok
                ? "bg-green-950/30 text-green-400"
                : "bg-red-950/30 text-red-400"
            }`}
          >
            {testResult.ok ? (
              <CheckCircle className="h-3.5 w-3.5" />
            ) : (
              <XCircle className="h-3.5 w-3.5" />
            )}
            {testResult.message}
          </div>
        )}
        <div className="flex gap-2 pt-1">
          <Button
            variant="outline"
            size="sm"
            onClick={handleTest}
            disabled={testProvider.isPending}
            className="border-zinc-700 text-xs"
          >
            {testProvider.isPending && <Loader2 className="mr-1 h-3 w-3 animate-spin" />}
            Test Connection
          </Button>
          <Button
            size="sm"
            onClick={handleSave}
            disabled={updateProvider.isPending}
            className="text-xs"
          >
            {updateProvider.isPending && <Loader2 className="mr-1 h-3 w-3 animate-spin" />}
            Save
          </Button>
        </div>
      </CardContent>
    </Card>
  );
}

// ---------------------------------------------------------------------------
// Add Instance dialog
// ---------------------------------------------------------------------------

interface AddInstanceDialogProps {
  open: boolean;
  onClose: () => void;
  availableTypes: string[];
  existingIds: string[];
}

function AddInstanceDialog({ open, onClose, availableTypes, existingIds }: AddInstanceDialogProps) {
  const [instanceId, setInstanceId] = useState("");
  const [providerType, setProviderType] = useState(availableTypes[0] ?? "");
  const [formValues, setFormValues] = useState<Record<string, string>>({});
  const [idError, setIdError] = useState<string | null>(null);
  const [apiError, setApiError] = useState<string | null>(null);

  const createProvider = useCreateProvider();
  const testProvider = useTestProvider();
  const [testResult, setTestResult] = useState<{ ok: boolean; message: string } | null>(null);

  const fields = PROVIDER_FIELDS[providerType] ?? [];

  const handleTypeChange = (t: string) => {
    setProviderType(t);
    setFormValues({});
    setTestResult(null);
  };

  const validateId = (id: string): string | null => {
    if (!id) return "ID is required";
    if (!isValidInstanceId(id)) return "Only alphanumeric characters and hyphens, 1–40 chars";
    if (existingIds.includes(id)) return "An instance with this ID already exists";
    return null;
  };

  const handleCreate = async () => {
    const err = validateId(instanceId);
    if (err) { setIdError(err); return; }
    setIdError(null);
    setApiError(null);
    try {
      await createProvider.mutateAsync({
        id: instanceId,
        provider_type: providerType,
        config_json: formValues,
        enabled: true,
      });
      onClose();
      setInstanceId("");
      setFormValues({});
      setTestResult(null);
    } catch (e: unknown) {
      if (e instanceof ApiRequestError && e.status === 409) {
        setIdError("An instance with this ID already exists");
      } else if (e instanceof ApiRequestError) {
        setApiError(e.message);
      }
    }
  };

  const handleTest = async () => {
    setTestResult(null);
    const result = await testProvider.mutateAsync({
      provider_type: providerType,
      config_json: formValues,
    });
    setTestResult(result);
  };

  return (
    <Dialog open={open} onOpenChange={(v) => { if (!v) onClose(); }}>
      <DialogContent className="border-zinc-800 bg-zinc-950 text-zinc-100 max-w-md">
        <DialogHeader>
          <DialogTitle className="text-sm font-semibold">Add Provider Instance</DialogTitle>
        </DialogHeader>
        <div className="space-y-4 pt-2">
          <div className="space-y-1">
            <Label className="text-xs text-zinc-400">
              Instance ID <span className="text-red-400">*</span>
            </Label>
            <Input
              value={instanceId}
              onChange={(e) => {
                setInstanceId(e.target.value);
                setIdError(null);
              }}
              placeholder="e.g. jellyfin-main"
              className="h-8 border-zinc-700 bg-zinc-800 text-xs text-zinc-100 font-mono"
            />
            {idError && <p className="text-xs text-red-400">{idError}</p>}
            <p className="text-xs text-zinc-600">Alphanumeric + hyphens, 1–40 chars</p>
          </div>

          <div className="space-y-1">
            <Label className="text-xs text-zinc-400">
              Provider Type <span className="text-red-400">*</span>
            </Label>
            <Select value={providerType} onValueChange={handleTypeChange}>
              <SelectTrigger className="h-8 border-zinc-700 bg-zinc-800 text-xs text-zinc-100">
                <SelectValue />
              </SelectTrigger>
              <SelectContent className="border-zinc-700 bg-zinc-900">
                {availableTypes.map((t) => (
                  <SelectItem key={t} value={t} className="text-xs capitalize text-zinc-200">
                    {t.replace("_", " ")}
                  </SelectItem>
                ))}
              </SelectContent>
            </Select>
          </div>

          {fields.map((field) => (
            <div key={field.key} className="space-y-1">
              <Label className="text-xs text-zinc-400">
                {field.label}
                {field.required && <span className="ml-1 text-red-400">*</span>}
              </Label>
              <Input
                type={field.type ?? "text"}
                value={formValues[field.key] ?? ""}
                onChange={(e) =>
                  setFormValues((prev) => ({ ...prev, [field.key]: e.target.value }))
                }
                placeholder={
                  field.type === "password" ? "••••••••" : `Enter ${field.label.toLowerCase()}`
                }
                className="h-8 border-zinc-700 bg-zinc-800 text-xs text-zinc-100"
              />
            </div>
          ))}

          {testResult && (
            <div
              className={`flex items-center gap-2 rounded px-3 py-2 text-xs ${
                testResult.ok
                  ? "bg-green-950/30 text-green-400"
                  : "bg-red-950/30 text-red-400"
              }`}
            >
              {testResult.ok ? (
                <CheckCircle className="h-3.5 w-3.5" />
              ) : (
                <XCircle className="h-3.5 w-3.5" />
              )}
              {testResult.message}
            </div>
          )}

          {apiError && (
            <p className="text-xs text-red-400">{apiError}</p>
          )}

          <div className="flex gap-2 pt-1">
            <Button
              variant="outline"
              size="sm"
              onClick={handleTest}
              disabled={testProvider.isPending}
              className="border-zinc-700 text-xs"
            >
              {testProvider.isPending && <Loader2 className="mr-1 h-3 w-3 animate-spin" />}
              Test
            </Button>
            <Button
              size="sm"
              onClick={handleCreate}
              disabled={createProvider.isPending}
              className="text-xs"
            >
              {createProvider.isPending && <Loader2 className="mr-1 h-3 w-3 animate-spin" />}
              Create
            </Button>
          </div>
        </div>
      </DialogContent>
    </Dialog>
  );
}

// ---------------------------------------------------------------------------
// Panel
// ---------------------------------------------------------------------------

export function ProviderSettingsPanel() {
  const { data: config } = useConfig();
  const { data: providerConfigs = [] } = useProviderConfigs();
  const [addOpen, setAddOpen] = useState(false);

  const availableTypes = config?.available_provider_types ?? [];
  const existingIds = providerConfigs.map((c) => c.id);

  return (
    <div className="space-y-4 p-6">
      <div className="flex items-center justify-between">
        <div>
          <h2 className="text-sm font-semibold text-zinc-100">Provider Instances</h2>
          <p className="mt-0.5 text-xs text-zinc-500">
            Manage media provider instances. Requires <code>CONFIG_SOURCE=db</code> on the server.
          </p>
        </div>
        {availableTypes.length > 0 && (
          <Button
            size="sm"
            variant="outline"
            onClick={() => setAddOpen(true)}
            className="border-zinc-700 text-xs gap-1"
          >
            <Plus className="h-3.5 w-3.5" />
            Add Instance
          </Button>
        )}
      </div>

      {availableTypes.length === 0 ? (
        <p className="text-xs text-zinc-500">No providers available in this build.</p>
      ) : providerConfigs.length === 0 ? (
        <p className="text-xs text-zinc-500">
          No provider instances configured. Click "Add Instance" to get started.
        </p>
      ) : (
        <div className="space-y-4">
          {providerConfigs.map((c) => (
            <ProviderCard key={c.id} config={c} existingIds={existingIds} />
          ))}
        </div>
      )}

      <AddInstanceDialog
        open={addOpen}
        onClose={() => setAddOpen(false)}
        availableTypes={availableTypes}
        existingIds={existingIds}
      />
    </div>
  );
}
129
k-tv-frontend/app/(main)/admin/components/server-logs-panel.tsx
Normal file
@@ -0,0 +1,129 @@
"use client";

import { useEffect, useRef, useState } from "react";
import type { LogLine } from "@/lib/types";

const LEVELS = ["DEBUG", "INFO", "WARN", "ERROR"] as const;

const levelColor: Record<string, string> = {
  DEBUG: "text-zinc-500",
  INFO: "text-zinc-300",
  WARN: "text-yellow-400",
  ERROR: "text-red-400",
};

interface ServerLogsPanelProps {
  lines: LogLine[];
  connected: boolean;
  onClear: () => void;
}

export function ServerLogsPanel({ lines, connected, onClear }: ServerLogsPanelProps) {
  const scrollRef = useRef<HTMLDivElement>(null);
  const [autoScroll, setAutoScroll] = useState(true);
  const [levelFilter, setLevelFilter] = useState<Set<string>>(
    new Set(["DEBUG", "INFO", "WARN", "ERROR"]),
  );

  const filtered = lines.filter((l) => levelFilter.has(l.level.toUpperCase()));

  useEffect(() => {
    if (!autoScroll) return;
    scrollRef.current?.scrollTo({ top: scrollRef.current.scrollHeight });
  }, [filtered.length, autoScroll]);

  const toggleLevel = (level: string) => {
    setLevelFilter((prev) => {
      const next = new Set(prev);
      if (next.has(level)) next.delete(level);
      else next.add(level);
      return next;
    });
  };

  const handleScroll = () => {
    const el = scrollRef.current;
    if (!el) return;
    const atBottom = el.scrollHeight - el.scrollTop - el.clientHeight < 40;
    setAutoScroll(atBottom);
  };

  const fmtTime = (ts: string) => {
    if (!ts) return "";
    try {
      return new Date(ts).toLocaleTimeString(undefined, { hour12: false });
    } catch {
      return ts;
    }
  };

  return (
    <div className="flex h-full flex-col">
      {/* Header */}
      <div className="flex items-center gap-2 border-b border-zinc-800 px-4 py-2.5">
        <span className="text-xs font-semibold uppercase tracking-widest text-violet-400">
          Server Logs
        </span>
        <span
          className={`rounded px-1.5 py-0.5 text-[10px] font-medium ${
            connected
              ? "bg-green-900/40 text-green-400"
              : "bg-zinc-800 text-zinc-500"
          }`}
        >
          {connected ? "● live" : "○ disconnected"}
        </span>

        <div className="ml-auto flex items-center gap-1">
          {LEVELS.map((lvl) => (
            <button
              key={lvl}
              onClick={() => toggleLevel(lvl)}
              className={`rounded px-2 py-0.5 text-[10px] font-medium transition-opacity ${
                levelFilter.has(lvl) ? "opacity-100" : "opacity-30"
              } ${levelColor[lvl] ?? "text-zinc-400"}`}
            >
              {lvl}
            </button>
          ))}
          <button
            onClick={() => {
              onClear();
              setAutoScroll(true);
            }}
            className="ml-2 rounded px-2 py-0.5 text-[10px] text-zinc-500 hover:text-zinc-300"
          >
            Clear
          </button>
        </div>
      </div>

      {/* Log lines */}
      <div
        ref={scrollRef}
        onScroll={handleScroll}
        className="flex-1 overflow-y-auto bg-zinc-950 px-4 py-2 font-mono text-[11px] leading-relaxed"
      >
        {filtered.length === 0 ? (
          <p className="mt-8 text-center text-zinc-600">
            {connected ? "Waiting for log events…" : "Connecting to server…"}
          </p>
        ) : (
          filtered.map((line, i) => (
            <div key={i} className="flex gap-3">
              <span className="shrink-0 text-zinc-600">{fmtTime(line.timestamp)}</span>
              <span
                className={`w-10 shrink-0 font-semibold ${levelColor[line.level?.toUpperCase()] ?? "text-zinc-400"}`}
              >
                {line.level?.toUpperCase()}
              </span>
              <span className="shrink-0 text-zinc-500">{line.target}</span>
              <span className="text-zinc-200">{line.message}</span>
            </div>
          ))
        )}
        <div className="text-violet-400">{connected && lines.length > 0 ? "▋" : ""}</div>
      </div>
    </div>
  );
}
27
k-tv-frontend/app/(main)/admin/layout.tsx
Normal file
@@ -0,0 +1,27 @@
"use client";

import { useEffect, type ReactNode } from "react";
import { useRouter } from "next/navigation";
import { useCurrentUser } from "@/hooks/use-auth";
import { useAuthContext } from "@/context/auth-context";

export default function AdminLayout({ children }: { children: ReactNode }) {
  const { token, isLoaded } = useAuthContext();
  const router = useRouter();
  const { data: user, isLoading } = useCurrentUser();

  useEffect(() => {
    if (!isLoaded) return;
    if (!token) {
      router.replace("/login");
      return;
    }
    if (!isLoading && user && !user.is_admin) {
      router.replace("/dashboard");
    }
  }, [isLoaded, token, user, isLoading, router]);

  if (!isLoaded || isLoading || !user?.is_admin) return null;

  return <>{children}</>;
}
75
k-tv-frontend/app/(main)/admin/page.tsx
Normal file
@@ -0,0 +1,75 @@
"use client";

import { useState, useEffect } from "react";
import { useAuthContext } from "@/context/auth-context";
import { useActivityLog, useServerLogs } from "@/hooks/use-admin";
import { ServerLogsPanel } from "./components/server-logs-panel";
import { ActivityLogPanel } from "./components/activity-log-panel";
import { ProviderSettingsPanel } from "./components/provider-settings-panel";
import { Tabs, TabsList, TabsTrigger, TabsContent } from "@/components/ui/tabs";

export default function AdminPage() {
  const { token } = useAuthContext();

  const { lines, connected } = useServerLogs(token);
  const [localLines, setLocalLines] = useState(lines);

  // Sync external lines into local state so Clear can reset without clearing the hook
  useEffect(() => {
    setLocalLines(lines);
  }, [lines]);

  const { data: events = [], isLoading } = useActivityLog(token);

  return (
    <div className="flex flex-1 flex-col overflow-hidden">
      {/* Page header */}
      <div className="flex items-center gap-3 border-b border-zinc-800 px-6 py-4">
        <h1 className="text-base font-semibold text-zinc-100">Admin</h1>
        <span className="text-xs text-zinc-500">System monitoring & logs</span>
      </div>

      <Tabs defaultValue="logs" className="flex flex-1 flex-col overflow-hidden">
        <div className="border-b border-zinc-800 px-6">
          <TabsList className="h-9 bg-transparent p-0 gap-1">
            <TabsTrigger
              value="logs"
              className="rounded-none border-b-2 border-transparent px-3 py-1.5 text-xs data-[state=active]:border-zinc-100 data-[state=active]:bg-transparent data-[state=active]:text-zinc-100"
            >
              Logs
            </TabsTrigger>
            <TabsTrigger
              value="providers"
              className="rounded-none border-b-2 border-transparent px-3 py-1.5 text-xs data-[state=active]:border-zinc-100 data-[state=active]:bg-transparent data-[state=active]:text-zinc-100"
            >
              Providers
            </TabsTrigger>
          </TabsList>
        </div>

        <TabsContent value="logs" className="flex min-h-0 flex-1 overflow-hidden mt-0">
          {/* Two-column layout */}
          <div className="flex min-h-0 flex-1 overflow-hidden">
            {/* Left: server logs */}
            <div className="flex min-w-0 flex-1 flex-col overflow-hidden border-r border-zinc-800">
              <ServerLogsPanel
                lines={localLines}
                connected={connected}
                onClear={() => setLocalLines([])}
              />
            </div>

            {/* Right: activity log */}
            <div className="flex w-80 shrink-0 flex-col overflow-hidden">
              <ActivityLogPanel events={events} isLoading={isLoading} />
            </div>
          </div>
        </TabsContent>

        <TabsContent value="providers" className="flex-1 overflow-auto mt-0">
          <ProviderSettingsPanel />
        </TabsContent>
      </Tabs>
    </div>
  );
}
17 k-tv-frontend/app/(main)/components/admin-nav-link.tsx Normal file
@@ -0,0 +1,17 @@
"use client";

import Link from "next/link";
import { useCurrentUser } from "@/hooks/use-auth";

export function AdminNavLink() {
  const { data: user } = useCurrentUser();
  if (!user?.is_admin) return null;
  return (
    <Link
      href="/admin"
      className="rounded-md px-3 py-1.5 text-sm text-zinc-400 transition-colors hover:bg-zinc-800 hover:text-zinc-100"
    >
      Admin
    </Link>
  );
}
@@ -0,0 +1,57 @@
import type { AccessMode } from "@/lib/types";

interface AccessSettingsEditorProps {
  accessMode: AccessMode;
  accessPassword: string;
  onAccessModeChange: (mode: AccessMode) => void;
  onAccessPasswordChange: (pw: string) => void;
  label?: string;
  passwordLabel?: string;
  passwordHint?: string;
}

export function AccessSettingsEditor({
  accessMode,
  accessPassword,
  onAccessModeChange,
  onAccessPasswordChange,
  label = "Access",
  passwordLabel = "Password",
  passwordHint = "Leave blank to keep existing password",
}: AccessSettingsEditorProps) {
  return (
    <div className="space-y-2">
      <div className="space-y-1.5">
        <label className="block text-xs font-medium text-zinc-400">{label}</label>
        <select
          value={accessMode}
          onChange={(e) => {
            onAccessModeChange(e.target.value as AccessMode);
            onAccessPasswordChange("");
          }}
          className="w-full rounded-md border border-zinc-700 bg-zinc-800 px-3 py-2 text-sm text-zinc-100 focus:border-zinc-500 focus:outline-none"
        >
          <option value="public">Public</option>
          <option value="password_protected">Password protected</option>
          <option value="account_required">Account required</option>
          <option value="owner_only">Owner only</option>
        </select>
      </div>

      {accessMode === "password_protected" && (
        <div className="space-y-1.5">
          <label className="block text-xs font-medium text-zinc-400">
            {passwordLabel}
          </label>
          <input
            type="password"
            placeholder={passwordHint}
            value={accessPassword}
            onChange={(e) => onAccessPasswordChange(e.target.value)}
            className="w-full rounded-md border border-zinc-700 bg-zinc-800 px-3 py-2 text-sm text-zinc-100 placeholder:text-zinc-600 focus:border-zinc-500 focus:outline-none"
          />
        </div>
      )}
    </div>
  );
}
@@ -0,0 +1,342 @@
"use client";

import { useState } from "react";
import { TagInput } from "./tag-input";
import { SeriesPicker } from "./series-picker";
import { FilterPreview } from "./filter-preview";
import { useCollections, useSeries, useGenres } from "@/hooks/use-library";
import type {
  BlockContent,
  ContentType,
  FillStrategy,
  MediaFilter,
  ProviderInfo,
} from "@/lib/types";
import type { FieldErrors } from "@/lib/schemas";

function Field({
  label,
  hint,
  error,
  children,
}: {
  label: string;
  hint?: string;
  error?: string;
  children: React.ReactNode;
}) {
  return (
    <div className="space-y-1.5">
      <label className="block text-xs font-medium text-zinc-400">{label}</label>
      {children}
      {error ? (
        <p className="text-[11px] text-red-400">{error}</p>
      ) : hint ? (
        <p className="text-[11px] text-zinc-600">{hint}</p>
      ) : null}
    </div>
  );
}

function NativeSelect({
  value,
  onChange,
  children,
}: {
  value: string;
  onChange: (v: string) => void;
  children: React.ReactNode;
}) {
  return (
    <select
      value={value}
      onChange={(e) => onChange(e.target.value)}
      className="w-full rounded-md border border-zinc-700 bg-zinc-800 px-3 py-2 text-sm text-zinc-100 focus:border-zinc-500 focus:outline-none"
    >
      {children}
    </select>
  );
}

function NumberInput({
  value,
  onChange,
  min,
  placeholder,
  error,
}: {
  value: number | "";
  onChange: (v: number | "") => void;
  min?: number;
  placeholder?: string;
  error?: boolean;
}) {
  return (
    <input
      type="number"
      min={min}
      value={value}
      placeholder={placeholder}
      onChange={(e) =>
        onChange(e.target.value === "" ? "" : Number(e.target.value))
      }
      className={`w-full rounded-md border bg-zinc-800 px-3 py-2 text-sm text-zinc-100 placeholder:text-zinc-600 focus:outline-none ${error ? "border-red-500 focus:border-red-400" : "border-zinc-700 focus:border-zinc-500"}`}
    />
  );
}

interface AlgorithmicFilterEditorProps {
  content: Extract<BlockContent, { type: "algorithmic" }>;
  pfx: string;
  errors: FieldErrors;
  providers: ProviderInfo[];
  setFilter: (patch: Partial<MediaFilter>) => void;
  setStrategy: (strategy: FillStrategy) => void;
  setProviderId: (id: string) => void;
}

export function AlgorithmicFilterEditor({
  content,
  pfx,
  errors,
  providers,
  setFilter,
  setStrategy,
  setProviderId,
}: AlgorithmicFilterEditorProps) {
  const [showGenres, setShowGenres] = useState(false);

  const providerId = content.provider_id ?? "";
  const capabilities =
    providers.find((p) => p.id === providerId)?.capabilities ??
    providers[0]?.capabilities;

  const { data: collections, isLoading: loadingCollections } = useCollections(
    providerId || undefined,
  );
  const { data: series, isLoading: loadingSeries } = useSeries(undefined, {
    enabled: capabilities?.series !== false,
    provider: providerId || undefined,
  });
  const { data: genreOptions } = useGenres(
    content.filter.content_type ?? undefined,
    {
      enabled: capabilities?.genres !== false,
      provider: providerId || undefined,
    },
  );

  const isEpisode = content.filter.content_type === "episode";
  const collectionLabel =
    capabilities?.collections && !capabilities?.series && !capabilities?.genres
      ? "Directory"
      : "Library";

  return (
    <div className="space-y-3 rounded-md border border-zinc-700/50 bg-zinc-800 p-3">
      <p className="text-[11px] font-medium uppercase tracking-wider text-zinc-500">
        Filter
      </p>

      {providers.length > 1 && (
        <Field label="Provider">
          <NativeSelect value={providerId} onChange={setProviderId}>
            {providers.map((p) => (
              <option key={p.id} value={p.id}>
                {p.id}
              </option>
            ))}
          </NativeSelect>
        </Field>
      )}

      <div className="grid grid-cols-2 gap-3">
        <Field label="Media type">
          <NativeSelect
            value={content.filter.content_type ?? ""}
            onChange={(v) =>
              setFilter({
                content_type: v === "" ? null : (v as ContentType),
                series_names:
                  v !== "episode" ? [] : content.filter.series_names,
              })
            }
          >
            <option value="">Any</option>
            <option value="movie">Movie</option>
            <option value="episode">Episode</option>
            <option value="short">Short</option>
          </NativeSelect>
        </Field>
        <Field label="Strategy">
          <NativeSelect
            value={content.strategy}
            onChange={(v) => setStrategy(v as FillStrategy)}
          >
            <option value="random">Random</option>
            <option value="best_fit">Best fit</option>
            <option value="sequential">Sequential</option>
          </NativeSelect>
        </Field>
      </div>

      {isEpisode && capabilities?.series !== false && (
        <Field
          label="Series"
          hint={
            content.strategy === "sequential"
              ? "Episodes will play in chronological order"
              : "Filter to specific shows, or leave empty for all"
          }
        >
          <SeriesPicker
            values={content.filter.series_names ?? []}
            onChange={(v) => setFilter({ series_names: v })}
            series={series ?? []}
            isLoading={loadingSeries}
          />
        </Field>
      )}

      <Field
        label={collectionLabel}
        hint={
          loadingCollections
            ? `Loading ${collectionLabel.toLowerCase()}s…`
            : collections
              ? `Scope this block to one ${collectionLabel.toLowerCase()}`
              : `Enter a provider ${collectionLabel.toLowerCase()} ID`
        }
      >
        {collections && collections.length > 0 ? (
          <NativeSelect
            value={content.filter.collections[0] ?? ""}
            onChange={(v) => setFilter({ collections: v ? [v] : [] })}
          >
            <option value="">All libraries</option>
            {collections.map((c) => (
              <option key={c.id} value={c.id}>
                {c.name}
                {c.collection_type ? ` (${c.collection_type})` : ""}
              </option>
            ))}
          </NativeSelect>
        ) : (
          <TagInput
            values={content.filter.collections}
            onChange={(v) => setFilter({ collections: v })}
            placeholder="Library ID…"
          />
        )}
      </Field>

      {capabilities?.genres !== false && (
        <Field label="Genres" hint="Press Enter or comma to add">
          <TagInput
            values={content.filter.genres}
            onChange={(v) => setFilter({ genres: v })}
            placeholder="Comedy, Animation…"
          />
          {genreOptions && genreOptions.length > 0 && (
            <div className="mt-1.5">
              <button
                type="button"
                onClick={() => setShowGenres((s) => !s)}
                className="text-[11px] text-zinc-600 hover:text-zinc-400"
              >
                {showGenres ? "Hide" : "Browse"} available genres
              </button>
              {showGenres && (
                <div className="mt-1.5 flex flex-wrap gap-1">
                  {genreOptions
                    .filter((g) => !content.filter.genres.includes(g))
                    .map((g) => (
                      <button
                        key={g}
                        type="button"
                        onClick={() =>
                          setFilter({ genres: [...content.filter.genres, g] })
                        }
                        className="rounded px-1.5 py-0.5 text-[11px] bg-zinc-700/50 text-zinc-400 hover:bg-zinc-700 hover:text-zinc-200"
                      >
                        + {g}
                      </button>
                    ))}
                </div>
              )}
            </div>
          )}
        </Field>
      )}

      <Field label="Tags" hint="Press Enter or comma to add">
        <TagInput
          values={content.filter.tags}
          onChange={(v) => setFilter({ tags: v })}
          placeholder="classic, family…"
        />
      </Field>

      <div className="grid grid-cols-3 gap-3">
        <Field
          label="Decade"
          hint="e.g. 1990"
          error={errors[`${pfx}.content.filter.decade`]}
        >
          <NumberInput
            value={content.filter.decade ?? ""}
            onChange={(v) =>
              setFilter({ decade: v === "" ? null : (v as number) })
            }
            placeholder="1990"
            error={!!errors[`${pfx}.content.filter.decade`]}
          />
        </Field>
        <Field
          label="Min duration (min)"
          error={errors[`${pfx}.content.filter.min_duration_secs`]}
        >
          <NumberInput
            value={
              content.filter.min_duration_secs != null
                ? Math.round(content.filter.min_duration_secs / 60)
                : ""
            }
            onChange={(v) =>
              setFilter({
                min_duration_secs: v === "" ? null : (v as number) * 60,
              })
            }
            placeholder="30"
            error={!!errors[`${pfx}.content.filter.min_duration_secs`]}
          />
        </Field>
        <Field
          label="Max duration (min)"
          error={errors[`${pfx}.content.filter.max_duration_secs`]}
        >
          <NumberInput
            value={
              content.filter.max_duration_secs != null
                ? Math.round(content.filter.max_duration_secs / 60)
                : ""
            }
            onChange={(v) =>
              setFilter({
                max_duration_secs: v === "" ? null : (v as number) * 60,
              })
            }
            placeholder="120"
            error={!!errors[`${pfx}.content.filter.max_duration_secs`]}
          />
        </Field>
      </div>

      <FilterPreview
        filter={content.filter}
        strategy={content.strategy}
        provider={providerId || undefined}
      />
    </div>
  );
}
@@ -1,6 +1,6 @@
 "use client";

-import { useRef, useState, useEffect } from "react";
+import { useRef, useState, useEffect, useLayoutEffect } from "react";
 import type { ProgrammingBlock } from "@/lib/types";

 const SNAP_MINS = 15;
@@ -63,12 +63,13 @@ export function BlockTimeline({
   const containerRef = useRef<HTMLDivElement>(null);
   const dragRef = useRef<DragState | null>(null);
   const blocksRef = useRef(blocks);
-  blocksRef.current = blocks;
-
   const onChangeRef = useRef(onChange);
-  onChangeRef.current = onChange;
   const onCreateRef = useRef(onCreateBlock);
-  onCreateRef.current = onCreateBlock;
+  useLayoutEffect(() => {
+    blocksRef.current = blocks;
+    onChangeRef.current = onChange;
+    onCreateRef.current = onCreateBlock;
+  });

   const [draft, setDraft] = useState<Draft | null>(null);

@@ -1,6 +1,6 @@
 "use client";

-import { useState } from "react";
+import { useState, useMemo } from "react";
 import Link from "next/link";
 import {
   Pencil,
@@ -11,6 +11,7 @@ import {
   Download,
   ChevronUp,
   ChevronDown,
+  History,
 } from "lucide-react";
 import { Button } from "@/components/ui/button";
 import { useActiveSchedule } from "@/hooks/use-channels";
@@ -29,14 +30,18 @@ interface ChannelCardProps {
   onExport: () => void;
   onMoveUp: () => void;
   onMoveDown: () => void;
+  onScheduleHistory: () => void;
 }

 function useScheduleStatus(channelId: string) {
   const { data: schedule } = useActiveSchedule(channelId);
+  // eslint-disable-next-line react-hooks/purity -- Date.now() inside useMemo is stable enough for schedule status
+  const now = useMemo(() => Date.now(), []);

   if (!schedule) return { status: "none" as const, label: null };

   const expiresAt = new Date(schedule.valid_until);
-  const hoursLeft = (expiresAt.getTime() - Date.now()) / (1000 * 60 * 60);
+  const hoursLeft = (expiresAt.getTime() - now) / (1000 * 60 * 60);

   if (hoursLeft < 0) {
     return { status: "expired" as const, label: "Schedule expired" };
@@ -66,9 +71,12 @@ export function ChannelCard({
   onExport,
   onMoveUp,
   onMoveDown,
+  onScheduleHistory,
 }: ChannelCardProps) {
   const [confirmOpen, setConfirmOpen] = useState(false);
-  const blockCount = channel.schedule_config.blocks.length;
+  const blockCount = Object.values(channel.schedule_config.day_blocks).reduce(
+    (sum, blocks) => sum + blocks.length, 0
+  );
   const { status, label } = useScheduleStatus(channel.id);

   const scheduleColor =
@@ -180,6 +188,15 @@ export function ChannelCard({
       >
         <CalendarDays className="size-3.5" />
       </Button>
+      <Button
+        size="icon-sm"
+        variant="ghost"
+        onClick={onScheduleHistory}
+        title="Schedule history"
+        className="text-zinc-600 hover:text-zinc-200"
+      >
+        <History className="size-3.5" />
+      </Button>
       <Button
         size="icon-sm"
         asChild

@@ -0,0 +1,119 @@
'use client'

import { useState } from 'react'
import { Sheet, SheetContent, SheetHeader, SheetTitle } from '@/components/ui/sheet'
import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { useConfigHistory, usePinSnapshot, useRestoreConfig } from '@/hooks/use-channels'
import { cn } from '@/lib/utils'

interface Props {
  channelId: string
  open: boolean
  onOpenChange: (open: boolean) => void
}

export function ConfigHistorySheet({ channelId, open, onOpenChange }: Props) {
  const { data: snapshots } = useConfigHistory(channelId)
  const pin = usePinSnapshot()
  const restore = useRestoreConfig()
  const [pinningId, setPinningId] = useState<string | null>(null)
  const [pinLabel, setPinLabel] = useState('')

  return (
    <Sheet open={open} onOpenChange={onOpenChange}>
      <SheetContent>
        <SheetHeader>
          <SheetTitle>Config history</SheetTitle>
        </SheetHeader>
        <div className="flex flex-col gap-2 mt-4 overflow-y-auto px-4 pb-4">
          {(snapshots ?? []).map((snap, i) => (
            <div
              key={snap.id}
              className={cn(
                'flex items-center gap-3 p-3 rounded border',
                i === 0 ? 'border-green-700 bg-green-950/30' : 'border-border'
              )}
            >
              <div className="flex-1 min-w-0">
                <div className="text-sm font-medium">
                  v{snap.version_num} —{' '}
                  {new Date(snap.created_at).toLocaleString()}
                  {i === 0 && (
                    <span className="ml-2 text-xs text-green-400 bg-green-950 px-1.5 py-0.5 rounded">
                      current
                    </span>
                  )}
                </div>
                {snap.label ? (
                  <div className="text-xs text-amber-400 mt-0.5">📌 {snap.label}</div>
                ) : (
                  <div className="text-xs text-muted-foreground">Auto-saved</div>
                )}
              </div>

              {i === 0 && (
                pinningId === snap.id ? (
                  <div className="flex gap-1 items-center">
                    <Input
                      value={pinLabel}
                      onChange={e => setPinLabel(e.target.value)}
                      className="h-7 text-xs w-32"
                      placeholder="label…"
                      onKeyDown={e => {
                        if (e.key === 'Enter') {
                          pin.mutate({ channelId, snapId: snap.id, label: pinLabel })
                          setPinningId(null)
                        }
                        if (e.key === 'Escape') setPinningId(null)
                      }}
                    />
                    <Button
                      size="sm"
                      onClick={() => {
                        pin.mutate({ channelId, snapId: snap.id, label: pinLabel })
                        setPinningId(null)
                      }}
                    >
                      Save
                    </Button>
                    <Button size="sm" variant="ghost" onClick={() => setPinningId(null)}>
                      ✕
                    </Button>
                  </div>
                ) : (
                  <Button
                    variant="outline"
                    size="sm"
                    onClick={() => {
                      setPinningId(snap.id)
                      setPinLabel(snap.label ?? '')
                    }}
                  >
                    Pin
                  </Button>
                )
              )}

              {i > 0 && (
                <Button
                  variant="outline"
                  size="sm"
                  onClick={() => restore.mutate({ channelId, snapId: snap.id })}
                  disabled={restore.isPending}
                >
                  Restore
                </Button>
              )}
            </div>
          ))}
          {(snapshots ?? []).length === 0 && (
            <p className="text-sm text-muted-foreground text-center py-8">
              No history yet. History is created automatically when you save changes.
            </p>
          )}
        </div>
      </SheetContent>
    </Sheet>
  )
}
@@ -0,0 +1,85 @@
import { Plus, Upload, RefreshCw, Antenna, Settings2 } from "lucide-react";
import { Button } from "@/components/ui/button";
import type { ProviderCapabilities } from "@/lib/types";

interface DashboardHeaderProps {
  hasChannels: boolean;
  canTranscode: boolean;
  canRescan: boolean;
  isRegeneratingAll: boolean;
  isRescanPending: boolean;
  capabilities: ProviderCapabilities | undefined;
  onTranscodeOpen: () => void;
  onRescan: () => void;
  onRegenerateAll: () => void;
  onIptvOpen: () => void;
  onImportOpen: () => void;
  onCreateOpen: () => void;
}

export function DashboardHeader({
  hasChannels,
  canTranscode,
  canRescan,
  isRegeneratingAll,
  isRescanPending,
  onTranscodeOpen,
  onRescan,
  onRegenerateAll,
  onIptvOpen,
  onImportOpen,
  onCreateOpen,
}: DashboardHeaderProps) {
  return (
    <div className="flex items-center justify-between">
      <div>
        <h1 className="text-xl font-semibold text-zinc-100">My Channels</h1>
        <p className="mt-0.5 text-sm text-zinc-500">Build your broadcast lineup</p>
      </div>
      <div className="flex gap-2">
        {canTranscode && (
          <Button onClick={onTranscodeOpen} title="Transcode settings">
            <Settings2 className="size-4" />
            Transcode
          </Button>
        )}
        {canRescan && (
          <Button
            onClick={onRescan}
            disabled={isRescanPending}
            title="Rescan local files directory"
          >
            <RefreshCw
              className={`size-4 ${isRescanPending ? "animate-spin" : ""}`}
            />
            Rescan library
          </Button>
        )}
        {hasChannels && (
          <Button
            onClick={onRegenerateAll}
            disabled={isRegeneratingAll}
            title="Regenerate schedules for all channels"
          >
            <RefreshCw
              className={`size-4 ${isRegeneratingAll ? "animate-spin" : ""}`}
            />
            Regenerate all
          </Button>
        )}
        <Button onClick={onIptvOpen}>
          <Antenna className="size-4" />
          IPTV
        </Button>
        <Button onClick={onImportOpen}>
          <Upload className="size-4" />
          Import
        </Button>
        <Button onClick={onCreateOpen}>
          <Plus className="size-4" />
          New channel
        </Button>
      </div>
    </div>
  );
}
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff