fix(infra): deserialize channel schedule_config via ScheduleConfigCompat for V1 compat
New file: docs/superpowers/specs/2026-03-17-scheduling-v2-design.md (+284 lines)

# Scheduling V2 — Design Spec

## Context

The current scheduler is a 48h rolling window with a flat block list per channel. This works as an MVP but has two major gaps for everyday use:

1. **No weekly patterns** — users can't say "Monday runs X, weekends run Y"; all blocks repeat identically every day.
2. **No history or recovery** — overwriting a channel config loses the previous setup forever; a bug that resets a sequential series (e.g. Sopranos resetting from S3E4 to S1E1) has no recovery path.

This spec covers two features: **weekly scheduling** and **schedule history**.

---

## Feature 1: Weekly Scheduling (7-day grid)

### Data model

`ScheduleConfig` changes from a flat block list to a day-keyed map:

```rust
use std::collections::HashMap;

// BEFORE
pub struct ScheduleConfig {
    pub blocks: Vec<ProgrammingBlock>,
}

// AFTER
pub struct ScheduleConfig {
    pub day_blocks: HashMap<Weekday, Vec<ProgrammingBlock>>,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash)] // Hash + Eq required for the HashMap key
pub enum Weekday {
    Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday,
}
```

`ProgrammingBlock` is otherwise unchanged. Block IDs remain UUIDs; each day has its own independent `Vec`, so the same "show" on Mon and Wed has two separate block entries (different IDs, independent continuity tracking).

### Migration (transparent, zero-downtime)

Existing `channels.schedule_config` stores `{"blocks":[...]}`. Use `#[serde(untagged)]` deserialization:

```rust
#[derive(Deserialize)]
#[serde(untagged)]
enum ScheduleConfigCompat {
    V2(ScheduleConfig),    // {"day_blocks": {"monday": [...], ...}}
    V1(OldScheduleConfig), // {"blocks": [...]}
}
```

V1→V2 conversion: clone the blocks `Vec` into all 7 days. The first `PUT /channels/:id` after deploy saves V2 format. Channels that are never touched continue to deserialize via the V1 path indefinitely.

**Edge case**: if a payload has both `blocks` and `day_blocks` keys (e.g. a partially migrated export), `#[serde(untagged)]` tries V2 first and succeeds — `day_blocks` is used and `blocks` is silently ignored. This is acceptable; the alternative (erroring on ambiguity) would break more use cases.

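The V1→V2 conversion can be sketched with std types only — a minimal sketch, not the real implementation: `String` stands in for the real `ProgrammingBlock`, and `ALL_DAYS` is a hypothetical helper, not part of the spec:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday }

// Hypothetical helper listing every day once.
const ALL_DAYS: [Weekday; 7] = [
    Weekday::Monday, Weekday::Tuesday, Weekday::Wednesday, Weekday::Thursday,
    Weekday::Friday, Weekday::Saturday, Weekday::Sunday,
];

struct OldScheduleConfig { blocks: Vec<String> }                    // V1 shape
struct ScheduleConfig { day_blocks: HashMap<Weekday, Vec<String>> } // V2 shape

impl From<OldScheduleConfig> for ScheduleConfig {
    // Every day gets a clone of the old flat block list.
    fn from(old: OldScheduleConfig) -> Self {
        let day_blocks = ALL_DAYS.iter().map(|&d| (d, old.blocks.clone())).collect();
        ScheduleConfig { day_blocks }
    }
}

fn main() {
    let v1 = OldScheduleConfig { blocks: vec!["sopranos".to_string()] };
    let v2 = ScheduleConfig::from(v1);
    assert_eq!(v2.day_blocks.len(), 7);
    assert!(v2.day_blocks.values().all(|b| b == &vec!["sopranos".to_string()]));
}
```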
### ScheduleConfig helper methods

Three methods on `ScheduleConfig` must be updated:

- **`find_block_at(weekday: Weekday, time: NaiveTime) -> Option<&ProgrammingBlock>`** — searches `day_blocks[weekday]` for the block whose window contains `time`.
- **`next_block_start_after(weekday: Weekday, time: NaiveTime) -> Option<NaiveTime>`** — searches that day's vec; returns `None` if no block starts after `time` on that day (day-rollover is the caller's responsibility).
- **`earliest_block_start() -> Option<NaiveTime>`** — **iterates all days and returns the global earliest start time across the entire week**. This is the form the background scheduler needs (it must know when any content starts). An empty day contributes nothing; if all days are empty, it returns `None`.

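A runnable sketch of the three helpers under stated assumptions: times are minutes since midnight (`u16`) instead of `chrono::NaiveTime`, and blocks are simplified to `[start, end)` windows:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday }

struct ProgrammingBlock { start_min: u16, end_min: u16 }

struct ScheduleConfig { day_blocks: HashMap<Weekday, Vec<ProgrammingBlock>> }

impl ScheduleConfig {
    // Block on `day` whose [start, end) window contains `t`, if any.
    fn find_block_at(&self, day: Weekday, t: u16) -> Option<&ProgrammingBlock> {
        self.day_blocks.get(&day)?.iter().find(|b| b.start_min <= t && t < b.end_min)
    }
    // Next start strictly after `t` on the same day; day-rollover is the caller's job.
    fn next_block_start_after(&self, day: Weekday, t: u16) -> Option<u16> {
        self.day_blocks.get(&day)?.iter().map(|b| b.start_min).filter(|&s| s > t).min()
    }
    // Global earliest start across the whole week; None if every day is empty.
    fn earliest_block_start(&self) -> Option<u16> {
        self.day_blocks.values().flatten().map(|b| b.start_min).min()
    }
}

fn main() {
    let mut day_blocks = HashMap::new();
    day_blocks.insert(Weekday::Monday, vec![ProgrammingBlock { start_min: 540, end_min: 600 }]);
    day_blocks.insert(Weekday::Saturday, vec![ProgrammingBlock { start_min: 480, end_min: 520 }]);
    let cfg = ScheduleConfig { day_blocks };
    assert!(cfg.find_block_at(Weekday::Monday, 550).is_some());
    assert_eq!(cfg.next_block_start_after(Weekday::Monday, 540), None); // nothing later that day
    assert_eq!(cfg.earliest_block_start(), Some(480)); // Saturday's 08:00 is the weekly minimum
}
```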
**Call-site update pattern for `broadcast.rs` (lines 64, 171):**

```rust
// derive weekday from slot start_at in channel timezone
let tz: chrono_tz::Tz = channel.timezone.parse().unwrap_or(chrono_tz::UTC);
let local_dt = slot.start_at.with_timezone(&tz);
let weekday = Weekday::from(local_dt.weekday()); // new From<chrono::Weekday> impl
let block = channel.schedule_config.find_block_at(weekday, local_dt.time());
```

The same derivation applies to `dto.rs` (`ScheduledSlotResponse::with_block_access`).

### MCP crate

`mcp/src/tools/channels.rs` manipulates `schedule_config.blocks` directly. After V2:

- The MCP `add_block` tool must accept a required `day: Weekday` parameter. It pushes the new block onto `day_blocks[day]`.
- The MCP `remove_block` tool must iterate all days' vecs (removing by block ID across all days, since block IDs are unique per entry).
- `mcp/src/server.rs` `set_schedule_config` must accept a `day_blocks` map. The old `blocks_json` string parameter is replaced with `day_blocks_json: String` (a JSON object keyed by weekday name).

These are breaking changes to the MCP API — acceptable since the MCP tools are internal/developer-facing.

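The `remove_block` scan can be sketched as follows (a minimal sketch: day names and `u64` IDs stand in for the real `Weekday` keys and UUIDs):

```rust
use std::collections::HashMap;

// remove_block must scan every day's vec: block IDs are unique per entry,
// so at most one entry matches across the whole week.
fn remove_block(day_blocks: &mut HashMap<&str, Vec<u64>>, id: u64) -> bool {
    let mut removed = false;
    for blocks in day_blocks.values_mut() {
        let before = blocks.len();
        blocks.retain(|&b| b != id);
        removed |= blocks.len() != before;
    }
    removed
}

fn main() {
    let mut day_blocks: HashMap<&str, Vec<u64>> = HashMap::new();
    day_blocks.insert("monday", vec![1, 2]);
    day_blocks.insert("wednesday", vec![3]);
    assert!(remove_block(&mut day_blocks, 3));  // found on Wednesday
    assert!(!remove_block(&mut day_blocks, 3)); // already gone
    assert_eq!(day_blocks["wednesday"].len(), 0);
}
```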
### Generation engine

- Window: `valid_from + 7 days` (was 48h). Update the `GeneratedSchedule` doc comment accordingly.
- Day iteration: the engine already walks calendar days; it now walks 7 days and looks up `day_blocks[weekday]` for each day.
- **Empty day**: if `day_blocks[weekday]` is empty or the key is absent, that day produces no slots — valid, not an error.
- Continuity (`find_last_slot_per_block`): unchanged.

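The 7-day walk with empty-day handling can be sketched like this — a hypothetical stand-in where day names replace real dates and block names replace real blocks:

```rust
use std::collections::HashMap;

const DAYS: [&str; 7] = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"];

// Walk 7 calendar days starting at DAYS[start_idx]; an absent or empty
// day simply contributes no (day, block) slots.
fn generate(day_blocks: &HashMap<&str, Vec<&str>>, start_idx: usize) -> Vec<(String, String)> {
    (0..7)
        .map(|offset| DAYS[(start_idx + offset) % 7])
        .flat_map(|day| {
            day_blocks
                .get(day)
                .into_iter() // Option -> 0- or 1-element iterator
                .flatten()   // absent/empty day => no slots
                .map(move |block| (day.to_string(), block.to_string()))
        })
        .collect()
}

fn main() {
    let mut day_blocks = HashMap::new();
    day_blocks.insert("mon", vec!["sopranos"]);
    day_blocks.insert("sat", vec!["movies"]);
    let slots = generate(&day_blocks, 0); // window starting on a Monday
    assert_eq!(slots.len(), 2);
    assert!(slots.iter().all(|(d, _)| d == "mon" || d == "sat"));
}
```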
### Files changed (backend)

- `domain/src/value_objects.rs` — add `Weekday` enum with a `From<chrono::Weekday>` impl
- `domain/src/entities.rs` — `ScheduleConfig`, `OldScheduleConfig` compat struct, updated helper method signatures, updated `GeneratedSchedule` doc comment
- `domain/src/services.rs` — 7-day window, `day_blocks[weekday]` lookup per day
- `api/src/routes/channels/broadcast.rs` — update block lookups at lines 64 and 171 using the weekday-derivation pattern above
- `api/src/dto.rs` — update the `ScheduledSlotResponse::with_block_access` block lookup
- `mcp/src/tools/channels.rs` — `add_block` accepts a `day` param; `remove_block` iterates all days
- `mcp/src/server.rs` — replace `blocks_json` with `day_blocks_json`

---

## Feature 2: Schedule History

### 2a. Config version history

Every `PUT /channels/:id` auto-snapshots the previous config before overwriting it. Users can pin named checkpoints and restore any version.

**New DB migration:**

```sql
CREATE TABLE channel_config_snapshots (
    id          TEXT PRIMARY KEY,
    channel_id  TEXT NOT NULL REFERENCES channels(id) ON DELETE CASCADE,
    config_json TEXT NOT NULL,
    version_num INTEGER NOT NULL,
    label       TEXT,               -- NULL = auto-saved, non-NULL = pinned
    created_at  TEXT NOT NULL,
    UNIQUE (channel_id, version_num)
);

CREATE INDEX idx_config_snapshots_channel
    ON channel_config_snapshots(channel_id, version_num DESC);
```

**`version_num` assignment**: computed inside the write transaction as `SELECT COALESCE(MAX(version_num), 0) + 1 FROM channel_config_snapshots WHERE channel_id = ?`. The transaction serializes concurrent writes naturally in SQLite (single writer). The `UNIQUE` constraint is a safety net only — no 409 is exposed to the client; the server retries the write if the constraint ever fires (in practice impossible with SQLite's serialized writes).

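The `COALESCE(MAX(version_num), 0) + 1` rule, expressed over an in-memory version list to make the first-snapshot case explicit:

```rust
// Mirrors SELECT COALESCE(MAX(version_num), 0) + 1: an empty history yields 1.
fn next_version(existing: &[u32]) -> u32 {
    existing.iter().copied().max().unwrap_or(0) + 1
}

fn main() {
    assert_eq!(next_version(&[]), 1);        // first snapshot for a channel
    assert_eq!(next_version(&[1, 2, 3]), 4); // strictly increasing per channel
}
```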
**New API endpoints (all require auth + channel ownership — same auth middleware as existing channel routes):**

```
GET /channels/:id/config/history
  → [{id, version_num, label, created_at}]   -- channel_id omitted (implicit from URL)

PATCH /channels/:id/config/history/:snap_id
  body: {"label": "Before S3 switchover"}
  → 404 if snap_id not found or not owned by this channel
  → 200 {id, version_num, label, created_at}

POST /channels/:id/config/history/:snap_id/restore
  → snapshots the current config first, then replaces the channel config with the target snapshot
  → 404 if snap_id not found or not owned by this channel
  → 200 {channel}
```

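The restore ordering matters: snapshot the current config before replacing it, so the pre-restore state is itself recoverable. A minimal sketch with hypothetical stand-in types (`String` for the config JSON, `None` modeling the 404):

```rust
struct History {
    current: String,        // the channel's live config
    snapshots: Vec<String>, // prior configs, oldest first
}

impl History {
    // Restore snapshot `snap_idx`: snapshot current first, then replace.
    fn restore(&mut self, snap_idx: usize) -> Option<()> {
        let target = self.snapshots.get(snap_idx)?.clone(); // 404 analogue: None
        self.snapshots.push(self.current.clone());          // snapshot current config first
        self.current = target;                              // then replace
        Some(())
    }
}

fn main() {
    let mut h = History { current: "v2".to_string(), snapshots: vec!["v1".to_string()] };
    assert!(h.restore(0).is_some());
    assert_eq!(h.current, "v1");
    assert_eq!(h.snapshots, vec!["v1".to_string(), "v2".to_string()]); // "v2" preserved
    assert!(h.restore(9).is_none()); // unknown snapshot -> 404 analogue
}
```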
**Domain + infra changes:**

- `ChannelConfigSnapshot` entity (fields: id, channel_id, config, version_num, label, created_at)
- Extend the `ChannelRepository` port: `save_config_snapshot`, `list_config_snapshots`, `get_config_snapshot`, `patch_config_snapshot_label`
- `ChannelService::update_channel` calls `save_config_snapshot` before writing the new config

**Files changed (backend):**

- `domain/src/entities.rs` — add `ChannelConfigSnapshot`
- `domain/src/repositories.rs` — extend the `ChannelRepository` port
- `infra/src/channel_repo.rs` — implement the snapshot methods
- `migrations_sqlite/YYYYMMDD_add_config_snapshots.sql`
- `api/src/routes/channels.rs` — new history endpoints + DTOs for snapshot responses

### 2b. Generated schedule audit log

**Ownership check**: `get_schedule_by_id(channel_id, gen_id)` queries `generated_schedules WHERE id = :gen_id AND channel_id = :channel_id` — the `channel_id` column is the join, so no separate channel lookup is needed.

**New API endpoints (all require auth + channel ownership):**

```
GET /channels/:id/schedule/history
  → [{id, generation, valid_from, valid_until}] ordered by generation DESC

GET /channels/:id/schedule/history/:gen_id
  → full GeneratedSchedule with slots
  → 404 if gen_id not found or channel_id mismatch

POST /channels/:id/schedule/history/:gen_id/rollback
  → 404 if gen_id not found or channel_id mismatch
  → explicit two-step delete (no DB-level cascade from playback_records to generated_schedules):
      1. DELETE FROM playback_records    WHERE channel_id = ? AND generation > :target_generation
      2. DELETE FROM generated_schedules WHERE channel_id = ? AND generation > :target_generation
      (scheduled_slots cascade via FK from generated_schedules)
  → calls generate_schedule from now
  → 200 {new_schedule}
```

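The rollback deletes can be modeled over in-memory generation lists — a sketch of the retention rule only, not the SQL or the FK cascade:

```rust
// Generation numbers stand in for full rows; rollback keeps generation <= target
// in playback_records first, then generated_schedules (matching the delete order).
struct Tables {
    playback_records: Vec<u32>,
    generated_schedules: Vec<u32>,
}

fn rollback(t: &mut Tables, target_generation: u32) {
    t.playback_records.retain(|&g| g <= target_generation);
    t.generated_schedules.retain(|&g| g <= target_generation);
    // scheduled_slots would cascade via FK; regeneration runs afterwards.
}

fn main() {
    let mut t = Tables { playback_records: vec![1, 1, 2, 3], generated_schedules: vec![1, 2, 3] };
    rollback(&mut t, 1);
    assert_eq!(t.playback_records, vec![1, 1]);
    assert_eq!(t.generated_schedules, vec![1]);
}
```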
**Repository changes:**

- `list_schedule_history(channel_id)` — headers only
- `get_schedule_by_id(channel_id, gen_id)` — full, with slots
- `delete_schedules_after(channel_id, generation_num)` — the two-step explicit delete described above

**Files changed (backend):**

- `domain/src/repositories.rs` — extend `ScheduleRepository`
- `infra/src/schedule_repo.rs` — implement list, get-by-id, delete-after
- `api/src/routes/channels.rs` — new history and rollback endpoints

---

## Frontend

### Weekly grid editor (`edit-channel-sheet.tsx`)

Replace the flat block list with a tabbed weekly grid:

- 7 day tabs (Mon–Sun), each showing that day's block list
- Blocks within a day: same card UI as current (drag to reorder, edit, delete)
- "Copy to →" dropdown per tab: duplicates block entries with new UUIDs into the target day(s)
- "+ Add block for [Day]" button per tab
- "🕐 Config history" button in the sheet footer → opens the config history panel

### Config history panel (`config-history-sheet.tsx` — new)

- List of snapshots: version_num, timestamp, label (if pinned)
- Current version highlighted
- Pin button on the current version (opens a label input)
- Restore button on any past version (confirm dialog)

### Schedule audit log (`schedule-history-dialog.tsx` — new)

- Lists past generations: gen#, date range
- "Rollback to here" button with a confirm dialog

### Types (`lib/types.ts`)

```ts
type Weekday = 'monday' | 'tuesday' | 'wednesday' | 'thursday' | 'friday' | 'saturday' | 'sunday'
const WEEKDAYS: Weekday[] = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']

interface ScheduleConfig {
  day_blocks: Record<Weekday, ProgrammingBlock[]>
}

interface ConfigSnapshot {
  id: string
  version_num: number
  label: string | null
  created_at: string
  // channel_id intentionally omitted — always accessed via /channels/:id/config/history
}

interface ScheduleHistoryEntry {
  id: string
  generation: number
  valid_from: string
  valid_until: string
}
```

### Zod schema (`lib/schemas.ts`)

```ts
const weekdaySchema = z.enum(['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday'])

// replace blocks: z.array(blockSchema) with:
day_blocks: z.record(weekdaySchema, z.array(blockSchema)).default(
  () => Object.fromEntries(WEEKDAYS.map(d => [d, []])) as Record<Weekday, ProgrammingBlock[]>
)
// A missing day key is valid (treated as empty). The default initializes all days to [].
```

### Channel export (`lib/channel-export.ts`)

Export format after V2: the `day_blocks` map as-is (no flattening). The export JSON shape mirrors `ScheduleConfig` directly. Re-import reads via the same `ScheduleConfigCompat` deserialization path, so V1 exports remain importable indefinitely.

### New hooks (`hooks/use-channels.ts`)

- `useConfigHistory(channelId)`
- `useRestoreConfig()`
- `usePinSnapshot()`
- `useScheduleHistory(channelId)`
- `useScheduleGeneration(channelId, genId)` (lazy, for the detail view)
- `useRollbackSchedule()`

### Files changed (frontend)

- `lib/types.ts`
- `lib/schemas.ts`
- `lib/channel-export.ts`
- `hooks/use-channels.ts`
- `dashboard/components/edit-channel-sheet.tsx`
- `dashboard/components/config-history-sheet.tsx` (new)
- `dashboard/components/schedule-history-dialog.tsx` (new)
- `app/(main)/dashboard/page.tsx` — wire up the new dialog triggers

---

## Verification

| Scenario | Expected |
|---|---|
| Load channel with old `{blocks:[...]}` config | Blocks appear on all 7 day tabs |
| `PUT /channels/:id` on an old-format channel | Config saved as V2 `{day_blocks:{...}}`; snapshot v1 created |
| Channel with Mon+Sat blocks only → generate | Slots only on Mondays and Saturdays in the 7-day window |
| Day with an empty block list | No slots that day, no error |
| `PUT /channels/:id` twice | `GET /config/history` returns 2 entries with incrementing version_num |
| Pin snapshot | Label persists in the history list |
| Restore snapshot | Config reverts; a new snapshot is created at the top of the history |
| `GET /schedule/history/:bad_id` | 404 |
| Generate 3 schedules → rollback to gen#1 | gen#2+3 deleted (schedules + playback_records); new generation resumes from gen#1 continuity |
| Sequential block at S4E2 → rollback → regenerate | New schedule starts at the correct episode |
| Payload with both `blocks` and `day_blocks` keys | `day_blocks` used, `blocks` silently ignored |
| V1 export file re-imported after V2 deploy | Deserializes correctly via the compat path |

---

`k-tv-backend/Cargo.lock` (generated, 1 line changed):

```diff
@@ -702,6 +702,7 @@ dependencies = [
 "email_address",
 "rand 0.8.5",
 "serde",
+"serde_json",
 "thiserror 2.0.17",
 "tokio",
 "url",
```

`k-tv-backend/infra/src/channel_repo.rs`:

```diff
@@ -2,7 +2,7 @@ use chrono::{DateTime, Utc};
 use sqlx::FromRow;
 use uuid::Uuid;
 
-use domain::{AccessMode, Channel, ChannelId, DomainError, LogoPosition, RecyclePolicy, ScheduleConfig, UserId};
+use domain::{AccessMode, Channel, ChannelId, DomainError, LogoPosition, RecyclePolicy, ScheduleConfig, ScheduleConfigCompat, UserId};
 
 #[derive(Debug, FromRow)]
 pub(super) struct ChannelRow {
@@ -44,10 +44,11 @@ impl TryFrom<ChannelRow> for Channel {
             .map_err(|e| DomainError::RepositoryError(format!("Invalid channel UUID: {}", e)))?;
         let owner_id: UserId = Uuid::parse_str(&row.owner_id)
             .map_err(|e| DomainError::RepositoryError(format!("Invalid owner UUID: {}", e)))?;
-        let schedule_config: ScheduleConfig = serde_json::from_str(&row.schedule_config)
+        let schedule_config: ScheduleConfig = serde_json::from_str::<ScheduleConfigCompat>(&row.schedule_config)
             .map_err(|e| {
                 DomainError::RepositoryError(format!("Invalid schedule_config JSON: {}", e))
-            })?;
+            })
+            .map(ScheduleConfig::from)?;
         let recycle_policy: RecyclePolicy = serde_json::from_str(&row.recycle_policy)
             .map_err(|e| {
                 DomainError::RepositoryError(format!("Invalid recycle_policy JSON: {}", e))
```