Scheduling V2 — Design Spec
Context
The current scheduler is a 48h rolling window with a flat block list per channel. This works as MVP but has two major gaps for everyday use:
- No weekly patterns — users can't say "Monday runs X, weekends run Y"; all blocks repeat identically every day.
- No history or recovery — overwriting a channel config loses the previous setup forever; a bug that resets a sequential series (e.g. Sopranos resets from S3E4 to S1E1) has no recovery path.
This spec covers two features: weekly scheduling and schedule history.
Feature 1: Weekly Scheduling (7-day grid)
Data model
ScheduleConfig changes from a flat block list to a day-keyed map:
// BEFORE
pub struct ScheduleConfig {
    pub blocks: Vec<ProgrammingBlock>,
}

// AFTER
pub struct ScheduleConfig {
    pub day_blocks: HashMap<Weekday, Vec<ProgrammingBlock>>,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")] // JSON keys: "monday" … "sunday"
pub enum Weekday {
    Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday,
}
ProgrammingBlock is otherwise unchanged. Block IDs remain UUIDs; each day has its own independent Vec, so the same "show" on Mon and Wed has two separate block entries (different IDs, independent continuity tracking).
Migration (transparent, zero-downtime)
Existing `channels.schedule_config` stores `{"blocks":[...]}`. Use `#[serde(untagged)]` deserialization:
#[derive(Deserialize)]
#[serde(untagged)]
enum ScheduleConfigCompat {
    V2(ScheduleConfig),    // {"day_blocks": {"monday": [...], ...}}
    V1(OldScheduleConfig), // {"blocks": [...]}
}
V1→V2 conversion: clone the blocks Vec into all 7 days. The first PUT /channels/:id after deploy saves V2 format. Channels never touched continue to deserialize via V1 path indefinitely.
Edge case: if a payload has both blocks and day_blocks keys (e.g. partially migrated export), #[serde(untagged)] tries V2 first and succeeds — day_blocks is used and blocks is silently ignored. This is acceptable; the alternative (error on ambiguity) would break more use cases.
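The V1→V2 conversion described above can be sketched as a `From` impl. This is an illustration only: `ProgrammingBlock` is reduced to an id/name pair, and the types are simplified stand-ins for the real ones in `domain/src/entities.rs`.

```rust
use std::collections::HashMap;

// Simplified stand-ins for illustration.
#[derive(Clone, Debug, PartialEq)]
struct ProgrammingBlock {
    id: String,
    name: String,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Weekday {
    Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday,
}

const ALL_DAYS: [Weekday; 7] = [
    Weekday::Monday, Weekday::Tuesday, Weekday::Wednesday, Weekday::Thursday,
    Weekday::Friday, Weekday::Saturday, Weekday::Sunday,
];

struct OldScheduleConfig {
    blocks: Vec<ProgrammingBlock>,
}

struct ScheduleConfig {
    day_blocks: HashMap<Weekday, Vec<ProgrammingBlock>>,
}

// V1 -> V2: clone the flat block list into all 7 days, so the channel
// keeps playing the same lineup every day until the user edits it.
impl From<OldScheduleConfig> for ScheduleConfig {
    fn from(old: OldScheduleConfig) -> Self {
        let day_blocks = ALL_DAYS
            .iter()
            .map(|&day| (day, old.blocks.clone()))
            .collect();
        ScheduleConfig { day_blocks }
    }
}
```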
ScheduleConfig helper methods
Three methods on ScheduleConfig must be updated:
- `find_block_at(weekday: Weekday, time: NaiveTime) -> Option<&ProgrammingBlock>` — searches `day_blocks[weekday]` for the block whose window contains `time`.
- `next_block_start_after(weekday: Weekday, time: NaiveTime) -> Option<NaiveTime>` — searches that day's vec; returns `None` if no block starts after `time` on that day (day-rollover is the caller's responsibility).
- `earliest_block_start() -> Option<NaiveTime>` — iterates all days and returns the global earliest start time across the entire week. This is the form the background scheduler needs (it must know when any content starts). An empty day contributes nothing; if all days are empty, returns `None`.
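A minimal sketch of the `earliest_block_start` semantics, assuming simplified types: a `u32` "minutes since midnight" stands in for `chrono::NaiveTime`, and a bare `u8` stands in for the `Weekday` enum.

```rust
use std::collections::HashMap;

// Illustration only; the real types use Weekday and chrono::NaiveTime.
struct Block {
    start: u32, // block start, minutes since midnight
}

struct ScheduleConfig {
    day_blocks: HashMap<u8, Vec<Block>>,
}

impl ScheduleConfig {
    // Global earliest start across the whole week. Empty days contribute
    // nothing; if every day is empty, the result is None.
    fn earliest_block_start(&self) -> Option<u32> {
        self.day_blocks
            .values()
            .flat_map(|blocks| blocks.iter().map(|b| b.start))
            .min()
    }
}
```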
Call-site update pattern for broadcast.rs (lines 64, 171):
// derive weekday from slot start_at in channel timezone
let tz: chrono_tz::Tz = channel.timezone.parse().unwrap_or(chrono_tz::UTC);
let local_dt = slot.start_at.with_timezone(&tz);
let weekday = Weekday::from_chrono(local_dt.weekday()); // new From impl
let block = channel.schedule_config.find_block_at(weekday, local_dt.time());
The same derivation applies to dto.rs (ScheduledSlotResponse::with_block_access).
MCP crate
`mcp/src/tools/channels.rs` manipulates `schedule_config.blocks` directly. After V2:
- The MCP `add_block` tool must accept a required `day: Weekday` parameter; it pushes the new block to `day_blocks[day]`.
- The MCP `remove_block` tool must iterate all days' vecs (remove by block ID across all days, since block IDs are unique per entry).
- `mcp/src/server.rs` `set_schedule_config` must accept a `day_blocks` map. The old `blocks_json` string parameter is replaced with `day_blocks_json: String` (a JSON object keyed by weekday name).
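The cross-day removal that `remove_block` needs can be sketched as follows, with simplified stand-in types (since block IDs are unique per entry, at most one entry ever matches):

```rust
use std::collections::HashMap;

// Illustration only: u8 stands in for Weekday, and the block carries just an ID.
struct Block {
    id: String,
}

struct ScheduleConfig {
    day_blocks: HashMap<u8, Vec<Block>>,
}

// Remove a block by ID wherever it lives across the week.
// Returns true if an entry was actually removed.
fn remove_block(config: &mut ScheduleConfig, block_id: &str) -> bool {
    let mut removed = false;
    for blocks in config.day_blocks.values_mut() {
        let before = blocks.len();
        blocks.retain(|b| b.id != block_id);
        removed |= blocks.len() != before;
    }
    removed
}
```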
These are breaking changes to the MCP API — acceptable since MCP tools are internal/developer-facing.
Generation engine
- Window: `valid_from + 7 days` (was 48h). Update the `GeneratedSchedule` doc comment accordingly.
- Day iteration: already walks calendar days; now walks 7 days, looking up `day_blocks[weekday]` for each day.
- Empty day: if `day_blocks[weekday]` is empty or the key is absent, that day produces no slots — valid, not an error.
- Continuity (`find_last_slot_per_block`): unchanged.
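The 7-day walk above can be sketched as below. This is a simplified illustration, not the real engine: weekdays are indices 0..=6 (0 = Monday) and a slot is reduced to a `(day_offset, block_id)` pair.

```rust
use std::collections::HashMap;

// Walk 7 calendar days starting at the weekday of valid_from,
// emitting one slot per block scheduled on each day.
fn generate_slots(
    start_weekday: u8,                   // weekday of valid_from (0 = Monday)
    day_blocks: &HashMap<u8, Vec<&str>>, // weekday -> block IDs
) -> Vec<(u8, String)> {
    let mut slots = Vec::new();
    for offset in 0..7 {
        let weekday = (start_weekday + offset) % 7;
        // Absent or empty day: contributes no slots, and that is not an error.
        for block_id in day_blocks.get(&weekday).into_iter().flatten() {
            slots.push((offset, block_id.to_string()));
        }
    }
    slots
}
```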
Files changed (backend)
- `domain/src/value_objects.rs` — add `Weekday` enum with a `From<chrono::Weekday>` impl
- `domain/src/entities.rs` — `ScheduleConfig`, `OldScheduleConfig` compat struct; update helper method signatures and the `GeneratedSchedule` doc comment
- `domain/src/services.rs` — 7-day window, `day_blocks[weekday]` lookup per day
- `api/src/routes/channels/broadcast.rs` — update block lookups at lines 64 and 171 using the weekday-derivation pattern above
- `api/src/dto.rs` — update the `ScheduledSlotResponse::with_block_access` block lookup
- `mcp/src/tools/channels.rs` — `add_block` accepts a `day` param; `remove_block` iterates all days
- `mcp/src/server.rs` — replace `blocks_json` with `day_blocks_json`
Feature 2: Schedule History
2a. Config version history
Every PUT /channels/:id auto-snapshots the previous config before overwriting. Users can pin named checkpoints and restore any version.
New DB migration:
CREATE TABLE channel_config_snapshots (
    id TEXT PRIMARY KEY,
    channel_id TEXT NOT NULL REFERENCES channels(id) ON DELETE CASCADE,
    config_json TEXT NOT NULL,
    version_num INTEGER NOT NULL,
    label TEXT, -- NULL = auto-saved, non-NULL = pinned
    created_at TEXT NOT NULL,
    UNIQUE (channel_id, version_num)
);
CREATE INDEX idx_config_snapshots_channel ON channel_config_snapshots(channel_id, version_num DESC);
version_num assignment: computed inside the write transaction as `SELECT COALESCE(MAX(version_num), 0) + 1 FROM channel_config_snapshots WHERE channel_id = ?`. SQLite's single-writer model serializes concurrent write transactions naturally. The UNIQUE constraint is a safety net only — no 409 is ever exposed to the client; if the constraint somehow fired, the server would retry the transaction (in practice impossible under SQLite's serialized writes).
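The version computation and the insert can also be collapsed into a single statement, so the read and write happen atomically within the same transaction. A sketch, with named parameters as placeholders:

```sql
BEGIN IMMEDIATE;
INSERT INTO channel_config_snapshots
    (id, channel_id, config_json, version_num, label, created_at)
VALUES
    (:id, :channel_id, :config_json,
     (SELECT COALESCE(MAX(version_num), 0) + 1
        FROM channel_config_snapshots
       WHERE channel_id = :channel_id),
     NULL, -- auto-saved snapshot; pinning sets the label later via PATCH
     :created_at);
COMMIT;
```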
New API endpoints (all require auth + channel ownership — same auth middleware as existing channel routes):
GET /channels/:id/config/history
→ [{id, version_num, label, created_at}] -- channel_id omitted (implicit from URL)
PATCH /channels/:id/config/history/:snap_id
body: {"label": "Before S3 switchover"}
→ 404 if snap_id not found or not owned by this channel
→ 200 {id, version_num, label, created_at}
POST /channels/:id/config/history/:snap_id/restore
→ snapshots current config first, then replaces channel config with target snapshot
→ 404 if snap_id not found or not owned by this channel
→ 200 {channel}
Domain + infra changes:
- `ChannelConfigSnapshot` entity (fields: id, channel_id, config, version_num, label, created_at)
- Extend `ChannelRepository` port: `save_config_snapshot`, `list_config_snapshots`, `get_config_snapshot`, `patch_config_snapshot_label`
- `ChannelService::update_channel` calls `save_config_snapshot` before writing the new config
Files changed (backend):
- `domain/src/entities.rs` — add `ChannelConfigSnapshot`
- `domain/src/repositories.rs` — extend `ChannelRepository` port
- `infra/src/channel_repo.rs` — implement snapshot methods
- `migrations_sqlite/YYYYMMDD_add_config_snapshots.sql` — new migration
- `api/src/routes/channels.rs` — new history endpoints + DTOs for snapshot responses
2b. Generated schedule audit log
Ownership check: `get_schedule_by_id(channel_id, gen_id)` queries `generated_schedules WHERE id = :gen_id AND channel_id = :channel_id` — the `channel_id` predicate enforces ownership, so no separate channel lookup is needed.
New API endpoints (all require auth + channel ownership):
GET /channels/:id/schedule/history
→ [{id, generation, valid_from, valid_until}] ordered by generation DESC
GET /channels/:id/schedule/history/:gen_id
→ full GeneratedSchedule with slots
→ 404 if gen_id not found or channel_id mismatch
POST /channels/:id/schedule/history/:gen_id/rollback
→ 404 if gen_id not found or channel_id mismatch
→ explicit two-step delete (no DB-level cascade from playback_records to generated_schedules):
1. DELETE FROM playback_records WHERE channel_id = ? AND generation > :target_generation
2. DELETE FROM generated_schedules WHERE channel_id = ? AND generation > :target_generation
(scheduled_slots cascade via FK from generated_schedules)
→ calls generate_schedule from now
→ 200 {new_schedule}
Repository changes:
- `list_schedule_history(channel_id)` — headers only
- `get_schedule_by_id(channel_id, gen_id)` — full schedule with slots
- `delete_schedules_after(channel_id, generation_num)` — two-step explicit delete as above
Files changed (backend):
- `domain/src/repositories.rs` — extend `ScheduleRepository`
- `infra/src/schedule_repo.rs` — implement list, get-by-id, delete-after
- `api/src/routes/channels.rs` — new history and rollback endpoints
Frontend
Weekly grid editor (edit-channel-sheet.tsx)
Replace the flat block list with a tabbed weekly grid:
- 7 day tabs (Mon–Sun), each showing that day's block list
- Blocks within a day: same card UI as current (drag to reorder, edit, delete)
- "Copy to →" dropdown per tab: duplicates block entries with new UUIDs into target day(s)
- "+ Add block for [Day]" button per tab
- "🕐 Config history" button in sheet footer → opens config history panel
Config history panel (config-history-sheet.tsx — new)
- List of snapshots: version_num, timestamp, label (if pinned)
- Current version highlighted
- Pin button on current version (opens label input)
- Restore button on any past version (confirm dialog)
Schedule audit log (schedule-history-dialog.tsx — new)
- Lists past generations: gen#, date range
- "Rollback to here" button with confirm dialog
Types (lib/types.ts)
type Weekday = 'monday' | 'tuesday' | 'wednesday' | 'thursday' | 'friday' | 'saturday' | 'sunday'
const WEEKDAYS: Weekday[] = ['monday','tuesday','wednesday','thursday','friday','saturday','sunday']
interface ScheduleConfig {
  day_blocks: Record<Weekday, ProgrammingBlock[]>
}

interface ConfigSnapshot {
  id: string
  version_num: number
  label: string | null
  created_at: string
  // channel_id intentionally omitted — always accessed via /channels/:id/config/history
}

interface ScheduleHistoryEntry {
  id: string
  generation: number
  valid_from: string
  valid_until: string
}
Zod schema (lib/schemas.ts)
const weekdaySchema = z.enum(['monday','tuesday','wednesday','thursday','friday','saturday','sunday'])
// replace blocks: z.array(blockSchema) with:
day_blocks: z.record(weekdaySchema, z.array(blockSchema)).default(
() => Object.fromEntries(WEEKDAYS.map(d => [d, []])) as Record<Weekday, ProgrammingBlock[]>
)
// A missing day key is valid (treated as empty). The default initializes all days to [].
Channel export (lib/channel-export.ts)
Export format after V2: day_blocks map as-is (no flattening). The export JSON shape mirrors ScheduleConfig directly. Re-import reads via the same ScheduleConfigCompat deserialization path, so V1 exports remain importable indefinitely.
New hooks (hooks/use-channels.ts)
- `useConfigHistory(channelId)`
- `useRestoreConfig()`
- `usePinSnapshot()`
- `useScheduleHistory(channelId)`
- `useScheduleGeneration(channelId, genId)` (lazy, for detail view)
- `useRollbackSchedule()`
Files changed (frontend)
- `lib/types.ts`
- `lib/schemas.ts`
- `lib/channel-export.ts`
- `hooks/use-channels.ts`
- `dashboard/components/edit-channel-sheet.tsx`
- `dashboard/components/config-history-sheet.tsx` (new)
- `dashboard/components/schedule-history-dialog.tsx` (new)
- `app/(main)/dashboard/page.tsx` — wire new dialog triggers
Verification
| Scenario | Expected |
|---|---|
| Load channel with old `{blocks:[...]}` config | Blocks appear on all 7 day tabs |
| `PUT /channels/:id` on old-format channel | Config saved as V2 `{day_blocks:{...}}`; snapshot v1 created |
| Channel with Mon+Sat blocks only → generate | Slots only on Mondays and Saturdays in 7-day window |
| Day with empty block list | No slots that day, no error |
| `PUT /channels/:id` twice | `GET /config/history` returns 2 entries with incrementing version_num |
| Pin snapshot | Label persists in history list |
| Restore snapshot | Config reverts; new snapshot created at top of history |
| `GET /schedule/history/:bad_id` | 404 |
| Generate 3 schedules → rollback to gen#1 | gen#2+3 deleted (schedules + playback_records); new generation resumes from gen#1 continuity |
| Sequential block at S4E2 → rollback → regenerate | New schedule starts at correct episode |
| Payload with both `blocks` and `day_blocks` keys | `day_blocks` used, `blocks` silently ignored |
| V1 export file re-imported after V2 deploy | Deserializes correctly via compat path |