Compare commits

...

8 commits

Author  SHA1  Message  Commit date
cryptocommuniums-afk  495da60212  Sync relay preview changelog version  2026-03-17 15:07:56 +08:00
cryptocommuniums-afk  1adadbad8c  Harden relay preview mp4 handling  2026-03-17 15:03:33 +08:00
cryptocommuniums-afk  b1752110fb  Sync changelog repo version  2026-03-17 14:46:19 +08:00
cryptocommuniums-afk  0af88b3a15  Fix live camera media asset URLs  2026-03-17 14:44:18 +08:00
cryptocommuniums-afk  902bd783c9  Harden live camera viewer sync rendering  2026-03-17 14:15:14 +08:00
cryptocommuniums-afk  597f16d0b9  Fix live camera pose loading and relay buffer  2026-03-17 12:31:12 +08:00
cryptocommuniums-afk  f3f7e1982c  Improve live camera relay buffering  2026-03-17 09:51:47 +08:00
cryptocommuniums-afk  63dbfd2787  fix live camera preview recovery  2026-03-17 07:39:22 +08:00
9 files changed: 3,487 insertions and 1,245 deletions

View file

@@ -8,11 +8,119 @@ export type ChangeLogEntry = {
};
export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
{
version: "2026.03.17-live-camera-relay-mp4-hardening",
releaseDate: "2026-03-17",
repoVersion: "1adadba",
summary:
"修复实时分析 relay 预览在 Chrome `mp4` 分段下容易失效的问题,并让 live-camera 录制优先回到更稳定的 `webm`。",
features: [
"media 服务在 relay 会话收到第一段 `mp4` 时会额外保留初始化片段,后续滚动缓存即使裁掉旧分段,也能继续为 preview 重建可解码的输入源",
"relay preview 构建会跳过明显异常的小 `mp4` 分段,并优先尝试把保留的初始化片段与当前缓存拼成单一输入后再转成 `preview.webm`",
"如果 relay preview 本轮重建失败,但磁盘上仍有上一版可播放 `preview.webm`,worker 会保留旧预览继续对 viewer 提供播放,而不是直接把同步观看打成永久失败",
"live-camera 的合成录制 mime 选择已改为优先 `video/webm`,Chrome 不再默认上传 fragmented `mp4` relay 分段,从源头减少 `trex/tfhd` 类 ffmpeg 拼接失败",
],
tests: [
"cd media && go test ./...",
"pnpm check",
"pnpm build",
"部署后线上 smoke: 已确认 `https://te.hao.work/` 正在提供新构建;当前线上仍有一条补丁前启动的旧 `mp4` relay 会话在运行,因此完整的 `webm` relay 端到端验证需要在重启该实时分析会话后继续确认",
],
},
{
version: "2026.03.17-live-camera-media-asset-url",
releaseDate: "2026-03-17",
repoVersion: "0af88b3",
summary:
"修复同步观看预览地址被重复拼接 `/media` 导致的 404,观看端可以继续打开 relay 缓存视频。",
features: [
"共享的 `getMediaAssetUrl()` 现在会保留已带 `/media/` 前缀的应用内路径,不再把 `/media/assets/...` 再次拼成 `/media/media/assets/...`",
"当服务端直接返回完整 `https://...` 外链时,前端会原样使用该地址,避免对外部媒体地址做错误拼接",
"其他仍是普通相对路径的媒体资源会继续自动补齐 `/media` 前缀,因此旧调用方无需改动",
"同步观看点击“同步观看”后,请求的 preview 地址恢复为 `/media/assets/sessions/.../preview.webm`,不再因 `404 page not found` 导致无视频可播",
"线上 smoke 已确认 `https://te.hao.work/` 已切换到包含本次修复的新构建,而不是继续提供部署前的旧资源 revision",
],
tests: [
"pnpm vitest run client/src/lib/media.test.ts",
"pnpm check",
"pnpm build",
"playwright-skill 线上 smoke: 登录 `H1` 后访问 `https://te.hao.work/live-camera`,确认 viewer 实际请求 `https://te.hao.work/media/assets/sessions/.../preview.webm?...` 并返回 `200`,同时不存在 `/media/media/...` 双前缀请求",
"线上 smoke: 已确认部署前公开站点还是旧 revision;部署后 `https://te.hao.work/` 已切换到包含本次修复的新构建",
],
},
{
version: "2026.03.17-live-camera-pose-buffer-window",
releaseDate: "2026-03-17",
repoVersion: "f3f7e19+pose-buffer-window",
summary:
"修复实时分析启动时的 MediaPipe Pose 模块加载崩溃,并把多端同步缓存改为默认 2 分钟、可选 10 秒到 5 分钟。",
features: [
"live-camera 开始分析时不再直接解构 `import(\"@mediapipe/pose\")` 的返回值,而是兼容 `Pose`、`default.Pose` 和默认导出三种形态;模块缺失时会抛出明确错误,避免再次出现 `Cannot destructure property 'Pose' ... as it is undefined`",
"同步观看的 relay 缓存时长改为按会话配置,范围 10 秒到 5 分钟,默认 2 分钟;viewer 文案、徽标和设置面板都会实时显示当前缓存窗口",
"owner 端合成画布录制改为每 10 秒上传一次 relay 分片,同时继续维持每 60 秒一段的自动归档录像,因此观看端切到短缓存时不需要再等满 60 秒才出现平滑视频",
"media 服务会按各自 relay 会话的缓存窗口裁剪预览分段,并在从磁盘恢复旧会话时自动归一化缓存秒数,避免旧数据继续按固定 60 秒窗口工作",
"同步端渲染远端 recentSegments 时新增旧快照归一化,`keyFrames`、`issueSummary` 等数组字段缺失时也会自动补默认值,避免再出现 `Cannot read properties of undefined (reading 'length')`",
"同步观看界面新增“已累积 / 还需多久才能看到首段回放 / 距离目标缓存还差多少”的提示,观看端不再只显示笼统的等待文案",
"线上 smoke 已确认 `https://te.hao.work/` 已经提供本次新构建,而不是旧资源版本;首页、主样式和 `pose` 模块都已切到本次发布的最新资源 revision",
],
tests: [
"cd media && go test ./...",
"pnpm vitest run client/src/lib/liveCamera.test.ts",
"pnpm check",
"pnpm build",
"pnpm exec playwright test tests/e2e/app.spec.ts",
"playwright-skill 线上 smoke: 登录 `H1` 后访问 `https://te.hao.work/live-camera`,完成校准、启用假摄像头并点击“开始分析”,确认页面进入分析中状态、默认显示“缓存 2 分钟”、且无控制台与页面级错误",
"curl -I https://te.hao.work/,并确认首页、主样式与 `pose` 模块资源均返回 `200` 和正确 MIME",
],
},
{
version: "2026.03.17-live-camera-relay-buffer",
releaseDate: "2026-03-17",
repoVersion: "63dbfd2+relay-buffer",
summary:
"实时分析同步观看改为服务端滚动视频缓存,观看端不再轮询单帧图片;media 服务同时新增最近 60 秒缓冲和 30 分钟缓存清理。",
features: [
"live-camera owner 端的 60 秒合成录像分段现在会额外上传到 media relay 会话,观看端改为播放服务端生成的滚动 preview 视频,不再依赖 `live-frame.jpg` 单帧轮询",
"relay 会话只保留最近 60 秒分段,worker 会在新分段到达后按最新窗口重建 `preview.webm`,避免观看端继续看到旧一分钟缓存",
"超过 30 分钟无活动的 relay 会话、分段目录和公开缓存文件会自动清理,避免多端同步长期堆积无用缓存",
"实时分析 viewer 文案和占位提示同步调整为“缓冲最近 60 秒视频 / 加载缓存回放”,更贴近现在的服务端缓存播放行为",
"media preview 非归档阶段跳过 mp4 转码,Chrome 观看直接使用 webm,降低 worker 处理时延和 CPU 消耗",
],
tests: [
"cd media && go test ./...",
"pnpm vitest run client/src/lib/liveCamera.test.ts",
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera page exposes camera startup controls|live camera starts analysis and produces scores|live camera switches into viewer mode when another device already owns analysis|live camera recovers mojibake viewer titles before rendering|live camera no longer opens viewer peer retries when server relay is active"',
"pnpm check",
"pnpm build",
"线上 smoke: 部署后确认 `https://te.hao.work/` 已提供新构建而不是旧资源版本,`/live-camera` viewer 端进入“服务端缓存同步”路径并返回正确的 JS/CSS MIME",
],
},
{
version: "2026.03.17-live-camera-preview-recovery",
releaseDate: "2026-03-17",
repoVersion: "06b9701",
summary:
"修复实时分析页标题乱码、同步观看残留状态导致的黑屏,以及切回本机摄像头后预览无法恢复的问题。",
features: [
"runtime 标题恢复逻辑新增更严格的乱码筛除与二次 UTF-8 解码兜底,`服...`、带替换字符的脏标题现在会优先恢复为正常中文,无法恢复时会安全回退到稳定默认标题",
"同步观看退出时会完整重置 viewer 轮询、连接标记和帧版本,不再把旧 viewer 状态残留到 owner 或空闲态,避免页面继续停留在黑屏或“等待同步画面”",
"本地摄像头预览新增独立重绑流程和多次 watchdog 重试,即使浏览器在首帧时没有及时绑定 `srcObject` 或 `play()` 被短暂打断,也会自动恢复预览",
"视频区域是否显示画面改为按当前 runtime 角色分别判断,避免 viewer 的旧连接状态误导 owner 模式,导致本地没有预览时仍隐藏占位提示",
],
tests: [
"pnpm check",
"pnpm vitest run client/src/lib/liveCamera.test.ts",
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera"',
"pnpm build",
"线上 smoke: `curl -I https://te.hao.work/`,并检查页面源码中的 `/assets/index-*.js`、`/assets/index-*.css`、`/assets/pose-*.js` 已切换到新构建且返回正确 MIME",
],
},
{
version: "2026.03.16-live-camera-runtime-refresh",
releaseDate: "2026-03-16",
repoVersion: "8e9e491",
summary:
"修复实时分析页偶发残留在同步观看状态、标题乱码,以及摄像头预览绑定波动导致的启动失败。",
features: [
"live-camera 在打开拍摄引导、启用摄像头、开始分析前,都会先向服务端强制刷新 runtime 状态,避免旧的 viewer 锁残留导致本机明明已释放却仍无法启动",
"同步观看标题新增乱码恢复逻辑,可自动把 UTF-8 被误按 Latin-1 显示的标题恢复成正常中文,避免出现 `服...` 一类异常标题",
@@ -20,7 +128,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"e2e mock 的媒体流补齐为带假视频轨道的流对象,并把 viewer 回归改为校验“服务端 relay、无 viewer-signal”行为,减少和旧 P2P 逻辑混淆",
],
tests: [
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera page exposes camera startup controls|live camera switches into viewer mode when another device already owns analysis|live camera recovers mojibake viewer titles before rendering|live camera no longer opens viewer peer retries when server relay is active"',
"pnpm build",
"部署后线上 smoke: `https://te.hao.work/live-camera` 登录 H1 后可见空闲态“启动摄像头”入口,确认不再被残留 viewer 锁卡住;公开站点前端资源为 `assets/index-33wVjC4p.js` 与 `assets/index-tNGuStgv.css`",
],
@@ -29,7 +137,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-viewer-server-relay",
releaseDate: "2026-03-16",
repoVersion: "bb46d26",
summary:
"实时分析同步观看改为由 media 服务中转帧图,不再依赖浏览器之间的 P2P 视频连接。",
features: [
"owner 端现在会把带骨架、关键点和虚拟形象叠层的合成画布压缩成 JPEG 并持续上传到 media 服务",
"viewer 端改为直接拉取 media 服务中的最新同步帧图,不再建立 WebRTC viewer peer 连接,因此跨网络和多端观看更稳定",
@@ -46,7 +155,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-camera-startup-fallbacks",
releaseDate: "2026-03-16",
repoVersion: "a211562",
summary:
"修复部分设备上摄像头因后置镜头约束、分辨率约束或麦克风不可用而直接启动失败的问题。",
features: [
"live-camera 与 recorder 改为共用分级降级的摄像头请求流程,会在当前画质失败时自动降分辨率、降约束并回退到兼容镜头",
"当设备不支持默认后置摄像头或当前镜头不可用时,页面会自动切换到实际可用的镜头方向,避免直接报错后卡死在未启动状态",
@@ -62,7 +172,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-analysis-viewer-full-sync",
releaseDate: "2026-03-16",
repoVersion: "922a9fb",
summary:
"多端同步观看改为按持有端快照完整渲染,另一设备可同步看到视频状态、模式、画质、虚拟形象和保存阶段信息。",
features: [
"viewer 端现在同步显示持有端的会话标题、训练模式、设备端、拍摄视角、画质模式、虚拟形象状态和最近同步时间",
"同步观看时的分析阶段、保存阶段、已完成状态也会跟随主端刷新,不再只显示本地默认状态",
@@ -70,7 +181,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"新增 viewer 同步信息卡,明确允许 1 秒级延迟,并持续显示最近心跳时间",
],
tests: [
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera switches into viewer mode|viewer stream|recorder blocks"',
"pnpm build",
"部署后线上 smoke: `https://te.hao.work/` 已提供 `assets/index-HRdM3fxq.js` 与 `assets/index-tNGuStgv.css`;同账号 H1 双端登录后,移动端 owner 可开始实时分析,桌面端 `/live-camera` 自动进入同步观看并显示主端信息、同步视频流,owner 点击结束分析后 viewer 会同步进入保存阶段",
],
@@ -79,7 +190,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-analysis-lock-hardening",
releaseDate: "2026-03-16",
repoVersion: "f9db6ef",
summary:
"修复同账号多端实时分析在旧登录态下仍可重复占用摄像头的问题,补强同步观看重试、录制页占用锁,并修复部署后启动阶段长时间 502。",
features: [
"旧用户名登录 token 即使缺少 `sid`,现在也会按 token 本身派生唯一会话标识,不再把不同设备错误识别成同一持有端",
"同步观看模式新增 viewer 自动重试:当持有端刚启动推流、viewer 首次连接返回 `viewer stream not ready` 时,会自动重连而不是一直黑屏",
@@ -91,7 +203,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"curl -I https://te.hao.work/",
"pnpm check",
"pnpm exec vitest run server/_core/sdk.test.ts server/features.test.ts",
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "viewer mode|viewer stream|recorder blocks"',
"pnpm build",
"线上 smoke: H1 手机端开启实时分析后,PC 端 `/live-camera` 自动进入同步观看并显示同步画面,`/recorder` 禁止启动摄像头;结束分析后会话可正常释放",
],
@@ -100,7 +212,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-analysis-runtime-migration",
releaseDate: "2026-03-16",
repoVersion: "2b72ef9",
summary:
"修复实时分析因缺失 `live_analysis_runtime` 表导致的启动失败,并补齐迁移记录避免后续部署再次漏表。",
features: [
"生产库补建 `live_analysis_runtime` 表,并补写 `__drizzle_migrations` 中缺失的 `0011_live_analysis_runtime` 记录",
"仓库内 Drizzle migration journal 补齐 `0011_live_analysis_runtime` 条目,后续 `docker compose` 部署可正确感知该迁移",
@@ -120,7 +233,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-camera-multidevice-viewer",
releaseDate: "2026-03-16",
repoVersion: "4e4122d",
summary:
"实时分析新增同账号多端互斥和同步观看模式,分析持有端独占摄像头,其它端只能查看同步画面与核心识别结果。",
features: [
"同一账号在 `/live-camera` 进入实时分析后,会写入按用户维度的 runtime 锁,其他设备不能重复启动摄像头或分析",
"其他设备会自动进入“同步观看模式”,可订阅持有端的实时画面,并同步看到动作、评分、反馈、最近片段和归档段数",
@@ -133,8 +247,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"pnpm exec vitest run server/features.test.ts",
"go test ./... && go build ./... (media)",
"pnpm build",
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera"',
'pnpm exec playwright test tests/e2e/app.spec.ts --grep "recorder flow archives a session and exposes it in videos"',
"curl -I https://te.hao.work/live-camera",
],
},
@@ -142,7 +256,8 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.16-live-analysis-overlay-archive",
releaseDate: "2026-03-16",
repoVersion: "4fb2d09",
summary:
"实时分析新增 60 秒自动归档录像,录制内容会保留骨架、关键点和虚拟形象叠层,并同步进入视频库。",
features: [
"实时分析开始后会自动录制合成画布,每 60 秒自动切段归档",
"归档录像会保留原视频、骨架线、关键点和当前虚拟形象覆盖效果",
@@ -162,17 +277,15 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
version: "2026.03.15-live-analysis-leave-hint",
releaseDate: "2026-03-15",
repoVersion: "5c2dcf2",
summary:
"实时分析结束后增加离开提示,明确何时必须停留、何时可以安全关闭或切页。",
features: [
"分析进行中显示“不要关闭或切走页面”提示",
"结束分析后保存阶段显示“请暂时停留当前页面”提示",
"保存成功后明确提示“现在可以关闭浏览器或切换到其他页面”",
"分析中和保存中挂接 beforeunload 提醒,减少误关页面导致的数据丢失",
],
tests: ["pnpm check", "pnpm build"],
},
{
version: "2026.03.15-training-generator-collapse",
@@ -185,10 +298,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"移动端继续直接展示完整生成器,避免隐藏关键操作",
"未生成计划时点击“前往生成训练计划”会自动展开并滚动到生成面板",
],
tests: ["pnpm check", "pnpm build"],
},
{
version: "2026.03.15-progress-time-actions",
@@ -201,10 +311,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"展开态动作明细统一用中文动作标签展示",
"提醒页通知时间统一切换为 Asia/Shanghai",
],
tests: ["pnpm check", "pnpm build"],
},
{
version: "2026.03.15-session-changelog",
@@ -256,7 +363,7 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
],
tests: [
"pnpm check",
'pnpm exec vitest run server/features.test.ts -t "video\\\\."',
"Playwright 真实站点完成 /videos 新增-编辑-删除全链路",
],
},
@@ -271,8 +378,6 @@ export const CHANGE_LOG_ENTRIES: ChangeLogEntry[] = [
"训练提醒通知",
"通知历史管理",
],
tests: ["教程库、提醒、通知相关测试通过"],
},
];
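The `2026.03.17-live-camera-pose-buffer-window` entry above describes tolerating three module shapes when dynamically importing `@mediapipe/pose`. The sketch below illustrates one way to do that; `resolvePoseCtor` is a hypothetical name, not the repo's actual helper.

```typescript
// Minimal sketch (hypothetical helper name): normalize the module object
// returned by a dynamic import("@mediapipe/pose") before using it.
type PoseCtor = new (config?: unknown) => unknown;

function resolvePoseCtor(mod: any): PoseCtor {
  // Accept the three shapes the changelog mentions: a named `Pose` export,
  // `default.Pose`, and a callable default export.
  const candidate: unknown =
    mod?.Pose ??
    mod?.default?.Pose ??
    (typeof mod?.default === "function" ? mod.default : undefined);
  if (typeof candidate !== "function") {
    // Fail loudly instead of letting `const { Pose } = undefined` blow up later.
    throw new Error("@mediapipe/pose module loaded without a usable Pose export");
  }
  return candidate as PoseCtor;
}
```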

View file

@@ -1,5 +1,5 @@
import { describe, expect, it } from "vitest";
import { formatRecordingTime, getMediaAssetUrl, pickBitrate } from "./media";
describe("media utilities", () => {
it("formats recording time with minute and second padding", () => {
@@ -14,4 +14,16 @@ describe("media utilities", () => {
expect(pickBitrate("balanced", true)).toBe(1_400_000);
expect(pickBitrate("balanced", false)).toBe(1_900_000);
});
it("keeps already-prefixed media asset paths stable", () => {
expect(getMediaAssetUrl("/media/assets/sessions/demo/preview.webm")).toBe(
"/media/assets/sessions/demo/preview.webm"
);
expect(getMediaAssetUrl("https://cdn.example.com/demo.webm")).toBe(
"https://cdn.example.com/demo.webm"
);
expect(getMediaAssetUrl("/assets/sessions/demo/preview.webm")).toBe(
"/media/assets/sessions/demo/preview.webm"
);
});
});
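The rule these new test cases pin down is small enough to restate standalone. The sketch below assumes the default `/media` base; the real module derives `MEDIA_BASE` from `VITE_MEDIA_BASE_URL`.

```typescript
// Standalone restatement of the prefix rule the tests above cover.
// Assumes MEDIA_BASE is "/media"; the real module reads it from env config.
const MEDIA_BASE = "/media";

function getMediaAssetUrl(path: string): string {
  // Full external URLs pass through untouched.
  if (/^https?:\/\//i.test(path)) return path;
  // Already app-prefixed paths stay stable, avoiding /media/media/... 404s.
  if (path.startsWith(`${MEDIA_BASE}/`)) return path;
  // Plain relative paths still get the /media prefix, so old callers keep working.
  return `${MEDIA_BASE}${path.startsWith("/") ? path : `/${path}`}`;
}
```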

View file

@@ -14,11 +14,7 @@ export type ArchiveStatus =
| "completed"
| "failed";
export type PreviewStatus = "idle" | "processing" | "ready" | "failed";
export type MediaMarker = {
id: string;
@@ -33,6 +29,7 @@ export type MediaSession = {
id: string;
userId: string;
title: string;
purpose?: "recording" | "relay";
status: MediaSessionStatus;
archiveStatus: ArchiveStatus;
previewStatus: PreviewStatus;
@@ -46,6 +43,7 @@ export type MediaSession = {
uploadedBytes: number;
previewSegments: number;
durationMs: number;
relayBufferSeconds?: number;
lastError?: string;
previewUpdatedAt?: string;
streamConnected: boolean;
@@ -64,11 +62,14 @@ export type MediaSession = {
markers: MediaMarker[];
};
const MEDIA_BASE = (import.meta.env.VITE_MEDIA_BASE_URL || "/media").replace(
/\/$/,
""
);
const RETRYABLE_STATUS = new Set([502, 503, 504]);
function sleep(ms: number) {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function request<T>(path: string, init?: RequestInit): Promise<T> {
@@ -79,7 +80,11 @@ async function request<T>(path: string, init?: RequestInit): Promise<T> {
const response = await fetch(`${MEDIA_BASE}${path}`, init);
if (!response.ok) {
const errorBody = await response.json().catch(() => ({}));
const error = new Error(
errorBody.error ||
errorBody.message ||
`Media service error (${response.status})`
);
if (RETRYABLE_STATUS.has(response.status) && attempt < 2) {
lastError = error;
await sleep(400 * (attempt + 1));
@@ -89,7 +94,8 @@ async function request<T>(path: string, init?: RequestInit): Promise<T> {
}
return response.json() as Promise<T>;
} catch (error) {
lastError =
error instanceof Error ? error : new Error("Media request failed");
if (attempt < 2) {
await sleep(400 * (attempt + 1));
continue;
@@ -109,6 +115,8 @@ export async function createMediaSession(payload: {
qualityPreset: string;
facingMode: string;
deviceKind: string;
purpose?: "recording" | "relay";
relayBufferSeconds?: number;
}) {
return request<{ session: MediaSession }>("/sessions", {
method: "POST",
@@ -117,28 +125,43 @@ export async function createMediaSession(payload: {
});
}
export async function signalMediaSession(
sessionId: string,
payload: { sdp: string; type: string }
) {
return request<{ sdp: string; type: string }>(
`/sessions/${sessionId}/signal`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
}
);
}
export async function signalMediaViewerSession(
sessionId: string,
payload: { sdp: string; type: string }
) {
return request<{ viewerId: string; sdp: string; type: string }>(
`/sessions/${sessionId}/viewer-signal`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
}
);
}
export async function uploadMediaLiveFrame(sessionId: string, blob: Blob) {
return request<{ session: MediaSession }>(
`/sessions/${sessionId}/live-frame`,
{
method: "POST",
headers: { "Content-Type": blob.type || "image/jpeg" },
body: blob,
}
);
}
export async function uploadMediaSegment(
@@ -159,7 +182,12 @@ export async function uploadMediaSegment(
export async function createMediaMarker(
sessionId: string,
payload: {
type: string;
label: string;
timestampMs: number;
confidence?: number;
}
) {
return request<{ session: MediaSession }>(`/sessions/${sessionId}/markers`, {
method: "POST",
@@ -184,6 +212,12 @@ export async function getMediaSession(sessionId: string) {
}
export function getMediaAssetUrl(path: string) {
if (/^https?:\/\//i.test(path)) {
return path;
}
if (path.startsWith(`${MEDIA_BASE}/`)) {
return path;
}
return `${MEDIA_BASE}${path.startsWith("/") ? path : `/${path}`}`;
}
@@ -201,7 +235,11 @@ export function pickRecorderMimeType() {
"video/webm;codecs=h264,opus",
"video/webm",
];
return (
candidates.find(candidate =>
window.MediaRecorder?.isTypeSupported(candidate)
) || "video/webm"
);
}
export function pickBitrate(preset: string, isMobile: boolean) {

File diff not shown: content too large to display

View file

@@ -1,5 +1,155 @@
# Tennis Training Hub - 变更日志
## 2026.03.17-live-camera-relay-mp4-hardening (2026-03-17)
### 功能更新
- 修复实时分析 relay 预览在 Chrome `mp4` 分段下容易失效的问题;media 服务现在会在 relay 会话收到第一段 `mp4` 时额外保留初始化片段,供后续滚动 preview 重建使用
- relay preview 构建会跳过明显异常的小 `mp4` 分段,并优先把初始化片段和当前缓存合成单一输入后再转成 `preview.webm`,降低 `trex/tfhd` 缺失导致的 ffmpeg 失败率
- 如果 relay preview 本轮重建失败,但磁盘上仍有上一版可播放 `preview.webm`,worker 会保留旧预览继续服务 viewer,而不是直接把同步观看打成永久失败
- `live-camera` 合成录制的 mime 选择已经改成优先 `video/webm`;Chrome 不再默认优先上传 fragmented `mp4` relay 分段,从源头减少 `concat failed` 与 `previewStatus=failed`
### 测试
- `cd media && go test ./...`
- `pnpm check`
- `pnpm build`
- 部署后线上 smoke: 已确认 `https://te.hao.work/` 正在提供新构建;当前线上仍有一条补丁前启动的旧 `mp4` relay 会话在运行,因此完整的 `webm` relay 端到端验证需要在重启该实时分析会话后继续确认
### 线上 smoke
- 已确认公开站点已切到包含此修复的新资源 revision
- 当前线上仍有一条补丁前启动的旧 `mp4` relay 会话在运行,它会继续暴露旧分段问题;重新开始一条新的实时分析会话后,再继续验证 relay 分段格式、preview 更新稳定性和 viewer 播放状态
### 仓库版本
- `1adadba`
## 2026.03.17-live-camera-media-asset-url (2026-03-17)
### 功能更新
- 修复同步观看预览地址重复拼接 `/media` 的问题;当前端收到 `/media/assets/...` 这类已完整的应用内媒体路径时,会直接使用原值,不再错误请求 `/media/media/assets/...`
- 当前端收到完整的 `https://...` 外部媒体地址时,也会保持原样,避免把外链错误改写成站内 media 路径
- 其他仍是普通相对路径的媒体资源会继续自动补齐 `/media` 前缀,因此原有依赖相对路径的调用链不需要调整
- `/live-camera` 点击“同步观看”后,请求的缓存视频地址恢复为 `/media/assets/sessions/.../preview.webm`,不再因 `404 page not found` 导致无视频可播
### 测试
- `pnpm vitest run client/src/lib/media.test.ts`
- `pnpm check`
- `pnpm build`
- `playwright-skill` 线上 smoke: 登录 `H1` 后访问 `https://te.hao.work/live-camera`,确认 viewer 实际请求 `https://te.hao.work/media/assets/sessions/.../preview.webm?...` 并返回 `200`,同时不存在 `/media/media/...` 双前缀请求
- `curl -I https://te.hao.work/`
- `curl -I https://te.hao.work/assets/index-*.js`
- `curl -I https://te.hao.work/assets/index-*.css`
### 线上 smoke
- 部署前确认公开站点仍在旧资源 revision,尚未提供本次修复
- 部署完成后,`https://te.hao.work/` 已切到本次新构建,而不是继续提供部署前的旧资源 revision
- `/live-camera` 的同步观看请求地址已恢复为 `/media/assets/sessions/.../preview.webm`,Playwright 真实浏览器验证拿到的 preview 请求状态为 `200`
- 已确认不存在 `/media/media/assets/...` 双重前缀请求
### 仓库版本
- `0af88b3`
## 2026.03.17-live-camera-pose-buffer-window (2026-03-17)
### 功能更新
- 修复 `/live-camera` 开始分析时报错 `Cannot destructure property 'Pose' ... as it is undefined` 的问题;MediaPipe Pose 动态加载现在兼容 `Pose`、`default.Pose` 和默认导出三种模块形态
- 多端同步观看的 relay 缓存窗口改为按会话配置,默认 `2` 分钟,可选最短 `10` 秒、最长 `5` 分钟;viewer 页面、徽标和设置卡都会同步显示当前缓存时长
- owner 端分析录制在继续保持“每 `60` 秒自动归档”之外,会额外每 `10` 秒上传一次 relay 分片,因此短缓存模式下其他端不需要等待整整 `60` 秒才看到平滑同步视频
- media 服务会按各自 relay 会话的缓存秒数裁剪 preview 分段;从磁盘恢复旧 relay 会话时也会自动归一化到合法范围,避免旧会话继续沿用固定 `60` 秒窗口
- 同步端渲染远端 `recentSegments` 时新增旧快照归一化;即使历史快照缺少 `keyFrames`、`issueSummary` 等数组字段,也会自动补默认值,不再触发 `Cannot read properties of undefined (reading 'length')`
- 同步观看界面新增“已累积多少缓存、预计还需多久才能看到首段回放、距离目标缓存还差多少”的提示,观看端等待阶段会给出更明确的可观察时间说明
- 线上 smoke 已确认 `https://te.hao.work/` 正在提供本次新构建,而不是旧资源版本;当前公开站点资源 revision 为 `assets/index-CYpJPG0R.js`、`assets/index-BHHHsAWc.css` 与 `assets/pose-C93FSit6.js`
### 测试
- `cd media && go test ./...`
- `pnpm vitest run client/src/lib/liveCamera.test.ts`
- `pnpm check`
- `pnpm build`
- `pnpm exec playwright test tests/e2e/app.spec.ts`
- `playwright-skill` 线上 smoke: 登录 `H1` 后访问 `https://te.hao.work/live-camera`,完成校准、启用假摄像头并点击“开始分析”,确认页面进入分析中状态、默认显示“缓存 2 分钟”,且无控制台与页面级错误
- `curl -I https://te.hao.work/`
- `curl -I https://te.hao.work/assets/index-CYpJPG0R.js`
- `curl -I https://te.hao.work/assets/index-BHHHsAWc.css`
- `curl -I https://te.hao.work/assets/pose-C93FSit6.js`
### 线上 smoke
- `https://te.hao.work/` 已切换到本次新构建,而不是旧资源版本
- 当前公开站点前端资源 revision 为 `assets/index-CYpJPG0R.js`、`assets/index-BHHHsAWc.css` 与 `assets/pose-C93FSit6.js`
- 已确认首页、主 JS、主 CSS 与 `pose` 模块均返回 `200`,且 MIME 分别为 `text/html`、`application/javascript`、`text/css` 与 `application/javascript`
- 真实浏览器验证已通过:登录 `H1` 后进入 `/live-camera`,能够完成校准、启用摄像头并点击“开始分析”;页面会进入“分析进行中”状态,默认显示“缓存 2 分钟”,且未再出现 `Pose` 模块解构异常
### 仓库版本
- `f3f7e19+pose-buffer-window`
## 2026.03.17-live-camera-relay-buffer (2026-03-17)
### 功能更新
- `/live-camera` 的同步观看改为播放 media 服务生成的滚动缓存视频,不再轮询 `live-frame.jpg` 单帧图片,因此观看端的画面会按最近 60 秒缓存视频平滑播放
- owner 端每个 60 秒的合成录像分段现在会额外上传到 `relay` 会话,worker 会在收到新分段后自动重建最近窗口的 `preview.webm`
- `relay` 会话只保留最近 60 秒视频分段,旧分段会从会话元数据和磁盘同步清理,避免观看端继续读到旧一分钟之前的缓存
- media worker 会自动清理超过 30 分钟无活动的 relay 会话、分段目录和公开缓存文件,降低磁盘堆积风险
- viewer 页面文案、加载提示和按钮文案已同步更新为“缓存视频 / 缓存回放”语义;预览阶段跳过 mp4 转码,Chrome 直接使用 webm,降低处理时延
### 测试
- `cd media && go test ./...`
- `pnpm vitest run client/src/lib/liveCamera.test.ts`
- `pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera page exposes camera startup controls|live camera starts analysis and produces scores|live camera switches into viewer mode when another device already owns analysis|live camera recovers mojibake viewer titles before rendering|live camera no longer opens viewer peer retries when server relay is active"`
- `pnpm check`
- `pnpm build`
- 线上 smoke: 部署后确认 `https://te.hao.work/` 已提供新构建而不是旧资源版本,`/live-camera` viewer 端进入“服务端缓存同步”路径,首页与资源文件返回正确 MIME
### 线上 smoke
- 部署完成后已确认 `https://te.hao.work/` 提供的是本次新构建,而不是旧资源版本
- `https://te.hao.work/live-camera` 的 viewer 端会走“服务端缓存同步”路径,不再请求旧的 `live-frame.jpg` 单帧同步
- 首页、主 JS、主 CSS 与 `pose` 模块均返回 `200` 和正确 MIME,未再出现脚本/样式被回退成 `text/html` 的问题
### 仓库版本
- `63dbfd2+relay-buffer`
## 2026.03.17-live-camera-preview-recovery (2026-03-17)
### 功能更新
- `/live-camera` 的 runtime 标题恢复逻辑新增更严格的乱码筛除与二次 UTF-8 解码兜底,`服...` 这类异常标题会优先恢复为正常中文;无法恢复时会自动回退到稳定默认标题,避免继续显示脏字符串
- 同步观看退出时会完整重置 viewer 轮询、连接标记和帧版本,不再把旧的 viewer 状态带回 owner 或空闲态,修复退出同步后仍黑屏、仍显示“等待同步画面”的问题
- 本地摄像头预览增加独立重绑流程和多次 watchdog 重试,即使浏览器首帧没有及时绑定 `srcObject``play()` 被短暂中断,也会继续自动恢复本地预览
- 视频区域是否显示画面改为按当前 runtime 角色分别判断,避免 viewer 旧连接状态误导 owner 模式,导致本地没有预览时仍错误隐藏占位提示
### 测试
- `pnpm check`
- `pnpm vitest run client/src/lib/liveCamera.test.ts`
- `pnpm exec playwright test tests/e2e/app.spec.ts --grep "live camera"`
- `pnpm build`
- 线上 smoke: `curl -I https://te.hao.work/`
- 线上 smoke: `curl -I https://te.hao.work/assets/index-BJ7rV3xe.js`
- 线上 smoke: `curl -I https://te.hao.work/assets/index-tNGuStgv.css`
- 线上 smoke: `curl -I https://te.hao.work/assets/pose-CZKsH31a.js`
### 线上 smoke
- `https://te.hao.work/` 已切换到本次新构建
- 当前公开站点前端资源 revision 为 `assets/index-BJ7rV3xe.js`、`assets/index-tNGuStgv.css` 与 `assets/pose-CZKsH31a.js`
- 已确认 `index`、`css` 与 `pose` 模块均返回 `200`,且 MIME 分别为 `application/javascript`、`text/css` 与 `application/javascript`,不再出现此前的模块脚本和样式被当成 `text/html` 返回的问题
### 仓库版本
- `06b9701`
## 2026.03.16-live-camera-runtime-refresh (2026-03-16)
### 功能更新
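The relay-buffer entries above describe a rolling window: only segments inside the configured buffer survive each `preview.webm` rebuild. A small pruning helper can illustrate the idea; the segment shape and function name here are hypothetical, not the media service's actual Go types.

```typescript
// Illustrative rolling-window prune (hypothetical names): drop segments that
// fall entirely outside the relay buffer before rebuilding the preview.
type RelaySegment = { startMs: number; durationMs: number };

function pruneRelaySegments(
  segments: RelaySegment[],
  nowMs: number,
  bufferSeconds: number
): RelaySegment[] {
  const cutoffMs = nowMs - bufferSeconds * 1000;
  // Keep any segment that still overlaps the window, so playback stays continuous.
  return segments.filter((s) => s.startMs + s.durationMs > cutoffMs);
}
```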

View file

@@ -53,6 +53,20 @@ const (
PreviewFailed PreviewStatus = "failed"
)
type SessionPurpose string
const (
PurposeRecording SessionPurpose = "recording"
PurposeRelay SessionPurpose = "relay"
)
const (
defaultRelayBufferSeconds = 120
minRelayBufferSeconds = 10
maxRelayBufferSeconds = 300
relayCacheTTL = 30 * time.Minute
)
type PlaybackInfo struct {
WebMURL string `json:"webmUrl,omitempty"`
MP4URL string `json:"mp4Url,omitempty"`
@@ -81,35 +95,38 @@ type Marker struct {
}
type Session struct {
ID string `json:"id"`
UserID string `json:"userId"`
Title string `json:"title"`
Purpose SessionPurpose `json:"purpose"`
Status SessionStatus `json:"status"`
ArchiveStatus ArchiveStatus `json:"archiveStatus"`
PreviewStatus PreviewStatus `json:"previewStatus"`
Format string `json:"format"`
MimeType string `json:"mimeType"`
QualityPreset string `json:"qualityPreset"`
FacingMode string `json:"facingMode"`
DeviceKind string `json:"deviceKind"`
ReconnectCount int `json:"reconnectCount"`
UploadedSegments int `json:"uploadedSegments"`
UploadedBytes int64 `json:"uploadedBytes"`
PreviewSegments int `json:"previewSegments"`
DurationMS int64 `json:"durationMs"`
RelayBufferSeconds int `json:"relayBufferSeconds"`
LastError string `json:"lastError,omitempty"`
CreatedAt string `json:"createdAt"`
UpdatedAt string `json:"updatedAt"`
FinalizedAt string `json:"finalizedAt,omitempty"`
PreviewUpdatedAt string `json:"previewUpdatedAt,omitempty"`
RelayInitFilename string `json:"relayInitFilename,omitempty"`
StreamConnected bool `json:"streamConnected"`
LastStreamAt string `json:"lastStreamAt,omitempty"`
ViewerCount int `json:"viewerCount"`
LiveFrameURL string `json:"liveFrameUrl,omitempty"`
LiveFrameUpdated string `json:"liveFrameUpdatedAt,omitempty"`
Playback PlaybackInfo `json:"playback"`
Segments []SegmentMeta `json:"segments"`
Markers []Marker `json:"markers"`
}
func (s *Session) recomputeAggregates() {
@@ -127,13 +144,15 @@ func (s *Session) recomputeAggregates() {
}
type CreateSessionRequest struct {
UserID string `json:"userId"`
Title string `json:"title"`
Format string `json:"format"`
MimeType string `json:"mimeType"`
QualityPreset string `json:"qualityPreset"`
FacingMode string `json:"facingMode"`
DeviceKind string `json:"deviceKind"`
Purpose string `json:"purpose"`
RelayBufferSeconds int `json:"relayBufferSeconds"`
}
type SignalRequest struct {
@@ -157,10 +176,10 @@ type sessionStore struct {
rootDir string
public string
mu sync.RWMutex
sessions map[string]*Session
peers map[string]*webrtc.PeerConnection
viewerPeers map[string]map[string]*webrtc.PeerConnection
videoTracks map[string]*webrtc.TrackLocalStaticRTP
}
func newSessionStore(rootDir string) (*sessionStore, error) {
@@ -213,6 +232,15 @@ func (s *sessionStore) refreshFromDisk() error {
if err != nil {
return err
}
for _, session := range sessions {
if session.Purpose == "" {
session.Purpose = PurposeRecording
}
if session.Purpose == PurposeRelay {
session.RelayBufferSeconds = normalizeRelayBufferSeconds(session.RelayBufferSeconds)
}
session.recomputeAggregates()
}
s.mu.Lock()
defer s.mu.Unlock()
s.sessions = sessions
@@ -231,6 +259,10 @@ func (s *sessionStore) publicDir(id string) string {
return filepath.Join(s.public, "sessions", id)
}
func (s *sessionStore) relayInitPath(id string) string {
return filepath.Join(s.sessionDir(id), "relay-init.mp4")
}
func (s *sessionStore) liveFramePath(id string) string {
return filepath.Join(s.publicDir(id), "live-frame.jpg")
}
@@ -261,22 +293,29 @@ func cloneSession(session *Session) *Session {
func (s *sessionStore) createSession(input CreateSessionRequest) (*Session, error) {
now := time.Now().UTC().Format(time.RFC3339)
purpose := SessionPurpose(defaultString(input.Purpose, string(PurposeRecording)))
relayBufferSeconds := 0
if purpose == PurposeRelay {
relayBufferSeconds = normalizeRelayBufferSeconds(input.RelayBufferSeconds)
}
session := &Session{
ID: randomID(),
UserID: strings.TrimSpace(input.UserID),
Title: strings.TrimSpace(input.Title),
Purpose: purpose,
Status: StatusCreated,
ArchiveStatus: ArchiveIdle,
PreviewStatus: PreviewIdle,
Format: defaultString(input.Format, "webm"),
MimeType: defaultString(input.MimeType, "video/webm"),
QualityPreset: defaultString(input.QualityPreset, "balanced"),
FacingMode: defaultString(input.FacingMode, "environment"),
DeviceKind: defaultString(input.DeviceKind, "desktop"),
RelayBufferSeconds: relayBufferSeconds,
CreatedAt: now,
UpdatedAt: now,
Segments: []SegmentMeta{},
Markers: []Marker{},
}
s.mu.Lock()
defer s.mu.Unlock()
@@ -290,6 +329,123 @@ func (s *sessionStore) createSession(input CreateSessionRequest) (*Session, erro
return cloneSession(session), nil
}
func normalizeRelayBufferSeconds(value int) int {
if value <= 0 {
return defaultRelayBufferSeconds
}
if value < minRelayBufferSeconds {
return minRelayBufferSeconds
}
if value > maxRelayBufferSeconds {
return maxRelayBufferSeconds
}
return value
}
func relayPreviewWindowForSession(session *Session) time.Duration {
return time.Duration(normalizeRelayBufferSeconds(session.RelayBufferSeconds)) * time.Second
}
func parseSessionTime(values ...string) time.Time {
for _, value := range values {
if strings.TrimSpace(value) == "" {
continue
}
if parsed, err := time.Parse(time.RFC3339, value); err == nil {
return parsed
}
}
return time.Time{}
}
func sortSegmentsBySequence(segments []SegmentMeta) {
sort.Slice(segments, func(i, j int) bool {
return segments[i].Sequence < segments[j].Sequence
})
}
func maxInt64(value int64, minimum int64) int64 {
if value < minimum {
return minimum
}
return value
}
func trimSegmentsToDuration(segments []SegmentMeta, maxDuration time.Duration) (kept []SegmentMeta, removed []SegmentMeta) {
if len(segments) == 0 {
return []SegmentMeta{}, []SegmentMeta{}
}
limitMS := maxDuration.Milliseconds()
total := int64(0)
startIndex := len(segments) - 1
for index := len(segments) - 1; index >= 0; index-- {
total += maxInt64(segments[index].DurationMS, 1)
startIndex = index
if total >= limitMS {
break
}
}
kept = append([]SegmentMeta(nil), segments[startIndex:]...)
removed = append([]SegmentMeta(nil), segments[:startIndex]...)
return kept, removed
}
func sessionNeedsPreview(session *Session) bool {
if len(session.Segments) == 0 {
return false
}
if session.PreviewStatus == PreviewProcessing {
return false
}
if session.PreviewStatus != PreviewReady || session.PreviewSegments < len(session.Segments) {
return true
}
previewUpdatedAt := parseSessionTime(session.PreviewUpdatedAt)
if previewUpdatedAt.IsZero() {
return true
}
for _, segment := range session.Segments {
uploadedAt := parseSessionTime(segment.UploadedAt)
if !uploadedAt.IsZero() && uploadedAt.After(previewUpdatedAt) {
return true
}
}
return false
}
func (s *sessionStore) pruneExpiredRelaySessions(maxAge time.Duration, now time.Time) error {
s.mu.Lock()
defer s.mu.Unlock()
for id, session := range s.sessions {
if session.Purpose != PurposeRelay {
continue
}
lastActivity := parseSessionTime(session.UpdatedAt, session.LastStreamAt, session.LiveFrameUpdated, session.CreatedAt)
if lastActivity.IsZero() || now.Sub(lastActivity) < maxAge {
continue
}
delete(s.sessions, id)
delete(s.peers, id)
delete(s.viewerPeers, id)
delete(s.videoTracks, id)
if err := os.RemoveAll(s.sessionDir(id)); err != nil && !errors.Is(err, os.ErrNotExist) {
return err
}
if err := os.RemoveAll(s.publicDir(id)); err != nil && !errors.Is(err, os.ErrNotExist) {
return err
}
}
return nil
}
func (s *sessionStore) getSession(id string) (*Session, error) {
s.mu.RLock()
defer s.mu.RUnlock()
@@ -415,7 +571,7 @@ func (s *sessionStore) listProcessableSessions() []*Session {
items = append(items, cloneSession(session))
continue
}
if sessionNeedsPreview(session) {
items = append(items, cloneSession(session))
}
}
@@ -822,6 +978,8 @@ func (m *mediaServer) handleSegmentUpload(sessionID string, w http.ResponseWrite
return
}
removedSegments := []SegmentMeta{}
persistRelayInit := false
session, err := m.store.updateSession(sessionID, func(session *Session) error {
meta := SegmentMeta{
Sequence: sequence,
@@ -842,9 +1000,16 @@ func (m *mediaServer) handleSegmentUpload(sessionID string, w http.ResponseWrite
if !found {
session.Segments = append(session.Segments, meta)
}
if session.Purpose == PurposeRelay && extension == "mp4" && session.RelayInitFilename == "" && sequence <= 1 {
session.RelayInitFilename = filename
persistRelayInit = true
}
sortSegmentsBySequence(session.Segments)
if session.Purpose == PurposeRelay {
var kept []SegmentMeta
kept, removedSegments = trimSegmentsToDuration(session.Segments, relayPreviewWindowForSession(session))
session.Segments = kept
}
session.Status = StatusRecording
session.LastError = ""
return nil
@@ -853,6 +1018,17 @@ func (m *mediaServer) handleSegmentUpload(sessionID string, w http.ResponseWrite
writeError(w, http.StatusNotFound, err.Error())
return
}
if persistRelayInit {
if copyErr := copyFile(segmentPath, m.store.relayInitPath(sessionID)); copyErr != nil {
log.Printf("failed to persist relay init segment for %s: %v", sessionID, copyErr)
}
}
for _, segment := range removedSegments {
segmentPath := filepath.Join(m.store.segmentsDir(sessionID), segment.Filename)
if removeErr := os.Remove(segmentPath); removeErr != nil && !errors.Is(removeErr, os.ErrNotExist) {
log.Printf("failed to remove pruned relay segment %s: %v", segmentPath, removeErr)
}
}
writeJSON(w, http.StatusAccepted, map[string]any{"session": session})
}
@@ -919,6 +1095,9 @@ func runWorkerLoop(ctx context.Context, store *sessionStore, interval time.Durat
log.Printf("[worker] failed to refresh session store: %v", err)
continue
}
if err := store.pruneExpiredRelaySessions(relayCacheTTL, time.Now().UTC()); err != nil {
log.Printf("[worker] failed to prune relay cache: %v", err)
}
sessions := store.listProcessableSessions()
for _, session := range sessions {
if err := processSession(store, session.ID); err != nil {
@@ -939,7 +1118,7 @@ func processSession(store *sessionStore, sessionID string) error {
return processFinalArchive(store, sessionID)
}
if sessionNeedsPreview(current) {
return processRollingPreview(store, sessionID)
}
@@ -1009,38 +1188,86 @@ func buildPlaybackArtifacts(store *sessionStore, session *Session, finalize bool
outputMP4 := filepath.Join(publicDir, baseName+".mp4")
listFile := filepath.Join(store.sessionDir(sessionID), "concat.txt")
validSegments := make([]SegmentMeta, 0, len(session.Segments))
inputs := make([]string, 0, len(session.Segments))
sortSegmentsBySequence(session.Segments)
for _, segment := range session.Segments {
inputPath := filepath.Join(store.segmentsDir(sessionID), segment.Filename)
info, statErr := os.Stat(inputPath)
if statErr != nil {
continue
}
if shouldSkipSegment(segment, info.Size()) {
continue
}
validSegments = append(validSegments, segment)
inputs = append(inputs, inputPath)
}
if len(inputs) == 0 {
return markProcessingError(store, sessionID, errors.New("no valid uploaded segments found"), finalize)
}
if !finalize && session.Purpose == PurposeRelay && usesMP4Segments(validSegments) {
mergedInput, cleanup, mergeErr := buildRelayMP4Source(store, session, validSegments, inputs)
if cleanup != nil {
defer cleanup()
}
if mergeErr == nil {
transcodeErr := runFFmpeg(
"-y",
"-i",
mergedInput,
"-c:v",
"libvpx-vp9",
"-b:v",
"1800k",
"-c:a",
"libopus",
outputWebM,
)
if transcodeErr == nil {
goto finalizePlayback
}
mergeErr = transcodeErr
}
if err := writeConcatList(listFile, inputs); err != nil {
return markProcessingError(store, sessionID, err, finalize)
}
copyErr := runFFmpeg("-y", "-f", "concat", "-safe", "0", "-i", listFile, "-c", "copy", outputWebM)
if copyErr != nil {
reencodeErr := runFFmpeg("-y", "-f", "concat", "-safe", "0", "-i", listFile, "-c:v", "libvpx-vp9", "-b:v", "1800k", "-c:a", "libopus", outputWebM)
if reencodeErr != nil {
return markProcessingError(store, sessionID, fmt.Errorf("relay mp4 preview failed: %w / %v / %v", mergeErr, copyErr, reencodeErr), finalize)
}
}
} else {
if err := writeConcatList(listFile, inputs); err != nil {
return markProcessingError(store, sessionID, err, finalize)
}
if len(inputs) == 1 {
body, copyErr := os.ReadFile(inputs[0])
if copyErr != nil {
return markProcessingError(store, sessionID, copyErr, finalize)
}
if writeErr := os.WriteFile(outputWebM, body, 0o644); writeErr != nil {
return markProcessingError(store, sessionID, writeErr, finalize)
}
} else {
copyErr := runFFmpeg("-y", "-f", "concat", "-safe", "0", "-i", listFile, "-c", "copy", outputWebM)
if copyErr != nil {
reencodeErr := runFFmpeg("-y", "-f", "concat", "-safe", "0", "-i", listFile, "-c:v", "libvpx-vp9", "-b:v", "1800k", "-c:a", "libopus", outputWebM)
if reencodeErr != nil {
return markProcessingError(store, sessionID, fmt.Errorf("concat failed: %w / %v", copyErr, reencodeErr), finalize)
}
}
}
}
finalizePlayback:
if finalize {
mp4Err := runFFmpeg("-y", "-i", outputWebM, "-c:v", "libx264", "-preset", "veryfast", "-crf", "28", "-c:a", "aac", "-movflags", "+faststart", outputMP4)
if mp4Err != nil {
log.Printf("[worker] mp4 archive generation failed for %s: %v", sessionID, mp4Err)
}
}
webmInfo, webmStatErr := os.Stat(outputWebM)
@@ -1049,18 +1276,20 @@ func buildPlaybackArtifacts(store *sessionStore, session *Session, finalize bool
}
var mp4Size int64
var mp4URL string
previewURL := fmt.Sprintf("/media/assets/sessions/%s/%s.webm", sessionID, baseName)
if finalize {
if info, statErr := os.Stat(outputMP4); statErr == nil {
mp4Size = info.Size()
mp4URL = fmt.Sprintf("/media/assets/sessions/%s/recording.mp4", sessionID)
}
if mp4URL != "" {
previewURL = mp4URL
}
}
_, updateErr := store.updateSession(sessionID, func(session *Session) error {
session.Playback.PreviewURL = previewURL
session.PreviewSegments = len(validSegments)
session.PreviewUpdatedAt = time.Now().UTC().Format(time.RFC3339)
session.PreviewStatus = PreviewReady
session.LastError = ""
@@ -1083,6 +1312,15 @@ func buildPlaybackArtifacts(store *sessionStore, session *Session, finalize bool
func markProcessingError(store *sessionStore, sessionID string, err error, finalize bool) error {
_, _ = store.updateSession(sessionID, func(session *Session) error {
if !finalize {
previewPath := filepath.Join(store.publicDir(sessionID), "preview.webm")
if info, statErr := os.Stat(previewPath); statErr == nil && info.Size() > 0 {
session.PreviewStatus = PreviewReady
session.Playback.PreviewURL = fmt.Sprintf("/media/assets/sessions/%s/preview.webm", sessionID)
session.LastError = err.Error()
return nil
}
}
session.PreviewStatus = PreviewFailed
if finalize {
session.ArchiveStatus = ArchiveFailed
@@ -1102,6 +1340,78 @@ func writeConcatList(path string, inputs []string) error {
return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}
func usesMP4Segments(segments []SegmentMeta) bool {
for _, segment := range segments {
if strings.HasSuffix(strings.ToLower(segment.Filename), ".mp4") || strings.Contains(strings.ToLower(segment.ContentType), "mp4") {
return true
}
}
return false
}
func shouldSkipSegment(segment SegmentMeta, sizeBytes int64) bool {
if sizeBytes <= 0 {
return true
}
if strings.HasSuffix(strings.ToLower(segment.Filename), ".mp4") && sizeBytes < 4096 {
return true
}
return false
}
func buildRelayMP4Source(store *sessionStore, session *Session, segments []SegmentMeta, inputs []string) (string, func(), error) {
sourceFiles := make([]string, 0, len(inputs)+1)
initPath := store.relayInitPath(session.ID)
if session.RelayInitFilename != "" && len(segments) > 0 && segments[0].Filename != session.RelayInitFilename {
if info, err := os.Stat(initPath); err == nil && info.Size() > 0 {
sourceFiles = append(sourceFiles, initPath)
}
}
sourceFiles = append(sourceFiles, inputs...)
if len(sourceFiles) == 0 {
return "", nil, errors.New("no relay mp4 source segments found")
}
mergedPath := filepath.Join(store.sessionDir(session.ID), "relay-preview-source.mp4")
output, err := os.Create(mergedPath)
if err != nil {
return "", nil, err
}
defer output.Close()
for _, source := range sourceFiles {
input, openErr := os.Open(source)
if openErr != nil {
return "", nil, openErr
}
if _, copyErr := io.Copy(output, input); copyErr != nil {
input.Close()
return "", nil, copyErr
}
if closeErr := input.Close(); closeErr != nil {
return "", nil, closeErr
}
}
return mergedPath, func() {
_ = os.Remove(mergedPath)
}, nil
}
func copyFile(source string, target string) error {
input, err := os.Open(source)
if err != nil {
return err
}
defer input.Close()
output, err := os.Create(target)
if err != nil {
return err
}
defer output.Close()
if _, err := io.Copy(output, input); err != nil {
return err
}
return output.Close()
}
func runFFmpeg(args ...string) error {
cmd := exec.Command("ffmpeg", args...)
output, err := cmd.CombinedOutput()
View file
@@ -2,12 +2,16 @@ package main
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strconv"
"strings"
"testing"
"time"
)
func TestMediaHealthAndSessionLifecycle(t *testing.T) {
@@ -320,3 +324,310 @@ func TestLiveFrameUploadPublishesRelayFrame(t *testing.T) {
t.Fatalf("unexpected live frame content: %q", string(body))
}
}
func TestRelaySegmentUploadKeepsOnlyLatestMinute(t *testing.T) {
store, err := newSessionStore(t.TempDir())
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
server := newMediaServer(store)
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Relay Buffer", Purpose: "relay", RelayBufferSeconds: 60})
if err != nil {
t.Fatalf("createSession: %v", err)
}
for sequence := 0; sequence < 3; sequence += 1 {
req := httptest.NewRequest(http.MethodPost, "/media/sessions/"+session.ID+"/segments?sequence="+strconv.Itoa(sequence)+"&durationMs=30000", strings.NewReader("segment"))
req.Header.Set("Content-Type", "video/webm")
res := httptest.NewRecorder()
server.routes().ServeHTTP(res, req)
if res.Code != http.StatusAccepted {
t.Fatalf("expected segment upload 202 for sequence %d, got %d", sequence, res.Code)
}
}
current, err := store.getSession(session.ID)
if err != nil {
t.Fatalf("getSession: %v", err)
}
if current.Purpose != PurposeRelay {
t.Fatalf("expected relay purpose, got %s", current.Purpose)
}
if len(current.Segments) != 2 {
t.Fatalf("expected latest 2 relay segments to remain, got %d", len(current.Segments))
}
if current.Segments[0].Sequence != 1 || current.Segments[1].Sequence != 2 {
t.Fatalf("expected relay segments 1 and 2 to remain, got %#v", current.Segments)
}
if _, err := os.Stat(filepath.Join(store.segmentsDir(session.ID), "000000.webm")); !errors.Is(err, os.ErrNotExist) {
t.Fatalf("expected earliest relay segment to be pruned from disk, got %v", err)
}
}
func TestProcessRelayPreviewPublishesBufferedWebM(t *testing.T) {
tempDir := t.TempDir()
store, err := newSessionStore(tempDir)
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Relay Preview", Purpose: "relay", RelayBufferSeconds: 60})
if err != nil {
t.Fatalf("createSession: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000000.webm"), []byte("segment"), 0o644); err != nil {
t.Fatalf("write segment: %v", err)
}
if _, err := store.updateSession(session.ID, func(current *Session) error {
current.Segments = append(current.Segments, SegmentMeta{
Sequence: 0,
Filename: "000000.webm",
DurationMS: 60000,
SizeBytes: 7,
ContentType: "video/webm",
})
current.Purpose = PurposeRelay
return nil
}); err != nil {
t.Fatalf("updateSession: %v", err)
}
if err := processRollingPreview(store, session.ID); err != nil {
t.Fatalf("processRollingPreview: %v", err)
}
current, err := store.getSession(session.ID)
if err != nil {
t.Fatalf("getSession: %v", err)
}
if current.Playback.PreviewURL == "" || !strings.HasSuffix(current.Playback.PreviewURL, "/preview.webm") {
t.Fatalf("expected relay preview webm url, got %#v", current.Playback)
}
if current.Playback.MP4URL != "" {
t.Fatalf("expected relay preview to skip mp4 generation, got %#v", current.Playback)
}
}
func TestHandleSegmentUploadPersistsRelayMP4InitSegment(t *testing.T) {
store, err := newSessionStore(t.TempDir())
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
server := newMediaServer(store)
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Relay MP4", Purpose: "relay", RelayBufferSeconds: 120})
if err != nil {
t.Fatalf("createSession: %v", err)
}
req := httptest.NewRequest(http.MethodPost, "/media/sessions/"+session.ID+"/segments?sequence=1&durationMs=10000", strings.NewReader("mp4-init"))
req.Header.Set("Content-Type", "video/mp4;codecs=avc1")
res := httptest.NewRecorder()
server.routes().ServeHTTP(res, req)
if res.Code != http.StatusAccepted {
t.Fatalf("expected segment upload 202, got %d", res.Code)
}
current, err := store.getSession(session.ID)
if err != nil {
t.Fatalf("getSession: %v", err)
}
if current.RelayInitFilename != "000001.mp4" {
t.Fatalf("expected relay init filename to be recorded, got %q", current.RelayInitFilename)
}
body, err := os.ReadFile(store.relayInitPath(session.ID))
if err != nil {
t.Fatalf("read relay init: %v", err)
}
if string(body) != "mp4-init" {
t.Fatalf("unexpected relay init contents: %q", string(body))
}
}
func TestProcessRelayPreviewUsesPersistedInitForMP4Fragments(t *testing.T) {
tempDir := t.TempDir()
store, err := newSessionStore(tempDir)
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Relay MP4 Preview", Purpose: "relay", RelayBufferSeconds: 120})
if err != nil {
t.Fatalf("createSession: %v", err)
}
if err := os.WriteFile(store.relayInitPath(session.ID), []byte(strings.Repeat("i", 6000)), 0o644); err != nil {
t.Fatalf("write relay init: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000082.mp4"), []byte(strings.Repeat("a", 6000)), 0o644); err != nil {
t.Fatalf("write segment 82: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000083.mp4"), []byte(strings.Repeat("b", 6000)), 0o644); err != nil {
t.Fatalf("write segment 83: %v", err)
}
if _, err := store.updateSession(session.ID, func(current *Session) error {
current.Purpose = PurposeRelay
current.RelayInitFilename = "000001.mp4"
current.Segments = []SegmentMeta{
{
Sequence: 82,
Filename: "000082.mp4",
DurationMS: 10000,
SizeBytes: 6000,
ContentType: "video/mp4;codecs=avc1",
},
{
Sequence: 83,
Filename: "000083.mp4",
DurationMS: 10000,
SizeBytes: 6000,
ContentType: "video/mp4;codecs=avc1",
},
}
return nil
}); err != nil {
t.Fatalf("updateSession: %v", err)
}
fakeFFmpeg := filepath.Join(tempDir, "ffmpeg")
script := "#!/bin/sh\ninput=''\noutput=''\nprev=''\nfor arg in \"$@\"; do\n if [ \"$prev\" = '-i' ]; then input=\"$arg\"; fi\n prev=\"$arg\"\n output=\"$arg\"\ndone\nif [ -n \"$input\" ] && [ -f \"$input\" ]; then cp \"$input\" \"$output\"; else : > \"$output\"; fi\n"
if err := os.WriteFile(fakeFFmpeg, []byte(script), 0o755); err != nil {
t.Fatalf("write fake ffmpeg: %v", err)
}
t.Setenv("PATH", tempDir+string(os.PathListSeparator)+os.Getenv("PATH"))
if err := processRollingPreview(store, session.ID); err != nil {
t.Fatalf("processRollingPreview: %v", err)
}
current, err := store.getSession(session.ID)
if err != nil {
t.Fatalf("getSession: %v", err)
}
if current.PreviewStatus != PreviewReady {
t.Fatalf("expected preview ready, got %s", current.PreviewStatus)
}
if current.Playback.PreviewURL == "" {
t.Fatalf("expected preview url to be populated")
}
}
func TestProcessRelayPreviewKeepsPreviousPreviewOnFailure(t *testing.T) {
tempDir := t.TempDir()
store, err := newSessionStore(tempDir)
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Relay Existing Preview", Purpose: "relay", RelayBufferSeconds: 120})
if err != nil {
t.Fatalf("createSession: %v", err)
}
if err := os.MkdirAll(store.publicDir(session.ID), 0o755); err != nil {
t.Fatalf("mkdir public dir: %v", err)
}
if err := os.WriteFile(filepath.Join(store.publicDir(session.ID), "preview.webm"), []byte("existing-preview"), 0o644); err != nil {
t.Fatalf("write preview: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000001.webm"), []byte("segment-one"), 0o644); err != nil {
t.Fatalf("write segment 1: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000002.webm"), []byte("segment-two"), 0o644); err != nil {
t.Fatalf("write segment 2: %v", err)
}
if _, err := store.updateSession(session.ID, func(current *Session) error {
current.Purpose = PurposeRelay
current.PreviewStatus = PreviewReady
current.Playback.PreviewURL = fmt.Sprintf("/media/assets/sessions/%s/preview.webm", session.ID)
current.Segments = []SegmentMeta{
{
Sequence: 1,
Filename: "000001.webm",
DurationMS: 10000,
SizeBytes: int64(len("segment-one")),
ContentType: "video/webm",
},
{
Sequence: 2,
Filename: "000002.webm",
DurationMS: 10000,
SizeBytes: int64(len("segment-two")),
ContentType: "video/webm",
},
}
return nil
}); err != nil {
t.Fatalf("updateSession: %v", err)
}
fakeFFmpeg := filepath.Join(tempDir, "ffmpeg")
script := "#!/bin/sh\nexit 1\n"
if err := os.WriteFile(fakeFFmpeg, []byte(script), 0o755); err != nil {
t.Fatalf("write fake ffmpeg: %v", err)
}
t.Setenv("PATH", tempDir+string(os.PathListSeparator)+os.Getenv("PATH"))
if err := processRollingPreview(store, session.ID); err == nil {
t.Fatalf("expected processRollingPreview to surface failure")
}
current, err := store.getSession(session.ID)
if err != nil {
t.Fatalf("getSession: %v", err)
}
if current.PreviewStatus != PreviewReady {
t.Fatalf("expected previous preview to remain ready, got %s", current.PreviewStatus)
}
if current.Playback.PreviewURL == "" {
t.Fatalf("expected preview url to remain available")
}
if current.LastError == "" {
t.Fatalf("expected last error to be recorded")
}
}
func TestPruneExpiredRelaySessionsRemovesOldCache(t *testing.T) {
store, err := newSessionStore(t.TempDir())
if err != nil {
t.Fatalf("newSessionStore: %v", err)
}
session, err := store.createSession(CreateSessionRequest{UserID: "1", Title: "Old Relay", Purpose: "relay", RelayBufferSeconds: 60})
if err != nil {
t.Fatalf("createSession: %v", err)
}
if err := os.WriteFile(filepath.Join(store.segmentsDir(session.ID), "000000.webm"), []byte("segment"), 0o644); err != nil {
t.Fatalf("write segment: %v", err)
}
if err := os.MkdirAll(store.publicDir(session.ID), 0o755); err != nil {
t.Fatalf("mkdir public dir: %v", err)
}
if err := os.WriteFile(filepath.Join(store.publicDir(session.ID), "preview.webm"), []byte("preview"), 0o644); err != nil {
t.Fatalf("write preview: %v", err)
}
store.mu.Lock()
store.sessions[session.ID].Purpose = PurposeRelay
store.sessions[session.ID].UpdatedAt = time.Now().UTC().Add(-31 * time.Minute).Format(time.RFC3339)
store.mu.Unlock()
if err := store.pruneExpiredRelaySessions(relayCacheTTL, time.Now().UTC()); err != nil {
t.Fatalf("pruneExpiredRelaySessions: %v", err)
}
if _, err := store.getSession(session.ID); err == nil {
t.Fatalf("expected relay session to be removed from store")
}
if _, err := os.Stat(store.sessionDir(session.ID)); !errors.Is(err, os.ErrNotExist) {
t.Fatalf("expected relay session directory to be removed, got %v", err)
}
if _, err := os.Stat(store.publicDir(session.ID)); !errors.Is(err, os.ErrNotExist) {
t.Fatalf("expected relay public directory to be removed, got %v", err)
}
}

查看文件

@@ -22,7 +22,9 @@ test("training page shows plan generation flow", async ({ page }) => {
   await page.goto("/training");
   await expect(page.getByTestId("training-title")).toBeVisible();
-  const generateButton = page.getByRole("button", { name: "生成训练计划" }).last();
+  const generateButton = page
+    .getByRole("button", { name: "生成训练计划" })
+    .last();
   await expect(generateButton).toBeVisible();
   await generateButton.click();
   await expect(page).toHaveURL(/\/training$/);
@@ -68,23 +70,82 @@ test("live camera starts analysis and produces scores", async ({ page }) => {
   await expect(page.getByTestId("live-camera-score-overall")).toBeVisible();
 });
 
-test("live camera switches into viewer mode when another device already owns analysis", async ({ page }) => {
+test("live camera switches into viewer mode when another device already owns analysis", async ({
+  page,
+}) => {
   await installAppMocks(page, { authenticated: true, liveViewerMode: true });
   await page.goto("/live-camera");
 
   await expect(page.getByText("同步观看模式")).toBeVisible();
   await expect(page.getByText(/同步观看|重新同步/).first()).toBeVisible();
   await expect(page.getByText("当前设备已锁定为观看模式")).toBeVisible();
-  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText("其他设备实时分析");
-  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText("移动端");
-  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText("均衡模式");
-  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText("猩猩");
+  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText(
+    "其他设备实时分析"
+  );
+  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText(
+    "移动端"
+  );
+  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText(
+    "均衡模式"
+  );
+  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText(
+    "已累积"
+  );
+  await expect(page.getByTestId("live-camera-viewer-sync-card")).toContainText(
+    "猩猩"
+  );
   await expect(page.getByTestId("live-camera-score-overall")).toBeVisible();
 });
 
-test("live camera recovers mojibake viewer titles before rendering", async ({ page }) => {
-  const state = await installAppMocks(page, { authenticated: true, liveViewerMode: true });
-  const mojibakeTitle = Buffer.from("服务端同步烟雾测试", "utf8").toString("latin1");
+test("live camera viewer tolerates legacy segments and shows remaining buffer hint", async ({
+  page,
+}) => {
+  const state = await installAppMocks(page, {
+    authenticated: true,
+    liveViewerMode: true,
+  });
+  if (state.liveRuntime.runtimeSession?.snapshot) {
+    state.liveRuntime.runtimeSession.snapshot.recentSegments = [
+      {
+        actionType: "forehand",
+        isUnknown: false,
+        startMs: 1200,
+        endMs: 3600,
+        durationMs: 2400,
+        confidenceAvg: 0.82,
+        score: 81,
+        peakScore: 86,
+        frameCount: 18,
+      } as any,
+    ];
+  }
+  if (state.mediaSession) {
+    state.mediaSession.durationMs = 4_000;
+    state.mediaSession.playback.previewUrl = undefined;
+  }
+  await page.goto("/live-camera");
+  await expect(page.getByText("同步观看模式")).toBeVisible();
+  await expect(
+    page
+      .getByTestId("live-camera-viewer-sync-card")
+      .getByText(/预计还需 6 秒 才会出现首段可观看回放/)
+  ).toBeVisible();
+  await expect(page.getByText("关键帧 0")).toBeVisible();
+});
+
+test("live camera recovers mojibake viewer titles before rendering", async ({
+  page,
+}) => {
+  const state = await installAppMocks(page, {
+    authenticated: true,
+    liveViewerMode: true,
+  });
+  const mojibakeTitle = Buffer.from("服务端同步烟雾测试", "utf8").toString(
+    "latin1"
+  );
   if (state.liveRuntime.runtimeSession) {
     state.liveRuntime.runtimeSession.title = mojibakeTitle;
     state.liveRuntime.runtimeSession.snapshot = {
@@ -94,11 +155,15 @@ test("live camera recovers mojibake viewer titles before rendering", async ({ pa
   }
 
   await page.goto("/live-camera");
-  await expect(page.getByRole("heading", { name: "服务端同步烟雾测试" })).toBeVisible();
+  await expect(
+    page.getByRole("heading", { name: "服务端同步烟雾测试" })
+  ).toBeVisible();
   await expect(page.getByText(mojibakeTitle)).toHaveCount(0);
 });
 
-test("live camera no longer opens viewer peer retries when server relay is active", async ({ page }) => {
+test("live camera no longer opens viewer peer retries when server relay is active", async ({
+  page,
+}) => {
   const state = await installAppMocks(page, {
     authenticated: true,
     liveViewerMode: true,
@@ -109,10 +174,12 @@ test("live camera no longer opens viewer peer retries when server relay is activ
   await expect(page.getByText("同步观看模式")).toBeVisible();
   await expect.poll(() => state.viewerSignalConflictRemaining).toBe(1);
   await expect.poll(() => state.mediaSession?.viewerCount ?? 0).toBe(0);
-  await expect(page.locator('img[alt="同步中的实时分析画面"]')).toBeVisible();
+  await expect(page.getByTestId("live-camera-viewer-video")).toBeVisible();
 });
 
-test("live camera archives overlay videos into the library after analysis stops", async ({ page }) => {
+test("live camera archives overlay videos into the library after analysis stops", async ({
+  page,
+}) => {
   await installAppMocks(page, { authenticated: true, videos: [] });
   await page.goto("/live-camera");
@@ -126,7 +193,9 @@ test("live camera archives overlay videos into the library after analysis stops"
   await expect(page.getByTestId("live-camera-score-overall")).toBeVisible();
   await page.getByRole("button", { name: "结束分析" }).click();
-  await expect(page.getByText("分析结果已保存")).toBeVisible({ timeout: 8_000 });
+  await expect(page.getByText("分析结果已保存")).toBeVisible({
+    timeout: 8_000,
+  });
 
   await page.goto("/videos");
   await expect(page.getByTestId("video-card")).toHaveCount(1);
@@ -134,7 +203,9 @@ test("live camera archives overlay videos into the library after analysis stops"
   await expect(page.getByText("实时分析").first()).toBeVisible();
 });
 
-test("recorder flow archives a session and exposes it in videos", async ({ page }) => {
+test("recorder flow archives a session and exposes it in videos", async ({
+  page,
+}) => {
   await installAppMocks(page, { authenticated: true, videos: [] });
   await page.setViewportSize({ width: 390, height: 844 });
@@ -145,7 +216,9 @@ test("recorder flow archives a session and exposes it in videos", async ({ page
   await expect(focusShell).toBeVisible();
   await focusShell.getByTestId("recorder-start-camera-button").click();
-  await expect(focusShell.getByTestId("recorder-start-recording-button")).toBeVisible();
+  await expect(
+    focusShell.getByTestId("recorder-start-recording-button")
+  ).toBeVisible();
   await focusShell.getByTestId("recorder-start-recording-button").click();
   await expect(focusShell.getByTestId("recorder-marker-button")).toBeVisible();
@@ -154,17 +227,23 @@ test("recorder flow archives a session and exposes it in videos", async ({ page
   await expect(page.getByText("手动标记")).toBeVisible();
   await focusShell.getByTestId("recorder-finish-button").click();
-  await expect(focusShell.getByTestId("recorder-reset-button")).toBeVisible({ timeout: 8_000 });
+  await expect(focusShell.getByTestId("recorder-reset-button")).toBeVisible({
+    timeout: 8_000,
+  });
 
   await page.goto("/videos");
   await expect(page.getByTestId("video-card")).toHaveCount(1);
   await expect(page.getByText("E2E 录制")).toBeVisible();
 });
 
-test("recorder blocks local camera when another device owns live analysis", async ({ page }) => {
+test("recorder blocks local camera when another device owns live analysis", async ({
+  page,
+}) => {
   await installAppMocks(page, { authenticated: true, liveViewerMode: true });
   await page.goto("/recorder");
-  await expect(page.getByText("当前账号已有其他设备正在实时分析")).toBeVisible();
+  await expect(
+    page.getByText("当前账号已有其他设备正在实时分析")
+  ).toBeVisible();
   await expect(page.getByTestId("recorder-start-camera-button")).toBeDisabled();
 });
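The new viewer test in the hunks above expects a "预计还需 6 秒" hint when the mocked session has accumulated 4 000 ms and no preview yet. A minimal sketch of the arithmetic that expectation implies; the 10-second first-segment threshold and the helper name are assumptions for illustration, not the app's actual code:

```typescript
// Hypothetical helper: seconds left before the first relay segment becomes
// watchable, assuming (not confirmed by this diff) that segments are cut
// every 10 seconds.
const FIRST_SEGMENT_MS = 10_000;

function remainingBufferSeconds(accumulatedMs: number): number {
  const remainingMs = Math.max(0, FIRST_SEGMENT_MS - accumulatedMs);
  return Math.ceil(remainingMs / 1000);
}

// With the test's mocked 4_000 ms of accumulated footage:
console.log(remainingBufferSeconds(4_000)); // 6, matching "预计还需 6 秒"
console.log(remainingBufferSeconds(12_000)); // 0 once the first segment exists
```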

查看文件

@@ -37,8 +37,10 @@ type MockMediaSession = {
   id: string;
   userId: string;
   title: string;
+  purpose?: "recording" | "relay";
   status: string;
   archiveStatus: string;
+  previewStatus?: string;
   format: string;
   mimeType: string;
   qualityPreset: string;
@@ -48,6 +50,8 @@ type MockMediaSession = {
   uploadedSegments: number;
   uploadedBytes: number;
   durationMs: number;
+  relayBufferSeconds?: number;
+  previewUpdatedAt?: string;
   streamConnected: boolean;
   viewerCount?: number;
   playback: {
@@ -255,42 +259,62 @@ async function readTrpcInput(route: Route, operationIndex: number) {
   if (!postData) return null;
   const parsed = JSON.parse(postData);
-  return parsed?.json ?? parsed?.[operationIndex]?.json ?? parsed?.[String(operationIndex)]?.json ?? null;
+  return (
+    parsed?.json ??
+    parsed?.[operationIndex]?.json ??
+    parsed?.[String(operationIndex)]?.json ??
+    null
+  );
 }
 
-function buildMediaSession(user: MockUser, title: string): MockMediaSession {
+function buildMediaSession(
+  user: MockUser,
+  title: string,
+  purpose: "recording" | "relay" = "recording"
+): MockMediaSession {
   return {
     id: "session-e2e",
     userId: String(user.id),
     title,
-    status: "created",
+    purpose,
+    status: purpose === "relay" ? "recording" : "created",
     archiveStatus: "idle",
+    previewStatus: purpose === "relay" ? "ready" : "idle",
     format: "webm",
     mimeType: "video/webm",
     qualityPreset: "balanced",
     facingMode: "environment",
     deviceKind: "mobile",
     reconnectCount: 0,
-    uploadedSegments: 0,
-    uploadedBytes: 0,
-    durationMs: 0,
+    uploadedSegments: purpose === "relay" ? 1 : 0,
+    uploadedBytes: purpose === "relay" ? 1_280_000 : 0,
+    durationMs: purpose === "relay" ? 60_000 : 0,
+    relayBufferSeconds: purpose === "relay" ? 120 : undefined,
+    previewUpdatedAt: purpose === "relay" ? nowIso() : undefined,
     streamConnected: true,
     playback: {
-      ready: false,
+      ready: purpose !== "relay",
+      previewUrl:
+        purpose === "relay"
+          ? "/media/assets/sessions/session-e2e/preview.webm"
+          : undefined,
     },
     markers: [],
   };
 }
 
-function createTask(state: MockAppState, input: {
-  type: string;
-  title: string;
-  status?: string;
-  progress?: number;
-  message?: string;
-  result?: any;
-  error?: string | null;
-}) {
+function createTask(
+  state: MockAppState,
+  input: {
+    type: string;
+    title: string;
+    status?: string;
+    progress?: number;
+    message?: string;
+    result?: any;
+    error?: string | null;
+  }
+) {
   const task = {
     id: `task-${state.nextTaskId++}`,
     userId: state.user.id,
@@ -304,7 +328,8 @@ function createTask(state: MockAppState, input: {
     attempts: input.status === "failed" ? 2 : 1,
     maxAttempts: input.type === "media_finalize" ? 90 : 3,
     startedAt: nowIso(),
-    completedAt: input.status === "queued" || input.status === "running" ? null : nowIso(),
+    completedAt:
+      input.status === "queued" || input.status === "running" ? null : nowIso(),
     createdAt: nowIso(),
     updatedAt: nowIso(),
   };
@@ -323,297 +348,332 @@ async function fulfillJson(route: Route, body: unknown) {
async function handleTrpc(route: Route, state: MockAppState) {
  const url = new URL(route.request().url());
  const operations = url.pathname.replace("/api/trpc/", "").split(",");
  const results = await Promise.all(
    operations.map(async (operation, operationIndex) => {
      switch (operation) {
        case "auth.me":
          if (state.authenticated && state.authMeNullResponsesAfterLogin > 0) {
            state.authMeNullResponsesAfterLogin -= 1;
            return trpcResult(null);
          }
          return trpcResult(state.authenticated ? state.user : null);
        case "auth.loginWithUsername":
          state.authenticated = true;
          return trpcResult({ user: state.user, isNew: false });
        case "profile.stats":
          return trpcResult(buildStats(state.user));
        case "profile.update": {
          const input = await readTrpcInput(route, operationIndex);
          state.user = {
            ...state.user,
            ...input,
            updatedAt: nowIso(),
            manualNtrpCapturedAt:
              input?.manualNtrpRating !== undefined
                ? input.manualNtrpRating == null
                  ? null
                  : nowIso()
                : state.user.manualNtrpCapturedAt,
          };
          return trpcResult({ success: true });
        }
        case "plan.active":
          return trpcResult(state.activePlan);
        case "plan.list":
          return trpcResult(state.activePlan ? [state.activePlan] : []);
        case "plan.generate": {
          const input = await readTrpcInput(route, operationIndex);
          const durationDays = Number(input?.durationDays ?? 7);
          const skillLevel = input?.skillLevel ?? state.user.skillLevel;
          state.activePlan = {
            id: 200,
            title: `${state.user.name} 的训练计划`,
            skillLevel,
            durationDays,
            version: 1,
            adjustmentNotes: null,
            exercises: [
              {
                day: 1,
                name: "正手影子挥拍",
                category: "影子挥拍",
                duration: 15,
                description: "练习完整引拍和收拍动作。",
                tips: "保持重心稳定,击球点在身体前侧。",
                sets: 3,
                reps: 12,
              },
              {
                day: 1,
                name: "交叉步移动",
                category: "脚步移动",
                duration: 12,
                description: "强化启动和回位节奏。",
                tips: "每次移动后快速回到准备姿势。",
                sets: 4,
                reps: 10,
              },
            ],
          };
          return trpcResult({
            taskId: createTask(state, {
              type: "training_plan_generate",
              title: `${durationDays}天训练计划生成`,
              result: {
                kind: "training_plan_generate",
                planId: state.activePlan.id,
                plan: state.activePlan,
              },
            }).id,
          });
        }
        case "plan.adjust":
          return trpcResult({
            taskId: createTask(state, {
              type: "training_plan_adjust",
              title: "训练计划调整",
              result: {
                kind: "training_plan_adjust",
                adjustmentNotes: "已根据最近分析结果调整训练重点。",
              },
            }).id,
          });
        case "video.list":
          return trpcResult(state.videos);
        case "video.upload": {
          const input = await readTrpcInput(route, operationIndex);
          const video = {
            id: state.nextVideoId++,
            title: input?.title || `实时分析录像 ${state.nextVideoId}`,
            url: `/uploads/${state.nextVideoId}.${input?.format || "webm"}`,
            format: input?.format || "webm",
            fileSize: input?.fileSize || 1024 * 1024,
            duration: input?.duration || 60,
            exerciseType: input?.exerciseType || "live_analysis",
            analysisStatus: "completed",
            createdAt: nowIso(),
          };
          state.videos = [video, ...state.videos];
          return trpcResult({ videoId: video.id, url: video.url });
        }
        case "analysis.list":
          return trpcResult(state.analyses);
        case "analysis.liveSessionList":
          return trpcResult([]);
        case "analysis.runtimeGet":
          return trpcResult(state.liveRuntime);
        case "analysis.runtimeAcquire":
          if (
            state.liveRuntime.runtimeSession?.status === "active" &&
            state.liveRuntime.role === "viewer"
          ) {
            return trpcResult(state.liveRuntime);
          }
          state.liveRuntime = {
            role: "owner",
            runtimeSession: {
              id: 501,
              title: "实时分析 正手",
              sessionMode: "practice",
              mediaSessionId: state.mediaSession?.id || null,
              status: "active",
              startedAt: nowIso(),
              endedAt: null,
              lastHeartbeatAt: nowIso(),
              snapshot: {
                phase: "analyzing",
                currentAction: "forehand",
                rawAction: "forehand",
                visibleSegments: 1,
                unknownSegments: 0,
                durationMs: 1500,
                feedback: ["节奏稳定"],
              },
            },
          };
          return trpcResult(state.liveRuntime);
        case "analysis.runtimeHeartbeat": {
          const input = await readTrpcInput(route, operationIndex);
          if (state.liveRuntime.runtimeSession) {
            state.liveRuntime.runtimeSession = {
              ...state.liveRuntime.runtimeSession,
              mediaSessionId:
                input?.mediaSessionId ??
                state.liveRuntime.runtimeSession.mediaSessionId,
              snapshot:
                input?.snapshot ?? state.liveRuntime.runtimeSession.snapshot,
              lastHeartbeatAt: nowIso(),
            };
          }
          return trpcResult(state.liveRuntime);
        }
        case "analysis.runtimeRelease":
          state.liveRuntime = { role: "idle", runtimeSession: null };
          return trpcResult({ success: true, runtimeSession: null });
        case "analysis.liveSessionSave":
          return trpcResult({ sessionId: 1, trainingRecordId: 1 });
        case "task.list":
          return trpcResult(state.tasks);
        case "task.get": {
          const rawInput = url.searchParams.get("input");
          const parsedInput = rawInput ? JSON.parse(rawInput) : {};
          const taskId =
            parsedInput.json?.taskId || parsedInput[0]?.json?.taskId;
          return trpcResult(
            state.tasks.find(task => task.id === taskId) || null
          );
        }
        case "task.retry": {
          const rawInput = url.searchParams.get("input");
          const parsedInput = rawInput ? JSON.parse(rawInput) : {};
          const taskId =
            parsedInput.json?.taskId || parsedInput[0]?.json?.taskId;
          const task = state.tasks.find(item => item.id === taskId);
          if (task) {
            task.status = "succeeded";
            task.progress = 100;
            task.error = null;
            task.message = "任务执行完成";
          }
          return trpcResult({ task });
        }
        case "task.createMediaFinalize": {
          if (state.mediaSession) {
            state.mediaSession.status = "archived";
            state.mediaSession.archiveStatus = "completed";
            state.mediaSession.playback = {
              ready: true,
              webmUrl: "/media/assets/sessions/session-e2e/recording.webm",
              mp4Url: "/media/assets/sessions/session-e2e/recording.mp4",
              webmSize: 2_400_000,
              mp4Size: 1_800_000,
              previewUrl: "/media/assets/sessions/session-e2e/recording.webm",
            };
            state.videos = [
              {
                id: state.nextVideoId++,
                title: state.mediaSession.title,
                url: state.mediaSession.playback.webmUrl,
                format: "webm",
                fileSize: state.mediaSession.playback.webmSize,
                exerciseType: "recording",
                analysisStatus: "completed",
                createdAt: nowIso(),
              },
              ...state.videos,
            ];
          }
          return trpcResult({
            taskId: createTask(state, {
              type: "media_finalize",
              title: "录制归档",
              result: {
                kind: "media_finalize",
                sessionId: state.mediaSession?.id,
                videoId: state.videos[0]?.id,
                url: state.videos[0]?.url,
              },
            }).id,
          });
        }
        case "analysis.getCorrections":
          return trpcResult({
            taskId: createTask(state, {
              type: "pose_correction_multimodal",
              title: "动作纠正",
              result: {
                corrections:
                  "## 动作概览\n整体节奏稳定,建议继续优化击球点前置。",
                report: {
                  priorityFixes: [
                    {
                      title: "击球点前置",
                      why: "击球点略靠后会影响挥拍连贯性。",
                      howToPractice:
                        "每组 8 次影子挥拍,刻意在身体前侧完成触球动作。",
                      successMetric: "连续 3 组都能稳定在身体前侧完成挥拍。",
                    },
                  ],
                },
              },
            }).id,
          });
        case "video.registerExternal":
          if (
            state.mediaSession?.playback.webmUrl ||
            state.mediaSession?.playback.mp4Url
          ) {
            state.videos = [
              {
                id: state.nextVideoId++,
                title: state.mediaSession.title,
                url:
                  state.mediaSession.playback.webmUrl ||
                  state.mediaSession.playback.mp4Url,
                format: "webm",
                fileSize: state.mediaSession.playback.webmSize || 1024 * 1024,
                exerciseType: "recording",
                analysisStatus: "completed",
                createdAt: nowIso(),
              },
              ...state.videos,
            ];
          }
          return trpcResult({
            videoId: state.nextVideoId,
            url: state.mediaSession?.playback.webmUrl,
          });
        case "achievement.list":
          return trpcResult(buildStats(state.user).achievements);
        case "rating.current":
          return trpcResult({
            rating: state.user.ntrpRating,
            latestSnapshot: buildStats(state.user).latestNtrpSnapshot,
          });
        case "rating.history":
          return trpcResult([
            {
              id: 1,
              rating: 2.4,
              triggerType: "daily",
              createdAt: nowIso(),
              dimensionScores: {
                poseAccuracy: 72,
                strokeConsistency: 70,
                footwork: 66,
                fluidity: 69,
                timing: 68,
                matchReadiness: 60,
              },
              sourceSummary: {
                analyses: 1,
                liveSessions: 0,
                totalEffectiveActions: 12,
                totalPk: 0,
                activeDays: 1,
              },
            },
            {
              id: 2,
              rating: state.user.ntrpRating,
              triggerType: "daily",
              createdAt: nowIso(),
              dimensionScores: buildStats(state.user).latestNtrpSnapshot
                .dimensionScores,
              sourceSummary: {
                analyses: 2,
                liveSessions: 1,
                totalEffectiveActions: 36,
                totalPk: 0,
                activeDays: 2,
              },
            },
          ]);
        default:
          return trpcResult(null);
      }
    })
  );
  await fulfillJson(route, results);
}
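The `handleTrpc` mock above answers one HTTP request for several procedures because tRPC's batch link joins procedure names with commas in the URL path. A simplified standalone sketch of that fan-out shape; the dispatch table and names here are illustrative, not the real tRPC wire format:

```typescript
// Simplified batch dispatch: a path like "/api/trpc/auth.me,plan.active"
// names several procedures, which resolve in order via Promise.all.
type Handler = () => Promise<unknown>;

async function dispatchBatch(
  pathname: string,
  handlers: Record<string, Handler>
): Promise<unknown[]> {
  const operations = pathname.replace("/api/trpc/", "").split(",");
  return Promise.all(
    operations.map(operation =>
      handlers[operation] ? handlers[operation]() : Promise.resolve(null)
    )
  );
}

// Results come back positionally, matching the comma-separated order.
dispatchBatch("/api/trpc/auth.me,plan.active", {
  "auth.me": async () => ({ id: 1 }),
  "plan.active": async () => null,
}).then(results => {
  console.log(results.length); // 2: one entry per requested operation
});
```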
@@ -649,7 +709,11 @@ async function handleMedia(route: Route, state: MockAppState) {
     return;
   }
   state.mediaSession.viewerCount = (state.mediaSession.viewerCount || 0) + 1;
-  await fulfillJson(route, { viewerId: `viewer-${state.mediaSession.viewerCount}`, type: "answer", sdp: "mock-answer" });
+  await fulfillJson(route, {
+    viewerId: `viewer-${state.mediaSession.viewerCount}`,
+    type: "answer",
+    sdp: "mock-answer",
+  });
   return;
 }
@@ -689,16 +753,31 @@ async function handleMedia(route: Route, state: MockAppState) {
   }
 
   if (path === `/media/sessions/${state.mediaSession.id}`) {
-    state.mediaSession.status = "archived";
-    state.mediaSession.archiveStatus = "completed";
-    state.mediaSession.playback = {
-      ready: true,
-      webmUrl: "/media/assets/sessions/session-e2e/recording.webm",
-      mp4Url: "/media/assets/sessions/session-e2e/recording.mp4",
-      webmSize: 2_400_000,
-      mp4Size: 1_800_000,
-      previewUrl: "/media/assets/sessions/session-e2e/recording.webm",
-    };
+    if (state.mediaSession.purpose === "relay") {
+      state.mediaSession.previewStatus = state.mediaSession.playback.previewUrl
+        ? "ready"
+        : "processing";
+      state.mediaSession.previewUpdatedAt = nowIso();
+      state.mediaSession.playback = {
+        ready: Boolean(state.mediaSession.playback.previewUrl),
+        webmUrl:
+          state.mediaSession.playback.webmUrl ??
+          "/media/assets/sessions/session-e2e/preview.webm",
+        webmSize: state.mediaSession.playback.webmSize ?? 1_800_000,
+        previewUrl: state.mediaSession.playback.previewUrl,
+      };
+    } else {
+      state.mediaSession.status = "archived";
+      state.mediaSession.archiveStatus = "completed";
+      state.mediaSession.playback = {
+        ready: true,
+        webmUrl: "/media/assets/sessions/session-e2e/recording.webm",
+        mp4Url: "/media/assets/sessions/session-e2e/recording.mp4",
+        webmSize: 2_400_000,
+        mp4Size: 1_800_000,
+        previewUrl: "/media/assets/sessions/session-e2e/recording.webm",
+      };
+    }
     await fulfillJson(route, { session: state.mediaSession });
     return;
   }
@@ -727,7 +806,13 @@ export async function installAppMocks(
     viewerSignalConflictOnce?: boolean;
   }
 ) {
-  const seededViewerSession = options?.liveViewerMode ? buildMediaSession(buildUser(options?.userName), "其他设备实时分析") : null;
+  const seededViewerSession = options?.liveViewerMode
+    ? buildMediaSession(
+        buildUser(options?.userName),
+        "其他设备实时分析",
+        "relay"
+      )
+    : null;
   const state: MockAppState = {
     authenticated: options?.authenticated ?? false,
     user: buildUser(options?.userName),
@@ -779,6 +864,7 @@ export async function installAppMocks(
     title: "其他设备实时分析",
     sessionMode: "practice",
     qualityPreset: "balanced",
+    relayBufferSeconds: 120,
     facingMode: "environment",
     deviceKind: "mobile",
     avatarEnabled: true,
@@ -940,7 +1026,11 @@ export async function installAppMocks(
       setOptions() {}
-      onResults(callback: (results: { poseLandmarks: ReturnType<typeof buildFakeLandmarks> }) => void) {
+      onResults(
+        callback: (results: {
+          poseLandmarks: ReturnType<typeof buildFakeLandmarks>;
+        }) => void
+      ) {
         this.callback = callback;
       }
@@ -964,10 +1054,14 @@ export async function installAppMocks(
     Object.defineProperty(HTMLMediaElement.prototype, "srcObject", {
       configurable: true,
       get() {
-        return (this as HTMLMediaElement & { __srcObject?: MediaStream }).__srcObject ?? null;
+        return (
+          (this as HTMLMediaElement & { __srcObject?: MediaStream })
+            .__srcObject ?? null
+        );
       },
       set(value) {
-        (this as HTMLMediaElement & { __srcObject?: MediaStream }).__srcObject = value as MediaStream;
+        (this as HTMLMediaElement & { __srcObject?: MediaStream }).__srcObject =
+          value as MediaStream;
       },
     });
@@ -997,7 +1091,11 @@ export async function installAppMocks(
       if (this.state !== "recording") return;
       const event = new Event("dataavailable") as Event & { data?: Blob };
       event.data = new Blob(["segment"], { type: this.mimeType });
-      const handler = (this as unknown as { ondataavailable?: (evt: Event & { data?: Blob }) => void }).ondataavailable;
+      const handler = (
+        this as unknown as {
+          ondataavailable?: (evt: Event & { data?: Blob }) => void;
+        }
+      ).ondataavailable;
       handler?.(event);
       this.dispatchEvent(event);
     }
@@ -1061,10 +1159,21 @@ export async function installAppMocks(
     Object.defineProperty(navigator, "mediaDevices", {
       configurable: true,
       value: {
-        getUserMedia: async (constraints?: { audio?: unknown }) => createFakeMediaStream(Boolean(constraints?.audio)),
+        getUserMedia: async (constraints?: { audio?: unknown }) =>
+          createFakeMediaStream(Boolean(constraints?.audio)),
         enumerateDevices: async () => [
-          { deviceId: "cam-1", kind: "videoinput", label: "Front Camera", groupId: "g1" },
-          { deviceId: "cam-2", kind: "videoinput", label: "Back Camera", groupId: "g1" },
+          {
+            deviceId: "cam-1",
+            kind: "videoinput",
+            label: "Front Camera",
+            groupId: "g1",
+          },
+          {
+            deviceId: "cam-2",
+            kind: "videoinput",
+            label: "Back Camera",
+            groupId: "g1",
+          },
         ],
         addEventListener: () => undefined,
         removeEventListener: () => undefined,
@@ -1072,8 +1181,8 @@ export async function installAppMocks(
     });
   });
-  await page.route("**/api/trpc/**", (route) => handleTrpc(route, state));
-  await page.route("**/media/**", (route) => handleMedia(route, state));
+  await page.route("**/api/trpc/**", route => handleTrpc(route, state));
+  await page.route("**/media/**", route => handleMedia(route, state));
   return state;
 }
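For reference, the `getMediaAssetUrl()` fix described in the changelog entry for commit 0af88b3 comes down to preserving an existing `/media/` prefix instead of blindly re-joining it. A minimal sketch (the `MEDIA_BASE` constant and the normalization details are assumptions for illustration, not the shared implementation):

```typescript
// Hypothetical sketch of the prefix-preserving media asset URL helper.
// MEDIA_BASE and the exact normalization are assumptions, not the real code.
const MEDIA_BASE = "/media";

function getMediaAssetUrl(path: string): string {
  // App-internal paths that already carry the /media/ prefix are returned
  // unchanged, so "/media/assets/..." is never re-joined into
  // "/media/media/assets/...".
  if (path === MEDIA_BASE || path.startsWith(`${MEDIA_BASE}/`)) {
    return path;
  }
  // Otherwise prepend the base, inserting a slash only when needed.
  return `${MEDIA_BASE}${path.startsWith("/") ? "" : "/"}${path}`;
}

console.log(getMediaAssetUrl("/media/assets/sessions/s1/preview.webm"));
// → /media/assets/sessions/s1/preview.webm (unchanged)
console.log(getMediaAssetUrl("assets/sessions/s1/preview.webm"));
// → /media/assets/sessions/s1/preview.webm (prefix added once)
```

With this guard in place, callers can pass either a bare asset path or an already-prefixed app path and get the same viewer-facing URL, which is what lets the relay preview `preview.webm` resolve without a 404.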