feat: auto LLM feedback runner + problem link + 5xx retry

- Add SubmissionFeedbackRunner: async background queue for auto LLM feedback
- Enqueue feedback generation after each submission in submitProblem()
- Register runner in main.cc with CSP_FEEDBACK_AUTO_RUN env var
- Add problem_title to GET /api/v1/submissions/{id} response
- Frontend: clickable problem link on submission detail page
- Enhance LLM prompt with richer analysis dimensions
- Add 5xx/connection error retry (max 5 attempts) in Python LLM script

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This commit is contained in:
cryptocommuniums-afk
2026-02-16 15:13:35 +08:00
Parent bc2e085c70
Commit 7860414ae5
37 files changed, 312 additions and 5343 deletions

@@ -0,0 +1,41 @@
#pragma once

#include "csp/db/sqlite_db.h"

#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <string>

namespace csp::services {

class SubmissionFeedbackRunner {
 public:
  static SubmissionFeedbackRunner& Instance();

  void Configure(std::string db_path);

  /// Enqueue a submission for async feedback generation.
  bool Enqueue(int64_t submission_id);

  /// Scan DB for submissions without feedback and enqueue them.
  void AutoStartIfEnabled(db::SqliteDb& db);

  size_t PendingCount() const;

 private:
  SubmissionFeedbackRunner() = default;

  void StartWorkerIfNeededLocked();
  void WorkerLoop();
  void RecoverPendingLocked();

  std::string db_path_;
  mutable std::mutex mu_;
  std::deque<int64_t> queue_;
  size_t pending_jobs_ = 0;
  bool worker_running_ = false;
  bool recovered_ = false;
};

}  // namespace csp::services