An LLM-powered system that transcribes, scores, and reviews interview recordings to generate structured, actionable feedback across every round of a job search.
Each interview is scored across configurable dimensions tailored to the round type (recruiter screen, hiring manager, technical deep dive, exec final), with talk-time ratio analysis, sentiment trajectory tracking, and automatic question extraction that builds a searchable bank across all interviews. Transcription runs a dual-path pipeline with NVIDIA Parakeet TDT and NeMo Sortformer on a GCP spot T4 instance for GPU-accelerated diarized transcription with word-level timestamps. Reviews route through the Claude API or a locally hosted Scout model, identifying exactly where momentum built, where it dipped, and what to adjust for the next round.
Overall Analysis — cross-company metrics and AI-generated assessment across all interviews
Process Insights — advance/develop/pass recommendation with dimension scores and coaching
Review Assessment — AI-generated performance analysis with PDF and transcript exports
Sentiment Trajectory — energy tracking over time with labeled momentum phases
Transcript Playback — diarized audio with speaker labels, timestamps, and key moments
Follow-up Email — AI-drafted thank-you email tailored to interview content
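The talk-time ratio analysis above can be sketched from the diarized output. A minimal illustration, assuming each diarized segment reduces to a `(speaker, start_sec, end_sec)` tuple (the segment shape and speaker labels here are hypothetical, not the pipeline's actual schema):

```python
# Hypothetical sketch: compute each speaker's share of total speaking
# time from diarized segments of the form (speaker, start_sec, end_sec).
from collections import defaultdict

def talk_time_ratio(segments):
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    overall = sum(totals.values())
    return {speaker: t / overall for speaker, t in totals.items()}

segments = [
    ("candidate", 0.0, 30.0),
    ("interviewer", 30.0, 50.0),
    ("candidate", 50.0, 80.0),
]
print(talk_time_ratio(segments))  # {'candidate': 0.75, 'interviewer': 0.25}
```

A ratio like 75/25 in a recruiter screen versus a technical deep dive reads very differently, which is why the metric is scored per round type.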
Example Reports
First page of each report.
Overall Interview Analytics
Aggregated performance report across all companies and interview rounds — dimension scores, pattern analysis, and cross-process coaching insights distilled into a single exportable document.
Call Review
Per-interview breakdown with rubric scores, strengths, areas for improvement, and AI-identified key moments — generated from diarized transcripts with sentiment and talk-time analysis.
Account Intelligence Brief
End-to-end process assessment with advance/develop/pass recommendation, dimension score trends across rounds, and strategic coaching for the next stage of the interview loop.
Prediction Market Arbitrage Platform
A fully automated arbitrage engine that captures pricing discrepancies between Polymarket and Kalshi prediction markets.
Built in Rust for low-latency execution across a dual-VPS architecture: an Amsterdam node runs the core arb engine, and a NYC node acts as a lightweight gRPC executor proxy to minimize round-trip latency to Kalshi's servers. Infrastructure is provisioned with Pulumi, the nodes are connected via WireGuard, and a Telegram bot provides remote monitoring and control. Targeting ~5% monthly returns on deployed capital.
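The discrepancy the engine captures can be sketched as a simple price check: if the YES ask on one venue plus the NO ask on the other sums to less than $1 after fees, buying both legs locks in the difference at settlement. The prices and fee figure below are illustrative, not the engine's actual parameters:

```python
# Hypothetical sketch of the cross-venue arb condition. A binary market
# settles at $1 for the winning side, so holding YES on one venue and NO
# on the other pays $1 regardless of outcome.
def arb_edge(yes_ask: float, no_ask: float, fees: float) -> float:
    """Locked-in profit per $1 contract pair; <= 0 means no arb."""
    return 1.0 - (yes_ask + no_ask) - fees

# YES at $0.46 on Polymarket, NO at $0.49 on Kalshi, $0.02 in fees:
edge = arb_edge(0.46, 0.49, 0.02)
print(round(edge, 2))  # 0.03 — three cents locked in per contract pair
```

The Rust engine's job is spotting and executing these windows before the spread closes, which is why the executor proxy sits close to Kalshi's servers.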
Dashboard — live signals, trades, capital allocation, and market pairs with real-time spread tracking
Execution stats, capital efficiency, and cost of ownership breakdown
Telegram Bot — real-time alerts, trade commands, and engine control via mobile
To Do
Expanding into market making, quoting both sides of the book to capture spread as a standalone profit source beyond cross-platform arbitrage.
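One common way to quote both sides, sketched below under assumed parameters (the function, spread width, and inventory skew are illustrative, not a committed design): center a bid and ask around an estimated fair price, leaning the quotes against the current position so fills tend to flatten inventory.

```python
# Hypothetical sketch of symmetric market-making quotes with a linear
# inventory skew. All numbers are illustrative.
def make_quotes(fair: float, half_spread: float, inventory: int, skew: float = 0.005):
    mid = fair - inventory * skew  # lean against the existing position
    return round(mid - half_spread, 3), round(mid + half_spread, 3)

print(make_quotes(0.50, 0.01, inventory=0))  # (0.49, 0.51)
print(make_quotes(0.50, 0.01, inventory=2))  # (0.48, 0.5) — long, so quote lower
```

Each round trip across the quoted spread earns the two-cent width, independent of any cross-platform discrepancy.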
Write Like Me
In progress
A model-agnostic system that captures and reproduces your personal writing voice across any LLM.
Ingests a tagged corpus organized by register, distills it into a structured style guide, and uses embedding-based few-shot retrieval at generation time to match tone and voice. Supabase backs the corpus store and vector search, handling document metadata, register tags, and embedding lookups in one layer. Exposed as an MCP server, so any compatible client can call your voice as a tool. The roadmap is a custom fine-tune using PyTorch and QLoRA: quantize an open-source base model to 4-bit precision and train a lightweight low-rank adapter on the corpus, internalizing style patterns directly in model weights and producing a small, swappable adapter rather than a fully retrained model.
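The retrieval step amounts to ranking corpus samples by cosine similarity to the prompt embedding and feeding the top k into the generation context as few-shot exemplars. A minimal sketch, assuming embeddings are already computed (the real system delegates this lookup to Supabase's vector search; the corpus tuples and vectors here are stand-ins):

```python
# Hypothetical sketch of embedding-based few-shot retrieval: rank corpus
# entries by cosine similarity to a query embedding, return the top k texts.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_exemplars(query_vec, corpus, k=2):
    # corpus: list of (text, embedding, register_tag) tuples
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _, _ in ranked[:k]]

corpus = [
    ("casual note", [1.0, 0.0], "casual"),
    ("formal memo", [0.0, 1.0], "formal"),
    ("blog post", [0.7, 0.7], "blog"),
]
print(top_k_exemplars([1.0, 0.1], corpus, k=2))  # ['casual note', 'blog post']
```

Filtering the corpus by register tag before ranking is what lets the same query pull casual exemplars for a chat message and formal ones for a cover letter.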