What I'm Working On

Side projects I've built.

LLM

Interview Performance Tracker

An LLM-powered system that transcribes, scores, and reviews interview recordings to generate structured, actionable feedback across every round of a job search.

Each interview is scored across configurable dimensions tailored to the round type (recruiter screen, hiring manager, technical deep dive, exec final), with talk-time ratio analysis, sentiment trajectory tracking, and automatic question extraction that builds a searchable bank across all interviews. Transcription runs a dual-path pipeline, NVIDIA Parakeet TDT for speech-to-text alongside NeMo Sortformer for speaker diarization, on a GCP spot T4, producing GPU-accelerated diarized transcripts with word-level timestamps. Reviews route through the Claude API or a locally hosted Scout model, turning every recorded conversation into structured, actionable feedback that identifies exactly where momentum built, where it dipped, and what to adjust for the next round.
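The talk-time ratio metric falls straight out of the diarized transcript. A minimal sketch, assuming the diarization pass yields non-overlapping (speaker, start, end) segments (the tuple shape and speaker labels here are illustrative, not the system's actual schema):

```python
from collections import defaultdict

def talk_time_ratio(segments):
    """Compute each speaker's share of total speaking time.

    `segments` is assumed to be a list of (speaker, start_s, end_s)
    tuples produced by the diarization pass.
    """
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values())
    return {spk: t / grand_total for spk, t in totals.items()}

segments = [
    ("interviewer", 0.0, 30.0),
    ("candidate", 30.0, 90.0),
    ("interviewer", 90.0, 100.0),
]
print(talk_time_ratio(segments))  # candidate speaks 60% of the time
```

In practice the same segment list also drives the sentiment trajectory and question extraction, so one diarization pass feeds every downstream metric.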

Python GCP NeMo LLM GPU Inference Diarization Speech-to-Text
Screenshots
Overall Analysis — cross-company metrics and AI-generated assessment across all interviews
Process Insights — advance/develop/pass recommendation with dimension scores and coaching
Review Assessment — AI-generated performance analysis with PDF and transcript exports
Sentiment Trajectory — energy tracking over time with labeled momentum phases
Transcript Playback — diarized audio with speaker labels, timestamps, and key moments
Follow-up Email — AI-drafted thank-you email tailored to interview content
Example Reports

First page of each report.

Overall Interview Analytics

Aggregated performance report across all companies and interview rounds — dimension scores, pattern analysis, and cross-process coaching insights distilled into a single exportable document.

Call Review

Per-interview breakdown with rubric scores, strengths, areas for improvement, and AI-identified key moments — generated from diarized transcripts with sentiment and talk-time analysis.

Account Intelligence Brief

End-to-end process assessment with advance/develop/pass recommendation, dimension score trends across rounds, and strategic coaching for the next stage of the interview loop.

Prediction Market Arbitrage Platform

A fully automated arbitrage engine that captures pricing discrepancies between Polymarket and Kalshi prediction markets.

Built in Rust for low-latency execution across a dual-VPS architecture: an Amsterdam node runs the core arb engine, and a NYC node acts as a lightweight gRPC executor proxy for minimal round-trip latency to Kalshi's servers. Infrastructure is provisioned with Pulumi, the nodes are connected via WireGuard, and a Telegram bot handles remote monitoring and control. Targeting roughly 5% monthly returns on deployed capital.

Rust gRPC Prediction Markets Low-Latency Real-Time Streaming Telegram Bot
Screenshots
Dashboard — live signals, trades, capital allocation, and market pairs with real-time spread tracking
Execution stats, capital efficiency, and cost of ownership breakdown
Telegram Bot — real-time alerts, trade commands, and engine control via mobile
To Do
  • Expanding into market making, quoting both sides of the book to capture spread as a standalone profit source beyond cross-platform arbitrage.
MCP

Write Like Me

In progress

A model-agnostic system that captures and reproduces your personal writing voice across any LLM.

Ingests a tagged corpus organized by register, distills it into a structured style guide, and uses embedding-based few-shot retrieval at generation time to match tone and voice. Supabase backs the corpus store and vector search, handling document metadata, register tags, and embedding lookups in one layer. Exposed as an MCP server, so any compatible client can call your voice as a tool. Next step: a custom fine-tuned model using PyTorch and QLoRA, quantizing an open-source base model to 4-bit precision and training a lightweight low-rank adapter on the corpus to internalize style patterns directly in the model weights. The result is a small, swappable adapter rather than a fully retrained model.
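The retrieval step can be sketched with plain cosine similarity. Toy two-dimensional vectors stand in for real embeddings here, and the linear scan stands in for what is actually a Supabase vector-index query (all names below are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_exemplars(query_vec, corpus, k=2):
    """Return the k corpus texts whose embeddings best match the query.

    `corpus` is a list of (text, embedding) pairs; in the real system
    the embeddings live in Supabase and ranking happens in the vector
    index rather than via this linear scan.
    """
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

corpus = [
    ("casual slack message", [0.9, 0.1]),
    ("formal cover letter", [0.1, 0.9]),
    ("blog post intro", [0.7, 0.4]),
]
print(retrieve_exemplars([0.8, 0.2], corpus, k=2))
```

The retrieved exemplars are then packed into the prompt as few-shot demonstrations, which is what lets the system stay model-agnostic: any LLM that accepts a prompt can imitate the retrieved register.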

Python PyTorch MCP RAG Supabase