About
Experience
ProPeers
Founding Engineer
July 2025 – Present · Delhi, India · Remote
- ▹Led the launch of Roadmap AI, a fully personalized learning assistant powered by RAG (Retrieval-Augmented Generation), OpenAI’s text-embedding-ada-002, Chroma Vector DB, and Modal for real-time, scalable inference.
- ▹Architected a self-learning dynamic RAG pipeline: [JSON → Embedding → Chroma DB → Query Context Retrieval → Prompt Masking → Model → Nested JSON Output]
- ▹The pipeline dynamically decides whether to retrieve existing context or generate a roadmap from scratch, enabling zero-friction personalization for every user query.
- ▹Injects prompt templates based on match confidence and automatically re-embeds new data into the vector store, making the system truly adaptive and self-updating.
- ▹Built a modular content pipeline to process and vectorize 100+ roadmaps, enabling semantic search and structured AI roadmap generation.
- ▹Engineered a Model Context Protocol (MCP) to standardize context injection for the model, combining retrieved chunks, user metadata, prompt masks, and query scaffolding to ensure consistent and accurate outputs at sub-second latency.
- ▹Developed token-based access with one-time/monthly/yearly tiers, including real-time token usage tracking, speed controls, and upsell modals for premium upgrades.
- ▹Achieved <1s latency for AI responses at scale, improving retention and enabling smooth, conversational AskAI interactions.
- ▹Built an AI-powered DSA Code Editor supporting Run/Submit/Save, tightly integrated with Roadmap AI and backed by gpt-3.5-turbo, o3-mini, and o1 models for contextual code assistance.
- ▹Enhanced AskAI with contextual node + discussion integration, improving answer relevance and surfacing smarter suggestions.
- ▹Resulted in 3x higher roadmap completions, reduced user drop-offs, and transformed the platform into a self-evolving AI-first learning ecosystem.
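The retrieve-or-generate decision above can be sketched as a confidence-gated template selector. This is a minimal illustration, not the production code: the threshold value and template text are assumptions.

```python
# Hypothetical sketch of confidence-based prompt-template injection.
# SIM_THRESHOLD and the template wording are illustrative assumptions.

SIM_THRESHOLD = 0.80  # assumed cosine-similarity cutoff

TEMPLATES = {
    "retrieve": "Answer using this context:\n{context}\n\nQuestion: {query}",
    "generate": "No relevant roadmap found. Generate one from scratch for: {query}",
}

def build_prompt(query: str, matches: list[tuple[str, float]]) -> str:
    """Pick a prompt template based on the best vector-match confidence."""
    best = max(matches, key=lambda m: m[1], default=("", 0.0))
    if best[1] >= SIM_THRESHOLD:
        return TEMPLATES["retrieve"].format(context=best[0], query=query)
    return TEMPLATES["generate"].format(query=query)
```

With no matches (or only weak ones) the system falls through to from-scratch generation, which is what makes the pipeline feel zero-friction for unseen queries.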
SDE - 1
July 2024 – July 2025 · Delhi, India · Remote
- ▹Built and scaled the flagship "Roadmaps" feature, delivering 100+ curated learning paths across DSA, Development, and System Design used by 100K+ users. Improved personalization and relevance, while reducing API response time from 2.1s to < 300ms, resulting in a 7x faster experience and 40% higher user engagement.
- ▹Optimized complex APIs to reduce processing time and improved the tab-switching experience for smoother navigation.
- ▹Developed and integrated the "AskAI + Discussion Forum", an intelligent peer-programming assistant where users can interact with AI to solve DSA/Dev doubts and collaborate with others, enabling on-demand doubt resolution and community learning.
- ▹Engineered a Session Recording Bot using Python, Selenium, and headless Azure VMs with deep-link automation to join and record sessions automatically, eliminating 100% of manual effort and improving reliability.
- ▹Optimized 150+ APIs by implementing advanced caching layers, async processing, and API pipelines, reducing backend latency by up to 70% and improving system throughput.
- ▹Reduced Core Web Vitals (TBT, LCP, FCP) from 4.4s to 990ms through advanced frontend optimizations (SSR, dynamic imports, lazy-loaded APIs), significantly boosting UX for 15K+ monthly active users.
- ▹Led the end-to-end performance overhaul of the platform, focusing on smoother tab-switching experiences, minimal downtime, and blazing-fast navigation across the app.
- ▹Migrated MongoDB from Atlas to self-hosted replica sets, wrote automated backup & recovery scripts, set up VMs, and integrated cron-based backups to Azure Blob, ensuring data durability and cost-efficiency.
- ▹Set up real-time monitoring and alerting with Prometheus and Grafana, ensuring system health, proactive issue resolution, and enhanced DevOps visibility.
- ▹Deployed scalable CI/CD pipelines using Azure, GitLab, and Vercel, ensuring zero-downtime deployments and faster iteration cycles across teams.
- ▹Handled end-to-end production deployment and scaling for a system serving 15K+ users, maintaining high availability, fault tolerance, and robust performance at scale.
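The caching work described above can be illustrated with a minimal in-process TTL cache. This is a sketch only: the production system used Redis, and `fetch_roadmap` here is a hypothetical stand-in for an expensive DB/API call.

```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Minimal in-process TTL cache, standing in for a Redis caching layer."""
    def decorator(fn):
        store: dict = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # cache hit: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def fetch_roadmap(roadmap_id: str) -> dict:
    # hypothetical placeholder for an expensive database/API call
    return {"id": roadmap_id, "fetched_at": time.monotonic()}
```

Repeated calls within the TTL window return the cached value, which is the basic mechanism behind the 2.1s → <300ms response-time reduction claimed above.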
Cloud Conduction
Junior Software Engineer
Jan 2024 – June 2024 · USA · Remote
- ▹Built an AI-powered chat application from the ground up using React and .NET, improving frontend efficiency by 60% and backend performance by 30%, delivering a highly responsive user experience.
- ▹Integrated and optimized AI model responses, reducing latency from 1.86s to 1.2s (35% faster) through strategic API design, caching, and performance tuning.
- ▹Designed scalable cloud architecture on Microsoft Azure for AI workloads, improving system throughput by 10% while significantly reducing infrastructure costs via autoscaling and resource optimization.
- ▹Developed modern, responsive UI components in React that improved user engagement metrics by 25%, including better retention and interaction rates.
- ▹Implemented secure, scalable API gateways in .NET Core, capable of handling 500+ concurrent requests with 99.9% uptime, supporting production-level reliability.
- ▹Led the implementation of new features using the MERN stack, cutting down development time by 40%, and accelerating product iteration cycles.
- ▹Established CI/CD pipelines (Azure DevOps & GitHub Actions), reducing deployment failures by 75% and enabling faster, automated releases.
- ▹Conducted in-depth code reviews and optimization, reducing technical debt by 30%, standardizing best practices across teams, and improving maintainability.
- ▹Owned and managed the complete project lifecycle, from initial system design and dev planning to production deployment, server setup, and post-launch support.
Impactful Work as an Individual Contributor
- ▹Engineered production-grade AI platform serving 100K+ users with personalized learning roadmaps, articles, and practice questions using sophisticated RAG architecture
- ▹Built intelligent RAG system with Azure OpenAI embeddings and ChromaDB, achieving <1s response times through optimized vector operations and topic-aware filtering
- ▹Implemented self-learning architecture where AI-generated content automatically enhances knowledge base, creating continuous improvement loop through automated vector updates
- ▹Developed real-time intent classification with 4 customization types (NEW_SUBROADMAP, ADD_TOPICS, PROJECT, REGENERATE) and progress-preserving content merging
- ▹Architected multi-model AI orchestration with MCP-compliant prompts and dynamic context injection based on user proficiency, difficulty, and learning goals
- ▹Created enterprise-grade security with multi-layer validation, content safety analysis, technical relevance scoring, and AI-powered verification for edge cases
- ▹Designed scalable token economy with tiered allocation, operation-based costing (Creation: 2, Customization: 4), and graceful limit enforcement
- ▹Optimized database performance with comprehensive indexing, efficient session-based queries, and Redis caching for user progress tracking
- ▹Implemented resilient fallback strategies ensuring 100% availability with graceful degradation when RAG retrieval fails or sparse queries occur
- ▹Delivered 3x improvement in completion rates through intelligent personalization, real-time progress tracking, and adaptive content generation
- ▹Built comprehensive progress tracking with real-time state synchronization, bookmarking, notes management, and cross-device persistence
- ▹Established continuous deployment pipeline with production monitoring, comprehensive logging, error handling, and health check systems
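The token economy above can be sketched as a small wallet with operation-based costing. The per-operation costs come from the bullet (Creation: 2, Customization: 4); the tier allocations and class names are illustrative assumptions.

```python
# Sketch of operation-based token costing with graceful limit enforcement.
# OP_COST values are from the description; TIER_ALLOCATION is assumed.

OP_COST = {"creation": 2, "customization": 4}
TIER_ALLOCATION = {"one_time": 20, "monthly": 100, "yearly": 1500}  # assumed

class TokenWallet:
    def __init__(self, tier: str):
        self.balance = TIER_ALLOCATION[tier]

    def charge(self, operation: str) -> bool:
        """Deduct tokens for an operation; refuse gracefully when exhausted."""
        cost = OP_COST[operation]
        if self.balance < cost:
            return False  # caller can surface an upsell modal instead of erroring
        self.balance -= cost
        return True
```

Returning `False` rather than raising lets the UI degrade gracefully into an upgrade prompt, matching the "graceful limit enforcement" described above.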
System Architecture
- ▹Developed a full-stack AI code evaluation system using Retrieval-Augmented Generation (RAG), Model Context Protocol (MCP), and intelligent prompt engineering for contextual code validation.
- ▹Built comprehensive language detection engine with regex patterns and anti-patterns for Python, Java, C++, JavaScript to prevent language mismatches and ensure code integrity.
- ▹Implemented multi-AI model orchestration with Azure OpenAI (o3-mini, o1, gpt-35-turbo) for different use cases: high accuracy, reasoning, and fast response scenarios.
- ▹Designed dual-layer response parsing: JSON-first extraction with markdown fallback to handle both structured and unstructured AI responses reliably.
- ▹Created MCP-compliant prompt system with strict formatting requirements for consistent AI evaluations and structured verdict generation.
- ▹Integrated automatic progress tracking with MongoDB (TodoItem, Topic, Subroadmap) to connect code submissions with learning curriculum and auto-complete milestones.
- ▹Built RAG pipeline with ChromaDB using text-embedding-ada-002 for semantic search of roadmap data, enhancing AI context with learning objectives.
- ▹Implemented production-grade error handling with COMPILATION_ERROR, RUNTIME_ERROR, and VALIDATION_ERROR types with detailed user feedback.
- ▹Developed structured verdict system returning comprehensive JSON: { verdict, passedCases, testCases, complexity, explanation, suggestedFix }.
- ▹Achieved 99% evaluation accuracy through AI-powered validation without traditional compilers, focusing on logic and approach understanding.
- ▹Enabled real-time progress updates via axios calls to updateUserTodoItem API when code passes evaluation in submission mode.
- ▹Built environment-aware configuration with separate development (testapi.propeers.in) and production (api.propeers.in) endpoints.
- ▹Scaled to handle multiple programming languages with intelligent pattern matching and confidence-based language detection.
- ▹Goal: Replace traditional coding judges with AI intelligence for educational code evaluation with human-like feedback.
System Architecture
- ▹Built a dynamic conversational assistant to resolve developer doubts contextually via community threads and AI insight.
- ▹Implemented threaded conversations, follow-up suggestions, and user-personalized interaction trees.
- ▹Used MCP (Model Context Protocol) prompts to blend user question, system role, and learning history into single message arrays.
- ▹Integrated token-based usage control with limit enforcement (9 free tokens/user) and tracking using MongoDB.
- ▹Designed to run without RAG: answers are LLM-native, constructed through structured prompt layering alone.
- ▹Developed resource-aware context processing detecting roadmap/article/practice contexts for tailored responses.
- ▹Implemented dynamic model selection between O3Mini and O1 based on question complexity and type.
- ▹Built payload normalization system ensuring consistent structure across different resource types.
- ▹Created specialized prompt generators: generateSystemPrompt for generic resources and roadmapAIChatSessionPrompt for roadmap contexts.
- ▹Enabled automatic code formatting with autoWrapCode and formatO1Response for clean markdown and code blocks.
- ▹Delivered 3x engagement and 2x resolution speed through clean formatting (code + explanation), model-switching (O3Mini/O1), and chat memory.
- ▹Integrated with community discussion forum for collaborative learning and knowledge sharing.
- ▹Supported contextual node integration for smarter, more relevant answers based on learning progress.
- ▹Implemented real-time session management with MongoDB storage for chat history, tokens, and metadata.
System Architecture
- ▹Engineered an AI-integrated code editor using Monaco, seamlessly tied into CodeLLM and AskAI pipelines.
- ▹Supported live verdicts, multi-language (C++, Java, Python) switching, and dynamic prompts based on user activity.
- ▹Embedded AI-based feedback inline within the editor via backend event sync and code stream capture.
- ▹Delivered interactive IDE-like experience with <40ms event lag, boosting engagement and retention by 40%.
- ▹Tight integration with RoadmapAI and CodeLLM for contextual assistance
- ▹Real-time code validation and suggestions during typing
System Architecture
- ▹Refactored and optimized over 150 core APIs (Editor, Roadmap, AskAI, Profile) for high-throughput performance.
- ▹Reduced average response latency from 2.2s → 300ms through async queues, parallel batches, and Redis caching.
- ▹Introduced pagination layers, ElasticSearch indexing, and horizontal load balancing to maintain SLA under scale.
- ▹Achieved 70% backend performance boost and improved Core Web Vitals (TTFB, LCP, FCP) across all pages.
- ▹Load tested to 10K RPM; sustained 99.95% uptime with zero cold starts using warmed cloud functions.
- ▹Implemented advanced caching strategies and async processing
- ▹Enhanced frontend performance through SSR, dynamic imports, and lazy-loading
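The async parallel-batching pattern described above can be sketched with `asyncio.gather`. This is illustrative only; `fetch_one` is a hypothetical stand-in for a real downstream call, and the batch size is an assumption.

```python
import asyncio

async def fetch_one(item_id: int) -> dict:
    await asyncio.sleep(0)  # placeholder for real network I/O
    return {"id": item_id}

async def fetch_batch(ids: list[int], batch_size: int = 50) -> list[dict]:
    """Fetch ids in parallel batches instead of one sequential loop."""
    results: list[dict] = []
    for i in range(0, len(ids), batch_size):
        chunk = ids[i:i + batch_size]
        # each batch runs concurrently; batching caps in-flight requests
        results.extend(await asyncio.gather(*(fetch_one(x) for x in chunk)))
    return results
```

Bounded batches keep concurrency high without overwhelming downstream services, which is the usual trade-off behind the latency reductions claimed above.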
System Architecture
Problem Solving & DSA
Key Highlights
LeetCode
1879+ (Top 5% Worldwide)
1400+ solved
4⭐ Problem Solving
GeeksForGeeks
Institute Rank 1 & Global Rank 98
1300+ solved
6⭐ Problem Solving
InterviewBit
1854+ (Master)
560+ solved
Rank: Global 13
CodeStudio
1854+ (Specialist)
2000+ solved
Rank: Global 130
HackerRank
6⭐ Problem Solving
300+ solved
Rank: 52
HackerEarth
1260+ (Top 10%)
200+ solved
Rank: 101
5⭐ Python/Java
Technical Skills
AI/ML
Frontend Development
Backend Development
Cloud & DevOps
Databases
Programming Languages
Tools
Education
Sage University Indore
B.Tech in Computer Science
2020 – 2024 · MP, India
CGPA: 8.5/10