
AI Engineer
Full Job Description
Prodigy Health Group is seeking an experienced AI Engineer to join its innovative team in Delhi, India. This on-site position focuses on building AI-powered product features and enhancing internal developer workflows on top of the existing MERN-stack platform. The role requires 2 to 5 years of experience and offers a salary package of 8 to 12 LPA.
You will be instrumental in designing and implementing reliable AI systems end-to-end: prompt engineering, evaluation, Retrieval-Augmented Generation (RAG), agents, workflows, deployment, monitoring, and iteration. The role involves advancing AI-native engineering both within the product and across the engineering team's development processes, leveraging modern AI coding tools and internal workflows.
About CarePro
CarePro is dedicated to building intelligent software solutions that optimize healthcare provider operations and improve patient care. Their platform provides a comprehensive 'One Patient, One View' approach, enhancing patient conversion, revenue, and service delivery by offering a 360° view of the patient journey. CarePro combines technology, data, and healthcare workflows to deliver measurable impact.
The core platform is built on MERN, and the focus is now on integrating AI features that provide tangible user benefits. This role is for individuals who are driven to ship production-grade AI systems and own outcomes.
Your Responsibilities
- Ship impactful AI features for CarePro that address real user pain points.
- Own RAG pipelines end-to-end: from ingestion and retrieval to outputs, evaluations, and iteration.
- Develop AI features using structured outputs and deterministic fallbacks for enhanced reliability.
- Monitor AI quality, latency, and cost, and improve them on a weekly cadence.
- Enhance engineering velocity by improving internal AI workflows built around tools such as Cursor, Claude, and CodeRabbit.
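The "structured outputs with deterministic fallbacks" pattern above can be sketched as follows. This is a minimal illustration, not CarePro's actual implementation; the `TriageResult` shape and field names are hypothetical:

```typescript
// Hypothetical expected shape for a structured LLM reply.
interface TriageResult {
  category: string;
  confidence: number;
}

// Deterministic fallback used whenever the model's output is unusable.
const FALLBACK: TriageResult = { category: "needs_human_review", confidence: 0 };

// Parse a raw model reply into the expected shape; any parse or
// validation failure returns the safe default instead of throwing.
function parseTriage(raw: string): TriageResult {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.category === "string" &&
      typeof parsed.confidence === "number"
    ) {
      return { category: parsed.category, confidence: parsed.confidence };
    }
    return FALLBACK;
  } catch {
    return FALLBACK;
  }
}
```

The point of the pattern is that downstream code always receives a valid `TriageResult`, so a malformed model reply degrades gracefully rather than crashing the feature.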
What We're Looking For
- Strong engineering fundamentals, including API design, debugging, testing, and clean architecture.
- Comfort with ambiguity and a fast-paced execution environment.
- A preference for simplicity, utilizing agents only when they provide significant value.
- A dedication to measurable quality, employing evaluation harnesses and regression tests.
Must-Haves
- Proven experience shipping at least one AI/LLM feature to production, or building one to production-grade standards.
- Solid understanding and practical experience with RAG, embeddings, and retrieval strategies.
- Proficiency in Node.js/TypeScript; familiarity with the MERN stack is highly desirable.
Key Responsibilities in Detail
Build AI Features into the Platform
- Design and deliver AI-driven capabilities such as assistants, copilots, smart search, summarization, recommendations, extraction, and classification.
- Implement LLM orchestration patterns, including tool calling, structured outputs, multi-step workflows, and agent loops where appropriate.
- Develop and maintain robust RAG pipelines covering chunking, embeddings, vector database integration, retrieval strategies, reranking, and citation generation.
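The retrieval step of a RAG pipeline like the one described above can be sketched in a few lines. This is a simplified in-memory version with hypothetical types; a production pipeline would compute embeddings via an API and query a vector database rather than sorting an array:

```typescript
// Hypothetical chunk shape; embeddings are assumed to be precomputed.
interface Chunk {
  id: string;
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding.
function retrieve(queryEmbedding: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding)
    )
    .slice(0, k);
}
```

The retrieved chunks would then feed the reranking and citation steps mentioned above before being injected into the prompt.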
Production-Grade AI Engineering
- Create comprehensive evaluation frameworks (offline and online), including quality scoring, regression tests, golden datasets, and A/B tests.
- Implement robust guardrails, such as prompt hardening, safety rules, PII redaction, rate limiting, and fallback logic.
- Track performance metrics including latency, cost, accuracy, and failure modes through effective observability.
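The evaluation-framework responsibility above can be illustrated with a minimal regression harness over a golden dataset. The types and the exact-match scoring rule are simplifying assumptions; real harnesses typically use richer scoring (semantic similarity, rubric grading, human review):

```typescript
// One labeled example from a hypothetical golden dataset.
interface GoldenCase {
  input: string;
  expected: string;
}

// Score any string-to-string model function against the golden set,
// returning accuracy in [0, 1] under exact-match scoring.
function evaluate(
  model: (input: string) => string,
  golden: GoldenCase[]
): number {
  const correct = golden.filter((c) => model(c.input) === c.expected).length;
  return correct / golden.length;
}
```

Run in CI against a pinned golden set, a harness like this turns prompt changes into regression-testable code: a drop in the returned accuracy fails the build.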
Improve Engineering Workflows with AI
- Enhance internal AI workflows for developers, covering areas like PR review automation, test generation, refactoring assistants, documentation generation, and linting/quality gates.
- Standardize best practices for using AI-assisted coding and review tools (e.g., Cursor, Claude, CodeRabbit) within a consistent engineering system.
- Build reusable templates and internal libraries for prompts, tools, and evaluation harnesses.
Collaborate Across Teams
- Partner with product and engineering teams to translate business problems into effective AI system designs.
- Clearly communicate technical tradeoffs, such as accuracy vs. cost vs. latency, and build vs. buy decisions.
Required Qualifications
- Strong software engineering fundamentals, including clean architecture, API design, testing, and performance optimization.
- Experience shipping 1-2 AI/LLM features into production environments.
- Solid knowledge of:
  - Prompting patterns and structured outputs (JSON schemas, function/tool calling).
  - RAG design and tuning (chunking, retrieval, reranking).
  - Evaluation methodologies (automated and human-in-the-loop).
- Hands-on development in Node.js/TypeScript, with the ability to work effectively with React and MongoDB systems.
- Familiarity with Git workflows, code review culture, and CI/CD practices.
Preferred Qualifications (Nice to Have)
- Experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS) or Elastic hybrid search.
- Experience with orchestration frameworks like LangGraph, LangChain, LlamaIndex, or equivalent custom systems.
- Background in ML fundamentals, including embeddings, similarity search, classification, and fine-tuning concepts.
- Experience with cloud deployment (AWS/GCP/Azure), Docker, and Kubernetes.
- Experience setting up cost monitoring and optimization for LLM usage.
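Cost monitoring for LLM usage, as mentioned above, often starts with simple token accounting. A minimal sketch, assuming per-million-token pricing (the record shape and prices below are placeholders, not real rates):

```typescript
// Hypothetical per-request usage record.
interface UsageRecord {
  inputTokens: number;
  outputTokens: number;
}

// Estimate total spend in USD given prices per million input/output tokens.
function estimateCostUSD(
  usage: UsageRecord[],
  pricePerMInput: number,
  pricePerMOutput: number
): number {
  return usage.reduce(
    (total, u) =>
      total +
      (u.inputTokens / 1_000_000) * pricePerMInput +
      (u.outputTokens / 1_000_000) * pricePerMOutput,
    0
  );
}
```

Aggregating these estimates per feature or per tenant is usually the first step toward the cost-optimization work the bullet describes.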
Tech Stack
- Platform: MERN (MongoDB, Express, React, Node.js)
- AI Stack (Examples): OpenAI/Anthropic models, embeddings + vector DB, orchestration frameworks, evaluation harnesses.
- Dev Workflow Tools: Cursor, Claude Code, CodeRabbit, and other AI-assisted coding/review tools.