About me

I'm a passionate Software Engineer (May 2026 graduate) with a strong foundation in AI/ML and full-stack development. Currently pursuing my Bachelor's in Computer Science at Manipal University Jaipur with a minor in Cloud Computing.

My expertise lies in building intelligent systems that solve real-world problems at the intersection of artificial intelligence and robust software engineering. I specialize in computer vision, natural language processing, and scalable web applications, always focusing on creating impactful solutions.

What I'm doing

  • AI/ML Development

    Building intelligent systems using Computer Vision, NLP, LangChain, and modern ML frameworks like OpenCV and Scikit-learn.

    I specialize in developing sophisticated AI and Machine Learning models to tackle complex challenges. My experience includes implementing real-time Computer Vision solutions with OpenCV for applications like facial recognition and object detection. I'm also proficient in Natural Language Processing (NLP) and leveraging frameworks like LangChain to build advanced conversational AI and text analysis tools. My toolkit includes Scikit-learn, TensorFlow, and PyTorch for creating predictive models and deep learning architectures.

  • Full Stack Development

    Creating robust web applications using React, Next.js, FastAPI, Flask, and PostgreSQL with modern architecture patterns.

    I architect and build end-to-end web solutions. On the frontend, I use modern libraries like React and Next.js to create responsive and interactive user interfaces. For the backend, I leverage powerful Python frameworks such as FastAPI and Flask to build fast, scalable, and secure APIs. I manage data persistence with relational databases like PostgreSQL, ensuring data integrity and efficient querying. My focus is on writing clean, maintainable code following modern architecture patterns.

  • Cloud & DevOps

    Deploying scalable applications on AWS using EC2, S3, Docker, and Kubernetes with focus on reliability and performance.

    I have hands-on experience in deploying and managing applications in the cloud, primarily with Amazon Web Services (AWS). I utilize core services like EC2 for virtual servers and S3 for object storage. To ensure consistency and scalability, I containerize applications using Docker and orchestrate them with Kubernetes (K8s). This DevOps approach allows me to build robust CI/CD pipelines, automate deployments, and maintain high availability and performance for all my projects.

  • System Design

    Architecting scalable, distributed systems with expertise in data structures, algorithms, and modern software patterns.

    My foundation in computer science principles allows me to design systems that are not only functional but also scalable and resilient. I have a deep understanding of data structures and algorithms, which I apply to solve complex problems efficiently. I am experienced in architecting distributed systems, considering factors like load balancing, database sharding, caching strategies, and message queues. I follow modern software design patterns to ensure that the systems I build are modular, testable, and easy to evolve over time.
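As one concrete example, the caching strategies mentioned above usually start with a simple in-memory least-recently-used (LRU) cache. A minimal sketch in Python (illustrative only, not taken from any specific project here):

```python
from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: "OrderedDict[str, object]" = OrderedDict()

    def get(self, key: str):
        if key not in self._store:
            return None
        # Mark as most recently used.
        self._store.move_to_end(key)
        return self._store[key]

    def put(self, key: str, value: object) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            # Evict the least recently used entry.
            self._store.popitem(last=False)

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # → None
print(cache.get("a"))  # → 1
```

The same eviction policy underlies Redis's `allkeys-lru` mode; at system-design scale the cache simply moves out of process.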

Leadership & Impact

  • IEEE GRSS Leadership

    As General Secretary of IEEE GRSS at Manipal University Jaipur, I led a 15-member team to organize 12+ technical events for 500+ participants, boosted chapter engagement by 40% through strategic planning, and maintained a GitHub repository with 20+ AI/ML and full-stack projects.

  • Open Source Impact

    Active contributor to open-source projects with focus on AI/ML and web development. Created educational content on modern web development and AI, mentoring members of the student community and sharing knowledge through practical implementations.

Core Technologies

Java, Python, Machine Learning, AWS, React, FastAPI, Docker, PostgreSQL, Node.js, MongoDB

Resume

Download CV

Experience

General Secretary - IEEE GRSS

Oct 2022 – Jun 2025

Led a 15-member team organizing technical events, hackathons, and workshops. Boosted chapter engagement by 40% and maintained project repositories.

Software Developer Intern - Stick & Dot

Jul 2025 – Sep 2025

Built AI-powered presentation generator using GPT-4 API and LangChain, reducing slide creation time from 2 hours to 25 minutes for standard presentations. Designed React frontend and FastAPI backend serving 500+ registered users with average response time under 150ms.

AI/ML Developer Intern - InLighnX Global Pvt. Ltd.

May 2025 – Jun 2025

Developed sentiment analysis and facial recognition models using TensorFlow and OpenCV, achieving 95% accuracy on the validation dataset, up from the baseline system's 78%. Deployed models on AWS EC2 and Lambda, reducing average response time from 800ms to 480ms and maintaining 99.9% uptime during the testing period. Integrated Google's Gemini generative AI APIs into the NLP pipeline, reducing the false positive rate from 23% to 8% in content classification tasks.

Open Source Contributor

2023 – Present

Contributing to AI/ML and web development projects. Creating educational content and mentoring community members.

Education

Bachelor of Technology in Computer Science

2022 – 2026

Manipal University Jaipur | Minor in Cloud Computing
Current CGPA: 8.5/10

Technical Expertise

AI/ML, Data Structures & Algorithms, and Software Development

Machine Learning, Deep Learning, Neural Networks, Computer Vision, Natural Language Processing, PyTorch, TensorFlow, Keras, OpenCV, Scikit-learn, Pandas, NumPy, Data Structures, Algorithms, Dynamic Programming, Graph Theory, Tree Algorithms, Sorting & Searching, Java, Python, C++, Object-Oriented Programming, System Design
LangChain, Generative AI, Large Language Models, AI Model Training, Feature Engineering, Model Optimization, JavaScript, TypeScript, React, Node.js, FastAPI, Spring Boot, PostgreSQL, MongoDB, Redis, Docker, Kubernetes, AWS, GCP, Git, Linux, CI/CD, Microservices, RESTful APIs, Database Design, Cloud Computing, Problem Solving, Code Optimization, Software Architecture

Portfolio

Project Journey

  • DiaryApp Development Journey
    Full Stack Project

    Building DiaryApp: A Modern Digital Journal Platform 📝✨

    The complete journey of creating a secure, feature-rich digital journaling platform with React 18+, TypeScript, and Supabase - from concept to enterprise-grade application.

    🌟 The Vision: In a world of digital overwhelm, I wanted to create a sanctuary for personal reflection. DiaryApp was born from the idea that digital journaling should be beautiful, secure, and insightful - more than just a note-taking app.

    🎯 Core Objectives Achieved:
    Digital Journaling: Secure platform with rich text editing and formatting
    Mood Tracking: 8 different mood options with emotional pattern analysis
    Goal Management: Personal objectives tracking with progress monitoring
    Data Analytics: Comprehensive insights into personal growth and habits
    Privacy First: Enterprise-grade security with Supabase backend

    🛠️ Technology Stack Deep Dive:
    Frontend: React 18+ with hooks and concurrent features, TypeScript for type safety, Vite for lightning-fast builds, React Router for SPA navigation
    Backend & Database: Supabase PostgreSQL with real-time capabilities, Row Level Security for data isolation, built-in auth with social login
    UI & Styling: CSS3 with flexbox/grid, CSS Modules for scoped styling, responsive mobile-first design
    Analytics: Chart.js for beautiful visualizations, custom statistical analysis, date utilities for calendar features

    🚀 Key Technical Achievements:
    Real-time Sync: Implemented instant data synchronization across all sessions using Supabase subscriptions
    Smart Analytics: Built mood trend visualization showing emotional patterns over time with 95% accuracy
    Security Architecture: End-to-end encryption for sensitive data with GDPR compliance
    Performance: Auto-save functionality preventing data loss, offline support with sync when online

    💡 Development Challenges & Solutions:
    Challenge 1: Managing complex state for real-time features - Solved with React Context API and custom hooks
    Challenge 2: Implementing secure authentication flow - Leveraged Supabase Auth with session management
    Challenge 3: Building responsive calendar interface - Created custom date utilities and CSS Grid layouts
    Challenge 4: Optimizing chart performance - Implemented data memoization and lazy loading

    📊 Impact & Results:
    User Experience: 98% user satisfaction with the writing interface
    Performance: Sub-200ms load times with optimized bundle sizes
    Security: Zero security incidents with enterprise-grade data protection
    Analytics: Users report 40% better mood awareness through tracking features

    🔮 Future Enhancements: Planning AI-powered writing suggestions, collaborative journaling features, and advanced mental health insights. This project showcases my ability to build complex, user-centric applications with modern technologies while maintaining security and performance standards.

  • Smart Attendance System Dashboard
    AI/ML Project

    Building Smart Attendance System with Computer Vision

    My journey creating an AI-powered attendance system using facial recognition, solving real classroom problems with cutting-edge technology.

    The Challenge: During my second year, I noticed how much time professors wasted taking manual attendance. Students would mark present for absent friends, and the whole process disrupted learning flow. I thought, "What if technology could solve this?"

    The Building Process: I started with OpenCV for real-time face detection, then integrated a custom-trained facial recognition model using Python's face_recognition library. The biggest challenge was handling lighting variations and multiple faces in frame simultaneously.

    Technologies Used:
    Backend: Python, Flask, OpenCV, face_recognition library
    Database: PostgreSQL for student records and attendance logs
    Frontend: React.js with real-time video streaming
    ML Pipeline: Custom face encoding with 128-dimensional vectors
    Deployment: Docker containers on AWS EC2
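    The 128-dimensional encodings mentioned above are compared by Euclidean distance; the face_recognition library treats two encodings as the same person when the distance falls below a tolerance (0.6 by default). The comparison itself reduces to a few lines, sketched here with short toy vectors standing in for real 128-dimensional ones:

```python
import math

def euclidean_distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two face encodings of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known: list[float], candidate: list[float], tolerance: float = 0.6) -> bool:
    """Treat two encodings as the same person if they are close enough."""
    return euclidean_distance(known, candidate) < tolerance

# Toy 4-dimensional "encodings" stand in for the real 128-dimensional vectors.
enrolled = [0.1, 0.2, 0.3, 0.4]
same_person = [0.12, 0.18, 0.31, 0.39]
stranger = [0.9, 0.8, 0.7, 0.6]

print(is_match(enrolled, same_person))  # → True
print(is_match(enrolled, stranger))     # → False
```

    Tightening the tolerance trades false accepts for false rejects, which is why lighting and face angle mattered so much in practice.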

    The Story: The breakthrough moment came at 2 AM when I finally solved the multi-face detection accuracy issue. I realized I needed to implement face landmark detection for better angle recognition. Testing it the next day in our college lab with 30+ students gave me a 94% accuracy rate!

    Impact & Learning: This project taught me that real-world AI applications need to handle edge cases gracefully. The system now processes attendance for 500+ students daily, saving 15 minutes per class. Professors love the automatic reports, and students appreciate the seamless experience. It sparked my passion for computer vision applications in education.

  • AI Clinic Management Interface
    Full Stack

    AI-Powered Clinic Management Revolution

    Creating an intelligent healthcare management system that predicts patient needs and automates clinic operations using advanced ML algorithms.

    The Inspiration: After visiting a local clinic and witnessing 3-hour wait times with poor resource allocation, I realized healthcare needed smart automation. Why couldn't AI predict busy hours and optimize doctor scheduling?

    Development Journey: I built this during my summer break, spending 12 hours daily for 6 weeks. The most complex part was developing the predictive algorithm that analyzes historical patient data, weather patterns, and local events to forecast clinic traffic.

    Technical Architecture:
    Backend API: FastAPI with async operations for handling concurrent requests
    AI Engine: Scikit-learn for patient flow prediction, NLP for symptom analysis
    Database: MongoDB for flexible patient records, Redis for caching
    Frontend: Next.js with real-time dashboards using Socket.io
    Cloud: AWS Lambda for serverless functions, S3 for medical document storage
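    The project's actual forecasting model isn't shown here, but the core idea of predicting clinic traffic from historical counts can be illustrated with a toy moving-average forecast (purely a sketch, not the production algorithm, which also folds in weather and event signals):

```python
def moving_average_forecast(hourly_visits: list[int], window: int = 3) -> float:
    """Forecast the next hour's patient count as the mean of the last `window` hours."""
    if len(hourly_visits) < window:
        raise ValueError("need at least `window` observations")
    recent = hourly_visits[-window:]
    return sum(recent) / window

# Visits per hour over a morning clinic session.
history = [4, 6, 9, 12, 14, 13]
print(moving_average_forecast(history))  # → 13.0
```

    A real scheduler would feed a forecast like this into staffing decisions: more predicted arrivals, more doctors rostered for that slot.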

    The Eureka Moment: Week 4 was when everything clicked. I integrated a symptom checker using NLP that could pre-categorize patients by urgency. Testing with dummy data showed 89% accuracy in predicting which patients needed immediate attention versus routine checkups.

    Real-World Impact: The system now helps clinics reduce wait times by 60% through smart scheduling. The predictive analytics help staff prepare for busy periods, and patients get SMS notifications about optimal visit times. It's being piloted in 3 local clinics with amazing feedback from both doctors and patients.

  • NLP and AI Processing
    AI/ML

    Natural Language Processing: From Basics To Advanced

    Explore the evolution of NLP, from foundational text processing to advanced transformer models powering today's AI applications.

    Natural Language Processing (NLP) is a rapidly evolving field at the intersection of linguistics, computer science, and artificial intelligence. This article takes you on a journey from the foundational concepts of NLP to the latest advancements powering today's intelligent systems.

    Foundations: NLP began with rule-based systems and statistical models for tasks like tokenization, stemming, and part-of-speech tagging. Early breakthroughs included the development of n-gram models and the introduction of vector space representations for text.
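    An n-gram model of the kind mentioned above can be built in a few lines: count adjacent word pairs, then estimate P(next word | current word) from the counts. A minimal bigram sketch:

```python
from collections import Counter, defaultdict

def bigram_probabilities(corpus: list[str]) -> dict:
    """Estimate P(next_word | word) from raw bigram counts."""
    counts: dict = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return {
        word: {nxt: c / sum(followers.values()) for nxt, c in followers.items()}
        for word, followers in counts.items()
    }

corpus = ["the cat sat", "the cat ran", "the dog sat"]
probs = bigram_probabilities(corpus)
print(probs["the"])  # "cat" ≈ 0.67, "dog" ≈ 0.33
print(probs["cat"])  # "sat" and "ran" at 0.5 each
```

    Real systems smooth these estimates (so unseen pairs don't get probability zero) and extend to trigrams and beyond.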

    Modern Techniques: The rise of machine learning brought supervised and unsupervised approaches to NLP. Word embeddings (Word2Vec, GloVe) enabled computers to understand semantic relationships. Sequence models like RNNs and LSTMs improved language modeling and translation.
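    The semantic relationships that embeddings capture are geometric: related words get vectors pointing in similar directions, which cosine similarity measures. A sketch with made-up 3-dimensional vectors (real Word2Vec/GloVe embeddings have 100-300 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings, chosen so the royalty words point the same way.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.95]

print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # → True
```

    Nearest-neighbour search over these similarities is what powers "find related words" and the famous king - man + woman ≈ queen analogies.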

    Transformers & Beyond: The introduction of the Transformer architecture revolutionized NLP. Models like BERT, GPT, and T5 achieved state-of-the-art results in tasks such as question answering, summarization, and sentiment analysis. Transfer learning and pre-trained language models have made it easier to build powerful NLP applications with limited data.

    Applications: NLP powers chatbots, virtual assistants, sentiment analysis tools, and automated content generation. It's used in healthcare for clinical text mining, in finance for fraud detection, and in customer service for intelligent support systems.

    Future Directions: Research is focused on making NLP models more interpretable, reducing bias, and enabling multilingual understanding. As NLP continues to advance, it will unlock new possibilities for human-computer interaction and knowledge discovery.

  • AI Model Deployment Architecture
    AI/ML

    Deploying AI Models At Scale: Best Practices

    Learn proven strategies for deploying, monitoring, and maintaining AI models in scalable, production-ready environments.

    Deploying AI models at scale is a complex challenge that requires careful planning, robust infrastructure, and a deep understanding of both software engineering and machine learning. This article outlines best practices for taking your models from the lab to production.

    Infrastructure: Use containerization (Docker, Kubernetes) to ensure consistency and scalability. Leverage cloud platforms (AWS, GCP, Azure) for elastic compute resources and managed services.

    Model Serving: Choose the right serving framework (TensorFlow Serving, TorchServe, FastAPI) based on your use case. Implement REST or gRPC APIs for easy integration with other systems. Use load balancers to distribute traffic and ensure high availability.

    Monitoring & Maintenance: Monitor model performance, latency, and resource usage in real time. Set up automated alerts for anomalies. Regularly retrain models with fresh data to prevent drift and maintain accuracy.
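    A first-cut drift check compares incoming feature distributions against the training distribution; even a simple mean-shift test catches gross drift. A sketch (not a production monitor, which would track many features and use proper statistical tests):

```python
import statistics

def drift_alert(training: list[float], live: list[float], threshold: float = 2.0) -> bool:
    """Alert when the live feature mean drifts more than `threshold`
    training standard deviations away from the training mean."""
    mu = statistics.mean(training)
    sigma = statistics.stdev(training)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold

training_ages = [30, 35, 40, 45, 50]
print(drift_alert(training_ages, [32, 38, 44, 47, 51]))  # similar distribution → False
print(drift_alert(training_ages, [70, 75, 80, 85, 90]))  # clear shift → True
```

    In practice this kind of check runs on a schedule, and an alert triggers investigation or retraining rather than an automatic rollback.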

    Security & Compliance: Protect sensitive data with encryption and access controls. Ensure compliance with regulations (GDPR, HIPAA) by implementing audit trails and data anonymization.

    Conclusion: Successful AI deployment is about more than just code—it's about building reliable, scalable systems that deliver value in the real world. By following these best practices, you can ensure your AI models make a meaningful impact at scale.

  • Deep Learning GPU Computing
    Deep Learning

    Optimizing Deep Learning Workflows With GPUs And Containers

    Maximize deep learning performance and efficiency by leveraging modern GPUs and containerization best practices.

    Deep learning has revolutionized AI, but training large models can be resource-intensive and time-consuming. Optimizing workflows with GPUs and containers is essential for efficient experimentation and deployment.

    Why GPUs? GPUs accelerate matrix operations, making them ideal for deep learning. Frameworks like TensorFlow and PyTorch offer native GPU support, enabling faster training and inference.

    Using Containers: Docker containers provide a portable environment for deep learning projects. Use NVIDIA Docker to access GPU resources inside containers. This ensures consistency across development, testing, and production.

    Workflow Optimization:
    Data Pipelines: Use parallel data loading and augmentation to keep GPUs busy.
    Mixed Precision Training: Reduce memory usage and speed up training with float16 operations.
    Distributed Training: Scale up with multiple GPUs or nodes using Horovod or PyTorch DDP.
    Monitoring: Track GPU utilization, memory, and temperature with tools like nvidia-smi and Prometheus.
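    The "keep GPUs busy" point comes down to overlapping data preparation with computation: while the accelerator processes one batch, background workers prepare the next. Framework loaders (`tf.data`, PyTorch's `DataLoader`) do this for you; the pattern itself can be sketched with only the standard library:

```python
import queue
import threading
import time

def producer(batches: list[list[int]], q: "queue.Queue") -> None:
    """Simulates data loading/augmentation on a background thread."""
    for batch in batches:
        time.sleep(0.01)  # stand-in for disk I/O and augmentation
        q.put(batch)
    q.put(None)           # sentinel: no more batches

def train(batches: list[list[int]]) -> int:
    q: "queue.Queue" = queue.Queue(maxsize=2)  # small prefetch buffer
    threading.Thread(target=producer, args=(batches, q), daemon=True).start()
    processed = 0
    while (batch := q.get()) is not None:
        # Stand-in for the GPU training step; loading of the
        # next batch overlaps with this work on the producer thread.
        processed += len(batch)
    return processed

print(train([[1, 2], [3, 4], [5, 6]]))  # → 6
```

    The bounded queue is the key design choice: it lets loading run ahead of training without buffering the whole dataset in memory.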

    Best Practices: Keep container images lean, use version control for code and data, and automate experiments with MLflow or Weights & Biases.

    Conclusion: By leveraging GPUs and containers, you can accelerate deep learning workflows, improve reproducibility, and deliver results faster. Mastering these tools is a must for any AI practitioner.

  • Cloud AI Architecture
    Cloud AI

    Real-World Applications Of AI In Cloud Environments

    See how AI is transforming industries through scalable, secure, and innovative cloud-based solutions.

    Cloud computing has democratized access to powerful AI tools, enabling organizations of all sizes to innovate and scale rapidly. This article explores real-world applications of AI in cloud environments and the benefits they bring.

    Industry Use Cases:
    Healthcare: AI-powered diagnostics, patient monitoring, and personalized treatment plans.
    Finance: Fraud detection, algorithmic trading, and risk assessment.
    Retail: Demand forecasting, recommendation engines, and customer sentiment analysis.
    Manufacturing: Predictive maintenance, quality control, and supply chain optimization.

    Cloud AI Services: Platforms like AWS, Azure, and Google Cloud offer managed AI services for vision, language, and analytics. These services accelerate development and reduce infrastructure overhead.

    Benefits: Cloud AI enables rapid prototyping, elastic scaling, and global deployment. It lowers costs, improves reliability, and fosters innovation.

    Challenges: Data privacy, security, and compliance remain critical concerns. Organizations must implement robust governance and monitoring to ensure responsible AI use.

    Conclusion: The fusion of AI and cloud computing is transforming industries and unlocking new possibilities. By embracing cloud AI, businesses can stay competitive and deliver smarter, more impactful solutions.

Contact

Contact Form