Enterprise · Completed · Jul 2024 – Aug 2025

Enterprise Virtual Assistant

Conversational AI that eliminated 320+ hours of manual work per quarter

Role: Software Development Engineer I · Duration: 14 months

Overview

The storage team was drowning in manual data entry — hundreds of hours per quarter spent on repetitive metadata extraction. I built a conversational AI system that understood natural language queries, extracted metadata automatically, and created storage records. It went from a proof-of-concept nobody believed in to a production system that changed how the team worked.

The Problem

The storage management team at JPMorgan Chase spent over 320 hours every quarter on manual data entry — extracting metadata from incoming requests, validating storage configurations, and creating records in provisioning systems. This repetitive work pulled senior engineers away from architectural decisions and system optimization.

Previous automation attempts had stalled at the proof-of-concept stage. The workflows were complex, involving natural language understanding of varied request formats, integration with multiple backend systems, and strict compliance requirements inherent to financial services.

The Approach

I designed a conversational AI system using RASA NLU as the core natural language understanding engine. The assistant could interpret user queries in natural language, extract relevant metadata (storage type, capacity, environment, team), and automatically create records in downstream provisioning systems.

The architecture involved a RASA NLU pipeline for intent classification and entity extraction, Java microservices for business logic and validation, SQL databases for record storage, and AWS infrastructure for deployment and scaling. I built a React frontend that gave users a familiar chat interface while the backend handled complex multi-step workflows.
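The request flow described above can be sketched as a minimal Python example. All names here (the intent label, the required fields, the handler) are illustrative stand-ins, not the production code: the point is the shape of the multi-step workflow — parse, validate, then create the record.

```python
# Hypothetical sketch of the assistant's request flow: the NLU layer produces
# a parsed request, and the business-logic layer validates it before creating
# a storage record. Field names and intent labels are illustrative.
from dataclasses import dataclass, field


@dataclass
class ParsedRequest:
    """What the NLU pipeline hands to the business-logic service."""
    intent: str
    entities: dict = field(default_factory=dict)


def validate(req: ParsedRequest) -> list:
    """Return the required fields the request is still missing."""
    required = ("storage_type", "capacity", "environment")
    return [f for f in required if f not in req.entities]


def handle_message(req: ParsedRequest, records: list) -> str:
    """Route a parsed request through validation and record creation."""
    if req.intent != "provision_storage":
        return "Sorry, I can only help with storage requests."
    missing = validate(req)
    if missing:
        # Multi-step workflow: ask for the missing fields instead of failing.
        return "I still need: " + ", ".join(missing)
    records.append(dict(req.entities))  # stand-in for the provisioning call
    return "Record created."
```

A request that arrives with all required entities falls straight through to record creation; a partial one triggers a follow-up turn in the chat, which is what makes the interface feel conversational rather than form-like.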

Integration with Kafka enabled real-time streaming of events between the assistant and existing enterprise systems, ensuring data consistency and auditability — non-negotiable requirements in financial services.
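The auditability requirement suggests a pattern like the following: every assistant action becomes an immutable, timestamped event published to a Kafka topic. This is a hedged sketch, not the production code — the topic name and event schema are assumptions, and the producer is abstracted to a plain `send(topic, value)` callable so the example stays self-contained (in production it would be, e.g., a kafka-python `KafkaProducer`).

```python
# Sketch of the audit-event publishing pattern (schema and topic name are
# illustrative). Events are timestamped and uniquely identified so downstream
# systems can replay them for consistency checks and audits.
import json
import uuid
from datetime import datetime, timezone

AUDIT_TOPIC = "storage-assistant.audit"  # hypothetical topic name


def build_audit_event(action: str, payload: dict, user: str) -> bytes:
    """Serialize one assistant action as an immutable audit event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "payload": payload,
    }
    return json.dumps(event).encode("utf-8")


def publish(send, action: str, payload: dict, user: str) -> None:
    """`send` is any producer-like callable, e.g. KafkaProducer(...).send."""
    send(AUDIT_TOPIC, build_audit_event(action, payload, user))
```

Keeping the event immutable and self-describing (who, what, when) is what makes the stream usable as an audit log rather than just a transport.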

Key Decisions

Choosing RASA over cloud NLU services was deliberate — JPMorgan Chase required on-premise deployment for data sensitivity. RASA's open-source nature allowed full control over the model training pipeline and deployment infrastructure.

I implemented a hybrid approach for entity extraction: rule-based patterns for structured fields (like storage sizes and environment names) combined with ML-based extraction for free-form descriptions. This gave us high precision on known formats while gracefully handling novel inputs.
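The hybrid idea can be shown in a few lines. This is a minimal sketch, assuming illustrative patterns and field names: regex rules handle the closed-format fields (sizes, environment names), and anything they miss falls through to an ML extractor, represented here as a pluggable callable.

```python
# Hypothetical sketch of the hybrid extractor: rule-based patterns first,
# ML-based extraction as a fallback. Patterns and vocabularies are examples.
import re

SIZE_PATTERN = re.compile(r"\b(\d+(?:\.\d+)?)\s*(GB|TB|PB)\b", re.IGNORECASE)
KNOWN_ENVIRONMENTS = {"dev", "uat", "prod"}


def extract_capacity(text: str):
    """Rule-based: storage sizes follow a known format, so regex wins on precision."""
    match = SIZE_PATTERN.search(text)
    if match:
        value, unit = match.groups()
        return f"{value} {unit.upper()}"
    return None


def extract_environment(text: str):
    """Rule-based: environment names come from a closed vocabulary."""
    for token in text.lower().split():
        if token.strip(".,") in KNOWN_ENVIRONMENTS:
            return token.strip(".,")
    return None


def extract_metadata(text: str, ml_extractor=None) -> dict:
    """Apply the rules first; ML fills in only what the rules missed."""
    entities = {
        "capacity": extract_capacity(text),
        "environment": extract_environment(text),
    }
    if ml_extractor is not None:
        for key, value in ml_extractor(text).items():
            if entities.get(key) is None:  # rules take precedence
                entities[key] = value
    return entities
```

Because the rules only ever claim fields they matched exactly, precision on known formats stays high, while the ML fallback keeps novel phrasings from producing empty results.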

The microservices architecture was designed for independent scaling — the NLU pipeline could scale separately from the record creation service, allowing us to handle burst traffic during quarterly provisioning cycles.

Impact

The system eliminated 320+ hours of manual data entry per quarter, reduced storage provisioning time by 40%, and achieved 99.9% uptime SLA compliance. Beyond the metrics, the assistant changed the team's workflow — senior engineers could focus on architecture and optimization instead of data entry.

The success of this project established a pattern for conversational AI adoption across adjacent teams, and the architecture became a reference implementation for other chatbot initiatives at JPMC.

Lessons Learned

Production NLU systems need graceful degradation — when the model isn't confident, the system should ask clarifying questions rather than guess. I implemented a confidence threshold system that routed low-confidence queries to a human review queue.
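The routing logic can be sketched in a few lines. The threshold values and tier names below are illustrative assumptions, not the production configuration — what matters is the shape: never act on a low-confidence parse.

```python
# Hypothetical sketch of confidence-threshold routing: execute only when the
# model is confident, ask a clarifying question in the middle band, and queue
# everything else for human review. Thresholds are illustrative.
from dataclasses import dataclass

EXECUTE_THRESHOLD = 0.75
CLARIFY_THRESHOLD = 0.40


@dataclass
class NluResult:
    intent: str
    confidence: float


def route(result: NluResult) -> str:
    """Below threshold, never guess: clarify or escalate instead."""
    if result.confidence >= EXECUTE_THRESHOLD:
        return "execute"       # proceed with automated record creation
    if result.confidence >= CLARIFY_THRESHOLD:
        return "clarify"       # ask the user a clarifying question
    return "human_review"      # route to the human review queue
```

The exact cutoffs are a tuning knob: raising the execute threshold trades throughput for precision, which is usually the right trade in a compliance-sensitive environment.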

Enterprise chatbot adoption is as much about change management as technology. Working closely with end users during development — not just at launch — was essential for building trust in the system.

Technology Stack

RASA NLU · Python · Java · SQL · AWS · Kafka · React