Use Case
AI-Driven Chat

Enterprise AI Platform Evolution

This case study presents the development of an AI-driven chat platform for The World Bank, designed to securely integrate AI into internal workflows and reduce reliance on external tools. Over a three-year period, the product evolved from a basic MVP into a multi-layered AI workspace, enabling document interaction, knowledge retrieval, and access to reusable AI applications within a unified environment.

Problem Statement

  • Sensitive Data Risk
    Employees started using external AI tools in their daily work, creating potential risks around sharing confidential World Bank data outside secure environments.
  • Growing Demand for AI
    Employees began exploring how AI could support their workflows, showing a clear interest in integrating AI into everyday tasks.
  • Demand for Unified AI Apps Layer
    AI solutions started to emerge across different teams, but remained isolated, highlighting the need for a unified space to access and reuse them.
  • Demand for Integrated Document Access
    Employees needed a way to work with internal documents in a secure environment, with the ability to explore and interact with enterprise knowledge directly through chat.

Product Vision

  • New Interaction Model
    We are introducing a conversational layer that redefines how employees interact with institutional knowledge — moving from fragmented search across systems to a single, natural language entry point.
  • Unified Knowledge Experience
    Institutional knowledge becomes accessible through a unified AI layer that can understand context, retrieve relevant information, and synthesize insights across multiple internal sources.
  • Trust-First AI Environment
    The system is designed with embedded governance, ensuring that sensitive data, access rights, and responsible AI principles are enforced at every step of interaction.
  • Organizational Capability Shift
    The product enables a shift from manual information discovery to AI-assisted decision-making, improving speed, consistency, and quality of work across the organization.

MVP

The MVP introduced a simple conversational interface to explore how employees interact with AI in a secure environment. Users could ask questions and receive AI-generated responses based on a limited set of internal sources, including curated collections of documents, reports, and projects.


The product was built on an OpenAI model deployed via Microsoft Azure, ensuring that sensitive institutional data was not exposed or reused for external model training.

MVP UX Research
The MVP was rolled out to a small group of internal teams, and feedback was collected through internal platforms such as MS Viva Engage, as well as through direct outreach by UX researchers. In parallel, moderated usability sessions were conducted via MS Teams video calls, where users were given simple tasks and observed during their first interactions with the product. This helped us capture both stated feedback and real behaviour, forming the foundation for our key learnings.

Key Learnings

MVP → Product 1st Iteration

  • 1. Prompting
    Users experienced difficulties structuring prompts, which led to inconsistent output quality.
    → This led to the introduction of a prompt library designed to help users quickly select and reuse effective prompts instead of writing them from scratch.

  • 2. Onboarding / Walkthrough
    First-time users were often unclear about product capabilities and interaction patterns.
    → A guided onboarding experience was designed to help users quickly understand key functionalities and usage flow.
  • 3. File & Image Input
    There was a strong expectation for the ability to upload and analyze files and images directly within the chat.
    → This led to prioritization of multimodal input capabilities to support document- and image-based workflows.
  • 4. Conversation History
    Users faced challenges in locating and reusing previous conversations and outputs.
    → This led to a redesign of conversation history into a more structured and searchable knowledge memory layer.
  • 5. Knowledge Filtering
    Users needed the ability to narrow responses to specific domains and internal datasets.
    → This resulted in the introduction of structured knowledge filtering based on organizational context such as projects and datasets.
  • 6. Feedback Loop
    Although users were willing to provide feedback, the mechanism for doing so was not intuitive or easily accessible.
    → This led to embedding contextual feedback mechanisms directly within the chat experience.
  • 7. Multiple LLMs
    Users expressed interest in using different AI models depending on task type and desired output style.
    → This led to the exploration of a flexible model layer allowing users to switch between different AI providers within the same chat experience.
  • 8. Translation Needs
    Users frequently worked with documents and workflows in multiple languages, which required constant switching to external translation tools.
    → This led to the consideration of in-chat translation capabilities to reduce reliance on external services.

Main MVP Insight:
AI was expected to support end-to-end work processes rather than serve purely as a conversational assistant.

This insight informed a strategic shift toward building a unified AI productivity layer across enterprise knowledge and workflows.

Product 1st Iteration

Overview

Building on MVP insights, the product evolved from a conversational AI tool into a structured AI workspace for interacting with enterprise knowledge. The focus shifted toward usability, scalability of workflows, and deeper integration with documents and organizational context.


The system continued to be built on OpenAI models deployed via Microsoft Azure, ensuring secure handling of sensitive institutional data without exposure or use for external model training. In this iteration, the model layer was expanded with Google Gemini alongside existing OpenAI capabilities, and enriched with Google Search integration to complement internal document-based retrieval with external public information when needed.

Key Learnings

1st Iteration → 2nd Iteration

  • 1. Personalization & Trust Layer
    In order to increase trust in platform interactions, users needed a more personalized, identity-aware experience within the system.
    → This led to the introduction of a trusted identity layer enabling automatic system-aware user recognition through secure employee profile integration.
    → Additionally, users were addressed by name on the home page, improving perceived trust, readability, and overall interaction confidence within the platform.
  • 2. Multi-Document AI Workspace (“Spaces”)
    In the previous iteration, users had limited ability to work with more than one document within a single chat session.
    → This led to the introduction of Multi-Document AI Workspace (“Spaces”), where users can create dedicated contexts by grouping documents and interact with them through chats grounded in this material, generating structured outputs such as reports, drafts, visualizations, and audio summaries.
  • 3. In-Chat Document Search
    Users needed access to the broader internal document database beyond uploading files into the chat.
    → This led to the introduction of a document search capability within the World Bank knowledge base, enabling users to find and add relevant internal documents directly into their workflow.
  • 4. AI Application Library
    AI-driven solutions started emerging independently across The World Bank departments, with teams building domain-specific chat-based tools connected to their own knowledge bases.
    → This led to the introduction of an AI Application Library as a unified space to surface and organize internal AI tools with filtering and discoverability.
  • 5. Mobile Application
    As adoption of the product grew among World Bank employees, demand increased for access beyond the desktop environment.
    → This led to the decision to develop a native mobile application, enabling users to interact with the AI product and access core functionalities directly from their mobile devices.
  • 6. Design Adaptation
    As the product became more feature-rich and complex in scope, it required a clearer structure to maintain ease of navigation and focus on core functionality.
    → In order to simplify the experience, the interface was restructured to improve information hierarchy, reduce cognitive load, and improve ease of use.
  • 7. Voice Transcription
    Following emerging market capabilities in voice-to-text and speech recognition models, user expectations shifted toward more flexible input methods beyond typing.
    → This led to the introduction of voice transcription capabilities, enabling speech-based interaction with the AI directly within the chat.
  • 8. Type-In Block Improvements
    Users still faced challenges when interacting with the type-in block and were uncertain how to properly formulate prompts.
    → This led to simplifying the entry experience (“Ask anything”) and introducing quick action shortcuts (such as "Help Me Find" and "Translate") to reduce effort and accelerate interaction.

What Drove 2nd Product Iteration:

While the 1st Product Iteration significantly improved on the MVP, introducing new capabilities and more structured interactions, it remained focused on single-conversation workflows. As usage matured, users began working across multiple documents and relying on AI for more complex, ongoing tasks.

At the same time, rapid advancements in the AI landscape were raising user expectations, creating pressure to deliver more powerful and flexible capabilities within a secure internal environment.

This revealed the need to move beyond isolated chats toward a structured workspace enabling context organization, persistent knowledge, and access to reusable AI applications.

Product 2nd Iteration

Overview

Building on the 1st product iteration, the system evolved from a structured AI workspace into a more comprehensive multi-document AI environment designed to better reflect how employees actually work with knowledge across different contexts.


The experience was further personalized through a connected employee identity layer, which introduced a more tailored interaction model based on user context and role within the organization. The product introduced a Multi-Document AI Workspace (“Spaces”), allowing users to group documents, data, and prompts into dedicated environments with persistent context. This enabled a shift from isolated conversations to structured, task-oriented workspaces supporting diverse outputs such as reports, summaries, visualizations, and audio insights. In parallel, a curated AI applications layer (“Apps”) surfaced reusable solutions across departments and created a unified entry point to access and apply organizational AI capabilities.

Delivery Process

The product was developed over a three-year period using an iterative, Agile-based approach, with continuous validation and close cross-functional collaboration.

1. Problem Framing & Product Vision

The process started with identifying key challenges around secure AI usage, fragmented tools, and limited access to internal knowledge.

These insights informed a clear product vision, defining the shift toward a unified, secure, AI-driven workspace for employees.

2. Roadmap Definition and Alignment

Based on the product vision, a clear and transparent roadmap was established, outlining key milestones, priorities, and expected evolution of the product.

The roadmap was accessible across teams, enabling shared understanding of:

  • Product direction
  • Scope and priorities
  • Team involvement and timelines

3. Cross-functional Discovery

Early-stage exploration involved close collaboration across product, design, engineering, and architecture.

Work was driven through:

  • Collaborative workshops (whiteboards, workshop sessions)
  • Alignment discussions with technical architects
  • Iterative exploration of possible solutions

This helped translate complex requirements into feasible product directions.

4. Agile Delivery via Azure DevOps

The delivery process was structured using MS Azure DevOps, with work broken down into:

  • Epics
  • User stories
  • Tasks

This ensured clear traceability from high-level goals to implementation.

5. Sprint Execution and Scrum

Development followed an Agile sprint cycle, including:

  • Sprint planning
  • Backlog grooming
  • Daily syncs
  • Sprint reviews

Regular Scrum ceremonies ensured alignment, prioritization, and continuous progress across teams.

6. Iterative Development

The product evolved through structured iterations:

  • MVP → validation of core concept
  • Iteration 1 → usability and interaction improvements
  • Iteration 2 → expansion into a scalable AI workspace

Each phase built on previous learnings, allowing the product to scale in both functionality and complexity.

7. User Feedback and Validation

Continuous feedback loops were embedded throughout the process via:

  • Usability testing sessions
  • Internal feedback channels
  • Direct user observation

Insights were used to refine features, improve usability, and guide prioritization.

8. Continuous Alignment & Delivery

Ongoing collaboration across teams ensured alignment between:

  • User needs
  • Technical constraints
  • Business goals

This enabled consistent delivery of a complex product while maintaining clarity, usability, and strategic direction.

Impact & Metrics

Product impact was measured against key strategic objectives defined in the initial product vision.
  • 81%
    Organizational Capability Shift

    81% of users transitioned from initial exploration to repeated usage, reflecting a shift from experimentation to integration of AI into daily workflows.

  • 90%
    AI-Assisted Workflow Efficiency

    Up to 90% faster task completion for document-based workflows, highlighting the impact of AI on productivity and decision-making processes.

  • 75%
    Trust-First AI Environment

    75% reduction in reliance on external AI tools, demonstrating increased trust in a secure, internal AI environment for handling sensitive data.

  • 65%
    Unified Knowledge Experience

    65% of active users engaged with advanced features such as Spaces and Apps, indicating successful adoption of a unified knowledge layer across internal sources.

Role and Recommendations:

The role began as Product Designer, responsible for shaping user experience and interface design, and expanded significantly due to the complexity and evolving nature of the product.

Through close collaboration with Product Owners and engineering teams, design became deeply embedded in product decision-making, from defining interaction models to clarifying feature logic and structuring user flows. As the product evolved, design outputs often served as a foundation for product direction, backlog definition, and user story development.

This resulted in a contribution that extended beyond design execution into product thinking, cross-functional alignment, and active participation in shaping the product, which is reflected in stakeholder recognition and recommendations.
  • Amy Jean Doherty | Global CIO at The World Bank
    "We appreciate your contributions and leadership!" (comment on a LinkedIn post)
  • Onika Vig | AI Solutions Product Owner at The World Bank
    "... Beyond her skills, Alice brings a positive, solution-oriented attitude that uplifts everyone she works with. I highly recommend Alice — she would be a tremendous asset to any organization looking for creativity, dedication, and impact. I love working with Alice ❤️"
Let's connect!
LinkedIn
Mail