Research

My research explores the future of how humans access, interact with, and make sense of information. At the heart of this work is a commitment to reimagining information retrieval systems—not as static tools for locating documents, but as intelligent, adaptive agents that understand context, reason over content, and engage users in meaningful interaction.

I work at the intersection of Information Retrieval (IR), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG). My goal is to develop systems that bridge the gap between unstructured data and human understanding—making search more conversational, adaptive, and personalized. I focus on building retrieval systems that are not just efficient but also intuitive and contextually aware, aligning their behavior with users’ evolving information needs.
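To make the RAG framing concrete, the core loop can be reduced to two steps: retrieve grounding passages, then prompt a language model with them. The sketch below is a toy illustration, not any specific system of mine; the retriever is deliberately simplified to term overlap, and the prompt would be handed to whatever LLM API is available.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by simple term overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_terms & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"
```

In a real system the term-overlap scorer would be replaced by a dense or learned-sparse retriever, but the retrieve-then-prompt structure stays the same.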

A major theme in my work involves advancing neural IR models that dynamically adapt document and query representations based on interaction context. Relevance is not fixed; it emerges from the interplay of user goals, query formulation, and document content. To capture this, I design models that adjust their retrieval strategy on the fly, enhancing both accuracy and user satisfaction.
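One simple way to picture context-dependent representations: blend the query embedding with an aggregate of recent-interaction embeddings, so the same query retrieves differently in different sessions. This is a minimal sketch under assumed plain-list vectors, not a description of my models.

```python
def contextualize_query(q_vec: list[float],
                        session_vecs: list[list[float]],
                        alpha: float = 0.3) -> list[float]:
    """Shift the query embedding toward the mean of recent interaction
    embeddings; alpha controls how strongly session context reweights it."""
    if not session_vecs:
        return q_vec
    dim = len(q_vec)
    mean = [sum(v[i] for v in session_vecs) / len(session_vecs) for i in range(dim)]
    return [(1 - alpha) * q_vec[i] + alpha * mean[i] for i in range(dim)]
```

Learned gating over `alpha`, or full cross-attention between query and session history, are the non-toy versions of the same idea.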

My research also investigates how LLMs can be leveraged to enhance retrieval performance. I explore how these models generate rich textual and semantic representations that help systems better understand entities, relationships, and complex multi-turn queries. I’m particularly excited by the potential of self-refining systems, where LLMs adaptively refine queries, rerank results, or even simulate expert-like behavior in response to feedback.
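The self-refining loop I have in mind can be sketched abstractly: search, score the best result, and if it falls short, ask an LLM to reformulate the query and try again. The `search`, `rewrite`, and `score` callables below are placeholders for a retriever, an LLM rewriting call, and a relevance estimator; none are real APIs.

```python
def self_refining_search(query, search, rewrite, score,
                         max_rounds: int = 3, threshold: float = 0.5):
    """Iteratively retrieve; if the top result scores below threshold,
    have the LLM (rewrite) reformulate the query and retry."""
    for _ in range(max_rounds):
        results = search(query)
        if results and score(query, results[0]) >= threshold:
            return query, results
        query = rewrite(query, results)  # LLM-driven reformulation
    return query, search(query)
```

The same skeleton covers reranking (put the LLM inside `score`) and clarification (have `rewrite` emit a question to the user instead of a new query).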

I am interested in applying reinforcement learning (RL) to train retrieval agents that can simulate or learn from user interaction. By using LLMs to model user behavior, these systems learn strategies for clarification, result refinement, and interactive retrieval—key for powering next-generation conversational search systems.
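As a toy version of this idea, consider an agent that must decide between answering immediately and asking a clarifying question, trained with a REINFORCE-style update against reward from a simulated user. This is an illustrative bandit sketch, far simpler than a full conversational agent.

```python
import math
import random

class ClarifyOrAnswerAgent:
    """Softmax-policy bandit over two actions, updated by policy gradient
    from (simulated) user reward."""

    def __init__(self, lr: float = 0.1):
        self.prefs = {"answer": 0.0, "clarify": 0.0}  # action preferences
        self.lr = lr

    def policy(self) -> dict:
        exps = {a: math.exp(v) for a, v in self.prefs.items()}
        z = sum(exps.values())
        return {a: e / z for a, e in exps.items()}

    def act(self) -> str:
        """Sample an action from the current softmax policy."""
        r, cum = random.random(), 0.0
        for a, p in self.policy().items():
            cum += p
            if r < cum:
                return a
        return a  # numerical fallback: last action

    def update(self, action: str, reward: float) -> None:
        """REINFORCE step: raise the log-probability of rewarded actions."""
        probs = self.policy()
        for a in self.prefs:
            grad = (1.0 if a == action else 0.0) - probs[a]
            self.prefs[a] += self.lr * reward * grad
```

Replacing the hand-coded reward with an LLM-simulated user is exactly where the LLM-as-user-model idea comes in.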

In addition to building systems, I’m deeply interested in the theoretical foundations of neural IR—especially in designing loss functions and training frameworks that go beyond traditional ranking metrics to optimize for contextual relevance, diversity, and user intent alignment.
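A standard starting point for such training objectives is a listwise loss like ListNet's top-one cross-entropy, which compares the softmax over model scores to the softmax over graded relevance labels; terms for diversity or intent coverage can be added on top. A minimal implementation:

```python
import math

def listnet_loss(scores: list[float], relevances: list[float]) -> float:
    """ListNet top-one loss: cross-entropy between the softmax over
    relevance labels (target) and the softmax over model scores."""
    def softmax(xs):
        m = max(xs)  # subtract max for numerical stability
        exps = [math.exp(x - m) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    p = softmax(relevances)  # target distribution from labels
    q = softmax(scores)      # model distribution
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the model's score distribution matches the label distribution, which is what makes it a natural base objective to extend with diversity or intent-alignment terms.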

Broadly, my vision is to create information access systems that don’t just retrieve—they understand, synthesize, and assist. By uniting LLMs, reinforcement learning, and core IR principles, I aim to build AI systems that make accessing knowledge more intelligent, more human-centered, and more impactful across domains like search, recommendation, education, and decision support.