I'm a final-year CS undergrad working on the failure modes of long-context retrieval systems. This page is a working index of my publications, posters, and ongoing thesis. Last updated 2 March 2026.
I work at the intersection of long-context retrieval and small-data benchmarks. My undergraduate thesis investigates how retrieval-augmented generation degrades on inputs longer than 128K tokens, and proposes a small set of evaluation tasks that surface degradation earlier than the loss curve does.
Before this, I worked on classical RAG benchmarks at IIT Madras's SocAI lab. My thesis work was supported in part by a summer fellowship at Mila (Quebec AI Institute) in Yoshua Bengio's group, where I worked on the agentic memory thread.
I am applying for PhD programs in machine learning starting Fall 2026.
Selected work, in reverse chronological order.
Worked on the agentic memory thread under Sasha Rush. Outcome: a workshop paper at NeurIPS 2024.
Working on long-context evaluation. Led the design of a 12-task benchmark, currently under review at EMNLP.
Worked on Indian-language retrieval. Co-authored a workshop paper at WiML.
I'm happy to chat with potential advisors, hiring researchers, or anyone working on retrieval or agentic systems. The fastest way to reach me is email; I read everything within a working day.