
Research

My adventures in pushing the boundaries of technology through experiments.

Efficient Attention Mechanisms for Long-Context Transformers

Research Paper • 2024

Standard self-attention in transformers scales quadratically with sequence length, limiting their application to long documents. We survey recent approaches that achieve linear or near-linear complexity.

Deep Learning · Transformers · Research
Read Full Paper →
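One family of near-linear approaches replaces the softmax with a kernel feature map, so attention factors as φ(Q)(φ(K)ᵀV) and the (d × d) summary is computed once instead of the full (n × n) score matrix. A minimal sketch, assuming a simple positive feature map (the ReLU-based φ here is illustrative, not any specific paper's choice):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention: phi(Q) @ (phi(K).T @ V), O(n) in sequence length."""
    phi = lambda x: np.maximum(x, 0) + 1e-6   # illustrative positive feature map (assumption)
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                 # (d, d_v) summary, independent of sequence length n
    Z = Qf @ Kf.sum(axis=0)       # per-query normalizer, replaces the softmax denominator
    return (Qf @ KV) / Z[:, None]

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

The key point is that memory and compute grow with n · d² rather than n² · d, which is what makes long-document inputs tractable.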

Privacy-Preserving Techniques in Federated Learning

Research Paper • 2024

Federated Learning (FL) enables training machine learning models on distributed data without centralizing it. While this provides inherent privacy benefits, additional techniques are required to achieve formal privacy guarantees.

Federated Learning · Privacy · Machine Learning
Read Full Paper →
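A common building block for such guarantees is the Gaussian mechanism applied to client updates: each client clips its update to a bounded L2 norm and adds calibrated noise before the server averages. A minimal sketch under assumed parameters (the clip norm and noise scale here are placeholders, not a recommended privacy budget):

```python
import numpy as np

def dp_clip_and_noise(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a client update to L2 norm <= clip_norm, then add Gaussian noise
    scaled to the clip norm (DP-FedAvg-style sketch)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

# Server-side aggregation: average the noised client updates.
rng = np.random.default_rng(0)
client_updates = [rng.standard_normal(10) for _ in range(5)]
noised = [dp_clip_and_noise(u, rng=rng) for u in client_updates]
aggregate = np.mean(noised, axis=0)
print(aggregate.shape)  # (10,)
```

Clipping bounds each client's influence on the aggregate; the noise then masks any single client's contribution, and averaging over many clients keeps the signal usable.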

Exploration Strategies in Deep Reinforcement Learning

Research Paper • 2024

Effective exploration remains a fundamental challenge in reinforcement learning. We review classical and modern exploration strategies, analyzing their theoretical properties and empirical performance.

Reinforcement Learning · AI · Exploration
Read Full Paper →
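The simplest classical strategy in this space is epsilon-greedy: with probability ε take a uniformly random action, otherwise exploit the current value estimates. A minimal sketch (the Q-values and ε here are illustrative assumptions):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon pick a uniformly random action, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.5, 0.2])   # toy value estimates for 3 actions
actions = [epsilon_greedy(q, epsilon=0.1, rng=rng) for _ in range(1000)]
print(actions.count(1) / 1000)  # mostly the greedy action (index 1)
```

Its appeal is that it needs no model or uncertainty estimates; its weakness, which motivates the modern strategies surveyed, is that its exploration is undirected and ignores how uncertain each action's value actually is.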