
Hi, my name is

Radha Gulhane.

Curious Mind loading...

My areas of focus include Distributed Deep Learning, Natural Language Processing, and High-Performance Computing. I graduated from Ohio State University with a major in Computer Science & Engineering, where my research centered on distributed deep learning.

About Me

Hello! I'm Radha, and I find great satisfaction in tackling intricate problems.

My focus is on distributed deep learning, which I have actively researched and contributed to as a graduate researcher at OSU's High-Performance Computing lab, NOWLAB.

At present, I work on the AI team at Zoom, focusing on multimodal reasoning and LLM training.

In my leisure time, I pursue my passion for portrait sketching and hiking.  

Here are a few technologies I’ve been working with recently:

  • Large Language Models
  • Distributed Deep Learning
  • Reinforcement Learning
  • Reward Modeling for RL

Where I’ve Worked

Senior AI Software Engineer
Zoom Communications

May 2024 - Present

  • Working on Reinforcement Learning to enhance the reasoning capabilities of Vision-Language Models (VLMs).
  • Implemented novel reward modeling using a hybrid reward mechanism with support for both sparse and dense rewards for VLMs.
  • Enabled and performance-tuned inference engines to accelerate data synthesis efforts.
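The hybrid reward mechanism mentioned above combines sparse and dense signals. Below is a minimal sketch of that idea; the function name, weights, and inputs are illustrative assumptions, not Zoom's implementation:

```python
def hybrid_reward(step_scores, answer_correct, w_dense=0.1, w_sparse=1.0):
    """Combine dense per-step shaping rewards with a sparse terminal reward.

    `step_scores` is a list of per-step quality scores in [0, 1] (e.g. format
    or grounding checks on intermediate reasoning); `answer_correct` is the
    sparse outcome signal. All names and weights are hypothetical.
    """
    # Dense term: averaged shaping signal over the trajectory.
    dense = w_dense * sum(step_scores) / max(len(step_scores), 1)
    # Sparse term: terminal reward for a correct final answer.
    sparse = w_sparse * (1.0 if answer_correct else 0.0)
    return dense + sparse
```

The dense term gives the policy gradient signal on every step, while the sparse term anchors training to the final outcome.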

Some Things I’ve Built

Other Noteworthy Projects

view the archive
  • Distributed Object Storage for CORTX: B+ tree

    Provided CRUD operations support with asynchronous transaction execution for CORTX's metadata storage. Additionally, assisted with the memory-limit feature and implemented various node formats for variable-sized objects with CRC (cyclic redundancy check) support for data recovery.

    • C
    • Distributed Systems
  • Data Parallelism: Distributed Deep Learning

    Implemented data parallelism for the ResNet model using Horovod and PyTorch Distributed. Also conducted performance evaluations for both weak and strong scaling by varying the number of nodes, with the objective of identifying the configuration that offers the best scalability and efficiency.

    • PyTorch
    • Python
    • Distributed Deep Learning
  • Pipeline Parallelism: Distributed Deep Learning

    Implemented pipeline parallelism using DeepSpeed for the AlexNet and VGG19 models. Also evaluated the performance trends of pipeline parallelism for both models across varying numbers of GPU nodes.

    • PyTorch
    • Python
    • Distributed Deep Learning
  • Machine Learning Algorithms for Clustering

    Applied clustering techniques to three distinct datasets (small_Xydf, large1_Xydf, large2_Xydf), evaluated the effectiveness of various clustering algorithms on each, and compared their performance to determine the most suitable algorithm.

    • Python
    • TensorFlow
  • Machine Learning Algorithms for Classification

    Implemented different classification models for the Hotel Booking dataset to predict which future reservations are at risk of cancellation.

    • Python
    • TensorFlow
  • Metadata Object Storage

    Object storage data structures are widely used in file systems and databases (for example, MongoDB indexes use a B-tree, and SQL databases use a B+ tree for object storage). This project includes implementations of B, B+, and B-epsilon trees.

    • C
    • C++
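To illustrate the B+ tree work in the metadata storage projects above: in a B+ tree all values live in leaves, point lookups use binary search within a node, and leaves are chained for range scans. A simplified leaf-level sketch (not the project's C code; the class and function names are illustrative):

```python
import bisect

class Leaf:
    """A B+ tree leaf node: sorted keys, parallel values, link to next leaf."""
    def __init__(self, keys, values, nxt=None):
        self.keys, self.values, self.next = keys, values, nxt

def search(leaf, key):
    """Point lookup inside a leaf via binary search."""
    i = bisect.bisect_left(leaf.keys, key)
    if i < len(leaf.keys) and leaf.keys[i] == key:
        return leaf.values[i]
    return None

def range_scan(leaf, lo, hi):
    """Collect (key, value) pairs in [lo, hi) by following the leaf chain,
    the access pattern that makes B+ trees efficient for range queries."""
    out = []
    while leaf is not None:
        for k, v in zip(leaf.keys, leaf.values):
            if lo <= k < hi:
                out.append((k, v))
        if leaf.keys and leaf.keys[-1] >= hi:
            break  # remaining leaves hold only larger keys
        leaf = leaf.next
    return out
```

A real implementation adds interior nodes for routing, node splits on insert, and (as in the CORTX work) per-node checksums; this sketch only shows the leaf-level search and scan.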
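The data-parallelism project above hinges on one operation: each worker computes gradients on its shard of the batch, then the gradients are allreduce-averaged. A minimal single-process NumPy sketch of that step, assuming a toy linear model (this stands in for what Horovod/PyTorch Distributed do across real nodes):

```python
import numpy as np

def grad(w, X, y):
    """MSE gradient for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, workers=4, lr=0.1):
    """One training step: shard the batch across workers, compute per-worker
    gradients, then average them (the allreduce in Horovod/PyTorch DDP)."""
    shards = zip(np.array_split(X, workers), np.array_split(y, workers))
    grads = [grad(w, Xi, yi) for Xi, yi in shards]
    g = np.mean(grads, axis=0)  # allreduce-average stand-in
    return w - lr * g
```

With equal shard sizes the averaged gradient equals the full-batch gradient, which is why data parallelism preserves the single-node training trajectory while splitting the compute.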
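For the pipeline-parallelism project above, the key performance trend is the pipeline "bubble": stages sit idle while the pipeline fills and drains, and more micro-batches shrink that idle fraction. A small sketch of the GPipe-style schedule arithmetic (illustrative only, not DeepSpeed's scheduler):

```python
def pipeline_schedule(num_stages, num_microbatches):
    """GPipe-style forward schedule: stage s handles micro-batch m at tick
    s + m. Returns (total_ticks, bubble_fraction)."""
    total = num_stages + num_microbatches - 1      # ticks to drain the pipe
    busy = num_stages * num_microbatches           # useful stage-ticks
    bubble = num_stages * total - busy             # idle stage-ticks
    return total, bubble / (num_stages * total)
```

For example, 4 stages with 8 micro-batches finish in 11 ticks with a bubble fraction of 3/11, and the fraction shrinks as the micro-batch count grows, which matches the scaling trends such evaluations typically measure.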

Education

Master of Science
Computer Science & Engineering

Ohio State University

August 2022 - May 2024

What’s Next?

Get In Touch

If you'd like to chat or are keen to dive into my experience, recent work, and interests, feel free to drop me an email.