Image by Adrien Olichon

About Me

Deep learning researcher (PhD) working on representation learning and multimodality with transformers and other neural network architectures. Having trained in both the theoretical and computational domains, I am interested in the building blocks and fundamental aspects of deep learning, such as architecture design, backpropagation, compression, and reasoning.

I have published in relevant conferences and workshops (e.g., the NeurIPS spatiotemporal workshop, the European Conference on Information Retrieval, and the RecSys '23 fashion workshop), and I am an open-source contributor to TensorFlow, MXNet, and Hugging Face Transformers.

I am drawn to interesting, open problems in (deep) neural network architectures.

My PhD research centered on maximum likelihood estimation and the Kullback-Leibler divergence. Before my MSc and PhD in probability and statistics at Virginia Tech, I studied computer science as an undergraduate (with a focus on machine learning) and in high school, and I was a silver medalist in the mathematics and physics olympiads in Romania.

Lives and works in New York City.

Google Scholar

DBLP

GitHub 

LinkedIn

Work highlights on X (Twitter)


