Research

My current research focuses on emerging floating-point formats. In particular, I am developing a hardware platform based on RISC-V CPUs to explore posit arithmetic in domain-specific applications.

Exploring new arithmetics is one way to optimize the execution of specialized compute-intensive applications. This is the case in machine learning, for example with the bfloat16 format used in TPUs, and in scientific computing.

I am researching hardware implementations of posit arithmetic, a promising alternative to the IEEE 754 floats and doubles we are all used to, proposed in 2017. Posits offer a good trade-off between dynamic range and accuracy, and have only two exceptional values: $0$ and $\pm\infty$. They also feature tapered precision, which means that numbers near $\pm 1$ are represented more accurately, while very large and very small numbers have less accuracy.
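To make the format concrete, here is a minimal sketch of a posit decoder in Python, following the classic formulation (sign bit, regime run, $es$ exponent bits, fraction, with $useed = 2^{2^{es}}$). This is an illustrative decoder I wrote for this page, not code from my hardware platform.

```python
def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):          # the single "Not-a-Real" pattern
        return float("inf")
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (-bits) & mask          # negatives use two's complement
    body = format(bits & ((1 << (n - 1)) - 1), f"0{n - 1}b")
    # Regime: run length of identical leading bits encodes the scale k
    run = len(body) - len(body.lstrip(body[0]))
    k = run - 1 if body[0] == "1" else -run
    rest = body[run + 1:]              # skip the regime-terminating bit
    e = int(rest[:es].ljust(es, "0"), 2) if es else 0
    f_bits = rest[es:]
    f = int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2 ** e) * (1.0 + f)
```

For an 8-bit posit with $es = 0$, the pattern `0b01000000` decodes to $1.0$ and the largest magnitude `0x7F` decodes to $64.0$, which shows the tapered precision: the representable values thin out away from $\pm 1$.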

You can find all the details in my publications.