Faculty of Arts & Science

Arts & Science News

U of T computer scientists develop solutions for faster computation

Assistant Professor Maryam Mehri Dehnavi and PhD student Kazem Cheshmi. Photo: Ryan Perez.

In the 2003 hit film The Matrix Reloaded, Neo, portrayed by Keanu Reeves, asks the character known as The Architect, “Why am I here?” The Architect answers: “Your life is the sum of a remainder of an unbalanced equation inherent to the programming of the matrix.”

Maryam Mehri Dehnavi, an assistant professor of computer science at the University of Toronto, is conducting research in numerical analysis, parallel computing and compilers – which might read like language taken from a screenplay by the Wachowskis, but the matrix and the algebra are real and necessary for today’s computation.

“A lot of our papers have matrix in the title because it’s all about the matrix,” says Dehnavi about high-performance computing. “You’re inside this loop of, ‘What kind of matrix is this? What kind of algebra am I doing? What is my architecture? What’s the parallel algorithm?’ A lot of the time, not all of them are known at the same time, to give you the performance you want. That’s the issue.”

Dehnavi and Kazem Cheshmi, a PhD student in computer science, build high-performance software for applications used in various domains, including computer graphics, robotics and machine learning.

“We figure out what all of these applications share in common, from the numerical methods to the optimization algorithms. We also understand the hardware that these applications are going to run on, and so we build a whole infrastructure that connects these two.”

Dehnavi and Cheshmi, along with Adobe Research’s Shoaib Kamil and Michelle Mills Strout, a professor of computer science at the University of Arizona, presented “ParSy: Inspection and Transformation of Sparse Matrix Computations for Parallelism” at Supercomputing, the international conference for high-performance computing, networking, storage and analysis, in Dallas this week.
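The kernels ParSy targets are irregular: the work in a sparse computation follows the matrix’s nonzero pattern rather than a fixed loop shape. As a rough illustration only (not the authors’ code), here is a sparse matrix-vector multiply over the standard compressed sparse row (CSR) format; the inner loop’s bounds come from the matrix itself, which is why such code is hard to parallelize without first inspecting the matrix.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form.

    values  -- nonzero entries, row by row
    col_idx -- column index of each nonzero
    row_ptr -- row_ptr[i]:row_ptr[i+1] spans row i's nonzeros
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # The amount of work per row depends on the sparsity pattern.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

An inspector-executor approach analyzes `row_ptr` and `col_idx` ahead of time to decide how rows can be scheduled across cores.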

The conference draws over 11,000 attendees and hundreds of companies and research centres from around the world. Dehnavi says software companies in particular are on the lookout for technical papers that outperform their software and make code run fast on their architectures.

Their “Sympiler” framework, which is available by open licence, was presented at Supercomputing last year. Cheshmi received the Association for Computing Machinery’s highest honour in the student research competition, and the Adobe Fellowship. Their latest paper, ParSy, extends this work to parallel machines.

One might think that a line of code or an algorithm simply works once written, but it has to be specialized as the underlying computer hardware changes. Dehnavi says the problem right now is that hardware is moving much faster than the software.

Many off-the-shelf, high-performance software packages exist, each developed for a particular machine and application. Not only is this variety confusing, but the software often does not deliver the speed-ups that modern big data applications demand. Domain-specific compilers attempt to resolve this issue by automating code optimization.

“They are like a human brain that makes these hard decisions for you, automatically,” says Dehnavi.

“Of course, this is step-by-step. We can’t write this brain for all the applications at once. We start with one numerical method, one domain, and gradually the long-term objective is to extend that to a general programming language.”
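One numerical method that illustrates the idea is sparse triangular solve, a building block of many simulations. The generic, unspecialized version below (a sketch for illustration, not output from the group’s compiler) does forward substitution on a lower-triangular CSR matrix; a domain-specific compiler can specialize this loop for a given nonzero pattern, skipping work that a general-purpose library must do at run time.

```python
def sparse_lower_solve(values, col_idx, row_ptr, b):
    """Solve L x = b for a lower-triangular matrix L in CSR form,
    assuming each row stores its diagonal entry last."""
    n = len(row_ptr) - 1
    x = b[:]
    for i in range(n):
        # Subtract contributions from previously solved unknowns.
        for k in range(row_ptr[i], row_ptr[i + 1] - 1):
            x[i] -= values[k] * x[col_idx[k]]
        x[i] /= values[row_ptr[i + 1] - 1]  # divide by the diagonal
    return x

# L = [[2, 0, 0],
#      [1, 3, 0],
#      [0, 4, 5]]
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 0, 1, 1, 2]
row_ptr = [0, 1, 3, 5]
print(sparse_lower_solve(values, col_idx, row_ptr, [2.0, 4.0, 13.0]))  # [1.0, 1.0, 1.8]
```

Because the dependence between rows is fixed by the sparsity pattern, a compiler that inspects the pattern once can generate a schedule that runs independent rows in parallel.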

Dehnavi says it’s all about getting the maximum performance from the applications people care about. In machine learning, she says, there are many big data applications; the challenge is running them on modern parallel computing platforms in a way that scales efficiently.

“The trend is going in the right direction – putting more investment into performance engineering, code, into writing parallel and optimized algorithms. More importantly, making these automatic systems that will do this for us instead of all this manpower spent on ‘how to write the fastest code’. We want to make this automatic.”

Dehnavi joined U of T from Rutgers University this summer. The one-time Montrealer who completed her PhD studies at McGill University says she is already feeling settled in Toronto.

“It’s amazing, I love the city, it’s very alive,” she says. “The people here at U of T are very collaborative – it’s a very open environment. And it’s very encouraging.”