U of T computer science professor named NSERC Industrial Research Chair in Machine Learning
University of Toronto professor of computer science Richard Zemel has been named NSERC Industrial Research Chair (IRC) in Machine Learning, effective August 1, 2018, for a renewable five-year term.
The NSERC IRC program assists universities in building on existing strengths and furthering research efforts in fields that have not yet been developed. The program also ensures an enhanced training environment for graduate students and postdoctoral fellows by exposing them to research challenges unique to industry.
The new chair, in partnership with Google, will enhance collaborations between the company and the Department of Computer Science in the Faculty of Arts & Science and its machine learning research group.
Zemel, who is also a co-founder and the research director of the Vector Institute for Artificial Intelligence as well as a senior fellow with the Learning in Machines & Brains research program at the Canadian Institute for Advanced Research, is examining how machine learning can be made to be more expressive, controllable and fair.
“It’s fairly recent that the successful approaches working in real-world applications are big, deep, and opaque neural networks,” says Zemel. “The idea is to try and open the box a bit.”
He explains that in addition to trying to understand this “box”, it would also be useful to add some control – an input an external user provides to adjust the model’s output.
“In machine learning we want to learn in a particular way, or shape the learning,” says Zemel. “Let’s say we want to predict your taste in movies. But not for all movies – we may want to specialize to your taste in horror, or romantic comedies. We may want to develop a rich representation of an individual, covering their various tastes and moods.”
“Your phone will know you better than your friends do,” Zemel predicts.
Fairness is another important area that must be addressed, he says, noting that learning based on historical data, such as whether a bank loan was approved or the length of prison sentences, will naturally pick up on biases.
“In many cases we want a decision-making system to be accurate with respect to the historical data, but not completely. We also want [algorithms] to be fair. A challenge is to define what it means to be fair, and develop a system that can learn to be both accurate and fair.”
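The tension Zemel describes – staying accurate on historical data while satisfying a chosen definition of fairness – can be made concrete in code. The sketch below is purely illustrative and is not Zemel's actual method: it trains a logistic-regression classifier on synthetic data with an added "demographic parity" penalty, one simple formalization of fairness that discourages different positive-prediction rates across two groups. All names, data, and the penalty weight are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not Zemel's method): logistic regression trained on
# synthetic "historical" data, with an optional demographic-parity penalty
# that discourages differing prediction rates between two groups.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, a, fairness_weight, lr=0.1, steps=2000):
    """Gradient descent on average logistic loss + fairness_weight * |parity gap|."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Gradient of the average logistic loss
        grad = X.T @ (p - y) / len(y)
        # Demographic-parity gap: difference in mean predicted score by group
        gap = p[a == 1].mean() - p[a == 0].mean()
        # Gradient of the gap (derivative of sigmoid is p * (1 - p))
        dgap = (X[a == 1].T @ (p * (1 - p))[a == 1] / (a == 1).sum()
                - X[a == 0].T @ (p * (1 - p))[a == 0] / (a == 0).sum())
        grad += fairness_weight * np.sign(gap) * dgap
        w -= lr * grad
    return w

# Synthetic historical data in which group membership correlates with outcome,
# mimicking the kind of bias a model can pick up from past decisions.
n = 2000
a = rng.integers(0, 2, n)                    # group attribute (0 or 1)
X = np.column_stack([rng.normal(a, 1.0, n),  # feature correlated with group
                     rng.normal(0.0, 1.0, n),
                     np.ones(n)])            # bias term
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0.5).astype(float)

def parity_gap(w):
    pred = sigmoid(X @ w) > 0.5
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

w_plain = train(X, y, a=a, fairness_weight=0.0)
w_fair = train(X, y, a=a, fairness_weight=2.0)
print(f"parity gap, unconstrained: {parity_gap(w_plain):.3f}")
print(f"parity gap, with penalty:  {parity_gap(w_fair):.3f}")
```

Turning up the penalty weight shrinks the gap between groups at some cost in accuracy on the biased historical labels – the trade-off Zemel describes. Defining which notion of fairness to encode (parity is only one of several, and they can conflict) is itself part of the research challenge.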