Dr. Elisa Gagnon Obtains a FRQSC Research Support for New Academics Grant

Dr. Elisa Gagnon

Artificial Intelligence (AI) can be described as a branch of computer science that examines the processes of human intelligence with the goal of synthesizing intelligence and establishing ideal thinking and behavior. We interact with AI-based systems routinely: when you search for information on Google, follow a recommendation for a new show on Netflix, or ask Siri for tomorrow's weather forecast, you are doing so thanks to AI.

Nowadays, companies operate in increasingly complex environments and must continuously take action to improve their business processes and decision-making. One way to do so is to incorporate AI to improve the effectiveness of operations through automation (replacing human decision-making and actions with technology) and augmentation (supporting and improving human decision-making and actions with technology). Although genuinely intelligent machines have yet to be developed, the consensus is that the potential uses of AI go beyond impacting the nature of work and will likely change economic mechanisms and business models.

However, businesses face uncertainty about how to manage AI and its imperfections. When developing an AI algorithm, developers can make errors or unconsciously introduce prejudices, or the data used to train the algorithm can itself be biased; either situation can skew the algorithm's output. When this happens in an AI system so complicated that a human cannot understand how it arrived at its conclusions (known as black box AI), errors may go unnoticed.
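To make the idea concrete, here is a minimal sketch, using entirely hypothetical data and a toy scoring model, of how bias in training data can skew an algorithm. The "model" simply learns past approval rates per group; because the historical decisions under-represent approvals for group B, the learned scores inherit that skew even though no rule mentions the group explicitly.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, hired?).
# The data itself is biased against group B.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(data):
    """Learn the approval rate per group from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in data:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(training_data)
print(model)  # {'A': 0.75, 'B': 0.25} -- the bias in the data becomes the model
```

In a real system the model would be far more complex, which is precisely the black box problem: the skew is still there, but no human can read it off the model directly.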

Dr. Elisa Gagnon, Associate Professor at the Williams School of Business, was awarded an FRQSC Research Support for New Academics grant for the project Rethinking How Humans and Machines Make Sense Together. With this project, Dr. Gagnon aims to enhance our understanding of explainable AI. Explainable AI is the opposite of black box AI: it is transparent, and humans can understand how the AI arrived at its conclusions. "With explainable AI, humans and machines can work together to augment and enhance each other's capabilities," explains Dr. Gagnon. The study addresses a specific emerging challenge in the practice of AI: where should we draw the line between rejecting and embracing black box AI? The project should facilitate the development of guidelines and support the move toward the design, development, and practice of explainable AI.

The Fonds de recherche du Québec (FRQ) support and promote excellence in research and the training of the next generation of researchers. In June, the results of their 2022-2023 competitions were made available to the research community. The Research Support for New Academics program aims to support a new generation of university academics by helping early-career faculty establish themselves as independent researchers, become competitive nationally and internationally, and train the next generation of students.