By Thanh Tung Khuat, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, thanhtung.khuat@uts.edu.au | David Jacob Kedziora, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, david.kedziora@uts.edu.au | Bogdan Gabrys, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, bogdan.gabrys@uts.edu.au
As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the ‘how’ and ‘why’ of human-computer interaction (HCI) within these frameworks, both current and expected. Such a discussion is necessary for optimal system design, where advanced data-processing capabilities are leveraged to support decision-making involving humans, but it is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. Within this context, we focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms, especially during the stages of development, deployment, and maintenance? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, we project existing literature in HCI into the space of AutoML; this connection has, to date, largely been unexplored. In so doing, we review topics including user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, we contemplate how AutoML may manifest in effectively open-ended environments. This discussion necessarily reviews projected developmental pathways for AutoML, such as the incorporation of high-level reasoning, although the focus remains on how and why HCI may occur in such a framework rather than on any implementation details. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
Recent years have seen an unprecedented level of technological uptake and engagement by the mainstream. From deepfakes for memes to recommendation systems for commerce, machine learning (ML) has become a regular fixture in society. However, this ongoing transition from purely academic confines to the general public is not smooth, as the public lacks the extensive expertise in data science required to fully exploit the capabilities of ML. Automated machine learning (AutoML) aims to lower this barrier by automating much of the ML workflow. As AutoML systems continue to progress in both sophistication and performance, it becomes important to understand the ‘how’ and ‘why’ of human-computer interaction (HCI) within these frameworks. This understanding is necessary for optimal system design and for leveraging advanced data-processing capabilities to support decision-making involving humans. It is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy.
In this monograph, the authors focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, the authors project existing literature in HCI into the space of AutoML and review topics such as user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, they contemplate how AutoML may manifest in effectively open-ended environments. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.