Published by MIT Press. Hardcover ISBN: 9780262038829.
To support a precise discussion of interactive task learning, the problem setting in which teachers and learners interact in a shared world must be clearly defined and understood. This chapter provides a formalism to enable discussion of the different types of interactive learning: from teaching a robot to grasp a novel object, to instructing a mobile phone how to reach a friend in an emergency. It provides a way to speak precisely about notions such as shared knowledge between teachers and learners, presents working definitions of the internal structures of the agent, and describes the relationships between the task environment and the communication channel. It focuses on the problem of interactive task learning, not its solution, as a backdrop to further discourse in this volume.
What knowledge needs to be learned to acquire a novel task? What background knowledge does an agent need to use newly acquired knowledge effectively? This chapter considers the functional roles of knowledge in task learning. These roles of knowledge span interaction with other entities and the environment and core functional capabilities of the reasoning system itself (i.e., architecture). Perspectives are offered on the definition of “task” and the relationship between task and knowledge. In addition, three specific challenges central to the role of knowledge in interactive task learning (ITL) are examined: the identification of architectural primitives (basic functional and representational building blocks) needed for ITL, requirements for enabling shared understanding (“common ground”) between learner and instructor, and conditions that support projection and anticipation of future states. In conclusion, specific research questions are put forth to address these challenges and advance ITL as a field of inquiry.
Computational models offer a precise, quantitative way to represent the cognitive processes and representations involved when an agent interacts with another agent: from the receiving of instructions, to their interpretation, to the processes involved in learning to perform a task. This chapter discusses various forms of knowledge and skills involved in interactive task learning (ITL). It describes the components and processes in cognitive architectures relevant to ITL, organized around dichotomies of declarative knowledge and procedural skills, symbolic representations and subsymbolic statistics, as well as cognitive, perceptual, and motor processes. One specific cognitive architecture, ACT-R, serves to focus discussion. Using a model of interactive learning in decision making, it demonstrates how these components and processes interact. Representation, learning, and processing issues are discussed both in isolation as well as in the context of this integrated task learning model.
Learning and teaching are best viewed as a collaborative interaction. As participants, both the teacher and learner share the goal of increasing the learner’s abilities. Yet what does it mean to know how to do something? This chapter analyzes the abstract form, nature, and organization of task knowledge and illustrates these concepts using a shared task of tire rotation. It applies a hierarchical decomposition of knowledge for interactive task learning that involves three levels: domain knowledge, procedural knowledge, and metaknowledge. In addition, the traditional distinction between symbolic versus nonsymbolic task knowledge is noted. Representative examples are given, and open questions and unresolved problems are highlighted as suggested directions for future inquiry.
Interactive task learning requires knowledge and skills that are highly flexible and composable, and a cognitive architecture to support this. Cognitive architectures aim to bridge the gap between the brain and intelligence, providing a formal level of description for rigorous theories of behavior. Architectures typically operate on a single level of abstraction, but this may be too limited for interactive task learning. Instead, architectures with multiple levels of abstraction should be considered, each with their own formalisms and learning mechanisms. Each level should be able to explain the abstraction level above it, thus creating a reductionist hierarchy of theories to model human intelligence, not with a single formalism, but with several.
This chapter considers the qualities of human interaction and learning that will be most effective and natural to incorporate into any interactive task learning agent, focusing specifically on the interactions involved in learning from explicit instruction. At the center of this interaction is a process that brings the common ground between a teacher agent and a learner agent into alignment. Errors or misalignments in this common ground drive the interactive learning process. The importance of timing is highlighted, as is the role of the dynamics of an interaction, as a communication channel in itself, in this alignment process.
The design of an interactive robot should make crucial reference to the observed properties of human interaction. Obviously, human communicative interaction varies across languages and cultures, but remarkably uniform is the basic organization of interactive language use: participants take short turns at talking while avoiding overlap; they utilize a basic inventory of action–response pairs (e.g., question–answer), which can be recursively employed; they have systematic backup systems for communicative difficulties and deploy multimodal signals (speech, gesture, facial expression, gaze) to disambiguate or reinforce intended content. This chapter spells out these design properties and makes the point that human comprehension is fundamentally predictive, and has to be so to achieve the typically rapid response times despite the large latencies involved in generating speech. These properties may pose a substantial, even insuperable, hurdle for a fully humanoid interactive robot, but fortunately humans are excellent at adapting to interactants with restricted capabilities, such as children, foreigners, or aphasics.
This chapter focuses on the main challenges and research opportunities in enabling natural interaction to support interactive task learning. Interaction is an exchange of communicative actions between a teacher and a learner. Natural interaction is viewed as an interaction between a human and an agent that leverages ways in which humans naturally communicate and does not require prior expertise. The goal of communication is to achieve common ground and allow the learner to acquire new task knowledge. This chapter outlines the different types of knowledge that can be transferred between agents and discusses the perception, action, and coordination capabilities that enable teaching–learning interactions.
Studying the essence of interaction requires task environments in which changes may arise due to the nature of the environment or the actions of agents in that environment. In dynamic environments, the agent’s choice to do nothing does not stop the task environment from changing. Likewise, making a decision in such environments does not mean that the best decision, based on current information, will remain “best” as the task environment changes. This chapter summarizes work in progress that brings the tools of experimental psychology, machine learning, and advanced statistical analyses to bear on understanding the complexity of interactive performance in complex tasks involving single or multiple interactive agents in dynamic environments.
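The defining property of dynamic environments described above can be illustrated with a minimal simulation; the option values, drift rate, and number of options below are purely illustrative assumptions, not taken from the chapter:

```python
import random

class DynamicEnvironment:
    """Toy environment whose option values drift every tick,
    whether or not the agent acts. All parameters are illustrative."""

    def __init__(self, n_options=3, drift=0.5, seed=0):
        self.rng = random.Random(seed)
        self.values = [self.rng.uniform(0, 10) for _ in range(n_options)]
        self.drift = drift

    def tick(self):
        # The world changes on its own; "doing nothing" does not freeze it.
        self.values = [v + self.rng.gauss(0, self.drift) for v in self.values]

    def best_option(self):
        return max(range(len(self.values)), key=lambda i: self.values[i])

env = DynamicEnvironment()
chosen = env.best_option()   # best decision given current information
for _ in range(20):          # time passes while the agent deliberates or acts
    env.tick()
# The chosen option may or may not still be best after the environment drifts.
print(chosen, env.best_option())
```

The sketch makes the point concrete: the quality of a decision is evaluated against a moving target, so a decision that was optimal when made can be stale by the time it is executed.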
An early concept of interactive task learning (ITL) assumed a human teacher and machine learner. This book broadens the thinking about this relationship by explicitly allowing flexibility regarding the teacher and learner roles. Future ITL systems will be maximally useful and beneficial to the extent that they are effective and efficient learners as well as effective and efficient instructors. Focusing on task instruction, the primary goal of this chapter is to relate the critical role of instruction in ITL to key existing literature from related areas of research. The general concept of co-constructive task instruction is introduced and differentiated from traditional conceptualizations of fixed instructor and learner roles. Frameworks, models, and methods for task instruction are discussed, and broad connections are made between ITL and structural and adaptive improvements to instruction, historical developments in programming, and the extraordinary challenge that fluid, flexible, co-constructive task instruction and learning places on the vision for ITL.
People teaching an agent or robot might use the same methods that they use when tutoring a human student. Because teaching agents and robots is a central topic of this Ernst Strüngmann Forum, this chapter reviews research that characterizes human tutoring. Most of this research was done to improve the design of computer-based tutoring systems, which were assumed to be inferior to human tutors. However, it turns out that human tutors and a certain class of tutoring systems actually behave quite similarly, and their effectiveness is about the same. This chapter begins with a description of prototypical human tutoring behavior before discussing some common hypotheses about human tutoring behavior, which turn out to be unsupported by studies. It concludes with an attempt to synthesize these descriptions and apply them to the goals set forth at this Forum.
A strategy is a way of making the decisions that arise when handling a task. It requires a problem solver able to address routine cases and a set of diagnostics and repairs to handle, in a flexible way, unusual or unforeseen situations. Between humans, interactive task learning and teaching appear to involve strategies at three levels: (a) the execution of a task with available knowledge (task strategy), (b) interactive learning to expand the available knowledge and thus become a better problem solver in the future (learning strategy), and (c) interactive teaching or tutoring to help others learn (teaching strategy). This chapter examines the general architecture needed to build artificial agents that can play either the role of teacher, by carrying out teaching strategies, or the role of learner, by carrying out learning strategies that benefit from these teaching strategies. Focus is on artificial teachers that interact with human or artificial learners as well as on artificial learners that interact with human or artificial teachers. We argue that the use of a meta-layer is of primary importance for understanding and implementing strategies and point to operational examples from an implementation of this hypothesis in the domain of second-language teaching.
This chapter provides a historical perspective on the concept of creativity and its relationship to the development of education theory during the first half of the twentieth century. In the early twentieth century, creativity had a very specific meaning, which expanded in the mid- to late twentieth century into a more general, and in our view less useful, meaning. These two perspectives are linked to two conflicting educational theories, represented by Edward Lee Thorndike and John Dewey. Dewey described learning as a natural part of being an inquiring human being in a social and physical world, whereas Thorndike’s view was more reductionist, based on stimulus–response connections. Thorndike’s theory gained prominence over the Deweyan theory and still dominates today, due in part to the ease with which it can be experimentally tested.
Ideas are developed into a two-part manifesto to inform teaching practice and the development of education technology. The first part delineates the conditions for creative feedback in social learning and encapsulates a Deweyan educational approach. The second part describes the characteristics of education technology that can be used to experiment with creative feedback and social learning, and establishes how we can begin to validate experimentally the Deweyan theory of education.
Interactive task learning considers the challenge of interactively training bots to carry out a task. This chapter is most relevant to medium-term and future tasks for bots situated in a social context involving humans and bots, where evaluation criteria may be subjective or dynamic. Bot instructors working with these types of tasks may benefit from considering the complexity and nuances of creative feedback.
How does an agent acquire (i.e., learn) knowledge and information about a specific task by interacting with a teacher, so that ultimately the agent is able to execute the task successfully? This chapter reviews critical aspects of the learning process in interactive task learning (ITL). It discusses learning task knowledge through interaction, capabilities that facilitate learning, aspects of interaction that relate closely to learning, and evaluation dimensions and metrics for ITL systems. Given the interconnected nature of ITL, it also explores relationships between learning, knowledge, interaction, and tasks: how tasks influence learning, how knowledge should be represented, and what types of information and communication are needed to facilitate learning.
Humans are better than artificial computational systems at learning to do new tasks through interaction. Part of this ability stems from preexisting capabilities that appear early in human development. Children have internal physical models of how objects move and they attribute mental states (e.g., goals, beliefs) to objects when their behavior is unpredictable. They are also able to develop context-specific rules and identify how to help others achieve their goals. To explore how these abilities can be transferred to interactive task learning (ITL) systems, this chapter proposes a world-state prediction model. The prediction model can learn detailed physical regularities in the environment and is able to develop representations for predicting the actions and goals of animate agents. The model suggests that prediction and prediction error are capabilities that could improve ITL systems.
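The error-driven learning described above can be sketched as a minimal online predictor; the linear update rule, learning rate, and toy dynamics below are assumptions for illustration, not the chapter’s actual world-state prediction model:

```python
class WorldStatePredictor:
    """Minimal online predictor: learns a scalar environmental
    regularity by reducing prediction error. Illustrative only."""

    def __init__(self, lr=0.1):
        self.estimate = 0.0  # learned per-step change in the world state
        self.lr = lr

    def predict(self, state):
        return state + self.estimate

    def update(self, state, next_state):
        error = next_state - self.predict(state)  # prediction error
        self.estimate += self.lr * error          # error-driven update
        return error

# Toy regularity: the state always increases by 2.0 per step.
pred = WorldStatePredictor()
state = 0.0
for _ in range(100):
    next_state = state + 2.0
    pred.update(state, next_state)
    state = next_state
# pred.estimate has converged toward the true regularity (2.0).
```

The design point is that the same signal serves two roles: prediction error both improves the internal model and, when it remains large, flags that the observed entity is not behaving like a passive object and may need a different (e.g., goal-attributing) explanation.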
In most learning problems, a single type of task knowledge is learned using a single specialized learning algorithm designed and optimized for that specific type of knowledge and the environment in which it is learned. In contrast, interactive task learning (ITL) involves learning all types of task knowledge, where such specialization is impossible. This chapter describes these characteristics of the ITL learning problem, which distinguish it from other learning problems, and examines how they influence the underlying learning algorithms. Throughout the discussion, the Rosie agent serves as an example of an ITL agent that can learn many tasks in a variety of domains. The distinguishing characteristics explored include learning across different domains, learning diverse task knowledge, interactivity in learning, the situated aspects of learning, and how an ITL agent can exploit multiple data sources. Learning approaches that can be used in ITL are then discussed from the perspective of how they address its unique challenges.
As with all transformative technologies, humanity needs to analyze the ethical challenges and potential impacts associated with implementation. This chapter explores fundamental questions that pertain to interactive task learning (ITL): What is being taught and what are the associated risks? What are the dynamics of human–machine instruction? What effects will ITL have on human instructors and society? It explores the long-term impact that ITL could have on humans and human society, and discusses concerns relevant to both machine learning and ITL (e.g., how to ensure that machines will learn knowledge that they can put to good use, that they will serve humans well and not become deviant). Importantly, it stresses the unique aspects of ITL and proposes that the time to think about and take action on these concerns is now.