The Metacognitive Demands and Opportunities of Generative AI
Presented by Lev Tankelevitch at Microsoft Research Forum, Episode 2
Lev Tankelevitch explored how metacognition, the psychological capacity to monitor and regulate one’s own cognitive processes, offers a valuable lens for understanding and addressing the usability challenges of generative AI systems: crafting prompts, evaluating and relying on outputs, and optimizing workflows.
Transcript
LEV TANKELEVITCH: My name is Lev. I’m a researcher in the Collaborative Intelligence team at Microsoft Research Cambridge, UK, and today I’ll be talking about what we’re calling the metacognitive demands and opportunities of generative AI. So we know that AI has tremendous potential to transform personal and professional work. But as we show in our recent paper, many usability challenges remain, from crafting the right prompts to evaluating and relying on outputs to integrating AI into our daily workflows. And what we propose is that metacognition offers a powerful framework to understand and design for these usability challenges.
So metacognition is thinking about thinking and includes things like self-awareness, so our ability to be aware of our own goals, knowledge, abilities, and strategies; our confidence and its adjustment, so this is our ability to maintain an appropriate level of confidence in our knowledge and abilities and adjust that as new information comes in; task decomposition, our ability to take a cognitive task or goal and break it down into subtasks and address them in turn; and metacognitive flexibility, so our ability to recognize when a cognitive strategy isn’t working and adapt it accordingly. Let me walk you through a simple example workflow.
So let’s say you’ve decided to ask an AI system to help you craft an email. In the beginning, you might have to craft a prompt. And so you might ask yourself, what am I trying to convey with this email? Perhaps I need to summarize x, clarify y, or conclude z, all in the correct tone. You might then get an output and need to evaluate it. And then you might ask yourself, well, how can I make sense of this output? In the email example, it’s pretty straightforward. But what if you’re working with a programming language that you’re less familiar with? You might then need to iterate on your prompt. And so then you might ask yourself, well, how much does the need to iterate reflect my ability to craft the right prompt versus the system’s performance in a given task or domain?
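This prompt, evaluate, iterate loop can be sketched in code. The functions below are illustrative stand-ins for the workflow described above, not a real AI API; `generate` simply echoes the prompt so the loop can run end to end.

```python
def generate(prompt: str) -> str:
    """Stand-in for an AI system's response to a prompt (hypothetical)."""
    return f"Draft based on: {prompt}"

def evaluate(output: str, goals: list[str]) -> list[str]:
    """Output evaluation: return the goals the draft does not yet address."""
    return [g for g in goals if g.lower() not in output.lower()]

def refine(prompt: str, missing: list[str]) -> str:
    """Prompt iteration: fold unmet goals back into the prompt."""
    return prompt + " Also address: " + ", ".join(missing)

def draft_email(goals: list[str], max_rounds: int = 3) -> str:
    """Run the prompt -> evaluate -> iterate loop until goals are met
    or the iteration budget runs out."""
    prompt = "Write an email that covers: " + ", ".join(goals)
    output = generate(prompt)
    for _ in range(max_rounds):
        missing = evaluate(output, goals)
        if not missing:
            break
        prompt = refine(prompt, missing)
        output = generate(prompt)
    return output
```

The point of the sketch is the shape of the loop, not the string matching: each pass through `evaluate` is where the metacognitive question lands, since deciding whether a gap is the prompt’s fault or the system’s is exactly the disentangling the talk describes.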
And now if you zoom out a little bit, there are these questions around what we’re calling automation strategy. So this is whether, when, and how you can apply AI to your workflows. Here you might ask yourself, is trying generative AI worth my time versus doing a task manually? And how confident am I that I can actually complete a task manually, or learn to use AI effectively to help me do it? And then if I do decide to rely on AI in my workflows, how do I actually integrate it most effectively? And so what we’re proposing is that all these questions really reflect the metacognitive demands that generative AI systems impose on users as they interact with these systems. So, for example, the prompt formulation stage involves self-awareness of task goals: knowing exactly what you want to achieve, breaking that down into subgoals and subtasks, and then verbalizing that explicitly in an effective prompt. The output evaluation stage involves well-adjusted confidence in your ability to actually evaluate that output. And so that means disentangling your confidence in the domain you’re working with from the system’s performance in that task or domain.
The prompt iteration stage involves well-adjusted confidence in your prompting ability, which is about disentangling your ability to craft an effective prompt from the system’s performance in that task or domain, and metacognitive flexibility, which is about recognizing when a prompting strategy isn’t working and then adjusting it accordingly. At the automation strategy level, this is about self-awareness of the applicability and impact of AI on your workflows, and well-adjusted confidence in your ability to complete a task manually or to learn generative AI systems effectively enough to help you do it. And then finally, it requires metacognitive flexibility in recognizing when your workflow with AI isn’t working effectively and adapting it accordingly.
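The “is trying generative AI worth my time” question can be framed as a rough expected-time comparison. The model below is an illustrative simplification, not from the talk; the parameter names and the fallback-to-manual assumption are mine.

```python
def worth_automating(manual_minutes: float, prompt_minutes: float,
                     review_minutes: float, success_prob: float) -> bool:
    """Compare expected time with AI against doing the task by hand.

    With AI you pay for prompting and reviewing, and fall back to manual
    work when the output fails review. A well-adjusted success_prob is
    exactly the calibrated confidence the talk describes.
    """
    expected_ai = (prompt_minutes + review_minutes
                   + (1.0 - success_prob) * manual_minutes)
    return expected_ai < manual_minutes
```

For a one-hour task with a few minutes of prompting and review and high confidence in the output, automation wins; for a ten-minute task, the fixed prompting and review cost can already exceed doing it by hand, which is the trade-off the automation-strategy questions are probing.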
So beyond reframing these usability challenges through the perspective of metacognition, we know from psychology research that metacognition is both measurable and teachable. And so we can now think about how we can design systems that actually support people’s metacognition as they interact with them. So, for example, you can imagine systems that support people in planning complex tasks. So let’s say you’ve decided to ask an AI system to help you craft an email. It might actually break that task down for you and remind you that certain types of content are more common in such emails and actually proactively prompt you to fill that content in. It might also make you aware of the fact that there’s a certain tone or length that you might want to have for this email. And so in this way, it, sort of, breaks the task down for you and actually improves your self-awareness about different aspects of your task.
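A minimal sketch of such proactive task decomposition, assuming a hand-written checklist of email subgoals; the checklist contents and function name are hypothetical, not from the talk.

```python
# Hypothetical checklist of subgoals a system might know are common in emails.
EMAIL_SUBGOALS = {
    "recipient and purpose": "Who is this for, and what should they do after reading?",
    "key content": "What must be summarized, clarified, or concluded?",
    "tone": "Formal, friendly, or neutral?",
    "length": "A few sentences or several paragraphs?",
}

def decompose_task(provided: dict[str, str]) -> list[str]:
    """Return proactive questions for subgoals the user has not yet
    specified, nudging self-awareness of the task's structure."""
    return [question for subgoal, question in EMAIL_SUBGOALS.items()
            if subgoal not in provided]
```

Here the system does the breaking-down: given only a tone, it would prompt the user for the remaining three subgoals rather than waiting for an underspecified prompt.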
Similarly, we can imagine systems that support people in reflecting on their own cognition. So let’s say you’ve asked the system to help you craft a proposal based on a previous document. A smart system that knows you’ve had to edit its outputs extensively in the past might let you know that you should specify an outline or other details, and provide you with examples so that you can save time later on. Similarly, at the output evaluation stage, you can imagine how such an approach can augment AI explanations. For example, work by the Calc Intelligence team here at Microsoft Research shows a system that helps users complete tasks in spreadsheets and provides a step-by-step breakdown of the approach it took to complete each task. You can imagine a system that proactively probes users about different steps and their uncertainty around those steps, and then tailors its explanations to that user’s uncertainty.
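One way to sketch tailoring an explanation to a user’s probed uncertainty: expand only the steps the user reports being unsure about. The step names, the 0–1 uncertainty scores, and the 0.5 threshold are illustrative assumptions, not details from the Calc Intelligence work.

```python
def tailor_explanation(steps: list[tuple[str, str]],
                       uncertainty: dict[str, float]) -> str:
    """Render a step-by-step breakdown, expanding only the steps the
    user has flagged as uncertain (score >= 0.5); confident steps get
    a terse one-line summary."""
    lines = []
    for name, detail in steps:
        if uncertainty.get(name, 0.0) >= 0.5:
            lines.append(f"{name}: {detail}")  # full explanation
        else:
            lines.append(name)                 # terse summary
    return "\n".join(lines)
```

The design choice is that explanation length tracks the user’s self-reported uncertainty rather than the system’s, which is the metacognitive probing the talk proposes.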
So in sum, we believe that a metacognitive perspective can really help us analyze, measure, and evaluate the usability challenges of generative AI. And it can help us design generative AI systems that can augment human agency and workflows. For more details, I encourage you to check out the full paper, and I thank you for your time.
Lev Tankelevitch
Senior Researcher