Understand the fundamental concept of how Large Language Models work, enabling you to interact with AI assistants more effectively.
Estimated time: 30-40 minutes total.
AI chat assistants work through what's called "autoregression": they predict which words should come next based on the words they have already seen and produced. These predictions are shaped by the vast amounts of text the models saw in their training data and by how the training process taught them to model language.
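To make the idea concrete, here is a minimal sketch of autoregressive generation, assuming the open-source Hugging Face transformers library, PyTorch, and the small gpt2 model are available; real assistants are far larger and sample more cleverly, but the basic loop is the same: predict one token, append it, repeat.

```python
# Minimal sketch of autoregression, assuming the Hugging Face "transformers"
# library and the small open "gpt2" model (not any commercial assistant).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A bird in the hand"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate one token at a time: each new token is predicted from everything
# the model has seen (the prompt) plus everything it has already produced.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits        # scores for every possible next token
    next_id = logits[0, -1].argmax()            # greedy choice: the single most likely token
    input_ids = torch.cat([input_ids, next_id.reshape(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```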
Key Terms Worth Knowing:
These concepts matter because:
Even though generative AI tools are programs, it can sometimes help to treat working with them like having a conversation with a knowledgeable colleague or friend who:
Unlike traditional software with rigid commands and interfaces, AI assistants can understand and respond to natural language. You can:
Let's see how understanding these concepts helps us interact more effectively. Here's how a conversation might progress from basic to expert level.
Try these variations of a simple phrase completion with any AI assistant. Notice how each additional instruction or context changes the response:
Basic Phrase Completion: "Complete this phrase: A bird in the hand..."
Adding Context: "I'm teaching a class about criminal procedure in law. Complete this phrase and explain how it applies: A bird in the hand..."
Expert Variation: "I'm writing an academic paper about cognitive biases in behavioral economics. Reframe this common phrase 'A bird in the hand...' using modern financial terminology while preserving its core meaning."
Just having fun: "Complete this phrase: A bird in the hand... but in the voice of a Neanderthal"
Notice how:
Watch how the AI adapts to increasing levels of expertise. Here's an example using cognitive psychology:
Basic Query: "What is cognitive load theory?"
Domain-Informed Query: "How does cognitive load theory relate to multimedia learning design? I'm particularly interested in Sweller's work on split-attention effects."
Expert-Level Query: "I'm conducting research on cognitive load effects in AR-based medical training. Looking to examine the interaction between element interactivity and germane load using a dual-task methodology. Need to account for Paas's nine-point mental effort scale in conjunction with NASA-TLX metrics. Particularly interested in your thoughts on controlling for expertise reversal effects in my methodology."
[Additionally, the user has uploaded relevant research papers and literature reviews about cognitive load theory, AR training methodologies, and measurement techniques.]
Notice how each level:
Explore how the AI uses conversation history and the limitations of context windows:
Large Language Models (LLMs) operate within fixed token limits, which define the "context window": the amount of text they can "see" and process at any given time.
Example: Provide a lengthy passage from a legal document and ask the AI to summarize it. Then add more, unrelated context, ask the AI to reference the original document, and note whether any details are lost.
Understanding token limits helps tailor your interactions for better, more accurate results.
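As a rough illustration, here is a small sketch of counting tokens with the open-source tiktoken tokenizer (an assumption; different models use different tokenizers and different limits). It shows that tokens are not the same thing as words:

```python
# Minimal token-counting sketch, assuming the "tiktoken" library is installed.
# Different models use different tokenizers and different context-window sizes.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "A bird in the hand is worth two in the bush."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens for {len(text.split())} words")
# Everything in the conversation -- your prompts, pasted documents, and the
# model's own replies -- consumes tokens from the same fixed context window,
# so very long conversations can push early details out of view.
```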
Understanding when to be explicit and when to let the AI fill in gaps is crucial:
When to Be Explicit:
When to Let AI Fill Gaps:
Example Conversation Flow:
You: "I need help writing an email."
AI: [Asks for basic context about the email.]
You: "It's to request time off from work."
AI: [Provides a general professional template.]
You: "Great, but I need to include specific dates and mention my project handover plan."
AI: [Incorporates your specific requirements while maintaining the professional tone.]
This natural back-and-forth produces the best results: you supply the key information and requirements, while letting the AI contribute its capabilities where appropriate.
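Under the hood, most chat interfaces resend the accumulated conversation on every turn. Here is a hypothetical sketch of how the email exchange above might be represented as a growing list of messages; the role/content structure is an assumption modeled on common chat APIs, not any specific vendor's exact format.

```python
# Hypothetical sketch: how a chat interface might accumulate conversation
# history. The role/content structure mirrors common chat APIs but is an
# assumption, not any particular product's exact format.
conversation = [
    {"role": "user", "content": "I need help writing an email."},
    {"role": "assistant", "content": "Sure! Who is it for, and what is it about?"},
    {"role": "user", "content": "It's to request time off from work."},
    {"role": "assistant", "content": "Here is a general professional template..."},
    {"role": "user", "content": "Great, but I need to include specific dates "
                                "and mention my project handover plan."},
]

# On each turn the entire history is sent back to the model, which is why it
# can "remember" earlier details -- and why long conversations eventually run
# up against the context window described above.
for message in conversation:
    print(f"{message['role']:>9}: {message['content']}")
```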
Now that you understand this concept, you can:
Create three different versions of the same request:
Compare the responses and note the differences in quality and relevance.