prep
Mental models for logical reasoning
Introduction
Welcome to Logic. For this module, we will get out of VSCode and build mental models we can use anywhere to reason more effectively.
You will need
Notebook and pen
Your wonderful brain
And you will still need the curriculum and Google; that’s not banned, don’t worry.
Learning by teaching
In this prep you will build a series of mental models necessary for logical reasoning. You likely already know some of these pieces. We will start to build each model by playing a game. Pay attention, because…
In class this week you will be teaching something. You will be explaining one of these mental models. You will not use a computer to explain this, but something else. This could be a drawing, a game, a conversation, or anything you like that will help you communicate the concept, except a computer!
Mental models
- Deduction: Reasoning from general rules to a specific conclusion that is definitely true
- Induction: Reasoning from specific examples to form general patterns that are probably true
- Abduction: Reasoning to the best explanation for all the evidence we observe
- Falsification: Testing a theory by trying to prove it wrong
- Problem Domain: Identifying the bounded space that contains all possible solutions to a problem
- Bisection: Reasoning by repeatedly halving a problem space until only one possibility remains
- Boolean Logic: Reasoning with only two possible states (true or false)
Deduction
Learning Objectives
Deduction is reasoning from general rules to a specific conclusion that is definitely true
In Murdle, we use deduction to solve murders. Given general rules about the crime scene and specific clues, we can reason our way to the only possible culprit:
Given the body was found in the kitchen
And only Miss Saffron had been in the kitchen
Then Miss Saffron must be the murderer
This is deduction: starting with general rules and arriving at a specific conclusion that must be true. Unlike guessing or inferring patterns, deduction gives us certainty. If our premises are true, our conclusion must be true.
In Murdle, every puzzle can be solved through pure deduction. There’s no need to guess. The clues and rules will lead you to a single possible murderer.
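This style of reasoning can even be written as a tiny program. Below is a minimal sketch; the suspects and sightings are invented for illustration, not taken from a real Murdle puzzle.

```python
# Deduction: apply a general rule to specific facts to reach a
# conclusion that MUST be true (if the premises are true).

# Specific clues (premises) -- invented for this example
body_location = "kitchen"
seen_in = {
    "Miss Saffron": "kitchen",
    "Lord Umber": "library",
    "Chef Cerise": "garden",
}

# General rule: the murderer must have been where the body was found.
candidates = {s for s, place in seen_in.items() if place == body_location}

# With exactly one candidate left, the conclusion is certain.
assert candidates == {"Miss Saffron"}
print(f"The murderer must be {candidates.pop()}")
```

Notice there is no guessing here: the conclusion follows necessarily from the premises.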
Go play Murdle.

Murdle
A deductive logic puzzle game
Induction
Learning Objectives
Induction is reasoning from specific examples to form general patterns that are probably true
In Sushi Go, we use induction to build winning strategies. By observing specific outcomes across multiple hands, we form general theories about what works. For example:
Given collecting 3 tempura scored 10 points
And collecting 2 tempura scored 5 points
And collecting 1 tempura scored 0 points
Then tempura probably works best in pairs
Unlike deduction which gives certainty, induction helps us form educated guesses about patterns. The more examples we see, the more confident we can be in our general conclusions - but we can never be 100% certain.
In Sushi Go, every game teaches us something new about card combinations, timing, and player behavior. Through repeated play, we inductively learn strategies like:
- Watching what others collect helps predict what cards will come around
- Early puddings often pay off in the final round
- Chopsticks are most valuable when saved for high-scoring combinations
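We can sketch this process in code: gather specific observations, propose a general theory, and check how well it fits. The numbers come from the tempura example above; the theory function is our own hypothesis, not an official rule.

```python
# Induction: from specific examples, form a general pattern that is
# PROBABLY true -- we can check fit, but never reach certainty.

observations = [(1, 0), (2, 5), (3, 10)]  # (tempura collected, points scored)

def pairs_theory(tempura):
    """Hypothesis: tempura scores 5 points per completed pair."""
    return (tempura // 2) * 5

# How many observations does the theory explain?
matches = sum(1 for count, points in observations
              if pairs_theory(count) == points)
print(f"Theory fits {matches} of {len(observations)} observations")
```

The theory fits most, but not all, of the hands we observed; perhaps another card affected one score. More play would raise (or shake) our confidence, but induction never gives us proof.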
Play a few rounds of Sushi Go and practice inductive reasoning. Try to:
- Notice specific scoring patterns
- Look for recurring situations
- Form general theories about good strategies

Sushi Go
Pattern finding
Abduction
Learning Objectives
Abduction is reasoning to the best explanation for all the evidence we observe
In Sherlock Holmes: Consulting Detective, we think like detectives. Each case presents us with mysterious evidence that needs explaining. Unlike deduction which proves only what must be true, or induction which only finds patterns that are probably true, abduction seeks the most complete explanation.
Given a woman was found dead in her apartment
And her jewelry was untouched
And there were no signs of forced entry
Then the killer likely knew the victim (but we can’t be certain)
Each lead we follow adds new evidence. A witness statement might support our theory, contradict it, or suggest a completely different explanation. We must:
- Keep track of all evidence
- Form multiple possible theories
- Test each theory against all the evidence
- Choose the explanation that best fits everything we know
- Be ready to revise our theory when new evidence appears
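The steps above can be sketched as a small program: score each theory by how much of the evidence it accounts for, and pick the best fit. The evidence items and theories are invented to mirror the example above.

```python
# Abduction: choose the theory that best explains ALL the evidence,
# while staying ready to revise when new evidence appears.

evidence = {"no_forced_entry", "jewellery_untouched", "body_in_apartment"}

# Each candidate theory lists the evidence it can account for.
theories = {
    "burglary gone wrong": {"body_in_apartment"},
    "killer knew the victim": {
        "no_forced_entry", "jewellery_untouched", "body_in_apartment",
    },
}

# Pick the explanation covering the most observed evidence.
best = max(theories, key=lambda t: len(theories[t] & evidence))
print(f"Best current explanation: {best}")
```

A new clue could change which theory wins, which is exactly why abductive conclusions are "best so far", not certain.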
It’s quite a lot like problem solving we’ve done before, isn’t it? Now go solve a case:

Sherlock Holmes: Consulting Detective - Case 3
Use abductive reasoning to best explain the evidence
The Problem Domain
Learning Objectives
The problem domain is a bounded space that contains all possible solutions to a problem. Everything outside the problem domain is impossible or irrelevant.
Given no constraints
Then… the answer could be anything in the universe!
When we add “must be a number”
Then we constrain to the domain of numbers
Before we can solve a problem, we need to understand what’s possible. In Twenty Questions we start with everything in the universe, then ask questions to reduce our problem space. We might start with “Is it alive?” to constrain our domain to living things, then “Is it a mammal?” to reduce further.
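We can model this with sets: start with the whole domain, and let each question act as a constraint that shrinks it. The items and categories below are made up for illustration.

```python
# The problem domain: every constraint we add removes impossible
# answers, shrinking the space we have to search.

domain = {"rock", "oak tree", "dolphin", "sparrow", "bicycle"}

alive = {"oak tree", "dolphin", "sparrow"}
mammal = {"dolphin"}

domain &= alive    # "Is it alive?"  -> yes
domain &= mammal   # "Is it a mammal?" -> yes

print(domain)  # only one possibility remains
```

Each intersection is a question answered; a good question removes as many possibilities as it can.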

Twenty Questions
Ask questions to reduce possibilities
Falsification
Learning Objectives
Falsification is an efficient reduction strategy. It means making predictions that eliminate possibilities, rather than gathering evidence that supports them
Given many possible rules
Make a prediction that could eliminate some
When the prediction fails
Then we can discard those possibilities
This is a subtle distinction: disconfirmation is the mental model we must build here. In Twenty Questions we discovered our problem space by confirming and disconfirming our guesses. In Zendo, we will try to discover the rule governing pyramid patterns not by confirming our guesses, but by eliminating what’s impossible.
Here’s a classic example:
Popper explains that each additional white swan appears to confirm our wrong idea that all swans are white. A single black swan disproves it, and ends the loop. This strategy shows us that:
- Only gathering confirming evidence leaves too many possibilities, or too large a problem domain
- However, each failed prediction narrows our search space by discarding possibilities
- We learn more from being wrong than being right
It is more efficient to find a way to disprove your hypothesis or falsify your proposition, if you can. This is because you only need to disprove something once to discard it, but you may apparently verify a hypothesis many times in many different ways and still be wrong.
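Here is a Zendo-flavoured sketch of that strategy: keep a pool of candidate rules, and discard every rule that a single observation falsifies. The candidate rules and the observation are invented for illustration.

```python
# Falsification: one disconfirming observation eliminates a rule for
# good, while confirming observations can never prove one.

candidate_rules = {
    "all pieces are red": lambda pieces: all(p == "red" for p in pieces),
    "at least one red": lambda pieces: any(p == "red" for p in pieces),
    "exactly two pieces": lambda pieces: len(pieces) == 2,
}

# One observation: this arrangement DOES follow the secret rule.
observation = ["red", "blue"]

# Discard every candidate the observation falsifies.
surviving = {name: rule for name, rule in candidate_rules.items()
             if rule(observation)}
print(sorted(surviving))
```

One blue piece kills “all pieces are red” forever, just as one black swan kills “all swans are white”; the surviving rules are merely not-yet-disproved.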
Now, practice eliminating possibilities in Zendo. For this game you need a group, so post in Slack to find others to play with.

Zendo
Eliminate to learn
Bisection
Learning Objectives
In bisection, we start with a large problem space and cut it in half with each guess.
In software development, bisection helps us find exactly when a change occurred. For example:
Given our code worked last week but not today
When we test the middle version and it works
Then the problem must be in the newer half
With each test, we:
- Select the middle version
- Test if it works
- Eliminate half the versions
- Repeat until we find the exact change
This binary search technique is remarkably efficient. Even with thousands of versions, we’ll find the problematic change in just a few tests. Git and other version control systems include built-in bisect tools for this purpose.
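The loop above can be sketched as code. This is a simplified model in the spirit of `git bisect`, not the real tool: the version numbers are made up, and `works` stands in for running your test suite.

```python
# Bisection: halve the search space with every test until we find
# the first version that broke.

versions = list(range(1, 17))   # versions 1..16 (hypothetical history)
FIRST_BAD = 7                   # pretend version 7 introduced the bug

def works(version):
    """Stand-in for checking out a version and running the tests."""
    return version < FIRST_BAD

low, high = 0, len(versions) - 1
tests_run = 0
while low < high:
    mid = (low + high) // 2
    tests_run += 1
    if works(versions[mid]):
        low = mid + 1           # bug is in the newer half
    else:
        high = mid              # bug is here or in the older half

print(f"First broken version: {versions[low]} ({tests_run} tests)")
```

Sixteen versions take at most four tests; a thousand would take about ten, because each test halves what remains.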

Higher or Lower
Guess the number efficiently
Boolean Logic
Learning Objectives
Boolean logic uses only true or false to reason about the world.
In the real world, we use logic to make decisions all the time. For example: if it’s raining and you don’t have an umbrella, you will get wet. This can be represented as a truth table:
| Is Raining | Has Umbrella | Is Wet |
| --- | --- | --- |
| F | F | F |
| F | T | F |
| T | F | T |
| T | T | F |
Truth tables show all possible combinations and all possible outcomes.
Given A is true (1)
And B is true (1)
Then A AND B is true (1)
In computers, we use binary logic to derive conclusions from data. Each bit can represent a logical state:
| Raining | Umbrella | Wet |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
This is fundamental to how computers work. Every operation a computer performs, from simple addition to complex decision-making, ultimately comes down to chains of basic logical operations on 1s and 0s.
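The “wet” rule is `Raining AND NOT Umbrella`, and a short loop can generate its whole truth table, one row per combination of inputs:

```python
# Generate the truth table for: wet = raining AND NOT umbrella
from itertools import product

print("Raining | Umbrella | Wet")
for raining, umbrella in product([False, True], repeat=2):
    wet = raining and not umbrella
    print(f"{int(raining):^7} | {int(umbrella):^8} | {int(wet)}")
```

Swapping in a different expression for `wet` gives you the table for any rule, which is a handy way to check the tables you build by hand in your notebook.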
Try building some truth tables yourself in your notebook. Here are some examples to get you started:
- “You can get a loyalty reward if you have the app AND have made 10 purchases”
- “The alarm will sound if the door is open OR motion is detected, UNLESS the system is disabled”
- “Trainees pass the course if they complete coursework AND attend class AND complete their steps”