Co-Intelligence by Ethan Mollick

Co-Intelligence

Ethan Mollick

Format: Audio/Print Personal Score: 8 / 10

Work with AI on purpose: decide the role, set the rules, measure the result.

Essence (why this landed for me)

A clean starting map for applied AI. It explains how to think with models, not just about them, and turns big ideas into small habits I can practice. Helpful for someone new to AI who wants the basics of prompting, testing, and guardrails without the noise. It fits my belief that the opportunity is in application.

Insights (mapped to mental models)

Takeaways grouped by mental models, with a short action you can use now.

Decide the AI’s job before you start

ACTION Write the role.
HOW IT SHOWS UP IN THE BOOK Define whether the model is a critic, explainer, generator, or planner to improve outcomes.
MENTAL MODELS Role Clarity, Interface Design
MODEL CLUSTER Logic & Reasoning

Good prompts are processes, not one-offs

ACTION Template the steps.
HOW IT SHOWS UP IN THE BOOK Structured prompts with goals, constraints, and checks outperform ad-hoc requests.
MENTAL MODELS Checklists, Abstraction
MODEL CLUSTER Systems & Adaptation

Iterate with fast feedback to raise quality

ACTION Loop once.
HOW IT SHOWS UP IN THE BOOK Draft, critique, and refine cycles lift results more than longer first prompts.
MENTAL MODELS Feedback Loops, Marginal Gains
MODEL CLUSTER Systems & Adaptation

Use multiple perspectives to reduce blind spots

ACTION Add a second agent.
HOW IT SHOWS UP IN THE BOOK Critic, red-team, or alternate persona catches errors the first pass misses.
MENTAL MODELS Adversarial Testing, Devil’s Advocate
MODEL CLUSTER Human Judgment & Bias

Ground answers in sources you trust

ACTION Attach context.
HOW IT SHOWS UP IN THE BOOK Providing documents or data improves relevance and lowers hallucinations.
MENTAL MODELS Map ≠ Territory, Evidence First
MODEL CLUSTER Logic & Reasoning

Evaluate outputs with simple, repeatable checks

ACTION Score on a rubric.
HOW IT SHOWS UP IN THE BOOK Clear rubrics and spot checks make quality visible and comparable.
MENTAL MODELS Occam’s Razor, Decision Hygiene
MODEL CLUSTER Growth & Focus

Keep a human in the loop where stakes are high

ACTION Draw the handoff.
HOW IT SHOWS UP IN THE BOOK Humans decide final calls and feed corrections back into the workflow.
MENTAL MODELS Risk Management, Leverage Points
MODEL CLUSTER Systems & Adaptation

Pick tools for the task, not for novelty

ACTION Choose fit first.
HOW IT SHOWS UP IN THE BOOK Model size and mode matter less than latency, cost, and task match.
MENTAL MODELS Fit for Purpose, Cost–Benefit
MODEL CLUSTER Growth & Focus

Chain smaller tasks instead of asking for magic

ACTION Split the job.
HOW IT SHOWS UP IN THE BOOK Breaking work into plan, draft, critique, and polish yields better results.
MENTAL MODELS Decomposition, Workflow Design
MODEL CLUSTER Logic & Reasoning

Use exemplars to teach style and structure

ACTION Provide one example.
HOW IT SHOWS UP IN THE BOOK Few-shot patterns guide format, tone, and accuracy.
MENTAL MODELS Analogy, Pattern Matching
MODEL CLUSTER Growth & Focus

Ask for uncertainty to expose weak areas

ACTION Request confidence.
HOW IT SHOWS UP IN THE BOOK Calling for caveats and confidence bands surfaces where to double-check.
MENTAL MODELS Calibration, Falsification
MODEL CLUSTER Human Judgment & Bias

Red-team your own prompts

ACTION Probe for failure.
HOW IT SHOWS UP IN THE BOOK Intentionally stress prompts to reveal brittleness and improve safety.
MENTAL MODELS Threat Modeling, Error Minimization
MODEL CLUSTER Systems & Adaptation

Keep a small library of reusable workflows

ACTION Save one template.
HOW IT SHOWS UP IN THE BOOK Repeatable prompt patterns become assets across projects.
MENTAL MODELS Standard Work, Compounding
MODEL CLUSTER Growth & Focus

Measure value, not novelty

ACTION Track a real metric.
HOW IT SHOWS UP IN THE BOOK Time saved, quality gains, or revenue beat raw prompt cleverness.
MENTAL MODELS Goodhart’s Guardrail, Leading Indicators
MODEL CLUSTER Growth & Focus

Absorption Notes (short essay)

Treat the model as a teammate. Write its role, goal, inputs, and done-definition. Start with a small example, then loop: draft, critique, revise. Attach trusted context when accuracy matters. Use a simple rubric to score usefulness so changes can be compared. Keep the high-stakes handoff human and feed errors back into the prompt or workflow. Build a tiny library of prompt templates for common jobs: plan, summarize, critique, generate options, rewrite for audience. Split big work into steps and track one metric that shows real value, like time saved or defect rate. When something breaks, red-team the prompt and adjust the process, not just the wording. Calm, steady improvement.

Reflection Prompts (product × design × engineering)

Questions to apply the ideas across projects. Pick one or two and use them today.

Role first

What job is the model doing here

Role Clarity

Name it.

Context

What trusted sources should I attach

Evidence First

Link or paste.

Decompose

Which steps can I split into smaller tasks

Decomposition

List three.

Rubric

How will I score usefulness or quality

Decision Hygiene

Define criteria.

Loop

What is the shortest draft-critique-revise cycle I can run

Feedback Loops

One pass.

Safety

Where do I need a human handoff

Risk Management

Draw it.

Exemplars

What example will teach tone or structure

Analogy

Include one.

Calibration

What uncertainty or caveats should I request

Calibration

Ask for ranges.

Red team

How could this prompt fail or be misused

Threat Modeling

Test once.

Value

What real metric proves this is worth keeping

Leading Indicators

Pick one.