Minds, Models, and Mentors

Minds
Models
Mentors

Brains
Information
Learning

Neurons
Time Goes By

Awareness
Shannon and Information
Learning and Information
Remembering
Quantization
Stories
Smell and Context

Emergence of Consciousness
Mind's Eye
Welcome to My World
Under the Hood
Waves Keep Coming
Autopilot
Temptation
Self-Talk
Who Says I'm Not the Boss?
We Are Born That Way
Why Now?

The Eyes Have It
Deception
I Didn't See That
It Seemed Like Forever

Bingo
Beyond the Bounds
Liar, Liar, Pants on Fire
Forgotten
Sleep
Waking Up
Mind Wandering
I Believe

Motivation
I'm "Good" Today
You'll Like This
I Liked That
I Loved That
I'll Do That
... Two Marshmallows
Happiness

Working Memory
Chunking
Fixed Ability
Curious Cat
Believe In Me
Multitasking
Rewiring

I Know That
By The Numbers
Predicting Values
Somehow I Know That
If It Looks Like a Duck
They're All Like That
That's Crazy

Faces
Libraries
More Stories
(Modern) Telepathy
Mind Reading

Imagine This and That
Imagine Now and Then
Mind the Load
Load the Mind
System-1, System-2


Applications


Teaching
Teachers
The Yellow Brick Road

Formal Instruction
Lectures
Buddies
Scripted vs Extensively Planned
Building Chunks
Inquiry
Machines & The Challenge

Designing Instruction
My Pace
Scaffolding
Learning Goals
Prior Knowledge
Effort
Studying
Games
Choices

Schools
Museums

High Stakes
Rating Teachers
Improving Teachers
Empowering Learners
Pay Attention!
(Pretend to) Think Out Loud

The Closing Bell

Appendices


Old-fashioned Psych

Invertebrate Animal Models
Vertebrate Animal Models
Neurons and Light

Brain Surgery
Specialized Neurons
Pins and Needles
Brainspeak
Broken Brains
Strokes

Don't Stare
fMRI
EEG
Magnetic Personality
Shocking

Artificial Intelligence and Neural Networks


Edits

June 30, 2016
September 15, 2016
December 20, 2016
March 23, 2017
August 8, 2017
December 30, 2017


 

   Minds, Models, and Mentors






Version Edited as of: Saturday, January 6, 2018 10:01:40 AM

by

David W. Brooks, Guy Trainin, & Khalid Sayood


Authors

Sayood, Brooks, & Trainin

Copyright 2015 by Guy Trainin, all rights reserved.

David Brooks, current M3 managing editor, became interested in chemistry by age 12 and in teaching by age 19. He has taught college chemistry and graduate courses in education. Brooks is a Fellow of the American Association for the Advancement of Science and the American Educational Research Association. Two decades ago, he became interested in understanding learning as a foundation for improving his teaching — hence his participation in writing M3.

Khalid Sayood is an information junky disguised as a professor of electrical and computer engineering. He has managed his condition by teaching and conducting research in information theory, data compression, and bioinformatics. Sayood has written a number of books on topics ranging from data compression and circuit analysis to genomic signatures. He has received a number of teaching awards from his students without understanding why. His participation in the writing of M3 has been an attempt to understand the answer to that question.

Guy Trainin is on his third career. An educational researcher and professor, he tries out new research insights in his teaching to the chagrin of his students. Guy focuses his research in the areas of literacy development and literacy integration with technology and the arts. In recent years, he has been studying 21st-century learning in schools in Nebraska and China with a specific focus on mobile devices and creativity. He has published research articles and books as well as extensive digital authorship of over 200 videos (YouTube Techeedge01) and 200 blog posts (guytrainin.blogspot.com).

Based upon a series of lectures at Harvard in 1987, Allen Newell suggested that a unified theory must offer three advantages: explanation, prediction, and prescription for control. That notion served Duane Shell, along with two of us and others, when writing the Unified Learning Model (ULM) published by Springer in 2010. Both Newell's work and the ULM serve us in this work, where we try to take explicit account of awareness from the perspectives of both electrophysiological measurement and information theory. This book presents a model of how our minds work with an emphasis on mentoring.

When several separate sources of information, each doing its own thing, share their results, something can emerge from that sharing that goes far beyond the scope of any one of the sources. Consciousness or awareness appears to be such an emergent property. That's what has brought together an information theorist and an enzyme mechanisms person, both of whom are teachers. We've gotten together with a teacher-of-teachers to try to convey a model of how our minds work to bring about learning. How our minds work is a question that can be answered on many levels, from a molecular level to the practical level of classroom instruction. The result of our collaboration is this book, Minds, Models, and Mentors, (M3).

Carrie Clark

Carrie Clark

The authors thank Carrie Clark for a very helpful review of an earlier version of this book.

      

 

   Minds

A mind is the set of cognitive faculties that enables consciousness, perception, thinking, judgment, and memory — a characteristic of humans, but which also may apply to other life forms.

Wikipedia, October 25, 2015

Wikipedia, "a free-access, free-content Internet encyclopedia, supported and hosted by the non-profit Wikimedia Foundation," has been shown to be generally as accurate as other trusted information sources. We support the Wikimedia Foundation through donations.

Mind

Model of mind as phrenology

Understanding the concept of mind rests in an understanding of consciousness, and this remains a major challenge in psychology. Let's list some of the kinds of thoughts that might "come to mind":

These thoughts are rarely mentioned in books on educational psychology. Here are some that might be:

Those statements are "school stuff." That's our business. The same mind that gives you the second list of statements also gives you the first list. If only we knew more about the mind and how it works!

Dehaene_2014

Stanislas Dehaene

This book asserts that we know enough to begin making teachers aware of this new information. It is now well established that our awareness of a thought lags behind the mental processing that produced it. A good model for this has been set forth by Stanislas Dehaene in his book Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts.

      

 

   Models

Building Model  Clothing Model  Atom Model

Examples of Models

Model (noun)

  • a usually small copy of something
  • a particular type or version of a product (such as a car or computer)
  • a set of ideas and numbers that describe the past, present, or future state of something (such as an economy or a business)

Merriam-Webster, November 5, 2015

Well before we entered the world of technology that surrounds us today, humans were immersed in much more information than they could manage. As a result, our minds have evolved to make predictions about how things should turn out by creating models, and then to attend to the information that comes from the senses, essentially comparing sensory input with model predictions. The result is that, for most things, little focus is required. Once we decide to walk someplace and to chew gum while walking, we can walk and chew gum at the same time. We have a model for walking and a model for chewing, and they don't compete much for neural resources.

We have many neural systems involved in making predictions. It turns out that these systems always make predictions. So your "Am I hungry?" predictor is working at the same time as your "Am I happy or sad?" predictor and your "What is this book about?" predictor. We expect that your "What is this book about?" predictor just won out, but maybe not. Maybe you'll get back to these words after a snack. If there were several loud sounds in quick succession, your "Check my surroundings; am I in danger?" predictor may have taken over. In any case, all of these predictors make their predictions, and you become aware of one of them. That turns out to be a "big deal" electrophysiological event, one easily detected using appropriate measuring hardware (and software).

Over your lifetime, something else has emerged — the self you think of when you think of yourself. This self is the YOU of you. It, too, is a model. It's your model of you. If you are reading this book, by now your model of yourself has become a big, multifaceted, and complex model. It is a male or female model. It may be a fat or skinny model. It may be a Methodist or Shia or Hasidic model. It may be a privilege- or poverty-based model. It may be a Democrat or Republican model. It may be a youthful or elderly model. Whatever that model is, it is YOU.

The model reveals itself to you through a voice in your head — your voice. But this voice never tells you about all of the predictions that just went on. You are not concerned with all of those processes that were and still are whirling away. Instead, it tells you about just one of those processes. As it does so, you think to yourself that it is in real time and that you are in control of that process. That's what we all think, but there is clear evidence that this is not so. It is clear that our voice is a reporter, not a decider.

You receive at least some information that is inconsistent with your model of you. You may use that information to change your model. For the most part, however, you discard this discrepant information. You have a stereotype of what it means to be an Irishman or an Israeli or a Pakistani — and other humans properly described by those labels are evaluated through the lens of your corresponding stereotypes. Worse yet, sometimes our eyes and ears do deceive us — what we "see" is not what we've seen; what we "hear" is not really what we heard.

We don't see things as they are,
we see things as we are.

Anaïs Nin (attributed by her to Talmud)

We don't work this way because we are good or bad people — smart or stupid people. This is just how humans work. This is how human minds deal with the ever present avalanche of information.

This book tries to do two things. First, it tries to explain how our minds work given that we know about those aspects of prediction and comparison with prediction.

The second thing it tries to do is begin a description of how, knowing about this mindful learning model, we might think about our teaching. In what ways might we design and deliver our instruction so as to be consistent with the notions described in M3?

      

 

   Mentors

Mentor
someone who teaches or gives help and advice to a less experienced and often younger person

Merriam Webster, December 8, 2015

We authors are teachers. The model we are describing applies more broadly than does the label teacher and includes parent, coach, minister, etc. We use the term mentor to imply this broader use of the label teacher.

Teach

  • to cause to know something (taught them a trade)
  • to cause to know how (is teaching me to drive)
  • to accustom to some action or attitude (teach students to think for themselves)
  • to cause to know the disagreeable consequences of some action (I'll teach you to come home late)
  • to guide the studies of
  • to impart the knowledge of (teach algebra)
  • to instruct by precept, example, or experience
  • to make known and accepted (experience teaches us our limitations)
  • to conduct instruction regularly in (teach school)
  • to spread abroad as though sowing seed (disseminate ideas)
  • to disperse throughout

Merriam-Webster, October 25, 2015

Mentors allow us to learn from the experience of others, "skipping" the need to experience everything on our own. Mentoring is an essential component in our ability to adapt as a species. We have formal teachers, informal teachers, coaches — mentors — throughout our lives.

The purpose of this book is to offer an explicit view of what it is that mentors do. Mentors help us build models; they help us modify models we've already built.

Humans take longer to "develop" than other species. While a newborn zebra may run within an hour after birth, humans take much longer. We are somewhat prewired to learn to speak. Most humans speak by three years, and whatever part of our prewiring is innate, that wiring seems to support the learning of thousands of different languages. In fact, given appropriate teaching from adults, toddlers seem to be able to learn two or three different languages.

In modern societies, the need for teachers has grown because human success seems to depend on acquiring greater breadth and depth of skills than in past centuries. Another way of putting this: Today, we need to master a much larger number of models than was needed in years past to "succeed."

      

 

   Brains

The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. Only a few invertebrates such as sponges, jellyfish, adult sea squirts and starfish do not have a brain; diffuse or localised nerve nets are present instead.

Wikipedia, October 25, 2015

Redwood.

Redwood tree

Trees don't move. They have fixed positions in their environments and depend on those environments for raw materials (water, carbon dioxide, minerals), suitable temperature control, and adequate sunlight. Trees don't have brains.

Not all species have brains. Many single-celled organisms move without brains. They detect environmental phenomena such as chemical concentration gradients, and they use primitive mobility devices (such as flagella) to move up or down those gradients.

Sea Squirt

Colorful Polycarpa aurata, sea squirt

Sea squirts (tunicates) are very interesting creatures. They swim using inputs from cerebral ganglia until they affix themselves to a rock. They then digest the ganglia, thus leading to the notion that they "eat" their own brains. In this sense, we argue that they have brains while they have the need to move around and keep updating the model of their surroundings, but give them up once their need to move comes to an end and the model of their physical surroundings becomes more or less static.

Brains evolved to help us move around in our environments and otherwise succeed (survive, live, and thrive).

Brains have two jobs. First, they create an understanding (a model) of their organism's environment. This understanding may be at the level of walking or eating or hunting or finding a place to sleep. Second, they may store those understandings for later recall.

A great deal of time and effort has been spent on describing the brain's role in memory. However, the complex roles humans play in the 21st century require that we — especially teachers — begin to think more clearly about the brain's role in building models. How are these models created? How do they evolve? Perhaps most important, how can they be changed?

      

 

   Information

Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. …

Wikipedia, October 25, 2015

Our interest is in compression and storage since what we compress and store in our brains are the models we have created as we come to understand our lives.

Lossy

Lossy2

Compare smaller (top) and larger (bottom) compression

Data and information often are used synonymously. However, the two are not the same. When we compress a file, we may remove some, maybe a lot, of data, but we may or may not remove information. The image of the cat in the top picture is compressed to 37 Kbytes. A lot of data is thrown away, but in terms of the human visual process there has been no loss of information. The picture at the bottom is compressed even further, to 21 Kbytes. Has information been thrown away? If the information for us is contained in the texture of the cat's fur, then yes, information has been thrown away. If for us the information contained in the image is the type of animal portrayed, then no information has been thrown away — it's still a cat.
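The fur-texture point can be made concrete with a toy sketch. The pixel values below are hypothetical, not taken from the actual cat images: quantizing them to fewer gray levels throws away data, but whether information is lost depends on what the viewer needs from the picture.

```python
def quantize(pixels, levels):
    """Map 8-bit (0-255) pixel values onto `levels` evenly spaced values."""
    step = 256 // levels
    return [(p // step) * step for p in pixels]

# Fine fur texture (12, 13, 14) plus a coarse dark/light contrast.
pixels = [12, 13, 14, 200, 201, 202]
print(quantize(pixels, 4))   # [0, 0, 0, 192, 192, 192]
```

After quantizing to four gray levels, the texture data (12 versus 13 versus 14) is gone, so if the texture was the information, it has been lost; but the dark/light contrast that says "cat" survives.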

      

 

   Learning

Learning is the act of acquiring new, or modifying and reinforcing, existing knowledge, behaviors, skills, values, or preferences and may involve synthesizing different types of information. The ability to learn is possessed by humans …

Wikipedia, October 25, 2015

Teachers always are trying to accomplish the construction, elaboration, or refinement, in their students' brains, of a model of whatever it is they are trying to teach. The gateway to this is in achieving awareness events in the students' brains. The desired outcome of teaching is neural change in the students.

Edelman

Gerald Edelman, 1929-2014.

Gerald Edelman opened his book, Neural Darwinism, with these words:

It is difficult to imagine the world as it is presented to a newborn organism of a different species, no less our own. Indeed, the conventions of society, the remembrances of sensory experience, and, particularly, a scientific education make it difficult to accept the notion that the environment presented to such an organism is inherently ambiguous: even to animals eventually capable of speech such as ourselves, the world is initially an unlabeled place. The number of partitions of potential 'objects' or 'events' in an econiche is enormous if not infinite, and their positive or negative values to an individual animal, even one with a richly structured nervous system, are relative, not absolute.

Edelman

The association of naming and learning has been known since the beginning of civilization. The first verse of the Tao Te Ching says, "Naming is the origin of all particular things ..." Naming forms an important part of Adam's education in Abrahamic religions. Names are often a way to index the models we build during our lives. Dave Brooks is an index to the model we have for a whiskey-loving, curious chemistry professor of Irish descent. By assigning names to elements in our surroundings, we classify them according to which model in our mind they fit best. The application of labels is an exercise in classification that is an essential aspect of learning.

Zebra.

Zebra at 3 weeks

Some animals are born with powerful models of how their environments work and how they should respond within them. Newborn zebras stand at 10 minutes and run at 50 minutes. Humans don't stand at 10 weeks or run at 50 weeks.

For humans, all of our lives are spent developing and modifying the understandings (the models, the schemas) we have about our world. Teachers are those who help us create and modify our understandings of the world.

      

 

   Neurons

Pyramidal Cell

Living human pyramidal brain cell

Published in October 2017 by the Allen Institute, the above image shows a human pyramidal neuron in living tissue. The tissue was obtained from small samples removed during surgery.

A neuron called a pyramidal cell, for instance, has a bushy branch of dendrites (orange in the 3-D computer reconstruction, above) reaching up from its cell body (white circle). Those dendrites collect signals from other neural neighbors. Other dendrites (red) branch out below. The cell's axon (blue) sends signals to other cells that spur them to action.

Science News, 11/14/17.

Our brains are made up of cells. There is evidence that human brains have about 900 different kinds of cells, with adjacent cells in tissues usually having similarly expressed genetic patterns. The cells most prominently involved with learning are called neurons; there are about 86 billion of them in a human brain, and there are many different kinds. We also have neurons (nerve cells) throughout our bodies. For example, our retinas — neural tissues at the backs of our eyes — contain at least 60 different kinds of neurons. Neurons have the special property of being able to transmit electrical signals. In one sense, they are biological wires, but they also could be thought of as switches: usually off, but momentarily on every once in a while. Some neurons seem to play a role such that, when they "fire," other neurons are inhibited from firing.

neuron

Model of a neuron.

Brains and peripheral nerves also perform other basic and often automatic functions such as control of physical, hormonal, and body regulatory functions like heartbeat.

The interplay between what is wired at birth and what is learned forms the basis of unending nature-nurture debates. Comments by Marcus are helpful in resolving this debate:

What this all boils down to, from the perspective of psychology, is an astonishingly powerful system of wiring the mind. Instead of vaguely telling axons and dendrites to connect at random to anything else in sight, which would leave all of the burden of mind development to experience, nature supplies the brain's wires — axons and dendrites — with elaborate tools for finding their way on their own. Rather than waiting for experience, brains can use the complex menagerie of genes and proteins to create a rich, intricate starting point for the brain and mind.

Marcus, 2004, p. 95

Three mechanisms seem to be involved in learning. There are connections between neurons called synapses. When several neurons "fire" on the synapses of one neuron, they collectively increase or decrease the probability that the target neuron will fire. Said differently, once enough switches connected to a 'downstream' switch turn on or off, their cumulative effect may lead that 'downstream' switch to turn on, too, or to become much less likely to turn on (i.e., inhibited). This is often called Hebb's Postulate:

… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Wikipedia, Accessed 12/29/17

Learning involves changing the ability of synapses to fire. It seems that the more they fire, the easier it becomes for them to fire at a later time. Firing changes the synapses at the subcellular level. At the level of the organism and its behavior, we say those changes have led to learning. Recent work has demonstrated mechanisms at the cellular and synaptic levels that contribute to "efficient memory storage and retrieval … "
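Hebb's postulate is often summarized as "cells that fire together wire together." As a minimal illustrative sketch (a toy rule, not a biological simulation; the weight values and learning rate are our own choices), it can be written as an update that strengthens a synapse only when the pre- and postsynaptic cells are active together:

```python
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """One step of a toy Hebbian rule: co-activation strengthens the synapse."""
    if pre_active and post_active:
        weight += rate   # cell A repeatedly takes part in firing cell B
    return weight

w = 0.0
for _ in range(10):                    # repeated co-activation ...
    w = hebbian_update(w, True, True)  # ... raises the synaptic weight
w = hebbian_update(w, True, False)     # lone activity leaves it unchanged
print(round(w, 2))   # 1.0
```

The growing weight stands in for the subcellular changes described above: the more often the pair fires together, the easier future firing becomes.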

Synapse

Model of a synapse.

The second mechanism for learning involves glial cells whose roles are becoming more apparent.

Glial cells, sometimes called neuroglia or simply glia, are non-neuronal cells that maintain homeostasis, form myelin, and provide support and protection for neurons in the central nervous system and peripheral nervous system.

Wikipedia, November 17, 2015

Some glial cells wrap around neurons providing them with a sort of electrical insulation. This insulation reduces what amounts to leakage from the neurons and reduces the time for the electrical pulse to go to the end of the downstream neuron.

Myelin

Model of myelin.

Although the myelin role has long been suspected and recently experimentally confirmed, other important learning roles appear to be played by glial cells.

Synaptic changes need hours and may require sleep to become long-term (permanent). Myelination requires repetition and is a much longer-term process, perhaps requiring months to become a meaningful contributor to learning. Myelination affects the speed of neural transmission. What about short-term recall — recall in seconds that lasts for minutes or hours, such as "Where did I park this morning?" Bittner et al. (2017) reported behavioral time scale synaptic plasticity (BTSP). They describe changes in neurons on time scales of seconds in a region of the hippocampus (the brain structure known to be connected to short-term memory). These authors assert: " … BTSP may provide a more straightforward physiological basis for many types of learning than plasticity that explicitly conforms to Hebb's postulate."

Whatever the time scales or mechanisms involved, there is one VERY important take-away:

Learning involves brain cell modification. Teaching is — hopefully systematic — brain cell modification technology.

      

 

   Time Goes By

The mental realities of time can be quite different from the physical realities. What amounts of time are involved with various neural processes?

stopwatch

Stopwatch timing

For many activities, time can be measured using a watch. As early as 1907, Hamilton reported on reading, based on experiments during which exposure to the reading material was timed. A stopwatch once was the tool for measuring success in athletic competitions, as it was for psychologists measuring aspects of human performance.

Almost as soon as desktop computers came into use, psychologists started developing ways of timing and altering screen displays. Their experiments led to volumes of research in an area called priming.

Priming is an implicit memory effect in which exposure to one stimulus influences a response to another stimulus.

Wikipedia, October 27, 2015

Imagine a way of controlling very brief electric sparks.

Spark

Electric spark jumping a gap

These sparks are very intense, last for less than a microsecond, and usually are readily detected. How long must we separate two different sparks such that we see them as two sparks rather than as one single spark? About 45 milliseconds.

What about separating sounds? Given that we can generate a sound pulse such as a click, two of these clicks need to be separated by about 2 milliseconds for them to be distinguished as two clicks rather than just one click.

How long does it take for us to become aware of a spark or click? Well, that seems to be about 60-80 milliseconds. "Motion pictures" consist of still photographs displayed at a rate of 24 pictures or frames per second. That's about 42 milliseconds per frame, and it gives us a sense of smooth motion. In fact, leaving out a frame here and there is not perceptible. Although it has been alleged that inserting frames for "subliminal advertising" is effective, most scientists believe that inserting a frame here and there has little measurable effect on behavior.

Simple reaction time is the time required for a person to respond to the presence of a stimulus. For example, someone might be asked to press a button as soon as a light or sound appears. Mean reaction time for college-age individuals is about 160 milliseconds to detect an auditory stimulus, and approximately 190 milliseconds to detect a visual stimulus.

Wikipedia, October 27, 2015
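A simple reaction-time trial of the kind just described can be sketched in a few lines. The delay bounds and the injectable `respond` function are our own illustrative choices; the injection lets the trial run without a human at the keyboard.

```python
import random
import time

def reaction_trial(respond=input, min_delay=1.0, max_delay=3.0):
    """Run one simple reaction-time trial; return the latency in milliseconds.

    `respond` blocks until the subject reacts (by default, pressing Enter
    at the terminal). The random delay makes the stimulus onset
    unpredictable, so the subject cannot anticipate it.
    """
    time.sleep(random.uniform(min_delay, max_delay))
    start = time.perf_counter()
    respond("Go! Press Enter: ")
    return (time.perf_counter() - start) * 1000.0
```

College-age subjects should land near the 160-190 millisecond figures quoted above; latencies far below about 100 milliseconds usually mean the subject anticipated the stimulus rather than reacted to it.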

Finally, how long does it take for us to make sense of something, say, for something we see or hear to enter our perception and for us to become aware of what we think it is ("that's a dog")? That time can be 300-500 milliseconds. In other words, our consciousness of the world follows the world by a few hundred milliseconds.

Nevertheless, we certainly think of ourselves as being in the here and now. This processing gap may seem insignificant in terms of our daily lives. However, it is highly significant to understanding the idea that what we perceive as executive control is a mere reporter of what has already transpired.

Sometimes when we are asleep — during REM or rapid eye movement sleep — time seems to run "faster." We seem to be able to relive the events of a day in one tenth the time it took us actually to live those moments.

We can take in information faster while reading than while listening to speech. Even so, sometimes expectations are unrealistic. For example, it isn't possible for medical students to read all of the assigned material.

The bottom line is that we can discover a great deal about learning using experiments in which images presented on screens are sequenced and timed. Creating our thoughts often takes much longer than the sensory perceptions leading to them, and the report of the voice in our head follows the other processing by a still longer time.

By the way, as a physical scientist would measure their speeds, light can travel around the earth about seven times in the time it takes sound to travel a few blocks.

      

 

   Awareness

The figure below (or some facsimile) is often used to illustrate the notion of consciousness.

Awareness vs Wakefulness

Graphic model of awareness versus wakefulness.

Sensory inputs join with memories to enable us to achieve states called awareness (or conscious access). You "feel" as if you know what's going on around you. Wakefulness refers to being reactive to stimuli.

Your brain works until you are brain dead. It does not stop during sleep or when you are anesthetized. When you are consciously awake, your brain generates moments of awareness, sometimes as many as four or five such moments per second.

Most brain processing involves relatively small amounts of brain tissue. Scores of processes are undertaken concurrently: Am I hungry? What was that flash? Am I in danger? or What is the cube root of 27?

One of the processes seems to 'win.' How a "winning" process emerges remains unspecified. We all think that there is a CEO (us) making the choice, but this is not the case. It is more likely that, at a given moment, one of the processes achieves some minimum threshold such that it "wins." At that moment, it becomes the "first" response. Meanwhile, the other processes keep working away.

The outcome of winning has an electrophysiological consequence; many, many other regions of the brain are informed of the "victory" of the one process. Presumably, this is done so that those regions will be able to act upon the outcome if such action falls in their bailiwick. For example, you smell some fresh bread, that triggers an awareness event, and you next are reminded of a delightful morning spent at a bakery while visiting Paris. Or you hear a friend tell you of a medical symptom, and you become filled with sadness recalling how that same symptom presaged the long and fatal illness of a family relative.

ConsciousAccess

Graphic model of conscious access.

These awareness events are detectable with currently available noninvasive (but intrusive) instrumentation such as electroencephalography.

They appear to spread in a language as yet not understood. This is a universal brain-wide language, one that lends itself to storage and subsequent recall.

One thing that leads to considerable confusion is that we tend to conflate consciousness with our sense of self.

… Conscious access is also the gateway to more complex forms of conscious experience. In everyday language, we often conflate our consciousness with our sense of self — how the brain creates a point of view, an "I" that looks at its surroundings from a specific vantage point. Consciousness can also be recursive: our "I" can look down at itself, comment on its own performance, and even know when it does not know something. The good news is that even these higher-order meanings of consciousness are no longer inaccessible to experimentation. …

Dehaene, 2014, from his Introduction

As we interpret learning, we make extensive reference to the notion of awareness events. In particular, we think we can understand teaching better if we can understand awareness.

It is difficult to convey the science behind this M3 book. We anticipate that most of our readers will be teachers, students interested in becoming teachers, or students otherwise learning about learning. In general, this audience has not had 'strong' backgrounds in science. It also is true that most of the scientists adding to our understanding of the biology of learning usually don't have a 'big picture' view of learning. An especially noteworthy exception is Stanislas Dehaene whose books on reading and mathematics learning are especially enlightening.

We have included information about the core science underpinning the assertions made in M3; these 'Chaplets' are included in the Appendices. For those interested, we suggest reading them and exploring some of the relevant sources mentioned in them.

      

 

   Shannon and Information

Prior to talking about information we need to determine "the alphabet" used to represent it.

CShannon

Claude Shannon, 1916-2001, Father of Information Theory

The idea that information can be quantified has had a long history. Putting this idea onto a solid foundation was the work of Claude Elwood Shannon, who almost single-handedly created what is now known as information theory. Shannon's description of information assumed the existence of an experiment with a set of outcomes. The "information" associated with each outcome was defined as the logarithm of the inverse of the probability of that outcome.

We've used "equations" to represent our models for information. For scientists and engineers, this is a universal language. If this is not one of your languages, please read past the equations as if they were a foreign language. The words we use should make sense.

Suppose you conduct an experiment with possible outcomes a1, a2, … , aN. Then the information associated with an arbitrary outcome ak is given by

i(ak) = log( 1 / p(ak) )

In this expression, i(ak) is the information associated with the outcome ak and p(ak) is the probability of the outcome ak.

The logarithm of one is zero, so the information associated with an outcome which is certain (that is, where p(ak) equals 1) is zero.

Suppose your experiment was the observation of the sun, and the possible outcomes are the sun rising in the east or rising in the west. Because the sun always rises in the east, that particular outcome carries zero information. The sun rising in the west, being wholly unexpected, would carry an enormous amount of information and would have a significant effect on how we view our world.
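Shannon's definition translates into a few lines of code. This is a minimal sketch, not code from the book; it uses base-10 logarithms, our choice of base, matching the convention used later in this chaplet.

```python
import math

def information(p, base=10):
    """Shannon information of an outcome with probability p: log(1/p)."""
    return math.log(1 / p, base)

# A certain outcome (the sun rising in the east) carries no information.
east = information(1.0)

# A wildly improbable outcome carries a great deal of information.
west = information(1e-9)
```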

If we take a weighted average of the information associated with each outcome, with the weights being the probabilities of the outcomes, we obtain the average information H associated with the experiment.

H = p(a1) log( 1 / p(a1) ) + p(a2) log( 1 / p(a2) ) + … + p(aN) log( 1 / p(aN) )

The average information H is maximal when all outcomes have equal probability. When this happens, the average information is log N, where N is the number of outcomes. Thus, for equally likely outcomes, the more possible outcomes there are, the higher the average information associated with the experiment. The minimum value of average information is obtained when one of the possible outcomes has a probability of one (which forces the other outcomes to have a probability of zero). In this case, the average information associated with the experiment is zero. Between these extremes, the more skewed the probabilities, the lower the average information.
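These properties of H can be checked numerically. The sketch below is our illustration, not the book's code; it uses base-10 logarithms and skips zero-probability outcomes, since they contribute nothing to the average.

```python
import math

def average_information(probs, base=10):
    """H: the probability-weighted average of log(1/p) over all outcomes."""
    return sum(p * math.log(1 / p, base) for p in probs if p > 0)

uniform = [0.25] * 4            # four equally likely outcomes: H = log 4
skewed = [0.7, 0.1, 0.1, 0.1]   # same outcomes, skewed: H is lower
certain = [1.0, 0.0, 0.0, 0.0]  # one certain outcome: H = 0
```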

If you conduct the experiment multiple times, the information associated with an outcome may also depend on the outcome of previous experiments. If your experiment was the observation of the weather and the outcomes are rainy, cloudy, and sunny, then the outcome today may well depend on the outcome yesterday. Suppose your experiment was opening a book written in English and randomly picking out a letter. The probability of picking the letter u is quite small (0.0275). However, if your experiment consisted of looking at consecutive letters and the previous letter was q, then the probability of observing the letter u is quite close to 1 (0.987). When discussing information, context matters!
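The q-then-u example translates directly into numbers. Here is a sketch using the probabilities quoted above and base-10 logarithms:

```python
import math

p_u = 0.0275          # probability of the letter "u" in English text
p_u_given_q = 0.987   # probability of "u" when the previous letter was "q"

info_u = math.log(1 / p_u, 10)                  # roughly 1.56
info_u_given_q = math.log(1 / p_u_given_q, 10)  # roughly 0.006

# Knowing the previous letter was "q" strips away almost all of the
# information that seeing a "u" would otherwise carry.
```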

      

 

   Learning and Information

WorkMem

Model of working memory

Working memory is the system that is responsible for the transient holding and processing of new and already stored information, an important process for reasoning, comprehension, learning and memory updating. …

Wikipedia, October 27, 2015

Teaching and learning involve the transfer of information. Quantifying this information is very difficult. The definition of information requires specifying an experiment over a set of outcomes. Let's think about working memory. One way in which working memory capacity is "measured" is to recite digits 0-9, one per second, and see how many the listener can repeat back correctly. (A typical value is seven, but with practice most of us can quickly increase that to fifteen or so.) If we recite a sequence, selecting one digit at a time at random with all ten digits equally likely, the amount of information contained in one digit is log 10, or 1 if we use base-10 logarithms. In this case, the working memory capacity is just the number of digits recalled.

This leads us to two predictions about digit span that can be tested:

Digits       Predicted Relative Working Memory Capacity
0-9          1.0
5-8          1.7
5-8, 8       2.0

We don't want to make a bet on the absolute outcome of such an experiment. We will bet, however, on the order of working memory capacity determined from such an experiment.
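The first two rows of the table follow from taking capacity to be inversely proportional to the average information per digit. Here is a sketch of that calculation; the third, skewed-probability row depends on exactly how the digit 8 is over-weighted, so it is omitted here.

```python
import math

def relative_capacity(probs):
    """Capacity relative to the 0-9 baseline: 1 / (average information per digit)."""
    h = sum(p * math.log(1 / p, 10) for p in probs if p > 0)
    return 1 / h

baseline = relative_capacity([0.1] * 10)  # digits 0-9, equally likely: 1.0
reduced = relative_capacity([0.25] * 4)   # digits 5-8, equally likely: about 1.7
```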

If working memory can only take on average information of a certain size, we can teach more efficiently when we make sure that the average information of a concept fits into the working memory capacity. While it would be exceedingly difficult to compute the average information of every concept that we want to teach, we can use general guidelines to minimize the size of the average information. Because the maximum value of the average information is the logarithm of the number of distinct possible outcomes, to keep the average information value low we must make sure that the concept we are trying to teach fits into a set with a small number of outcomes. The larger the possible set, the more likely it is that we will not be able to process it.

When the number of possible outcomes cannot be reduced, we can reduce the average information by making some outcomes much more likely than others.

Tononi et al. have developed a formulation that can be useful when applying information theory to teaching and learning. If you think of a red tomato, the information associated with that thought is the log of the number of items that you could have thought of, given the context, from which you selected the thought of a red tomato. The set of all possible thoughts is effectively infinite. However, given the context in which you ended up with the thought of a red tomato, the set of likely thoughts that could lead you there is much more limited than the set of all possible thoughts. The context narrows the realm of possible thoughts. Since retrieval cues have been shown to improve access to autobiographical memory, we can postulate that any concept can be "indexed" through perceptual cues.

A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without having to search every row in a database table every time a database table is accessed.

Wikipedia, October 27, 2015

When we see a red tomato, we might store it in memory using attributes such as color, shape, and edibility. We can think of attributes as partitioning the space of all possible thoughts. When we look at the tomato and we identify a color red, we restrict our attention to only those elements within the subspace "red." Identifying the tomato as edible partitions the subspace "red" into the subspaces "red and edible" and "red and not edible." As we keep reducing the size of the space containing the various alternatives, we will eventually reach a point where the number of elements N is small enough for log N to be smaller than the capacity of working memory.
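The attribute partitioning just described can be pictured as successive filters over a tiny, made-up set of "thoughts." Each check shrinks the candidate set, much as a database index narrows a search; the items and attributes below are invented purely for illustration.

```python
# A toy space of "thoughts", each tagged with attributes (invented for illustration).
thoughts = [
    {"name": "tomato", "color": "red", "edible": True},
    {"name": "fire truck", "color": "red", "edible": False},
    {"name": "lime", "color": "green", "edible": True},
    {"name": "leaf", "color": "green", "edible": False},
]

red = [t for t in thoughts if t["color"] == "red"]  # 4 candidates -> 2
red_edible = [t for t in red if t["edible"]]        # 2 candidates -> 1

# Two attribute checks isolate the tomato; each check halves the space
# that working memory has to hold.
```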

As our expertise in an area grows, the number of elements we can deal with grows. In one long-term study, a man's working memory capacity based on recall of digits started out at seven, an undistinguished capacity. After months of practice, he worked up to a remarkable 79 digits. That means you could read him a list of random digits for over a minute and he could recite back whatever list you had read to him. While having such a skill may win accolades at cocktail parties, it is of little practical value. Being able to recognize insects or trees or minerals or suspicious skin growths are the sorts of valued skills that professionals (entomologists, botanists, geologists, dermatologists) develop over time. Context matters, and an enormous part of context involves prior learning.

However, during the process of learning, the concepts that need to be held in memory should belong to a set that consists of a relatively small number of distinct elements. As teachers, we try to adjust this. If the set containing the concept being introduced consists of a large number of elements, the information cost of placing the concept in working memory may be too high. This can result in a number of possibilities. One is that the learner finds something else to place in working memory, such as imagining what s/he will do once the "lesson" ends.

When a set of concepts seems too large for a learner, she acts so as to reduce the size of the set. For example, she might divide the too-large set into subsets and then represent each subset with an exemplar. At this point, she is certain that the concept is in place and that it needs no further elucidation. When a later application of this exemplar concept leads to problems, the learner will look for the source of the problem somewhere else.

SubError2

Common subtraction errors

For example, the two errors illustrated above are common among children learning subtraction who have trouble understanding what to do when the digit in the minuend (top) is smaller than that in the subtrahend (bottom) and you need to "borrow."

When we give an exam, the answers are the outcomes of the experiment. While there is not much information in the correct answers, by looking at the incorrect answers and situating them in the set from which the student selected them, we can understand what experiment the student thought was being performed. Examining these incorrect answers might help us modify our teaching approach. In STEM areas, providing answers that are 'almost correct' or 'sound correct' is not only a powerful way of deepening a student's understanding of a concept, it is also useful for teachers in troubleshooting the way the concept is being taught.

      

 

   Remembering

Memory studies are usually divided into two categories: working memory (short-term memory) and long-term memory. A short-term memory test might go like this. Someone recites numerical digits at a rate of one per second while you listen. You then recall the recited digits. Most of us can recall about seven. With just a bit of practice, we can build this to about 15. The reported record was 79, and informal reports suggest that the individual studied ultimately could recall close to 120. The initial number, seven, often is thought of as a sort of magical number.

Going back to human brain storage, just how good can we become? Suppose we have two new decks of playing cards, 52 cards each. We take one deck, shuffle it, and hand it to you. We start a timer, you look at the deck for as long as you like, and then you give it back. We measure the time you spent studying the shuffled deck. We then give you the second deck and ask you to sort the cards to be in the same order as they were in the deck you just saw. How long will you need to look at the first deck to be able to arrange the cards in the second deck?

SReinhard

See Simon Reinhard perform the shuffled card memory task.

Simon Reinhard's 2010 record was 21.9 seconds. If you have a typical memory and you are willing to invest the time, you probably can get that time below 5 minutes with days of practice. If you want to get below 2 minutes, however, your required time investment may become enormous.

Some people assert that there is a phenomenon of photographic memory. Essentially no one has a photographic memory. Some young children have excellent memories, but their abilities tend to fade with age. That does not mean the abilities go away, but that they decline from exceptional to merely excellent.

The way in which we increase our capacity is to develop strategies that are called chunking strategies. You can think of a chunk as a neurological net of related things developed over time. In terms of your brain, this network might extend over distant tissue masses. The word bread, an image of a baguette, and the odor of fresh bread might be stored in very different tissue masses yet be connected within a single memory chunk.

Today it is thought that most of us can keep 3-4 chunks active at once.

Information theory allows us to quantify aspects of a concept or a process. If we postulate that working memory storage capacity can also be quantified in terms of information, we can perhaps extract some strategies to help learning.

During the process of learning, the concepts that need to be held in memory should belong to a set (chunk) which consists of a relatively small number of distinct elements. If the set containing the concept being introduced consists of a large number of elements, the information cost of placing the concept in working memory will be high, possibly too high. This can result in a number of possibilities. The learner finds something else to place in working memory, or the learner quantizes the set of concepts being considered to reduce the size of the set. We are all familiar with the former case, as the student drifts off to private reverie, sometimes including snoring. It's the latter case, though, that can be more problematic. These are the cases where the learner partitions the too-large set into subsets and then represents each subset with an exemplar. At this point, the learner is certain that the concept is in place and, therefore, that it needs no further elucidation. When later application of this exemplar concept leads to problems, the learner will look for the source of the problem somewhere else. It is very important at this initial stage to test how the learner has categorized the concepts and make whatever corrections are necessary.

Using a chunking strategy allows us to reduce the number of possible elements associated with a concept. A common chunking strategy is to build a story that weaves together the elements of the concept. A story by definition restricts what can happen in a particular sequence. Each element in the sequence of the story is a constraint on what can follow, thus reducing the number of possible outcomes at each step. Here the danger of an inaccurate recollection also increases. An inaccurate understanding that fits the story will more easily become part of the learner's "knowledge."

Our view of short-term memory capacity is explicit. We see short-term memory in terms of awareness events, and our capacity as being how many of these we can keep track of in a short period of time, say 1.5-2.0 seconds.

The limitations of working memory are crucial for designing instruction. When we teach new concepts that may not be familiar to students, we need to organize chunks purposefully in ways meaningful for them. If instruction helps chunk the information in valid ways, students will attend to the correct feature of the chunk (set) and will be able to use the learned content beyond that lesson.

In the design of computer-based instruction, we recommend reducing cognitive load by limiting the number of chunks the learner needs to attend to during instruction.

      

 

   Quantization

At least two commonly encountered situations exemplify how we manage memory. One is in prototypes. When asked to think of a bird, we turn to our "model" for birds and are more likely to think of a robin than a penguin. The other is in stereotypes. When someone mentions the word surgeon, we are more likely to think of a male than a female — and that exemplifies the real issue with stereotypes.

Storing information for later recall is a long-standing problem, and one widely used approach is known as quantization. The visual senses receive more variations between white and black than the brain can distinguish. This fact is most helpful when we want to store an image in a digital form. Instead of attempting to attach a value to each shade of grey in a pixel, we can divide the range between black and white into a small number of intervals (256 seem to suffice) and then represent each interval with a digital code. When the image has to be displayed, we can translate the code into a shade of grey that lies within the range of the interval the code represents. This process of representing a range of values with a representative value is called quantization. The quantization process always results in a loss of information, just as it does with prototypes and stereotypes.

Image

Image quantized using 256, 2, 4, and 8 intervals

During the process of quantizing the pixel values, we have lost the precise shade of grey in the original image. Once we replace the precise value with the label of the quantization interval we can never get the original value back. However, we get something in return. We can store more images, process more images, and transmit more images than we could if we did not sacrifice this information.
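The greyscale quantization just described can be sketched in a few lines. The choices here (grey values scaled to the range 0 to 1, intervals represented by their midpoints) are ours for illustration, not a standard imposed by the text.

```python
def quantize(value, levels):
    """Replace a grey value in [0, 1] with the midpoint of its quantization interval."""
    index = min(int(value * levels), levels - 1)  # which interval the value falls in
    return (index + 0.5) / levels                 # the interval's representative value

# With only 2 intervals, every grey collapses to 0.25 or 0.75;
# with 256 intervals, the loss is far below what the eye can notice.
coarse = quantize(0.40, 2)    # 0.25
fine = quantize(0.40, 256)    # about 0.4004
```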

Our brains handle the perceptual world outside in a similar way. The quantization process becomes the process of conceptualization. We see a tree, and we store in our minds the concept of a tree. If we were asked to remember the tree, we would surely fail long before we could bring into our minds all that was that particular tree. The next time you look at a tree, notice how the level of quantization changes the longer you look at it, that is, the more mental resources you devote to the task. The tree initially may be simply a blob of green on a brownish column. The blob may resolve itself in time, with processing, into leaves, then leaves of a particular shape, then leaves of a particular color, then of a particular shade. The trunk may resolve into a textured shape, and so on. In a sense, at each step we are increasing the number of quantization intervals. Each increase provides more information about "tree" and requires more mental resources. We use the same kind of mental process with many of the inputs to our senses. We see a shape, we see a person, we see a woman, we see an Asian woman, and the process goes on.

How we perform this conceptual quantization depends to a large extent on context. However, whether we partition the space of people by gender or by pigmentation, by ethnicity or by all of the above, in each case we are giving up information for efficiency. And this loss of information can lead to misconceptions. Suppose we plot the height and weight of a number of individuals and then partition the two-dimensional height-weight space into four regions as shown below:

Graph

Four classes

Now, instead of referring to an individual's weight and height, we can just refer to the region into which the height and weight fall; there are only four labels to remember. So if a person's height-weight combination falls into region 2, we have a pretty good idea of the physical stature of the person: about 4' 8" and 90 pounds. But what if we encountered a tall, slender person? They might still fall into region 2, leading us to picture them completely inappropriately.
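Assigning a person to one of the four regions is itself a quantization. In the sketch below, the split points (62 inches, 130 pounds) and the region numbering are illustrative assumptions; the figure in the text does not specify them.

```python
def region(height_in, weight_lb, h_split=62, w_split=130):
    """Quantize a (height, weight) pair into one of four grid regions.

    The split points and region numbers are illustrative assumptions;
    they are not taken from the figure in the text.
    """
    short = height_in < h_split
    light = weight_lb < w_split
    if short and light:
        return 2   # e.g., about 4' 8" and 90 pounds
    if short:
        return 1
    if light:
        return 3
    return 4

# Very different people can land in the same region; the information
# we threw away is exactly what distinguished them.
```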

Quantization (conceptualization) is the replacement of actuality with models. The larger the number of quantization intervals we have, the more models we have for representing reality. But in the end we have to remember that there is a difference between models and reality. Awareness of our conceptual quantization and the pitfalls associated with it should inform our teaching. Whenever we use a concept, we are using a model. The level of quantization we are using may not be the level of quantization the student is using. Our set of models may be richer than the student's set of models, and it is entirely possible for misconceptions to arise.

Exams are useful to ascertain what students are learning. It is particularly valuable to use exams to detect when students are using an inappropriate or wrong model (i.e., wrong conceptual quantization) and then adjust our teaching appropriately.

SubError2

Common subtraction errors

As noted earlier, the two errors illustrated above are common among children learning subtraction who have trouble understanding what to do when the digit in the minuend (top) is smaller than that in the subtrahend (bottom) and you need to "borrow."

      

 

   Stories

Iliad

Image depicting event from the Iliad, a story in Greek culture

Before the printing press, much human endeavor was based on stories. The Iliad and the Odyssey are famous stories passed down from Greek oral traditions. Cathedrals built around 1000 AD were known for their elaborate murals and windows depicting biblical stories that were largely passed down orally.

The design of a window may be abstract or figurative; may incorporate narratives drawn from the Bible, history, or literature; may represent saints or patrons, or use symbolic motifs, in particular armorial. Windows within a building may be thematic, for example: within a church - episodes from the life of Christ; within a parliament building - shields of the constituencies; within a college hall - figures representing the arts and sciences; or within a home - flora, fauna, or landscape.

Wikipedia, November 17, 2015

StainedGlass

From Chartres Cathedral

While writing has been around since about 3000 BC, it really didn't become widely accessible until the invention of the printing press (~1440). Initially, writing was largely about business. After the printing press, writings became much more accessible. Other changes ensued. For example, many people realized that their vision was not good enough to read certain typefaces, so reading glasses became extensively developed. Many other inventions (the microscope, the telescope) sprang from that lens-making.

Today reading is essential, but it still is rooted in neural systems evolved for stories. Nearly every human learns to speak; not every human learns to read. We seem wired to learn to speak. It doesn't seem to matter which of the thousands of languages we are destined to speak first, our neural capacity for speaking is in place.

It should not be surprising, then, that stories fit well in the human's scheme of things.

Quantitative scholarship has moved away from stories. Such scholarship tends to be a dispassionate description, often involving interpretation of numbers. We as authors are comfortable with such scholar-speak.

The Unified Learning Model and early versions of this book were written in a more traditional book format. When two of us were giving a seminar about the book to a group of "discipline-based education researchers," a question came up after the talk about why students learned "better" from stories. That question has an answer: stories offer connections and connections aid learning. Stories use context to reduce the number of outcomes allowing them to be held in memory. As just noted, we evolved to live in worlds of stories.

Because stories serve to reduce the set of outcomes one has to consider at each point in time, the following folklore poem illustrates exactly how stories do not work:

     One fine day in the middle of the night,
     Two dead men got up to fight,
     Back to back they faced each other,
     Drew their swords and shot each other.
     If you don't believe the story's true
     Ask the blind man, he saw it too!

     Anonymous

When the day is fine, we know a few things that may be happening (the sun is shining, birds are chirping) and a lot of things that are not happening (the moon is bright in the sky, a thunderstorm is in progress, the wind is moaning in the eaves). The storyline at that point partitions the world of the possible into two sets, a smaller set of likely events and a much larger set of unlikely events, with the focus on the set of likely events. The average information associated with the smaller set is lower, allowing it to be held more easily in working memory. Because the second part of each line violates the set established by the first part, the poem (and its ilk) have found a home in folklore.

We decided to break from our roots and write a book made up entirely from stories. In our case, we've written chaplets — very brief chapters.

So-called qualitative research amounts to stories. In this research, most stories are underpinned by other stories. In this book, the stories we present most often are based in complex experiments requiring a great deal of know-how; for example, understanding how to interpret the squiggles in an EEG or "reading" MRIs.

We've tried to write each chaplet so that it stands alone as a story. That is, as much as we've thought possible one chaplet does not depend upon another.

The "big picture" story that we are trying to tell is extremely complex and depends upon knowledge from many areas of science. In each "chaplet," essentially all of the details have been set aside. At the end of the book, we list resources for each chaplet. You can access them by clicking on the 'button' labeled References. During the writing, every time one of us created or edited a draft and noted some detail he was required to put a copy of one or several supporting references in a folder. You, the reader, don't get to see that. We are asking you to "trust" us. We, on the other hand, are typical academics. We don't trust one another without verification, and we demanded access to original sources.

We are teachers and our intended audience consists of teachers. Few teachers have anywhere close to the background knowledge required to read and comprehend the supporting documentation for the book. If you are teaching oboe, why in heaven's name would you be expected to know how to interpret the squiggles in an electroencephalogram (EEG)? Nevertheless, those squiggles contribute an important piece to understanding how to become a better oboe teacher; it's that simple.

We operate on the assumption that teachers intend to bring about learning. That's the explanation for why teachers should know about how human learning works.

      

 

   Smell and Context

What we store (remember) are the models we construct rather than the incoming data itself. The remarkable thing is what our model may end up containing. More than two decades ago an interesting experiment was performed on about 100 college students studying psychology. Half of them received instruction in a room filled with the pleasant odor of chocolate. The others were instructed in a room pervaded with the rather unpleasant odor of camphor. When they were tested, they were again divided into two groups. Half of the 'chocolate' students were tested in camphor odor while half of the 'camphor' students were tested in chocolate odor. The outcome was striking: taught in chocolate, test better in chocolate; taught in camphor, test better in camphor. How can we explain a result like that? To most teachers, odor would seem irrelevant in that context; after all, content is content. To brain scientists, that result makes tremendous sense. Everything in a learning context contributes to learning. Let's back up just a bit to consider what we know about smell.

AllegoryBrueghel

"Smell," from Allegory of the senses by Jan Brueghel the Elder, Museo del Prado

We often say that we have five senses: sight, sound, touch, taste, and smell. It is readily argued that taste is trivial without some concurrent smell in that we seem to detect only six tastes (sweet, sour, salty, bitter, umami, and oleogustus). Humans are estimated to detect a trillion nuances of smell.

Nearly all of the sensory information coming into our brains crosses over from the side of detection to the side of processing and storage. A stroke on the right side of the brain may lead to some left-side paralysis, for example. Odor does not cross over; it comes straight in, right-to-right and left-to-left. The olfactory connection is the most direct to our brains.

OlfactoryBulb

Model of olfactory bulb in human brain

The olfactory system remains plastic throughout life because of continuous neurogenesis of sensory neurons in the nose and inhibitory interneurons in the olfactory bulb.

Tsai & Barnea, 2014

We suggest that when we need to recall information, the smaller the set over which we have to search, the easier the recall will be. The smell of chocolate or camphor was associated with a small set of information. When the students smelled the appropriate odor, their minds could isolate the small set associated with that smell, making it easier to retrieve the information. Research does tell us that unique experiences can be used to highlight learning and increase recall. The strength of this effect is maintained only if the new stimulus (the smell of chocolate) is tied to a small subset of learning. If we were to use the smell of chocolate often in the learning context, the effect would most likely disappear.

Can we respond to odors while asleep? It happens that about eight out of ten of us cannot detect smoke or acrid odors (like burning rubber) while sleeping. That's why smoke alarms are so important.

Does our olfaction turn off while we sleep? Apparently not. Pairing odors with tones during sleep leads to the tones alone being able to elicit the same response as do the odors.

What about a dog's ability to detect odor? Dogs seem to be able to detect odors (certain molecules in the air) at a sensitivity about 10,000 times that of a human. Dogs can be trained to respond to specific odors. Dogs are used to detect drugs and explosives.

Why did we discuss the sense of smell at this point? In learning, context matters. A teacher's awareness of context — and what a student may or may not think of as being a part of what is to be learned — is critical. Laboratory safety is taught as part of chemistry laboratory instruction. We used to teach students that, if your clothing catches fire, lie down and roll on the ground. A student once was severely burned running down three flights of stairs and out of a school building while his clothing was afire. When asked after this tragic incident what he had been doing, he explained that he was going outside to find "ground" to roll on. Context matters!

      

 

   Emergence of Consciousness

snowflake

Snowflake.

Emergence can be viewed as a property of a system which is not present in its components. The beauty of the snowflake is not a property of the water molecules which make up the snowflake.

From an information perspective, we can look at emergence in two different ways: in terms of the process itself and in terms of the emergent property. In terms of process, emergence provides a means for increasing the number of outcomes far beyond what seems possible when we take a narrow view of the atomic outcomes of an experiment.

Reading comprehension is difficult to study because of the many possible ways it can flower. For the teacher, comprehending this emergent process, with its multitude of outcomes, requires dealing with a huge amount of information.

The emergent property itself can be viewed as a significant reduction of information. Consider viewing a person in front of a blank wall with a high-powered telescope. As the telescope scans over the person, we see toes and eyes and ears: a myriad of components and hence a lot of information. If we were asked to remember all the things being viewed, we might have a difficult time. When we reduce the resolution of the telescope, these components coalesce into a person. Now the image consists of only two components, the person and the wall. Keeping in mind what we are seeing becomes a trivial act.

For the student, learning to read the word cat consists of three events, each of which could take on twenty-six different values. When the three letters coalesce into the concept "cat," there is a large reduction in the number of possible outcomes (assuming a cat to be a common item in the everyday life of the student).
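The reduction can be put in information terms. This is a rough sketch; the 1000-word vocabulary size is an assumed figure for illustration, not one from the text.

```python
import math

# Three independent letters, 26 possibilities each.
letters_info = 3 * math.log(26, 10)  # about 4.2 decits

# One word drawn from, say, a 1000-word everyday vocabulary
# (an assumed size for illustration).
word_info = math.log(1000, 10)       # 3.0 decits

# Coalescing letters into a familiar word discards possibilities and
# therefore reduces the information that must be held in mind.
```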

The process view of emergence tells us why teaching is so difficult. The property view tells us why it is so important for learning.

Consciousness is an emergent property. The stuff of our minds most often is more than the sum of the parts that go into it. This book has been written from the perspective of consciousness as an emergent property. Consciousness emerges not just from current sensory input but also from the recall of earlier situations during which similar inputs had been perceived. Our pasts help shape our presents — and futures.

      

 

   Mind's Eye

As the result of the Human Connectome Project we have improved notions of how various human abilities map onto our brains. Each human brain hemisphere appears to contain 180 areas.

Mind

Parcellation of Human Brain Areas

We accept that consciousness is a unique coming together of our senses and memories in what we call our minds. Mind, then, is an emergent property.

The prevailing thought used to be that all of the incoming information from our senses was synthesized to make up our mind models. From an information perspective, that requires synthesizing a great deal of information. Today a very different view is emerging.

Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions.

Clark, 2013

This is very consistent with what we know about information. When a prediction is close to the current perception, the incoming perceptual information is small. When what you see is what you expect to see, little information comes in through vision. On the other hand, when the magician makes the 5-ton elephant disappear and your prediction is way off, the incoming information is large — large enough that you take great note of it.
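This intuition has a standard quantitative form: Shannon's self-information, -log2(p), which measures how much an event of probability p tells you. A minimal sketch, with probabilities invented for illustration:

```python
import math

def surprisal_bits(p):
    """Shannon self-information: bits of information carried by an event of probability p."""
    return -math.log2(p)

expected_scene = 0.95      # what you see is what you predicted to see
vanishing_elephant = 1e-6  # the magician's 5-ton elephant disappears

print(f"expected scene:     {surprisal_bits(expected_scene):.2f} bits")
print(f"vanishing elephant: {surprisal_bits(vanishing_elephant):.2f} bits")
```

A near-certain percept carries well under a tenth of a bit; the vanishing elephant carries nearly twenty. The rarer the event is relative to the prediction, the more information floods in.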

Look at the tablecloth trick:

Tablecloth.

Remove the Tablecloth leaving dishes in place.

Magicians, of course, do this sort of thing all of the time. They get us to make some mental prediction of what will happen, and then do something that wreaks havoc on this prediction.

A very readable explanation has been developed by Chris Frith in his book, Making Up the Mind.

ChrisFrith

Chris Frith, Making Up the Mind.

A detailed and scholarly presentation of this model has been provided by Andy Clark.

Andy Clark

Andy Clark, Professor, University of Edinburgh

A good question would be "Why does it work this way?" Awareness (conscious access) is a big-deal event. It takes somewhere between 180 and 450 milliseconds, and most inputs are shut down during the event. Suppose you were engaged in a simple task, say reaching for a piece of bread on a plate on a table. Once you decided to reach, you would have a mental model of the required trajectory. Rarely do we get this perfectly correct in our first model, so we would need to make corrections. If we needed to replot the entire course of our hand, this would require a lot of conscious access and processing time. However, a system that feeds back just the error (the apparent difference between where we think we need to be in order to be successful and where we actually are) is all that is needed for success.
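Error-only correction of this sort can be sketched as a simple proportional feedback loop. The starting position, target, and gain below are hypothetical numbers chosen only to show the idea:

```python
# Error-only reaching: instead of replanning the whole trajectory,
# each small adjustment feeds back a fraction of the remaining error.
target = 30.0   # where the bread is (cm from the shoulder, say)
hand = 0.0      # where the hand starts (cm)
gain = 0.5      # fraction of the error corrected per adjustment

for step in range(10):
    error = target - hand    # the only signal the loop needs
    hand += gain * error     # small correction, no conscious replanning

print(f"after 10 corrections the hand is {target - hand:.3f} cm from the bread")
```

No step requires re-deriving the full trajectory; each uses only the current discrepancy, which is why such corrections can run without conscious access.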

There is an abundance of evidence that this actually is what we do. We make small corrections not perceived by us (but nevertheless detectable and measurable by instruments).

Suppose we lift a piece of bread and an unseen mouse appears and scurries away. Well, that certainly would not have been a part of our immediate prediction, and we would need some new awareness to deal with it. Our routine processing isn't geared for that. The departure from expectation is so large that we often have a startle reaction that goes back to our brain stem until the new information is processed properly.

      

 

   Welcome to My World

DifferentAnswer

A Different Answer

We are teachers and we'd really like to think that we can get all of our students on the "same page." That turns out to be impossible. Sometimes there are very simple reasons for this. While DWB and KS might have a conversation about "red," GT will be left out … he's colorblind. But there are more basic reasons why people can't think alike, and they have to do with how we achieve consciousness.

The last three decades have seen a remarkable coming together of thinking about human consciousness. Among those studying and writing about the subject, Axel Cleeremans provides some of the clearest and most easily understood thinking.

AxelCleeremans

Axel Cleeremans

There are many things we do over and over. We walk. We speak to a friend. We answer the phone. We enter and sit at a favorite restaurant. Once we've done something many times, a remembered history of what we have learned kicks in. "No, it's cold out, and that table always gets a cool breeze every time someone enters or leaves the restaurant. We want a table in that back corner." … And so it goes. This has been well stated by Clark:

To perceive the world just is to use what you know to explain away the sensory signal across multiple spatial and temporal scales. The process of perception is thus inseparable from rational (broadly Bayesian) processes of belief fixation, and context (top down) effects are felt at every intermediate level of processing. As thought, sensing, and movement here unfold, we discover no stable or well-specified interface or interfaces between cognition and perception. Believing and perceiving, although conceptually distinct, emerge as deeply mechanically intertwined.

Clark, 2013

Cleeremans's contribution is to describe how these "metarepresentations" are learned and evolve dynamically with our life experiences. All related experiences become connected through these metarepresentations. In fact, triggering just a part of one of them can bring out much of what we have learned about the rest of them.

The classic example is about a person walking along in the desert, hearing the rattle of a rattlesnake, and jumping away. The jumping is a reflex reaction, a reaction that does not involve thought. After we've jumped, we usually make up a story about why we jumped. Again, as demonstrated through experimentation, these stories are after-the-fact and made up.

As an outcome of this, there is a progression in how we respond in given situations. Cleeremans refers to the "Quality of Representation" and posits that the mental availability, or awareness, we have in a context changes over time as situations are repeated and refined.

Quality of Representation

Cleeremans's Quality of Representation

Three different entities are plotted in this graph. One curve plots availability to action. Before we start something, we really don't know what to do, so we can't act. Let's think about decoding printed English words. We were not born knowing how to read. Over time, we learned to read nearly anything written in English. Early in our reading experience, we learned how to decode words in the English alphabet. There was a time when the decoding process we were going through was very available to our conscious awareness. Today we just read. Our ability to control this reading is minimal. That is, we are no longer aware of our decoding; it has become automatic.

Consider driving to work for an experienced driver. This act can be done with little awareness. The driver has a goal, and small adjustments keep the car running down the road safely. Next, consider the same act of driving during stormy or icy weather. Now the information coming in often indicates a big discrepancy between that input and the predicted model. In these cases, drivers allocate more if not all of their attention to the driving.

Your allocation of any attentional resources you have depends on your sense of how different the inputs are likely to be from the predicted models. The bottom line is that your mind's eye is all about predicted models of the world.

      

 

   Under the Hood

Connectionism has been regarded as a branch of cognitive science for decades. Rumelhart and McClelland published a seminal work in 1986; we have been influenced strongly by the work of Elman et al. in Rethinking Innateness (1996). Taking a brief look under the hood of connectionist approaches, we have adapted a figure used by Cleeremans.

When you first start to learn something — take your first step, say your first word, bake your first pie — there are perceptive inputs but no outputs. Suppose you see a fly and want to swat at the fly. Your eyes see something that ends up being processed at the back of your eye and moves into the optic nerve system. Ultimately neurons connected to your arm cause muscles in that arm to contract — to make the swat. All sorts of processing takes place in between. While you may be consciously aware of some of these processes, there will be many other events learned without conscious awareness.

Implicit Action

Perception leading to implicit action

You keep trying and, after a while, you start performing actions. You take steps. You walk. You learn how to keep the dough temperature correct as you roll your piecrust. In between your actions are events you may consciously learn about. "Where must my hand be to actually hit the fly — if I aim straight at the fly, it will elude me."

Explicit Action

Repeated actions are monitored

You are keeping score of most of the things you are doing. Remember, your brain is creating models it uses to predict outcomes. So, it is keeping track of the perceptions, the actions, and also those things that are hidden from us but are involved in the processing. So, as we engage in some repeated task, we often are able to become aware explicitly of things that are a part of that task. That's how golfers try to change "their swings." That's how speech therapists try to get clients to improve their pronunciation.

It is important to note that we are rarely conscious of this kind of "metalearning." We can't sit down to study it as we might learn about Chester Arthur or the Suez Canal. In fact, sometimes this metalearning includes aspects that are hard to purge, such as self-talk like "I can't do math."

Automatic processing

After time, metarepresentations lead to automated shortcuts

Once you've done something often enough, there is very little need for monitoring of the sub steps. Your drive into work is automatic. Sometimes you don't even recall the drive. Sometimes you promise to perform an errand on the way to work, but you find yourself at work having forgotten the errand. There have been many tragedies where a parent picked up a child on the way to work and simply forgot and left the child in their car.

Some connectionists actually create computer models (programs) in which there are hidden layers. As we write this book, there is enormous effort throughout the world to develop self-driving cars. Yes, there are hidden layers in which these cars make decisions that the creators of the cars can't explain. There also are decisions to be made in developing the software. For example, suppose a pedestrian does something really stupid and acts in such a way that the car must injure either the pedestrian or its own occupants. What should the self-driving car do — injure the pedestrian or its passengers?

The metarepresentations bring everything together, including some features, such as emotion, that often are overlooked by educational psychologists. This aspect was not stressed in our book with Shell and others. The potential impact is well summarized by Cleeremans:

… Emotion is crucial to learning, for there is no sense in which an agent would learn about anything if the learning failed to do something to it. Conscious experience not only requires an experiencer who has learned about the geography of its own representations, but it also requires experiencers who care about their experiences.

Cleeremans, 2011

      

 

   Waves Keep Coming

Waves

Waves

How do things in our minds change? We experience brain-wide neural oscillations, or brain waves. These occur at regular frequencies. Consider the following report of the detection of a type of brain cell that seems devoted to maintaining some of these oscillations:

When we are awake, purposeful thinking and behavior require the synchronization of brain cells involved in different aspects of the same task. Cerebral cortex electrical oscillations in the gamma (30-80 Hz) range are particularly important in such synchronization. In this report we identify a particular subcortical cell type which has increased activity during waking and is involved in activating the cerebral cortex and generating gamma oscillations, enabling active cortical processing. …

Kim et al., 2015

      

 

   Autopilot

Not only does awareness run late in terms of mental processing, but it also is costly when we try to control it. While reading this book, you are controlling your attention and the subsequent awareness. While one loud noise may not distract you (say the sound of a gunshot), several such noises close together almost certainly will capture your attention, distract you from reading, and "break your train of thought."

Distraction

Cartoon model of distractors

For that reason, our systems try to automate as many of our routine processes as possible. For example, you probably paid little attention to, and did not try to achieve conscious access of, your walk to your office or home or place of business. When the Nebraska winter turns icy, you'd better believe we authors (especially the oldest one) pay attention to that walk.

Repetition enhances the automaticity of processes. Repetition increases neuronal firing speeds so that condition-action links are executed faster. Although we don't completely understand how this happens, at some point these more streamlined and speeded up procedures no longer appear to need active working memory; they no longer need to be controlled. Bassett et al. provided an account of this process:

The dynamic integration of distributed neural circuits necessary to transform the performance of a motor skill from slow and challenging to fast and automatic has evaded description because of statistical and mathematical limitations in current analysis frameworks. Here we used dynamic network neuroscience approaches to expose the learning-induced autonomy of sensorimotor systems and uncover a distributed network of frontal and anterior cingulate cortices whose disengagement predicted individual differences in learning. These results provide a cohesive and statistically principled account of the dynamics of distributed and integrated circuits during cognitive processes underlying skill learning in humans.

Bassett et al., 2015

You are reading this book. Chances are you've not spent time decoding most of the words. For most of our readers, there is a stop to decode the word electrophysiological. For nearly all of the words we use, however, recognition is automatic. That is, the outcomes of the processes engaged in decoding never achieve a unique state of conscious access.

When you start a process that you have automated, those brain processors that are involved in that process continue to work, but they do not need to achieve conscious access. You can drive safely while thinking about something else. However, that process does need to continue. Even though the process does not achieve conscious access, it is ongoing. Should it be cut off by some other process (say texting that uses some of the same resources such as vision), then the process can fail. That's why texting while driving is so dangerous.

What happens when two automatic processes conflict? Suppose you are asked to determine the color of some text as in the figure below.

Stroop

Text for a "Stroop" experiment.

Although you couldn't always read as you do today, your current reading is automatic. Also, it runs faster than the system used to determine color. So the content of the word gives you a color response (a semantic response) before your color detection system tells you what the color is. For some of the words shown, these are in conflict. So, when asked to determine the color of the text, we slow down in our responses, and our error rate goes up.
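The slowdown can be sketched as a race between two processes in which the faster, automatic one sometimes delivers a conflicting answer first. The latencies and the conflict cost below are invented for illustration, not measured values:

```python
READ_MS = 350           # automatic word reading (fast)
COLOR_MS = 500          # color identification (slower)
CONFLICT_COST_MS = 150  # extra time to suppress the conflicting read-out

def name_the_ink(word, ink):
    """Return (response, response_time_ms) for naming the ink color of a word."""
    congruent = (word.lower() == ink.lower())
    # Reading finishes first (READ_MS < COLOR_MS); on incongruent trials its
    # early answer conflicts with the color answer and must be suppressed,
    # which adds time and raises the chance of error.
    if congruent or READ_MS >= COLOR_MS:
        return ink, COLOR_MS
    return ink, COLOR_MS + CONFLICT_COST_MS

print(name_the_ink("RED", "red"))     # congruent trial
print(name_the_ink("RED", "green"))   # incongruent trial: slower
```

Real Stroop data show exactly this pattern: responses slow down and error rates rise on incongruent trials.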

Automatic really means automatic. A tachistoscope is an instrument that displays an image for a fixed period of time. When baseball players (not including pitchers) were shown pictures of various types of pitches for 0.2 seconds, the correlation of correctly identifying the pitch with batting average was 0.648 (p < 0.01).

Eagleman expresses this outcome quite clearly:

As long as the zombie subroutines are running smoothly, the CEO can sleep. It is only when something goes wrong (say, all the departments' business models have catastrophically failed) that the CEO is rung up. Think about when your conscious awareness comes online: in those situations where events in the world violate your expectations. When everything is going according to the needs and skills of your zombie systems, you are not consciously aware of most of what's in front of you; when suddenly they cannot handle the task, you become consciously aware of the problem. The CEO scrambles around, looking for fast solutions, dialing up everyone to find who can address the problem best.

Eagleman, Incognito, 2011

Eagleman describes the situation where our prediction is so far from our sensory input that we stop to reconsider the prediction. His metaphor suggests that a CEO is "dialing up everyone" when, in fact, every one of those predicting systems is always predicting. When a selected prediction fails, the "next best prediction" emerges as the new model. What does "next best" mean? Well, that depends both on what you know and your current context. Remember, too, that there is no CEO and that all of those competing processes remain silent — until one "wins" at which time the press secretary (the voice in your head that you think of as a CEO) steps up to the podium to give us our new prediction or explanation.

Finally, we should note that not all automatization is "good."

Saying

About habits

We want many things in our lives to become automatic. The results often are called habits. Of course, there are "good" habits and "bad" habits. Anyone who has worked on changing a bad habit can tell you that such change does not come easily if at all.

      

 

   Temptation

Forbidden Fruit

Original Sin

Forbidden fruit is a phrase that originates in the Genesis narrative of Adam and Eve (Genesis 2:16-17). In the narrative, Adam and Eve ate the fruit of the tree of the knowledge of good and evil in the Garden of Eden, which God had commanded them not to do. As a metaphor, the phrase typically refers to any indulgence or pleasure that is considered illegal or immoral.

A great deal of what we try to accomplish in teaching is to bring about changes in learners' abilities to self-regulate. This is a part of a bigger picture commonly referred to as self-control. These notions are intertwined deeply in our beliefs and often are tied to notions of morality. We are "strong" if we can "resist temptation."

As already noted, the following quote is attributed to John Dryden during the late 17th century: "We first make our habits, and then our habits make us." If you are a teacher, you often are trying to deal with knowledge that your learners find boring, and keeping them at the learning task is a challenge. There are at least three things teachers need to know about temptation.

First, the environment: … "external factors contribute to how well people manage to resist and enact their current wants and longings." We need to try to control the environment as much as possible. So, for example, all smartphones in airplane mode.

Second, "want to" motivation is more effective than "need to" motivation in terms of helping learners avoid attractive temptations. Sometimes pointing out the utility of the sought-after knowledge helps in this regard. "How would you figure out how much fertilizer you needed to buy?" "How long is the drive from Lincoln to Houston?"

Third, "one of the reasons individuals with better self-control use less effortful inhibition, yet make better progress on their goals is that they rely on beneficial habits." In this sense, structured classrooms seem to get much better learning outcomes than do unstructured ones.

Hofstadter sums up our view of decisions:

In sum, our decisions are made by an analogue to a voting process in a democracy. Our various desires chime in, taking into account the many external factors that act as constraints, or more metaphorically, that play the role of the hedges in the vast maze of life in which we are trapped. We can will away all we want, but much of the time our will is frustrated.

Our will, quite the opposite of being free, is steady and stable, like an inner gyroscope, and it is the stability and constancy of our non-free will that makes me me and you you, and that also keeps me me and you you. Free Willie is just another blue humpback.

Hofstadter, 2007, p. 341.

      

 

   Self-Talk

Selftalk

Model of self-talk

You talk to yourself. We know that. We talk to ourselves. You really couldn't read this book if you were not able to talk to yourself. Even people with serious issues of hearing and speech who can read have a means of self-communication.

Self-talk runs after the fact. That is, it is not real time but follows subconscious processing. This is well-expressed by Gazzaniga:

No doubt you will still feel pretty much in control of your brain, in charge, calling the shots. You will still feel that someone, you, is in there and making the decisions and pulling the levers. This is the homuncular problem we can't seem to shake: The idea that a person, a little man, a spirit, someone is in charge. Even those of us who know all the data, who know that it has got to work some other way, we still have this overwhelming sense of being at the controls.

Gazzaniga, 2011

Here's a sketch of how the system works. The processors are in place. Most of the work of creating new processors involves adjusting, merging, or otherwise tinkering with existing processors. We take little note of tinkering with processors. Merging processors often leads to what we call "Aha" moments.

Aha

Cartoon of "aha" moments — from merged processors

Every time you achieve awareness (conscious access) of something, that something has a chance to be buffered or saved because there may be some follow-on processing. You can access and work with almost anything that achieves your awareness. For example, you can repeat it to yourself. There are circumstances when you will repeat it to yourself in the hope that you will achieve long-term storage of whatever you are repeating.

We find that repeating something to ourselves several times improves the chance that we will be able to recall it later. Have you ever been counting something, saying to yourself " …, forty-three, forty-four, forty-five, …" and had a "friend" come along and say "twenty-two, twenty-three, twenty-four"? Chances are you lost count and had to start again unless you focussed on keeping your count and paid no attention to your friend.

Why do we work this way, that is, with delayed awareness of processing outcomes rather than real-time awareness as we process? This probably is a matter of survival. Remember, many of your processes may be working at once. If you are focussed on the "What was that kind of bird?" processor when a lion steps up behind you, not operating your "Run like hell" process may cost you your life. In fact, lots of those survival processes seem to run outside of the cerebral cortex and can, therefore, run faster than those involving delay.

Yes, we can self-regulate and yes, we do have free will. In fact, early measures of self-regulation (willpower) turn out to be excellent predictors of behaviors later in life. It's just that they don't operate the way we think (thought) they operate. We learn about choices we have, and we learn how to make choices.

      

 

   Who Says I'm Not the Boss?

Who's the boss?

Who's the boss?

The idea that those voices in our heads are reporters has been around for nearly five decades and enjoys considerable experimental support, yet it remains one of the most controversial ideas today. We turn again to Gazzaniga:

No doubt you will still feel pretty much in control of your brain, in charge, calling the shots. You will still feel that someone, you, is in there and making the decisions and pulling the levers. This is the homuncular problem we can't seem to shake: The idea that a person, a little man, a spirit, someone is in charge. Even those of us who know all the data, who know that it has got to work some other way, we still have this overwhelming sense of being at the controls.

Gazzaniga, 2011

Plants cannot move; they have no brains. Some organisms that move also don't have brains. Recently it has been shown that Physarum polycephalum (slime mold), a single-celled organism that can be as large as 4 inches in diameter, is capable of a type of learning called habituation.

Most of the multicellular animal organisms we think of have neural tissues, and many have brains. It is not too far-fetched to think of brains as the organs that tell us when to run away. When we touch something very hot, our hands immediately pull back through what we call reflex reactions. Only our brainstems become involved.

Brains have models of their environments. When some input comes in that does not bring about a reflex, brains contextualize that information in terms of the model and, based on that, determine what to do. Do we run away? Fight? Approach? One of the possible outputs wins because it happens to be the strongest in terms of our model at that moment in time. It may be very fast — involve the brain (i.e., not be a reflex) but not involve the entire brain. It may go brain-wide, draw upon memories, and take some time before a behavior emerges. The behavior is something we have always thought of as our "choice." In fact, it was the winner of a neural competition about which we often were entirely unaware.

Consciousness emerges, and this emergence is reported to us by the voice in our heads. This does not mean that we do not control what we do! Those brains of ours started out with the job of deciding — quickly — whether we should run away. That wiring in our heads — our model — comes both from inheritance (who our human forebears were together with our in utero and life development) and from learning. Learning depends on all of our environmental histories — and extends far beyond what we've learned in school.

We do (or at least can) self-regulate. We most often can choose whether or not we will engage in criminal behavior. Consciousness emerges and reports; it does not control. The model, that thing that has developed through learning and prewiring, controls. The various options are weighed, and one usually wins out. The winner's neural responses are, at the given moment in time, strongest. Our voice usually tells us which one won. That voice may tell us what the alternatives were, but that report may be both incomplete and inaccurate.

All of our behaviors seem to be rooted in our minds. They are, therefore, almost always learnable and manipulable. We use feedback to adjust our models of what we are doing (how we are behaving). For example, if the language laboratory indicates that our Spanish pronunciation is off, we try to say again what we've just said such that our new utterance becomes more acceptable.

Sometimes what is fed back to change behavior is some instrumental output. That is, rather than see or hear our target behavior, we see or hear something that comes from a device measuring one of our outputs. For example, certain EEG feedback has been found to improve outcomes in the treatment of a variety of substance abusers.

Perhaps the most convincing experiments, by Shibata et al. (2016), involve changing what we think by external manipulation under circumstances where we have no prior knowledge of what is being changed. In this very clever series of biofeedback experiments, subjects' preferences for particular faces were manipulated. A large number of faces were first rated by the subjects. It turns out that pictures of more-preferred faces activate different regions of the cingulate cortex than do less-preferred faces. Our brains experience what seem to be random excitations of groups of cells; a long-lost memory, for example, might seem to come to mind as if from nowhere. When these occur, we sometimes choose to dwell on them. (Sometimes we just can't seem to make them go away.) Each subject was assigned to either a higher-preference or a lower-preference group. Inside the MRI device, where the subject's regional brain oxygen consumption was being monitored, a display showed a circle whose size grew as the subject's active regions approached the cingulate-cortex pattern related to either the more-preferred or the less-preferred faces. When the faces were re-rated afterward, the images scored in the direction (more or less favorable) that the subject had unknowingly been trained to match. (Shibata et al.'s description of the experiment is reproduced in the references for this chaplet.) The ratings had been manipulated. The subjects had participated only by trying to maximize the size of a circle on a screen; they had no idea that their efforts would end up affecting their ratings.

Executive is a word that is used in the educational psychology literature. It implies some form of active decision making and lends itself easily to being connected to those voices in our heads where we do say to ourselves things like — "I see A & B, which one will I pick?" Time and again it is shown that external machines can tell what our choice will be before our voice tells us. The psychology literature would benefit from either forbidding the use of the term executive or at least demanding evidence that the voice is concurrent with making the choice.

There are times, especially in the home fields of the book's authors, where two or several possibilities should be considered when solving a problem, and that each takes effort. So, there are times when you work on A, you work on B, and then a choice is made between them. The notion that you are actively controlling this as an executive is what we reject.

      

 

   We Are Born That Way

Published in 1999, The Scientist in the Crib described, in young children, the same sorts of processes we describe here in M3.

Crib

The Scientist in the Crib

Here are five quotations taken from the book:

And they [babies] also use them to make predictions about what the world will be like. But once babies have done this, they can compare what they experience with what they predicted. When there are discrepancies, they can modify their representations and rules. When they see a new pattern in their experiences, they can create new representations and rules to capture that pattern.

Babies start out linking their own internal feelings to the expressions of other people.

Babies start out believing that there are profound similarities between their own mind and the minds of others.

In each case the things babies already think influence where they will go next. They determine which events will engage them, which problems they will tackle, which experiments they will do, and even which words they will listen to. Then babies change what they think in the light of what they learn.

If we are right that this is how babies and young children work, then maybe we adults work this way, too.

Developing models is not something we learn as adults; instead, we are born that way.

      

 

   Why Now?

Libet

Benjamin Libet, 1916-2007

Developmental psychologists used The Scientist in the Crib (1999) to assert that our "minds" work by developing models and using predictions from those models to navigate through life. The notion that self-talk follows rather than supervises decisions also is not new. Work by Benjamin Libet, published in the mid-1970s and based upon EEG data, concluded that there were at least some situations in which a person's final behavior could be determined before that individual knew for themselves what the behavior would be. Why, then, has it taken so long for a model based upon those results to take hold? We can hear it even today — the issues have to do with a construct called free will.

Free will is the ability to choose between different possible courses of action. It is closely linked to the concepts of responsibility, praise, guilt, sin, and other judgments which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Traditionally, only actions that are freely willed are seen as deserving credit or blame; if there is no free will, there is no retributive justification for rewarding or punishing anybody for any action. …

Wikipedia, October 27, 2015

If experimenters can predict reliably what you will do before you yourself know what you will do, how can free will operate? You are not privy to the immediate processing of the various processors you have in your brain — a few of which have been built in but most of which you have developed through learning. In a given situation, you may opt for A or B, or you may decide to go to the bathroom or refrigerator. Your consciousness tells you about the processors' outcomes — I will choose A, I will choose B, I am hungry.

Much of your behavior is very predictable. Some people are consistently honest and others dishonest. There are times, however, when even the most honest cheat a bit and when the least honest behave honestly. Although certain routine human behaviors are improbable for an individual, they are not impossible.

Gazzaniga

Michael Gazzaniga

In his book, Who's in Charge? Free Will and the Science of the Brain, Gazzaniga argues persuasively that we are responsible for our behavior, and that some transgressions against the norms of society are not tolerable. We are not entitled to an occasional murder. Bringing about behavior change remains the holy grail for many situations. For example, for obese people, making trips to the refrigerator less frequent is often a remarkable challenge. After all is said and done, humankind is not good at getting people to change inappropriate habits.

It is time that educational psychologists acknowledged the fact that self-regulation is a before-the-report event. If we expect our students to behave in a manner described as well self-regulated, then our instruction must include efforts to bring that about. That is, we must help them put into place those processors that are involved in self-regulation. For example, suppose you want to encourage class participation and have students better able to decide how their own participation stacks up against expectations. Ask students to create a "participation rubric" for the class. As the semester starts, have them define what participation should look like. Finalize a rubric based on their group work. To encourage self-regulation, have students evaluate themselves once or twice during the semester to ensure that they have examined their mental models of their participation performance and have calibrated that performance with the expectations set by the rubric.

As with nearly all cases of instruction, it doesn't matter that self-talk runs behind decisions. What we do during instruction rarely needs to take this lag into account.

      

 

   The Eyes Have It

ChasWheatstone

Sir Charles Wheatstone, 1802-1875

Vivid demonstrations of how our mental models dictate our perceptions come from split visual field experiments. Such experiments were described by Sir Charles Wheatstone in 1838. Normally, our left and right eyes see two slightly offset versions of the same scene. These slightly different images allow us to perceive depth, or how far away something is. Our brain model is strongly biased towards this outcome. In split visual field experiments, a physical barrier is placed between the left and right eyes and a very different image is presented to each eye: for example, the left eye sees the image of a face and the right eye receives the image of a house.

BinocularRivalry

An example of a binocular rivalry experiment.

If we look at the lower levels of the visual cortex (brain tissue called the V1 region), both the set of neurons that would be triggered by the house image and the set of neurons that would be triggered by the face image show excitation. We might guess that what we perceive will be a blending of the house and face images. However, our life experience has taught us that the left and right visual fields are very similar. What we end up reporting seeing is either the house or the face for a period of time, after which our perception switches, from house to face or face to house. When this experiment is performed with monitoring of the neural pathways, the forward information traffic immediately before the switch in perception is seen to be very high. The forward information traffic carries the difference between the model and the perceptual input. Our brain model might say house, but the input from one half of the visual field does not agree with the predictions based on this model, and we get a large prediction error. The brain processes this large signal and after some time switches the model to that of a face, which in turn causes an increase in a different prediction error — followed by another switch.
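The switching dynamic just described can be sketched as a toy simulation of our own devising (the threshold, error rates, and noise values are invented for illustration, not taken from the neuroscience literature): the active model predicts one image, the mismatched eye contributes prediction error at every step, and when the accumulated error crosses a threshold the model flips.

```python
# Toy sketch of binocular-rivalry switching driven by accumulated
# prediction error. All numeric values are illustrative assumptions.
import random

random.seed(1)

def simulate_rivalry(steps=200, threshold=5.0):
    model = "house"      # the currently active perceptual model
    error = 0.0          # accumulated prediction error
    percept_log = []
    for _ in range(steps):
        # One eye always disagrees with the active model, so every step
        # adds some prediction error (plus a little noise).
        error += 1.0 + random.uniform(-0.5, 0.5)
        if error > threshold:
            # Large accumulated error: switch to the rival model.
            model = "face" if model == "house" else "house"
            error = 0.0
        percept_log.append(model)
    return percept_log

log = simulate_rivalry()
switches = sum(1 for a, b in zip(log, log[1:]) if a != b)
print(f"{switches} perceptual switches in {len(log)} steps")
```

Note what the sketch reproduces: the percept alternates in stretches of one model at a time rather than blending the two images, just as subjects report.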

Because models are so important in our understanding of the world, it is important that teaching involve both the generation and validation of models. Students make "conceptual mistakes" when they are using an invalid model. By presenting information that shows the invalidity of the model, a teacher can help the student correct it.

      

 

   Deception

There are situations where we have a strong model in place of how something works. In these cases, in spite of incoming data, the prediction of the model prevails. This brings into question the adage that seeing is believing. Here are three simple examples where incoming data are overruled to make the context fit our model.

TouchNose

Touch your nose with your forefinger

Touch your nose with the tip of your forefinger. You sense this "touching" in your nose and your finger simultaneously. That can't be so. The nerves in your finger must transmit a signal for about a meter before they reach your brain, so those signals can't arrive concurrently. The "awareness" you have for this event is constructed from your model of the event since we all know that there is just one event, an event in which there is a "touch."

MullerLyer

Example of Müller-Lyer illusion

Examine the figure above. It seems as if one of those central lines is longer than the other, but they are equal in length. This effect is learned, as evidenced by its dependence upon culture, with people from some cultures perceiving a larger difference than others. An explanation by Gregory is that the visual-system processes that judge depth and distance generally assume that the "angles in" configuration corresponds to an object that is closer, and the "angles out" configuration corresponds to an object that is far away. For people who live out of doors all of the time, the difference is small or zero. If you were such a person, you would not be reading this book.

McGurk

McGurk effect

The last of the three examples we are providing to illustrate differences between the actual incoming information and what our awareness tells us is in the McGurk effect as illustrated in this video. Clearly, what we see affects what we think we hear.

What our awareness tells us is "information" also depends on previous experience. In psychology, this is referred to as priming. In Wikipedia, one of the illustrations is that seeing the word "doctor" makes our response to the word "nurse" faster.

From an information theory perspective, priming corresponds to identifying a smaller set of likely outcomes and contexts. The notion of "deception" instructs us to be careful about the content of our instructional messages. We should view all such messages, whether visual, textual, or auditory, from the perspective of our students and their personal experiences.

      

 

   I Didn't See That

There are several ways in which some input can escape our awareness. For example, follow the instructions in this video.

Awareness

Awareness test video

The original copyrighted video version of this experiment involved a "gorilla" coming out of an elevator, beating its chest, and walking away. The majority of viewers did not "see" the gorilla. Perhaps more surprising, when a relatively large image of a gorilla was inserted into an x-ray, 83% of the examining radiologists failed to notice the gorilla. Eye tracking revealed that most who missed the gorilla looked directly at it. The authors concluded, "Thus, even expert searchers, operating in their domain of expertise, are vulnerable to inattentional blindness."

The world provides a continuous stream of information to our senses. The information connected with 'the bear' is large, almost akin to the sun rising in the west, yet most of us take no notice of the bear. Why? The brain seems to organize incoming information in terms of models and updates the model based on the deviation of the sensory input from the predictions of the model. The mechanism by which particular models are active at any given time is not known. However, when a particular model is activated by instruction ("count the number of passes ..."), the brain looks for inputs from the environment that correspond to that particular model. The specificity of the instruction helps delineate the set of sensory inputs to be processed.

Still another type of "blindness" occurs after we start processing some sensory input. This is the so-called "attentional blink." Once we are in a state of conscious access, further visual sensory inputs go undetected for periods on the order of 180-450 milliseconds. While quite a bit of input can be "crammed" into a processing event, once that event starts it is as if all of the visual action we missed took place while we were blinking — hence the name.

AirTraffic

Air traffic controller at work

A problem arises when some critical task requires vigilance, as in the case of air traffic controllers. These folks must focus on screens and note changes, dramatic as well as subtle. Brief switching of tasks seems to enable sustaining vigilance.

Since attention is often driven by preconceived goals (e.g. count the number of passes in the video), teachers can help students improve their learning by setting goals for students ("What I want you to pay attention to is … "). While setting expectations and goals for students is very useful for teaching content, teachers also should teach students to set their own goals for learning and reading so they can become independent learners.

      

 

   It Seemed Like Forever

We can measure time and know how long events take. Calendars tell us when 365 days have passed, and we are again at our birthdays or wedding anniversaries or 9/11. Clocks tell us when it is bed time; clock alarms awaken us. In general, however, we are bad at telling time. Prisoners placed in solitary confinement lose their sense of timing; they report that only 40 seconds have gone by after 100 seconds have elapsed.

We can distinguish two sounds separated by about 2 milliseconds. Two momentary sparks separated by only 30 milliseconds are usually perceived as a single spark. This slow response is the basis of TV, for example, where screens are usually refreshed at rates of 30 or 60 frames per second. Sixty fps is about 17 milliseconds per frame.
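The frame-time arithmetic above is easy to check directly:

```python
# Milliseconds per frame at common screen refresh rates,
# matching the figures cited in the paragraph above.
for fps in (30, 60):
    ms_per_frame = 1000 / fps
    print(f"{fps} fps -> {ms_per_frame:.1f} ms per frame")
```

Both frame times sit comfortably below the roughly 30-millisecond window within which two flashes fuse into one, which is why successive frames appear as continuous motion.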

Our sense of time is usually thrown way off when events are very emotional.

Accident.

Moments of Extreme Danger

In threatening situations, an area of the brain called the amygdala kicks into high gear, commandeering the resources of the rest of the brain and forcing everything to attend to the situation at hand. When the amygdala is in play, memories are laid down with far more detail and richness than under normal circumstances; a secondary memory system has been activated. After all, that's what memory is for: keeping track of important events, so that if you're ever in a similar situation, your brain has more information to try to survive. In other words, when things are life-threateningly scary, it's a good time to take notes.

Eagleman, 2015

An explanation for this observation is that when we are clearly in dangerous situations, our brains automatically start recording aspects of the situation that might otherwise go unrecorded. When these are "played back," they seem to take longer because there is more information.

Our sense of timing, then, should be thought of as subject to interpretation and deception, just as we may miss a bear walking through a scene where we were counting action events or misjudge line lengths in an illusion image.

      

 

   Bingo

Bingo

Bingo involves matching numbers

When you see, hear, touch, or taste something, what is it that brings up your memories related to that input?

A sensory input is a cue that partitions our memories. Some sensory inputs are associated with many memories. The sound of a car is probably associated with many memories. However, the number of times you heard a car while the smell of freshly baked bread was in the air is probably much more limited. Because this set is smaller, it has a smaller average information content and is, therefore, easier to search. You might find it easier to recall the color of the car that passed by while you were standing outside the bakery if you are cued with the smell of freshly baked bread.
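Shannon's formula makes this concrete: for n equally likely candidates, the average information needed to pick out one of them is log2(n) bits, so a cue that shrinks the candidate set shrinks the entropy of the recall task. The set sizes below are invented purely for illustration.

```python
# Shannon entropy (in bits) of identifying one memory out of n equally
# likely candidates. The set sizes are invented for illustration.
import math

def entropy_bits(n_candidates):
    """Average information needed to pick one of n equally likely items."""
    return math.log2(n_candidates)

car_sound_only = 10_000   # memories cued by a passing car (assumed)
car_plus_bread = 4        # memories with car sound AND bread smell (assumed)

print(f"car sound alone:   {entropy_bits(car_sound_only):.1f} bits")
print(f"car sound + bread: {entropy_bits(car_plus_bread):.1f} bits")
```

The joint cue leaves a far smaller, lower-entropy set to search, which is the sense in which the bakery smell makes the car easier to recall.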

A sensory input creates a series of neural events (spikes) over time. These can be thought of in terms of a pattern they form. An example of such a pattern is in this video. Aspects of this pattern are stored when we remember things. Recalling something that was stored involves matching the current incoming pattern with the stored pattern.

This can be illustrated using simple experiments in priming:

Priming is an implicit memory effect in which exposure to one stimulus influences the response to another stimulus.

Wikipedia, October 27, 2015

For example, see Priming (and more).

      

 

   Beyond the Bounds

Boundary Extension

Illustration of Boundary Extension

Boundary extension "is an error of commission in which people confidently remember seeing a surrounding region of a scene that was not visible in the studied view … ." In the image shown, a subject might sketch C when they had seen A, or sketch D when shown B. We construct mental models of our world. To be helpful, these models are created in ways that allow us to predict beyond what our perceptions actually provide as inputs. The phenomenon of boundary extension illustrates this.

We often are confident that we've seen or heard or felt something that, in fact, we've not seen, heard, or felt. It is important that teachers be aware of this as an aspect of normal human behavior rather than something that is dishonest.

      

 

   Liar, Liar, Pants on Fire

What do we know about human memory?

Brian Williams & Ben Carson

Brian Williams and Ben Carson

Brian Williams had served as the principal onscreen person for NBC News, and noted pediatric neurosurgeon Ben Carson was a leading Republican presidential candidate for the 2016 U.S. election. Both had their integrity questioned by news reports. This brings into question both how human memory works and how most people believe it works.

The everyday mental model that most of us have of our memories is that of a printed page in a book. Book content, when printed using quality materials and stored appropriately, really is invariant. Human memory is not like that. It is more like pages in Wikipedia (edited over time) or pages of Google searches (that can be very different over time).

Our memories are reconstructions. Every time we bring up a memory, we alter how we will recall it the next time. Our memories evolve over time, and they tend to be altered to fit our model of the person we are today rather than the one we held when the memory was first created.

You can have a perfect recollection of something. For example, you can recite prayers, pledges, and even some especially noteworthy speeches or poems. Many young Muslim children take on the formidable task of memorizing the Quran. To do this, you need much practice with many repetitions. Most of the things we remember about our lives were one-time events — first kiss, trip to Paris, automobile accident, and so forth. These don't get repeated. There usually is no "perfect" record that allows us to repeat and practice the memory.

When the press "vets" a person, they have access to the memories of many other people and often to clear records — print, video, audio. When the press holds the view that a person should have a memory that reflects such records, they buy into the "printed page" model of human memory, and that's something none of us humans have. (Some of us do have much better memory abilities than others.)

Memories are always reconstructions. Furthermore, they are not reconstructions from 'stored data' but, instead, from stored models that were constructed from data (including prior memories). Animal model studies suggest that, for two memories to be linked together, they must have some "overlapping neuronal ensemble." In the absence of such an overlap, each memory can be recalled separately without connection.

… Donald Thomson, an Australian psychologist, was bewildered when the police informed him that he was a suspect in a rape case, his description matching almost exactly that provided by the victim. Fortunately for Thomson, he had a watertight alibi. At the time of the rape, he was taking part in a live TV interview — ironically, on the fallibility of eyewitness testimony. It turned out that the victim had been watching Thomson on TV just before the rape occurred and had confused her memory of him with that of the rapist.

The Guardian, 2010

Some memories seem to stick with us for a lifetime. For example, most New Yorkers can recall with some detail their activities on September 11, 2001. Experiments show that the details often become changed in spite of the certainty we have about the accuracy of our recollections.

Johnson

Marcia Johnson and Elizabeth Loftus, memory experts

Sometimes our memories are reconstructions of events that never happened to us. Two renowned memory experts, Marcia Johnson and Elizabeth Loftus, each tell a personal story of a vivid memory of an event involving them that later proved to be incorrect. Unfortunately, these are labeled false memories — but the memories are very real to the people who have them, and they were very real to both Johnson and Loftus until strong evidence came along indicating that they were inaccurate. Loftus, in fact, has spent much of her career showing how choices of words can influence memory, and how courtroom lawyers can choose words that influence eyewitness testimony.

Before the printing press, memory was critically important. In ancient Greece, poets like Homer depended upon recall for stories such as the Iliad and the Odyssey. Writing of words (not just numbers) was introduced around 3200 BC, but was restricted to scholars. Much of what was recorded were business records; ancient buried texts most often reveal counts of sheep and goats and such.

While memory remains important, it is clear that technologies (language, writing system, printing press, Internet) change the demands on human memory. What was essential a thousand years ago in order to discuss a text effectively (memory of the whole text) is potentially less critical now when we can easily refer back to texts in paper or digitally. This does not mean that students are learning (memorizing) less; instead it means that they need to memorize a different subset of knowledge linked to more complex operations and procedures.

Memory theorists use a wide variety of labels in an attempt to categorize different types of memory. So recalling a prayer is categorized differently from riding a bicycle. At root, however, the underlying mechanisms of all kinds of memory involve the same neural changes that allow aspects of an electrophysiological event to be reenacted within a new electrophysiological event that we sense in some way as being the same as the original event.

We tend to be very good at recalling things in our everyday lives wherein we use our so-called autobiographical memories. These events usually enjoy a great deal of help. For example, the contexts tend to be similar: you return to the same home, you eat from the same dishes, you eat similar meals, and you hear the same voices. The connections that support retrieving these stored memories are many and are frequently used.

RedFireTruck

Red fire truck

Even the nature of what is stored is in some doubt. For example, suppose you were trying to recall "red fire truck". Is part of this stored as "red truck" or as "red" + "truck"? There is evidence that:

… the language system is sensitive to the distribution of linguistic information at grain-sizes beyond individual words.

Janssen & Barber, 2012

Especially in schools, there are advantages to establishing accurate memories from the start. This will require fewer repetitions to solidify the core elements of the memory or automatize the procedure than working from more random experiences. If the incidental learning from a hands-on activity produces a sub-optimal knowledge representation, it will be harder to move the student to a more accurate and productive knowledge structure in subsequent instruction. This is one reason direct instructional approaches like studying worked examples or guided practice produce better learning of problem-solving skills than pure incidental learning from some hands-on activity.

Putting all we know about remembering together in one biophysiological model of human recall is extremely challenging.

The neuroscience of memory is a complex and contentious area, but most researchers agree on a broad-brush account that goes something like this, at least for episodic memories, or memories of events. These memories are initially encoded and stored mostly in the hippocampus, deep inside the temporal lobe of the brain. For long-term storage, memories are filed away to other areas, including the neocortex, the thin sheet of tissue on the surface of the brain. A memory of any given event, the thinking goes, is represented by a sparse and scattered network of neurons, such that the sights, sounds, and emotions associated with the experience may each reside in a different location. To recall that memory, the brain must somehow reactivate just the right subset of neurons. Many details of this process are not known (or are disputed). Even so, some researchers say it's time to revise some aspects of the standard view, such as the notion that the hippocampus is not involved in retrieving older episodic memories, and that memories become fixed and unchangeable once transferred to the neocortex. Newer work suggests a far more fluid role of memory, and one in which retrieval plays a crucial role in shaping memory over time.

Miller, 2012

The above quote fits well with the model we published in our earlier book, the ULM. This book seeks to stress two additional aspects — a model in which we predict outcomes and then measure errors (differences between the predicted and observed outcome), and a model in which consciousness follows rather than mediates decision. The prediction aspect of M3 plays a role here. As already noted, the best strategies allow us to reduce the possible elements associated with a concept. One such strategy is to build a story that weaves together the elements of the concept. A story, by definition, restricts what can happen in a particular sequence. Each element in the sequence of the story is a constraint on what can follow, thus reducing the number of possible outcomes at each step. The danger of misremembering increases, however. An inaccurate understanding that fits the story will more easily become part of the learner's "knowledge."

Think about some isolated fact uttered by a teacher in school just once. That has the context of school, but little else to connect to. In the absence of repetitions, it has little chance for long-term storage. That can be changed. For example, if we have a deep personal interest in something, a seemingly stray fact that we happen to perceive of as key can fit into a place where connections abound. There also are emotional factors; we are likely to remember when we are greatly embarrassed or unusually happy.

Putin & Clinton

Vladimir Putin & Bill Clinton

Finally, there are such things as lies. Vladimir Putin denied Russian troop involvement in Ukraine, which required a special definition of troop, and Bill Clinton depended upon an unusual definition of sexual relations. In both cases we suspect that the denials were not the result of faulty reconstructions. The following abstract reflects important research about lying:

Dishonesty is an integral part of our social world, influencing domains ranging from finance and politics to personal relationships. Anecdotally, digressions from a moral code are often described as a series of small breaches that grow over time. Here we provide empirical evidence for a gradual escalation of self-serving dishonesty and reveal a neural mechanism supporting it. Behaviorally, we show that the extent to which participants engage in self-serving dishonesty increases with repetition. Using functional MRI, we show that signal reduction in the amygdala is sensitive to the history of dishonest behavior, consistent with adaptation. Critically, the extent of reduced amygdala sensitivity to dishonesty on a present decision relative to the previous one predicts the magnitude of escalation of self-serving dishonesty on the next decision. The findings uncover a biological mechanism that supports a 'slippery slope': what begins as small acts of dishonesty can escalate into larger transgressions.

Garrett et al., 2016

      

 

   Forgotten

ForgetMeNot

… Me Not

In a German legend, God named all the plants when a tiny unnamed one cried out, "Forget-me-not, O Lord!" God replied, "That shall be your name."

Wikipedia, October 27, 2015

Sometimes we really want to remember a task, so we have all sorts of reminders. Today there are numerous electronic reminders. Once there was the notion of tying a string around your finger as a reminder.

ReminderKnot

Cartoon model of reminder string

We do forget. We use strategies to keep us from forgetting.

On the other hand, forgetting sometimes is useful. We have a way of losing obsolete facts. For example, when we move we may soon lose the mailing zip code of our former home or job. Phone numbers are labile in this way. Phone numbers are also an example of technology making certain aspects of memory obsolete, as most mobile phone users do not need to remember as many numbers.

Context means a great deal. We may see someone out of context but not be able to place him or her. A person working on the Kiwanis pancake line introduces herself as your travel agent with whom you deal several times each year.

All of us seem to be better at remembering everyday events from our lives. What did you have for dinner last night? Most of us can come up with this. What did you have for dinner 10 days ago? Well, that's a bit shakier. What did you have for dinner 30 days ago? Now we are getting out of normal range. Of course, if that were a Sunday and you always have meatballs and spaghetti on Sunday, well, no problem. How about five years ago? Well, here the chances are very slim. Suppose five years ago were your birthday and you were taken to a special restaurant for that event. In that case, you might recall dinner details from soup to nuts.

Much of what we do in this kind of recall is to use one model to reconstruct another model: "It was Sunday, so it must have been meatballs and spaghetti." A very small number of people have an incredibly good long-term recall for events. They have been labeled as having highly superior autobiographical memory (HSAM). They have been the subject of a few television programs, and one has written a book.

Henner

Marilu Henner

Marilu Henner, an actress, has written about memory. She describes a person who had a good memory but chose to develop it into an excellent memory. Most HSAMs engage in strategies to help their memory. None of them is known to have converted their rather remarkable skills into something particularly economically profitable.

As it happens, if you were at some event with one of the HSAMs, your recall of that event two days later would be about the same as theirs. What's different is that, two months later, their recall is likely to be as good as it was at two days, while yours is likely to be thin if not gone completely. HSAMs have been described not so much as good rememberers but as really poor forgetters.

When testing knowledge, some test items are more powerful than others. For example, when seeking the term to attach to a given definition, consider fill-in-the-blank and multiple-choice. Fill-in-the-blank reduces the possible number of outcomes by providing a context. Multiple-choice reduces it further by providing the term along with some distractors; you give a context and limit the number of outcomes.
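The size of that outcome reduction can be made vivid with a blind-guessing baseline. The 200-term course vocabulary below is an invented number for illustration.

```python
# Chance of answering correctly by blind guessing under each item format.
# The 200-term vocabulary is an invented illustration.
vocabulary = 200   # plausible terms a student might write in (assumed)
mc_options = 4     # one correct answer plus three distractors

p_fill_in = 1 / vocabulary
p_multiple_choice = 1 / mc_options

print(f"fill-in-the-blank guess rate: {p_fill_in:.3f}")
print(f"multiple-choice guess rate:   {p_multiple_choice:.2f}")
```

A student guessing blindly on multiple-choice items is fifty times more likely to be right than one guessing on fill-in-the-blank, which is one reason the two formats measure different strengths of knowledge.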

Another measure of learning is time to relearn. It takes less time to relearn something that once was familiar than to learn it the first time. A brief refresher on some once familiar material may be all that is needed to improve performance on tests using multiple choice or fill-in-the-blank items.

There often are specific memory tasks connected with one's occupation. London taxi drivers are required to know routes in that large city, and the relative volume of brain tissue connected with storage of that information is larger for London cabbies than for most of us. Systematic studies of restaurant waiters show that they use strategies such as seat location to recall diners' orders. When the diners change seating positions, the accuracy of waiters' recall is lowered. Hours after the meal, their recall of orders is poor; they seem to have erased the old orders so as to make room for new ones.

Memories become labile when recalled. Some reports of events, though recalled with great certainty about their correctness, have evolved. Everyone reading this book almost certainly remembers something of 9/11. Everyone remembers that there was a disaster, but experiments show that recall of the details evolves. In fact, we seem to have greater certainty about those events in our lives connected with strong emotion — such as the 9/11 attack.

Can we erase a memory? Perhaps. It seems that, each time a memory is recalled, it is restored as if it were a new memory. Recent animal experiments have shown that, when a memory is brought up, it can be reconsolidated along with something that leads to its extinction. It is possible to use certain drugs to disrupt and thereby erase some memories. Needless to say, as of 2015 there is a tremendous amount of research in this area, much of which is focused on treating posttraumatic stress disorder (PTSD).

      

 

   Sleep

Many people seem to think that our brains shut down when we sleep. This simply is not true. Your brain shuts down when it is dead, and brain death is irreversible. Some very specific tests for brain death are becoming accepted.

Sleep

Sleep

Being asleep is very different from being awake. A study of C. elegans, a creature with just 302 neurons and an established genome, showed that "sleep in C. elegans is a global brain state in which about 75% of neurons displaying activity during wakefulness become inactive."

Our senses behave very differently. For example, an odor that would elicit a powerful response in a person who is awake is likely to escape notice during sleep. Auditory and visual inputs are also processed differently during sleep. Parents can sometimes sleep through all sorts of sounds yet awaken upon hearing the faint whimpers of their infant.

Awareness and wakefulness (arousal) are different.

Awareness vs Wakefulness

Graphic model of awareness versus wakefulness

Awareness means having conscious experience, usually of your environment; wakefulness involves being reactive to stimuli.

The new technologies offer unique ways of comparing awake and sleep states. It is possible to input energy into a human's brain using transcranial magnetic stimulation (TMS). When a person is awake, TMS usually leads to a spread of activity across many regions of the brain. When asleep, the stimulation remains local and fades quickly.

Sleepwalking is an unusual phenomenon in which people perform activities as they might while awake but are, nevertheless, in certain stages of sleep. They display a high level of wakefulness but a low level of awareness.

SleepWalk

Cartoon model of sleepwalking

Sleepwalking is contrasted with REM sleep, or rapid eye movement sleep.

REM

Cartoon model of REM sleep

During REM sleep, there is high awareness but low wakefulness. You sense that you are aware of your environment — living the dream so to speak. We often run through events of our day during REM sleep. Nightmares occur during REM sleep.

Even though we don't respond to odors during sleep as we would while awake, we still perceive them.

We cued new memories in humans during sleep by presenting an odor that had been presented as context during prior learning, and so showed that reactivation indeed causes memory consolidation during sleep. Re-exposure to the odor during slow-wave sleep (SWS) improved the retention of hippocampus-dependent declarative memories but not of hippocampus-independent procedural memories. Odor re-exposure was ineffective during rapid eye movement sleep or wakefulness or when the odor had been omitted during prior learning.

Rasch et al., 2007

As much time as we spend in the act of sleeping, one might think that we would know more about that process. We don't know that much. Only recently it was shown that mice use sleep to "cleanse" their brains of spent metabolites. The study authors conclude that "the restorative function of sleep may be a consequence of the enhanced removal of potentially neurotoxic waste products that accumulate in the awake central nervous system."

Because of the interdependence of what we call learning and memory, any and all connections between sleep and memory take on special importance in models about learning. Storing something directly and immediately in memory is rare. Nearly everything we remember goes through a stage called long-term potentiation on the way to a stage called memory consolidation.

In a technical sense, sleep is divided into phases. So-called slow-wave sleep seems to involve a transfer of memory information from the tissue mass called the hippocampus to regions of the tissue mass called the neocortex. So-called REM sleep (rapid eye movement sleep) seems to involve synaptic changes in the neocortex. While the specifics remain elusive, there is direct evidence that the structures of synapses change during sleep.

Based upon EEG measurements during sleep, certain patterns are related to activities during wakeful learning. The "playback" rates of these patterns during REM sleep are much faster than they were during their real-time acquisition. This might even form the basis of one of many reasons why we need sleep. If storage in memory requires replay over the same neuronal networks that achieved long-term potentiation, the process of strengthening those connections might well interrupt otherwise normal life. If we were awake, activating those same neural circuits might have different consequences than doing so while we sleep.

Pulling an "all-nighter" in preparation for an examination may lead to success in terms of loading fast hippocampal memory. Having that learning "stick," which amounts to moving it into neocortical-based memory, probably requires several exposures with intervening periods of sleep. In fact, 42-hour sleep deprivation experiments have shown varied impacts on different brain regions. They suggest larger negative impacts on the anterior regions, the ones where we suspect that things are put together and from which consciousness arises.

There is a well-known saying, often heard as advice when someone faces a substantive decision: Why don't you sleep on it? This is consistent with the notion that, during sleep, we put things together. The notion that sleep promotes insight in problem solving has empirical support. As early as 1932 it was reported that we sometimes "reorganize" memories during sleep. Reorganizations can lead to both desirable and undesirable consequences. We often integrate related but initially distinct memories into more powerful wholes. Some of the worst examples of false memories occur as the result of reorganizations during sleep.

This is one of the reasons a good night's sleep is so important for a child's long-term success in schooling. Consistently good sleep increases transfer to long-term memory and supports problem solving. It suggests that stretching instruction on a concept over multiple lessons is better for long-term retention than a longer single-session treatment. It also helps explain why one-shot professional development for teachers is so ineffective.

      

 

   Waking Up

WakingUp

Waking Up

We all are familiar with the phenomenon of waking up. Many notions and rituals are connected with this daily event. While a great deal is known about awakening, a great deal also remains to be discovered.

There is a phenomenon called sleep inertia:

Sleep inertia is a physiological state characterised by a decline in motor dexterity and a subjective feeling of grogginess immediately following an abrupt awakening.

Wikipedia, November 19, 2015

Devices have been developed (so-called "smart" alarm clocks) to wake us when we are in light stages of sleep or in transition to REM sleep. The mechanisms used by these devices vary. They include motion detectors and even headbands measuring brain waves. Awakening from deep sleep seems to lead to increased sleep inertia.

Our interest in this chaplet is to remind you that, when you awaken, you do not usually have a model loaded and running in your brain.

How do you load the model for your day? Well, you might remember some special event for the day and begin your day by loading that. For many of us, the day is started with a ritual on autopilot. We engage in a series of acts, often connected with maintaining personal hygiene, and all or nearly all of which are automatic. We might use that time to think about the day — much as we might while walking or driving to work. We may include as part of our ritual the review of some calendar information. In many families, scheduling involves "writing it on the calendar."

When you arise in familiar surroundings, a variety of visual, auditory, and even olfactory cues might lead you to choose a particular model to load. In unfamiliar surroundings, say while traveling, your first "thoughts" might be to recall how you got to where you are and to start loading your model from that point.

Waking up is a time when our minds become reloaded; a new model becomes active. There likely are other times during the day when our minds reload, for example, after a period of daydreaming (mind wandering). As teachers, we should appreciate that our students may need some help reloading during instruction. This amounts to thinking about how we handle breaks during long instructional sessions.

      

 

   Mind Wandering

daydream

Daydreaming

Most books about learning and teaching overlook something essential to how we behave as humans: our minds wander. In M3 we stress the notion that our minds develop a model and use that model to do whatever we are trying to do. When we awaken after sleep, we need to load in a fresh model. What about when our minds wander? Some estimates place the percentage of time during which our minds wander at nearly 50%.

Our data suggest that when trying to engage attention in a sustained manner, the mind will naturally ebb and flow in the depth of cognitive analysis it applies to events in the external environment.

Smallwood et al., 2008

Some studies of daydreaming have led to the conclusion that daydreaming leads to unhappiness:

In conclusion, a human mind is a wandering mind, and a wandering mind is an unhappy mind. The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost.

Killingsworth & Gilbert, 2010

Claims about happiness notwithstanding, there is evidence that creativity is connected to daydreaming. Later chaplets will discuss this in detail. We store models rather than data, and we recall those models (not the data). During daydreaming, we are more likely to bring back two models not heretofore connected, and things brought into working memory together stand a chance of becoming connected. There is evidence supporting this:

The observed parallel recruitment of executive and default network regions — two brain systems that so far have been assumed to work in opposition — suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Christoff et al., 2009

In school, one of the most common conditions that lead to mind wandering is boredom.

We all experience days in which it is nearly impossible to generate interest in activities; nothing maintains our interest, focus, or attention, and everything seems boring. It is such a universal human experience that we dedicate very little time or attention to actually defining what it means to be bored.

Goldberg et al., 2011

Goldberg et al. distinguish boredom from other states such as depression or apathy. Boredom in schools is usually not a good thing, but it is not always a bad thing:

Boredom is both a warning that we are not doing what we want to be doing and a "push" that motivates us to switch goals and projects. Neither apathy, nor dislike, nor frustration can fulfill boredom's function … .

Elpidorou, 2014

As a teacher, you need to assume that your students will engage in daydreaming, and you need to include ways to bring them back as part of your instruction. While there are many ways to do this, producing a loud but unexpected noise can work wonders in a large lecture class.

      

 

   I Believe

Gandhi on Beliefs

Gandhi on Beliefs

There is a sense that the neural structures most deeply involved in establishing our sense of self, our minds, are involved in mind wandering. These structures may also "contain" the core of our beliefs. If you are a teacher, there are some beliefs your students hold that you will try to change.

In The knowledge illusion: Why we never think alone, Sloman & Fernbach stress the gaps in our knowledge and how much we come to depend upon our communities to fill those gaps. For example, we as scientists trust the reports of other scientists in the literature so that we don't need to repeat all of their work for ourselves. When something is reported that we doubt, we may try to repeat the experiments or design other experiments to elucidate what we think might be discrepancies. In other words, we are members of a cultural group called scientists.

This is the power of culture. Our beliefs are not our own. They are shared with our community. And this makes them really hard to change.

Sloman & Fernbach, 2017

Teachers might think that beliefs inconsistent with larger knowledge bases can be addressed simply by providing information. This is called the information deficit model. As it happens, providing information is often not nearly sufficient to change beliefs. A recent study addressed the ease with which these changes can be made. Essentially all such experiments are contrived; how would you feel if you were wrapped up in some huge MRI machine?

A recent study suggested that, in an environment where people hold their own beliefs in a milieu where separate sets of "facts" can be brought to bear to support opposing beliefs, science curiosity helps address bias. Consider the abstract of that paper:

This article describes evidence suggesting that science curiosity counteracts politically biased information processing. This finding is in tension with two bodies of research. The first casts doubt on the existence of "curiosity" as a measurable disposition. The other suggests that individual differences in cognition related to science comprehension — of which science curiosity, if it exists, would presumably be one — do not mitigate politically biased information processing but instead aggravate it. The article describes the scale-development strategy employed to overcome the problems associated with measuring science curiosity. It also reports data, observational and experimental, showing that science curiosity promotes open-minded engagement with information that is contrary to individuals' political predispositions. We conclude by identifying a series of concrete research questions posed by these results.

Kahan et al., 2016

The Kahan et al. paper suggests ways in which science curiosity can be measured.

Still another study looked at the impact of "moral arguments" in deciding about contentious issues. As it happens, people may use different moral standards when viewing such issues. The study concludes:

… Recognizing morality's influence on political attitudes, our research presents a means for political persuasion that, rather than challenging one's moral values, incorporates them into the argument. As a result, individuals see value in an opposing stance, reducing the attitudinal gap between the two sides. This technique not only substantiates the power of morality to shape political thought but also presents a potential means for political coalition formation.

Feinberg & Willer, 2015

The notion of fake news emerged in 2016. Clearly, beliefs depend on one's trust in one's community. In this context, the results of an experiment published by the American Press Institute are not surprising:

The experimental results show that people who see an article from a trusted sharer, but one written by an unknown media source, have much more trust in the information than people who see the same article that appears to come from a reputable media source shared by a person they do not trust.

The identity of the sharer even has an impact on consumers' impressions of the news brand. The study demonstrates that when people see a post from a trusted person rather than an untrusted person, they feel more likely to recommend the news source to friends, follow the source on social media, and sign up for news alerts from the source.

American Press Institute, 2017.

For the most part, teachers deal with nonpolitical statements, and resistance to belief change is lower. However, some issues cross into the political arena. For example, a biology teacher discussing evolution may cross into the political zone.

      

 

   Motivation

Motivation.

Every teacher has a definition for the word motivation. The problem is that, when asked for these definitions, teachers in a group don't report the same definition. The photo suggests some features of the teacher-reported landscape of motivation.

Pintrich and Schunk offered a classical definition of motivation:

Motivation is the process whereby goal-directed activity is instigated and sustained.

Pintrich and Schunk, 1996

Notice that their definition includes the word instigated:

Instigate: to cause (something) to happen or begin

Merriam Webster, October 28, 2015

In an attempt to tie the definition of motivation more tightly to learning, Brooks and Shell defined motivation in terms of working memory allocation:

... motivation is the process by which we consciously or unconsciously allocate working memory resources.

Brooks and Shell, 2006

In the current work we are trying to tie motivation to consciousness explicitly, and we suggest that the test of working memory allocation is whether something achieves awareness (conscious allocation). In other words, we are asserting that motivation leads to awareness. The process leading to that awareness can be a top-down one, started by our brain's processing and about which we receive an after-the-fact report. Sometimes our senses cause us to achieve awareness.

Explosion

Cartoon model of explosion

The sights and sounds of a nearby explosion are almost certain to capture our attention and achieve awareness.

Rose

Rose

The smell of a rose might lead to some pleasant memory coming into our awareness.

Nothing better illustrates the importance of motivation (instigating goal-directed activity) than crowdsourcing, situations wherein people opt to participate in an effort larger than themselves. Today scientists and artists turn to the public both for funding, as exemplified by Kickstarter, and for participation, as illustrated by the college-level phage genomics experiments (students determining gene sequences as part of a real and important distributed research activity). A significant example of crowdsourcing was the presidential campaign of U.S. Senator Bernie Sanders, which was supported through donations from over 3 million people.

Awareness can be achieved by providing the mind with a low information context. A student sitting in a classroom has a lot on her mind: the talk with her parents, the homework that still needs to be done, perhaps the bill that has not been paid. If the teacher simply provides one additional item to add to the things in her mind, her awareness of that item will be low. However, if the teacher can provide a replacement, a concept, an experience, a fact compelling enough to displace the complex, complicated thoughts in her mind, learning can take place. The teacher pours an odorless gas out of a container over a small flame that suddenly roars. The survival instinct of the student will make this incident central, perhaps exclusive, to the student's attention. An explanation of the chemical reaction will then be a pleasant alternative to the previous tiring attempt to keep the world in her mind.

As teachers, we try to control what achieves our students' awareness. It has become customary not to say this but to say, instead, that we motivate them.

The competition for student attentional resources means that we need to design their immediate environment to minimize distracting features (say a working TV screen with unrelated material) and maximize attentional supports such as a poster with steps in the procedure we just taught.

      

 

   I'm "Good" Today

During the past 10 years, however, there has been growing recognition that emotions are central to human achievement strivings. Emotions are no longer regarded as epiphenomena that may occur in achievement settings but lack any instrumental relevance. In this nascent research, emotions are recognized as being of critical importance for the performance and the productivity of individuals, organizations, and cultures.

Pekrun & Stephens, 2010

Reinhard Pekrun

Reinhard Pekrun

Reinhard Pekrun has researched many important issues related to emotion and learning. There are some days when we are not in the mood to learn, and others when we are eager to learn. Pekrun regards moods as "low-intensity emotions." Emotion has some aspects that are wired in (genetic), while others are learned, and all can be affected by context.

The ability of organisms to build up a conscious representation of an emotion — what we call a feeling — is a complex dynamic phenomenon implying neuronal synchronizations at different levels … . We suggest to conceptualize the process of an emergent conscious feeling as the result of synchronizations of different subsystems at different levels.

Grandjean et al., 2008

There is considerable interconnection between brain areas in the prefrontal cortex associated with working memory and lower brain areas associated with emotion, such as the amygdala. When executive resources are required, inhibitory mechanisms are recruited to decrease the disruptive effect of emotional stimuli. Studies of cognitive conflict indicate that the effect of emotional stimuli on attention is down-regulated both during the conflict and after it has ended.

Emotional connections for autobiographical memories can be quite strong. In fact, a strong emotional attachment can keep an autobiographical memory from decaying. Anyone reading this book at the time it is published is almost certain to "remember" what he or she was doing on the morning of September 11, 2001. Note that, while we believe our memories of this event remain fixed, measurements suggest that they drift.

The emotional content of language affects one's comprehension. Until recently, it had been thought that language comprehension processes were "encapsulated and autonomous."

Internally valid and reliable scales have been developed to measure emotion in students' learning and performance. Student self-esteem does not appear to be a useful construct, however:

So, pretty much the jury has come back, and self-esteem is, as an empirical matter, a poor theoretical construct for predicting anything interesting in psychology, and this has been known for years and years.

Kurzban, 2010, p. 138.

Studies with college students suggest that the most salient emotion linked to low performance is boredom. When a person is bored, the mind flits from one vaguely defined subject to another and then to still another. In terms of information, a vaguely defined subject can be thought of as a set containing a large number of concepts and hence a large value for average information. This makes the subject difficult to keep in mind; the mind moves on. If instruction lacks clarity, it simply becomes another vaguely defined subject, making it easy for the student to drift off into boredom. This highlights the need for clarity in instruction, which in turn helps keep students engaged. Long-term boredom has devastating effects on learning and achievement.
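The claim about average information can be made concrete. In this sketch the uniform distributions and set sizes are illustrative assumptions, not measurements; Shannon's average information, H = -sum(p * log2(p)), grows with the number of equally likely concepts a "subject" could refer to:

```python
from math import log2

def average_information(n_concepts):
    # Shannon's H = -sum(p * log2(p)); for n equally likely
    # concepts (p = 1/n each) this reduces to log2(n) bits.
    p = 1.0 / n_concepts
    return -n_concepts * (p * log2(p))

vague_subject = average_information(1024)  # large, fuzzy set of concepts
clear_subject = average_information(4)     # sharply framed lesson topic

print(vague_subject, clear_subject)  # -> 10.0 2.0
```

A vaguely defined subject (many candidate concepts) carries far more average information than a sharply framed one, which is the sense in which it is harder to hold in mind.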

Happy & Sad.

Cartoon models of "good" and "not good"

Finally, it is worth noting that humans tend to signal their emotions. We most often show the state of our current emotion on our face. For this reason, looking at students while we are teaching is a good idea.

      

 

   You'll Like This

Situational interest is something presented by a teacher in an attempt to get students to engage with a lesson. Some would say that it is "an attempt to motivate them."

Interest

Graphic model for some dimensions of interest

Most of school runs in the lower-left quadrant: common things done in a common way. This allows repetition and can lead to good outcomes. It also can lead to boredom. The top-left and bottom-right quadrants are where teachers might introduce situational interest. An uncommon thing might be a teacher bringing a legless lizard into class. An uncommon way might be a class normally run with individual work instead working in dyads or triads. An uncommon thing done in an uncommon way might be exemplified by a field trip.

There always seems to be a tradeoff between using situational interest for motivation and taking away time from possible conscious accesses that lead to learning.

Overall, our findings suggest that the dominating cognitive interpretations of multimedia effects should be supplemented by considering the interplay between cognitive and motivational factors.

Magner et al., 2014

With few exceptions (possibly Jobs and/or Einstein), the portraits of the researchers in this book could be discarded, since they convey no germane information. They remind us that science is very much a human enterprise, and they are in the book because we believe they create situational interest on the part of our readers.

One thing should be clear to all teachers. We essentially never find something that every student in a class finds interesting. Further, our tastes have evolved through knowledge and have become personal interests. It is extremely unusual for learners, especially those in early stages of learning a discipline, to find the same things interesting about that discipline as we do.

Caution should be used when creating situational interest to make sure that the interest is not just superficially linked to the topic. Otherwise, there is a negative attentional effect through "seductive details" that draw attentional resources away from the main idea. An example can be found in a meta-analysis on digital books, which found that a digital story including unrelated interactive features actually decreased student comprehension.

Finally, there is a widely held consensus in the United States business world that 'rewards' improve performance. In fact, this is only true for rather simple, rule-based tasks. For tasks that require imagination and innovation, a different system operates. Once our basic needs are met, the three factors that drive performance appear to be autonomy, mastery, and purpose. By autonomy, we mean working on our own stuff. By mastery, we mean getting better (much better) at something, as exemplified by spending spare time practicing a musical instrument. By purpose, we mean having a goal of doing something we perceive as important. Wikipedia, for example, is cited throughout this book. This Web-based electronic encyclopedia is developed entirely through contributions of money and editorial time. For a simple presentation of the autonomy-mastery-purpose notion, take the time to view Dan Pink's video.

      

 

   I Liked That

Personal interest is something we bring with us. While we might be born with certain dispositions to prefer some things to others, our personal interests appear largely to be learned.

Einstein.

Einstein Quotation

When searching for an image to reflect the notion of motivation, we came across the above image that sets forth the essence of personal interest.

The quotation attributed to Einstein conveys a notion that the authors believe strongly: life goes better when you have several personal interests. For example, we may have a vocation and a hobby, and enjoy spending time with both. We take a break from one by turning our attention to the other.

Most people have a rather wide range of interests. They may have an interest in a particular sport, kind of music, favorite dessert, and place for vacations. Some find chess very interesting; others find it frightfully boring.

TigerWoods

Tiger Woods with his father

Two things need to be made clear about personal interests. First, they are personal, and it is not known how they come about. It is very clear that Tiger Woods' interest in golf was initiated and nurtured at home by his father. By the same token, early encouragement down a particular path sometimes backfires and ends in all sorts of troubles and issues. Many people in the world feel trapped in professional circumstances encouraged and/or enabled by a parent. Because expertise takes time to develop, the sooner one starts, the sooner one can become an expert. It is not clear where encouragement and affording opportunities turn into control and lead to rejection.

Second, it is not clear who will become an expert. A remarkable study of top experts in six areas (e.g., Olympic swimmers, prize-winning mathematicians) suggested that, early on, it was not at all obvious that they would ultimately achieve their status.

Many of us choose to follow paths similar to our parents'. If our parents liked what they did, we could see that. Those same parents could offer tutoring, make introductions, and otherwise "help" a career develop.

Kornbergs

Roger Kornberg, Chemistry, 2006; Arthur Kornberg, Medicine, 1959.

Think of how dinner-table conversations are likely to have inspired the work of Roger Kornberg. Both Roger and his father, Arthur, received Nobel Prizes.

A 1985 study showed that 59% of the chemistry majors studied developed this interest during high school. Teachers matter, and they help students make choices.

Teachers may be most important for a student whose parents may not convey satisfaction and joy in the student's interests. In this case teachers and other significant adults can provide an alternative example and provide the social capital that professional parents often afford their children.

      

 

   I Loved That

To our surprise, current thinking suggests that we should not make life choices based upon current interests. The argument is that our interests and values change in ways we cannot predict. We can find ourselves stuck in a life pattern that we don't like.

What about those cases where we don't simply like something but instead have a passion for it? Nearly everyone described as "best in field" seems to exhibit such a passion: top musicians, scientists, soldiers, engineers, writers, and so on.

Follow your passion

Cartoon model related to passion

A remarkable book by Bloom, Developing Talent in Young People, describes the paths of twenty acknowledged leaders in each of six areas: mathematics, neuroscience, tennis, swimming, piano, and sculpture. Each had achieved acknowledged distinction; for example, all the swimmers were Olympians. Two things stood out: the amount of focus and effort put into developing these levels of expertise, and that observers often would not have predicted such high levels of expertise would ultimately be achieved.

Passion is a lot like self-efficacy in that it comes from within and cannot be instilled. It can, however, clearly be nurtured.

Focus has to be present for effective learning. Learning effectively requires attention to the present moment. This attention reduces the number of inputs to the mind (What are we going to have for dinner? Was that attractive person really looking at me?) and thus reduces the information load the brain has to process. Passion for a subject helps the student focus on that subject.

Prodigies usually do well in life. Nearly all people who reach top levels do not start out as prodigies, however. In fact, few prodigies go on to greatness in their areas of early achievement.

      

 

   I'll Do That

Concentration.

Intense concentration

We almost always are confronted with choices. You can be reading so intently that you suddenly decide you need to take a bladder relief break. That might have been a choice for many minutes, but not one that you really became aware of while concentrating on your reading. The neurological basis for making decisions remains the focus of extensive research. Those processors of ours are always working. At a given moment, which one will win — and achieve conscious access?

Equivalent problems are not always solved with the same degree of success. Consider the problem of showing someone four cards, each with a single-digit number on one side and a letter on the other, and asking them to test the rule, "If there is a vowel (a, e, … ) on one side, then the number on the other side must be even (2, 4, … )." Replacing this task with an equivalent one, "If a person is drinking beer, then the person must be at least 21 years of age," with cards bearing beverage names on one side (beer, Coke®, … ) and possible ages (22, 16, … ) on the other, gives importantly different (better) results. Generally speaking, we have more success when problems are concrete than when they are abstract.
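The card task above (the Wason selection task) can be sketched in code. This is a minimal illustration of the falsification logic, not any published implementation: only cards that could violate the rule need to be turned.

```python
VOWELS = set("aeiou")

def must_turn(visible):
    """Return the visible faces that must be flipped to test the rule
    'if a vowel is on one side, the number on the other side is even'."""
    chosen = []
    for face in visible:
        if isinstance(face, str) and face.lower() in VOWELS:
            chosen.append(face)  # a vowel: its back must be even
        elif isinstance(face, int) and face % 2 == 1:
            chosen.append(face)  # an odd number: its back must not be a vowel
        # consonants and even numbers can never falsify the rule
    return chosen

# A classic four-card layout
print(must_turn(["e", "k", 4, 7]))  # -> ['e', 7]
```

People typically turn the vowel and the even number; the logically required pair is the vowel and the odd number, which is why the concrete drinking-age version feels so much easier.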

There can be too many choices. When 24 jams were offered in a grocery store sales demonstration, more people stopped to try them. When only six jams were offered, however, sales were six times higher than when 24 jams were offered.

Choices of what we do often seem irrational. In this interview video, hear Nobelist Daniel Kahneman offer an explanation for this behavior.

There even is evidence that some decisions are best made fast:

Contrary to conventional wisdom, it is not always advantageous to engage in thorough conscious deliberation before choosing. On the basis of recent insights into the characteristics of conscious and unconscious thought, we tested the hypothesis that simple choices (such as between different towels or different sets of oven mitts) indeed produce better results after conscious thought, but that choices in complex matters (such as between different houses or different cars) should be left to unconscious thought. Named the "deliberation-without-attention" hypothesis, it was confirmed in four studies on consumer choice, both in the laboratory as well as among actual shoppers, that purchases of complex products were viewed more favorably when decisions had been made in the absence of attentive deliberation.

Dijksterhuis et al., 2006

In recalling choices, people tend to misattribute new features not so much to the option they selected as to the options they did not select:

In recalling past choices, people expect the chosen option to contain more positive and fewer negative features than do its competitors. In recalling past assignments, in contrast, people expect the assigned option to be remembered better than the unassigned alternatives. This vividness heuristic leads to systematic misattribution of new features to unassigned alternatives, but not in a manner supportive of the assigned option.

Mather et al., 2003

There also is an issue of the immediacy of outcomes. Why, for example, is so much of a hospital's infection transmission rate attributable to the poor hand hygiene of hospital workers? Infections from failed antisepsis appear days after the lapse that opened the opportunity for infection, and there usually are many workers to share the blame. The outcomes are not immediate; a poor "I'll do that" decision is removed in time from its consequence.

Unlike animals and young children, adult humans often base future action on past action rather than on past outcome. That is, they do it the same way they did it last time even though it didn't work. "I studied hard but still did not do well." Study smarter, not harder.

In insight studies, subjects working on problems whose solutions are not immediately apparent are asked whether the solution, when it did occur to them, was accompanied by an "Aha." Indeed, such "Aha" moments are accompanied by unusual brain activity. Perhaps new things noteworthy enough for memory require special marking so as not to be lost; in other words, maybe fewer repetitions are required for storage.

      

 

   ... Two Marshmallows

Painting depicting The Gnostic Temptation

There is a classical biblical story about temptation. As a result of eating a forbidden fruit from the tree of knowledge of good and evil, Adam and Eve are cast out of the Garden of Eden.

The Marshmallow Study

A classic study involves placing a child in a room with a marshmallow and telling them they can eat the marshmallow but, if they wait 15 minutes, they can have two marshmallows. There are many, often humorous, videos of children being given this test. As is the case with so many such experiments, subsequent work suggests that what happens in the time leading up to the actual test has a great deal to do with how long children wait.

Follow-up studies years later showed that those who exhibited more control were more likely to have higher SAT scores, complete school, and attend college, and less likely to enter special education, become a teen parent, or be arrested.

This issue has to do with setting goals. Some school programs attempt to teach students to extend their time waiting for gratification, obviously in the hope of improving a whole host of possible outcomes. They use the marshmallow experiment as a basis for this instructional effort.

While research in the area of "willpower" has been confounded by many issues, strategies that involve reframing a choice have met with some success. Here is how the reframing was conducted:

In our paradigm, instead of presenting choices in a traditional hidden-zero format (e.g.,"Would you prefer [A] $5 today OR [B] $10 in a month?"), choices are presented in an explicit-zero format, which references the nonreward consequences of each choice (e.g., "Would you prefer [A] $5 today and $0 in a month OR [B] $0 today and $10 in a month?").

Magen et al., 2014

When the offer is couched in the "explicit-zero" format, more subjects opt for the larger reward. The study cited also included a second experiment in which the questions were posed during an fMRI measurement. Indeed, there were significant neural differences depending upon how the choices were posed.

The authors of this book might be described as compulsives, strivers, and workaholics. We each have a passion for what we do. So, advising students to find and follow their passions would be understandable advice coming from us. Is this good advice for most people? Perhaps not. Dan Gilbert suggests that our passions may change over our lifetimes, and following an early passion may have us ending up in a place where we don't want to be. Instead, he suggests, it is better to find something in your life that you are good at, do that for a career, and let your passions, if any, be whatever they might be.

At one point late in his career, the basketball great Michael Jordan, known for his exceptionally extensive effort at practicing to improve himself, decided to try to follow his real passion — baseball. Jordan proved to be a good baseball player, but his ability in baseball was poor relative to that in basketball, and he returned to finish his athletic career as a basketball player. Anyone who studies expertise could have told Jordan that good-but-not-great would be a likely outcome of his attempt in baseball.

There also are many stories of people who follow a parent's passion and end up feeling poorly about their own worth. Andre Agassi, the famous tennis star, asserted that he disliked tennis even while achieving greatness in it:

He also revealed that he had always hated tennis during his career because of the constant pressure it exerted on him.

Wikipedia, October 29, 2015

What about people who "inherit the family business"? In a study of family businesses in which the business was passed on, it turns out that businesses passed on to "outsiders" usually fared better than those passed on to a family member. However, when the inheriting family members were grouped according to whether or not they attended "selective" colleges, those who attended selective colleges did as well as outsiders in sustaining the business. Only the businesses of those who attended nonselective colleges dropped off. Is that an ability issue, a passion issue, or, most likely, a wait-for-the-second-marshmallow issue?

In the last few years, there has been a growing interest in the so-called non-cognitive skills of persistence (grit), goal setting, and self-regulation. We agree with David Conley, who accurately describes these skills as metacognitive. The important thing to know is that metacognition can be taught in the classroom. By teaching goal setting, effort, and self-regulation we increase the chance that all students will eventually be able to capture the elusive second marshmallow of life.

      

 

   Happiness

Saying about happiness

What happens when we don't get what we want and know that we can't get it? Does that make us unhappy? It turns out that we tend to be happy with our lot in life. This is well explained in a TED Talk by Dan Gilbert.

Gilbert distinguishes natural happiness (where you get what you want) from synthetic happiness (where you make do with what you've got). Many experiments performed by Gilbert and others indicate that synthetic happiness is just about as good as natural happiness, in spite of the strong tendency in U.S. culture to think otherwise. When you are the one experiencing synthetic happiness, you're good with that. When you are asked whether others will be, you think not. Those are the typical responses.

When you are a teacher, this has a great deal to do with helping students set goals. When goals are realistic, students will be happy when they achieve them; they will struggle when goals are not achieved. Ask any player on the losing team after a Super Bowl and they will not tell you how happy they are. Ask them years later and they'll gladly tell you about being on the AFC or NFC championship team.

As a teacher you are always going to put students into situations of struggle, where there is a goal for them to achieve. Teachers will do well to understand these aspects of happiness, and to remember how satisfied humans are when they achieve synthetic happiness. It is when encouraging and supporting the struggle for "natural" happiness that this understanding matters most.

      

 

   Working Memory

Graphic model of working memory

Working memory is the system that is responsible for the transient holding and processing of new and already stored information, an important process for reasoning, comprehension, learning and memory updating. …

Wikipedia, October 27, 2015

In this book, we depart from traditional explanations of working memory. Here we see working memory as our ability to keep track of recent conscious awareness models. Each conscious awareness event makes us aware of a model. That model won a competition. Some would call the selector of that winner the 'central executive,' but we will try to avoid this notion. Instead, the selection was probably based on which of the scores of neural subsystems creating models and crying out "pick me, pick me" was strongest at that moment. In school, it might have been some learned model working in a math class. It also might have been the "I need to pee" model or the "I'm very bored here" model.

One way in which working memory capacity is "measured" is to read random digits aloud, one per second, and see how many the listener can repeat back correctly. (A typical value is seven, but with practice most of us can quickly increase that to fifteen or so.) As our expertise in an area grows, the number of elements we can deal with seems to grow. In one long-term study, a man whose digit-recall capacity started out at an undistinguished seven worked up, after months of practice, to a remarkable 79 digits. That means you could read him a list of random digits for over a minute, and he could recite back whatever list you had read to him. While such a skill may win accolades at cocktail parties, it is of little practical value.
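The span procedure can be sketched as a toy program. The `digit_span` function and its majority-correct stopping criterion are illustrative assumptions of ours, with a simulated rather than human listener:

```python
import random

def digit_span(recall, trials=3, max_len=50, seed=0):
    """Longest list of random digits that `recall` reproduces exactly on
    a majority of trials -- a toy version of the classic span test."""
    rng = random.Random(seed)
    span = 0
    for length in range(1, max_len + 1):
        correct = 0
        for _ in range(trials):
            digits = [rng.randrange(10) for _ in range(length)]
            if recall(digits) == digits:
                correct += 1
        if correct * 2 <= trials:   # failed a majority of trials: stop here
            break
        span = length
    return span

# A simulated listener who can hold at most seven digits:
def seven_digit_subject(digits):
    return digits if len(digits) <= 7 else []

digit_span(seven_digit_subject)   # → 7
```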

Keep in mind also that the amount available can seem to be prodigious. Suppose we have two new decks of playing cards, 52 cards each. We take one deck, shuffle it, and hand it to you. We start a timer, you look at the deck for as long as you like, and then you give it back. We measure the time you spent studying the shuffled deck. We then give you the second deck and ask you to sort the cards to be in the same order as they were in the deck you just saw. How long will you need to look at the first deck to be able to arrange the cards in the second deck?

See Simon Reinhard perform the shuffled card memory task.

As noted in an earlier Chaplet, Simon Reinhard's 2010 record was 21.9 seconds.

The conventional description of working memory holds that content related to a target may be activated but not chosen, so that it can be chosen shortly afterward if the first or second choices are found wanting. Our description is that all of the subprocessors keep doing their thing, and the one whose output is strongest at any given moment is chosen for conscious access. Sometimes making one choice suppresses the likelihood of other choices, but the processors that were not selected keep working away; one of them might pop out an answer that awakens you at 3 AM. What working memory does is keep track of recent conscious accesses. Those events are not all of the same duration; they depend on many things, the most important being the complexity of the model being constructed at that moment.

We can attend to something in a way that keeps it in working memory: we rehearse, and the neurons keep firing. If you accept what we just said, a 'working' memory must somehow be retrievable after we've stopped activating it. The "reactivation of latent working memories" has recently been demonstrated in a series of experiments in which transcranial magnetic stimulation was used to make the memory reappear. This connects to long-term potentiation, through which a memory can be recalled after considerable time has elapsed.

      

 

   Chunking

Herbert A. Simon, 1916-2001, Nobel Prize 1978

The notion of a chunk evolved with that of working memory. The idea was that working memory held not items of a particular size but chunks, and a chunk could be huge. These ideas trace back at least as far as Nobelist Herbert Simon in 1973. Studies of chess show that experts hold enormous chunks of possible moves and outcomes.

Let's think of a chunk as the output of one of our processors. The chunks can be very large, connected pieces of knowledge learned previously. We can access several of these in a period of, say, 1000-2000 milliseconds. The number we can access depends on their size. The size (and efficiency) of the chunks changes with experience (learning) and allows us to perform increasingly complex tasks effectively.
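As a toy illustration of the idea (the greedy grouping and the `chunk` function are our own invention, not a model from the chunking literature): familiar material collapses into a few large units, while unfamiliar material must be held item by item.

```python
def chunk(items, known):
    """Greedily group a sequence into the largest familiar chunks.
    Unfamiliar material falls through as single items."""
    out, i = [], 0
    while i < len(items):
        for size in range(len(items) - i, 0, -1):
            piece = items[i:i + size]
            if size == 1 or piece in known:
                out.append(piece)
                i += size
                break
    return out

known = {"FBI", "CIA", "IRS"}
chunk("FBICIAIRS", known)   # → ["FBI", "CIA", "IRS"]: three chunks, not nine letters
chunk("XQZWPL", known)      # → six single letters: no prior knowledge to lean on
```

The same nine letters cost an expert three working-memory slots and a novice nine, which is the sense in which growing chunks lets us handle increasingly complex tasks.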

Crystallized intelligence is the ability to use skills, knowledge, and experience. It does not equate to memory, but it does rely on accessing information from long-term memory. Crystallized intelligence is one's lifetime of intellectual achievement, as demonstrated largely through one's vocabulary and general knowledge. This improves somewhat with age, as experiences tend to expand one's knowledge.

Wikipedia, October 28, 2015

The size of the chunks determines our crystallized intelligence. Large chunks mean large crystallized intelligence.

      

 

   Fixed Ability

Charles Edward Spearman

It is obvious that some of us are intellectually more able than others. Spearman was the first to identify a common factor underlying virtually all mental tests, which he called the general factor, or "g." The word commonly used in discussing it is intelligence. In the 1940s, Cattell introduced a terminology that distinguished two types of intelligence, fluid and crystallized. Crystallized intelligence has to do with how extensive and complex the output of a processor is, and it can grow with experience and effort.

Fluid intelligence or fluid reasoning is the capacity to think logically and solve problems in novel situations, independent of acquired knowledge. It is the ability to analyze novel problems, identify patterns and relationships that underpin these problems and the extrapolation of these using logic. It is necessary for all logical problem solving, e.g., in scientific, mathematical, and technical problem-solving. Fluid reasoning includes inductive reasoning and deductive reasoning.

Wikipedia, October 28, 2015

In the model we are using, fluid intelligence relates to the number of conscious accesses we can deal with over a relatively short interval of time, say about 1000-2000 milliseconds. Another way of saying this: how many processors' outputs can we keep track of during this time?

As it happens, and in spite of its name, fluid intelligence is something we can't change much. The nature-nurture controversy has a long history, and modern genomics allows some very specific probes into the "nature" side of the argument. It is possible to construct polygenic scores based on the presence of specific genetic features in an individual person. This has led to a proposed 4th law of behavior genetics:

A typical human behavioral trait is associated with very many genetic variants, each of which accounts for a very small percentage of the behavioral variability.

Chabris et al., 2015

A high polygenic score, however, can account for more behavioral variability. Such scores account for amounts of success that are statistically significant, but the amount of variance accounted for tends to be rather small.

Crystallized intelligence, on the other hand, can be changed dramatically. This is extremely important since study upon study shows that prior knowledge is the best predictor of one's success at new learning. The more you already know about something, the easier it is to learn more about it.

Most people, and especially Americans, conflate the overall term intelligence with fluid intelligence. In the early 1900s, Alfred Binet developed the first test of intelligence.

Alfred Binet, 1857-1911

Long story short: since we expect an 8-year-old to be abler than a 4-year-old, the concept of an intelligence quotient (IQ) emerged as 100 times the ratio of one's mental age to one's chronological age.
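The ratio formula is simple enough to state as a worked example (the `ratio_iq` name is ours; note that modern tests instead use deviation scores normed to a mean of 100):

```python
def ratio_iq(mental_age, chronological_age):
    """Early Binet-era ratio formula: IQ = 100 * mental age / chronological age.
    Illustrative only; modern tests use deviation scores instead."""
    return 100 * mental_age / chronological_age

ratio_iq(10, 8)   # an 8-year-old performing like a typical 10-year-old → 125.0
ratio_iq(8, 8)    # performing exactly at age level → 100.0
```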

An intelligence quotient (IQ) is a score derived from one of several standardized tests designed to assess human intelligence.

Wikipedia, October 28, 2015

IQ is a very controversial subject. We can make some general predictions, however. A person whose IQ score is 125 is much more likely to be a professional, wealthy, socially well adjusted, to find school easy, and to be healthy than is a person whose IQ is 75.

At one time, schools measured IQ scores as a matter of routine and did so at several times during a student's school career. Important academic placement decisions were based on IQ scores.

Albert Einstein

Albert Einstein is regarded as one of the most intellectually able humans ever to inhabit the earth, and his name always evokes notions of intelligence. Caricatures connecting him with intelligence are legion; how often do we hear "She's a real Einstein" or "I'm no Einstein"? IQ estimates put Einstein at 160, noted chess player Bobby Fischer at 187, and Irish businessman Walter O'Brien at 197.

Today it seems that fluid intelligence is more-or-less the same as working memory capacity. That is, how many models can we keep track of in a brief period, say 1-2 seconds. It seems to be a matter of how many awareness events (conscious accesses) we can hold on to at once.

There do seem to be differences between men and women regarding cognition. Generally speaking, these differences are small. Several decades ago, gender differences in, say, mathematics scores were large, and explanations for those differences were many. The title of a recent Science Education Forum says it all: "Gender Similarities Characterize Math Performance." We do not doubt for a moment that gender differences exist and that they may be important; many have been documented. We assert that the largest factor in creating these differences involves differences in knowledge (i.e., prior learning) and has little if anything to do with fixed ability. Gender stereotypes do exist.

… by age 6, girls were prepared to lump more boys into the "really, really smart" category and to steer themselves away from games intended for the "really, really smart."

Bian, Leslie, & Cimpian, 2017

In The Brilliance Trap (Scientific American, September 2017) Andrei Cimpian and Sarah-Jane Leslie discuss how these sorts of issues become pervasive. Both of these researchers do the same kind of work. Leslie is in a philosophy department where a certain "kind of person" (the brilliant superstar) is especially appreciated. Cimpian is in psychology where there is more of a view that hard work leads to success. They noted that philosophy finds it much harder to attract women and minorities, and they attribute this to the general sense in the field of the seeming requirement of brilliance for success. They conducted a multi-field survey of field-specific ability beliefs and compared that to the percentage of PhDs in those areas who are either African-American or Asian-American. They conclude that "the extent to which practitioners of a discipline believe that success depends on sheer brilliance is a strong predictor of women's and African Americans' representation in that discipline."

Although there have been some reports about increasing IQ or working memory capacity, any such increases have not proven to be dramatic, and all such reports are questionable. One longitudinal study over a 66-year interval "shows that mental ability differences show substantial stability from childhood to late life." At the same time, IQs, as measured by national standardized tests, appear to be drifting upward worldwide. A rather concise study demonstrated that so-called visual working memory capacity is fixed.

Terman studied a large group of people whose average IQ was 151. They were nominated by their teachers and tested for IQ. This group did very well after 35 years. They were not disproportionately represented among the outstanding people of comparable age, however. It has been argued that their success could be "predicted on their socioeconomic status alone."

It is worth noting that, at the lower end of the intellectual ability scale, there is a tendency to overestimate ability. This was summarized by Kruger and Dunning:

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. …

Kruger & Dunning, 1999

Why can't we change core ability? Much of what core ability consists of is based in the genetic hand we are dealt. Ability can be lowered with poor prenatal care or a mother's drinking and smoking. It can be lowered with frequent, severe early childhood hunger. While it may not be possible to increase working memory capacity or IQ very much, we believe it is possible to teach in a manner that makes the best use of the working memory, leading to an increase in crystallized intelligence.

Why is IQ drifting upward? We bathe in ever richer information, and the effect is worldwide. We humans simply know more.

      

 

   Curious Cat

Curious Cat

Curiosity killed the cat,
But satisfaction brought it back.

Titusville Herald, December 23, 1912

Curiosity … is a quality related to inquisitive thinking such as exploration, investigation, and learning, evident by observation in humans and other animals.

Curiosity is thought to be among the attributes of the best-known human thinkers. What can be said about curiosity that might help mentors? First, curiosity seems to be one of those personality traits (part of 'openness to experience') that appear to be more-or-less fixed, similar to fluid intelligence. Studies of curiosity are difficult, and the topic is less studied than one might have guessed. There are times when curiosity is actively discouraged; for example, it is not unusual for the governments of some countries to discourage access to certain categories of information.

God fashioned hell for the inquisitive.

St. Augustine

In a study of 52 human adults using games presented on computer screens, Gottlieb and collaborators found:

… The results suggest that intrinsically motivated exploration is shaped by several factors including task difficulty, novelty and the size of the choice set, and these come into play to serve two internal goals — maximize the subjects' knowledge of the available tasks (exploring the limits of the task set), and maximize their competence (performance and skills) across the task set.

Baranes et al., 2014

Curious Infant

In a study of human infants, Kidd et al. found:

… infants implicitly seek to maintain intermediate rates of information absorption and avoid wasting cognitive resources on overly simple or overly complex events.

Kidd et al., 2012

Each one of us seems to be born with a certain 'amount' of curiosity, and we seem to regulate that curiosity in such a manner as to be consistent with what we know and our capacity for change. We do push ourselves, but we do so within the limits that seem to determine how far we can push successfully.

Mentors need to appreciate that students vary in the areas and extents of their curiosity.

      

 

   Believe In Me

Many humans, especially Americans, believe that fluid intelligence determines one's fate. Yes, high fluid intelligence bodes well, while low fluid intelligence bodes poorly, for becoming what is generally considered "successful." But geography, timing, and luck are really important, as pointed out by Malcolm Gladwell in Outliers.

Malcolm Gladwell

Fluid intelligence is made up of genetics and opportunities of time and place (aka luck); it probably can't be changed much no matter what we try to do. Crystallized intelligence is an entirely different matter. As Gladwell and others point out, it, too, includes lots of luck and opportunity. On the other hand, this is something all of us can work on regardless of our fluid intelligence.

A rule of thumb has emerged about expertise: behind every expert there are 10,000 hours of practice. Well, maybe that's overkill when learning how to roll cigars, and maybe it's too little to become a board-certified orthopedic surgeon.

Carol Dweck

Carol Dweck is associated with the notion of intelligence being viewed as incremental versus being a fixed entity. We assert that crystallized intelligence is very much incremental while fluid intelligence is very much an entity.

Crystallized intelligence is something all of us can change with effort. The reason we can change it is that we load chunks into working memory, and these chunks can continue to grow through both study and experience, an idea recently popularized under the term growth mindset.

The bottom line is that effort pays off, and effort expended on learning leads to increased crystallized intelligence. It also suggests that teachers emphasize effort over ability since effort leads to an incremental change in ability. We can't change fluid intelligence. We can and should use schools to increase learners' chunking, and to make learners better at understanding their chunking and how it works for them.

Why can we change chunking? Chunking depends upon repetition and connections. For nearly all people, if the connections are made and repeated often enough, learning takes place. This biological mechanism is available to essentially all of us. Some may be faster than others, and some may be too slow for a learning goal to be practical. For most people and most jobs, learning is within reach.

Our goal as teachers is to help our students develop large chunks. We can do this by making sure that the learning connected to larger ideas and larger principles is pointed out. If learning is just a discrete, disconnected set of facts or procedures, it is not very likely that most students will develop chunks that can be used effectively to deal with complex problems.

Recently Haimovitz and Dweck showed that parental attitudes toward failure play a large role in their children's view of intelligence as fixed or malleable. It seems that, when "failures become interesting, informative, and motivating rather than discouraging," children become more willing to persevere.

A comment about savants. This term (sometimes labeled savant syndrome) describes persons with abilities significantly beyond the normal. The term used years ago was idiot savant and referred to persons with significant disabilities outside their area of expertise. Arguments that we are prewired for excellence sometimes drew support from the examples of savants. Detailed studies of savants, however, show that their expertise accrues through chunk building, much the same as for the rest of us. There don't seem to be exceptions to building large knowledge chunks through awareness, repetition, and connection as the means of acquiring expertise in a given area. What savants seem to possess is the ability to focus, almost obsessively, on a very limited set of chunks, developing great chunks that can then be used to produce incredible results in one domain.

Finally, individual differences do matter. There is no doubt that effort matters. Keep in mind, however, that a small number of people achieve excellence while engaging in a small amount of deliberate practice — the sort of practice engaged in by experts. (Most professional golfers engage in enormous amounts of this sort of deliberate practice, for example.) A few excel without much practice, however. Walter Charles Hagen practiced little. For him, strenuous practice was "a waste of good swings." There are those who have invested their "10,000 hours" yet find themselves to be quite mediocre. We are quite sure that no amount of practice time would help any of the three authors become good at singing.

      

 

   Multitasking

We hear a great deal about multitasking. The bottom line is that humans don't multitask — we do one thing at a time. The reason it is important to understand just what our capabilities are is that we spend a lot of our learning lives stitching together formerly unconnected pieces of learning.

Yes, you can walk and chew gum at the same time. These acts involve quite different motor output systems, and monitoring those systems rarely requires any reallocation of mental resources. The error signals between our mental model of chewing and the act of chewing gum, and between our mental model of walking and the act of walking, are usually small. On icy Nebraska sidewalks during the winter months, this is not the case, and full attention must be paid to walking. In those situations, we wouldn't chew gum while we walked.

Texting while driving.

We often are admonished not to text while driving. At the same time, many new vehicles offer features for hands-free phone access. Research is slowly convincing auto safety experts that hands-free phone access is almost as dangerous as texting while driving. Why? Driving requires little attention under routine conditions (your mental model comports with the reality), and a conversation may require minimal attention (the conversation goes as you more or less expect with no new information coming in that demands lots of attention). However, either the driving or the conversation can come to a point demanding more resources when the predictions of your mental model and reality — that is, what your sensory inputs tell you about reality — start to diverge (i.e., the error signals grow). You may need to recalculate the driving model or the talking model. If you happen to be working on your conversation at a moment when the driving demands more, well, boom.

There are some things we need to keep in mind. You are not a multitasker. In a study of 200 college students, five were able to perform two well-prescribed tasks concurrently. We're betting 39 to 1 that you are not a multitasker.

Multitasking has been studied many times in many settings over many decades. It just doesn't happen anywhere close to the degree that self-reports suggest.

We can, in fact, get better at performing complex, multifaceted tasks. This has been shown for videogames, for example.

EEG Video Game.

Study during videogame play

We usually can keep two different things going at once by switching back and forth between them. There is a degradation in performance, but not one that is huge. We seem to have working memory systems that can keep one thing in queue while we work on another. Add a third task and most of us go to pieces.

Substantial evidence is accruing that persons who claim to have multitasking skills perform poorly on the concurrent tasks when compared with how they perform on these tasks alone.

Students need to be aware that they can either attend to their Facebook friends or attend to their classroom activities; they can't do both at the same time. This also implies that a teacher cannot give directions while students are engaged in a cognitive task. If students are working and a teacher wants to add directions, she must stop the first process, redirect attention, and then deliver the new directions. Otherwise, it is very probable that some of the students will not attend to the new directions.

      

 

   Rewiring

Attempts to increase fixed ability (fluid intelligence) have failed. In MMM we assert that learning results from neural changes. We might ask, then, whether our neural pathways are forever restricted to those available at birth or formed during childhood development, or whether they can be rewired throughout life.

Earlier we've mentioned the prodigious memory feat of Simon Reinhard who successfully recalled the order of 52 playing cards after just 21.9 seconds of study. He is among persons often called memory athletes, and there are annual competitions among these folks to demonstrate memory prowess. A group of 23 memory athletes was studied, all of whom attributed their prowess to "deliberate training in mnemonic strategies."

If you gave them a list of 72 words to recall and 20 minutes to study the list, that task would be child's play. Dresler et al. (2017) did just that, and found that a group of such athletes recalled, on average, 70.8 words. By comparison, a group of "naive" subjects recalled 39.9 words.

Both the memory athletes and the subjects were studied inside MRI devices. "We were interested in the functional organization of brain networks underlying mnemonic expertise in memory athletes in processing … ." There were differences. Three groups of well-matched naive subjects were then studied. One of these groups received training in the mnemonic recall strategies. A second received training in a different strategy, and the last group received no training.

Changes

Changes After Training

The results suggest the training in mnemonic strategies was successful. The subjects receiving that training showed important increases in their ability to recall words.

The purpose of these studies, however, was not to learn whether the mnemonic training is effective but to measure any attendant brain changes. The outcome was that, after mnemonic training, the MRI scans of those receiving the training changed to look much more like those of the memory athletes. The bottom line is that we do rewire our brains as a result of learning. It suggests that those who think a certain way become rewired in a way consistent with that way of thinking.

It remains unknown whether some of us are born with wiring well suited for certain tasks, or whether we rewire our way into them. For example, the size of the amygdala (a brain structure) seems to correlate with the size of one's social network (the number of people we know). Are we born with a big amygdala, or does that brain mass grow and adapt as we increase our social network?

We remain curious as to how long these changes are sustained. Will they last for years (be permanent)? Do changes depend upon whether use of the strategy is continued?

      

 

   I Know That

The best predictor of what a person can learn is what s/he already knows; the best starting point for instruction is based on what the student already knows.

Vygotsky

Lev Vygotsky, 1896-1934

Lev Vygotsky introduced the concept of the zone of proximal development (often called the ZPD), a situation where someone can learn but with help.

scaffold

Scaffolding, US Capitol during restoration, 2014

The term scaffolding was introduced after Vygotsky to describe learning in the ZPD. The notion of scaffolding is that we support a learner during the early stages of learning new material but gradually remove that scaffolding as the learner can support him- or herself.

It is possible to attempt learning outside the zone before the learner is ready. Grammar lessons, as such, rarely if ever take hold in three-year-olds. That's why schools develop curricula, the notion being that learning some things requires "prerequisites" of other things.

Fillmore

Millard Fillmore, 13th President of the United States

Prerequisite knowledge varies. Assuming your knowledge of English is good and you have a rudimentary understanding of American history, you could set out to become an expert on Millard Fillmore (13th President of the United States) and, since Internet resources have become ubiquitous, fairly quickly amass an enormous amount of knowledge in just a few days.

Gleevec

Graphic model of the drug molecule Gleevec

Starting with just solid high school chemistry and biology, investing a similar amount of time is unlikely to get you very far toward becoming an expert in modern drug design, the skills used to design and synthesize the "miracle" drug Gleevec that controls chronic myelogenous leukemia (a formerly fatal disease). We simply do not possess enough organic chemistry knowledge to understand the details of how such a drug might work without an extensive and long course of study.

Three things are required for learning: awareness (conscious access), repetition, and connections. The ZPD is mostly about connections or chunking, connecting what we need to learn with what we already know. In learning, we grow neural chunks.

On occasion, we need to learn to tie two chunks together. That is where learning often fails. That is, we fail to recognize that the attributes of a problem we face are the same as those of a family of problems that we already know how to solve.

There are times when we try to fit a solution to a problem at hand where some but not all of the attributes match up — the square peg in a round hole situation. As we gain expertise, we can fit more accurate solutions to problems but then face the problem of having tools and strategies that are so specific that they obscure connections to other domains.

There are times when a problem or context seems so familiar that we don't pay enough attention to notice small but important details. We encounter the "expertise reversal" effect: experts failing to solve a simple problem because, though simple, it contains some small but significant details that the expert, in haste, has overlooked. (Someone might say that their spouse "doesn't pay enough attention," for example.) We organize knowledge in our minds in terms of models. We process the difference between what these models predict we should perceive and what our senses perceive. When this difference is small, we may elect to go with the prediction of our model rather than the report of our senses. To put this issue into a real context, read Groopman's How Doctors Think.

When we are asked to name animals, retrieval invariably comes in waves. For example, soon after lion comes, in rapid succession, the naming of other animals connected to it in long-term memory, such as other big cats often found as zoo animals (tiger, leopard, cheetah, etc.). Then there will be a pause as an animal from a different category is activated, followed by another burst of names of animals of that type. This sort of word association task has been studied and modeled.

There is still another cloud looming over what we know that is sometimes called the transfer problem and involves context. A successful student learns something in mathematics classes but fails to apply it in a chemistry course. The familiar stirs up less of what we know than does the new. For example, novel animals in classrooms instigate "better" student writing than do ordinary animals. When subjects are primed with "new" content, they are more apt to detect subtle changes than if their most recent experiences dealt with familiar content.

MilitarySimulator.

Military vehicle simulator

Many professions have turned to the use of simulations in training. Simulations are very expensive to create, but they are effective. The patient's heart rate has suddenly doubled, and their blood pressure has dropped to 70/38. What do you do? You heard a bang sound, the instrument gauges show no power from engine 2, and nearby sensors indicate a fire. What do you do? It's a bad time to reread the chapter in the book. Simulations help us automate procedures — or at least be well prepared for uncommon but critical procedures.

Although expectations are anchored in our experiences, they can also be influenced by vicarious experiences. We don't have to experience a car accident directly to learn that bad driving is contingently connected to having accidents. We could see someone else have an accident. Perhaps we could see a film about accidents or be told about what causes accidents in a defensive driving course.

      

 

   By The Numbers

Score

Keeping score

One of the things that most of us don't think about is statistical learning. Humans "keep score" of most things and the resulting scores predict, in part, what we will do in the future. The scores help us quantize information to create effective models.

MotherTalk

Motherese (baby talk)

Ten-month-old children learn to parse their spoken native languages into whatever the units of those languages are.

Two-year-old.

Two-year-old speaking

Statistical learning has been researched extensively in first language learning. When something is spoken, there is a sequence of sounds. For example, in the phrase pretty baby, the syllable ty reliably follows pre within the word pretty, but across English as a whole the pair ty ba is rare because it spans a word boundary. The statistical notion is that some sounds are often paired together while other combinations are rare. That's what we learn to keep track of, and how we come to say that something doesn't "sound right." Consider this abstract:

Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input.

Saffran, Aslin, & Newport, 1996
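The statistical idea here, tracking how often one syllable follows another, can be sketched in a few lines. The syllable stream below is an invented toy corpus, not data from the study:

```python
from collections import Counter

def transition_probs(syllables):
    """Estimate P(next syllable | current syllable) by counting adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream: "pretty baby" heard twice and "pretty doggy" once, as syllables.
stream = ["pre", "ty", "ba", "by",
          "pre", "ty", "do", "gy",
          "pre", "ty", "ba", "by"]
probs = transition_probs(stream)

print(probs[("pre", "ty")])  # 1.0 -- within the word "pretty", ty always follows pre
print(probs[("ty", "ba")])   # ~0.67 -- across a word boundary, the pair is less reliable
```

An infant tracking these probabilities could segment words simply by treating low-probability pairs as boundaries.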

Many effects of statistical learning are observable if you spend any time with young children. Because of their limited experience with language, young children often over-generalize patterns. For example, an English-speaking child is quite likely to say "I runned home" rather than "I ran home" because so many English verbs form the past tense by adding "ed" to their stem.

An infant rapidly learns to control parental behavior based on crying and fussing. That's because, when they cry or fuss, adults pay attention to them. It's statistical.

ClassClown

Cartoon model of class clown

A student predictably gets a teacher to "go ballistic" by engaging in some minor but forbidden behavior and takes great pleasure in the chaotic result.

We don't learn things that do not come into our attention. Many times we learn something that does come into our attention but that was not part of our intention to learn.

Brains make predictions and they remember. To predict, a brain needs some sort of statistical basis for its prediction. One way to look at human development is through the lens of building records of events (acquiring counts) such that we are able to make and improve our predictions.

The general principle that learning is statistical reinforces the idea that classroom learning should involve enough repetition to create a clear representation. For example, vocabulary research has found that after 7-10 repeated exposures, words become part of a person's lexicon. This idea is critical because some learning (like vocabulary) must happen incidentally (through reading and listening) rather than directly. As teachers/instruction designers, we must make sure that students have repeated practice.

      

 

   Predicting Values

How do we make decisions? An important part of this process involves the ways in which we assign values to likely outcomes from our decisions. Different neural mechanisms have been identified as being used by humans contingent upon the circumstances.

Bernoulli

Painting model of Daniel Bernoulli, 1700 - 1782

Bernoulli published Exposition of a New Theory on the Measurement of Risk in 1738. Reducing this mathematical relationship to simple terms:

Expected value = (odds of gain) x (value of gain)

This relationship attempts to make quantitative something that is otherwise hard to measure, and it clearly is an oversimplification to apply it to life. Nevertheless, it's probably a good starting point. Dan Gilbert begins a more thorough exploration of human valuing in this video where he starts with Bernoulli's relationship.
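Bernoulli's relationship is simple enough to state directly in code. The dollar amounts below are invented for illustration:

```python
def expected_value(p_gain, value_of_gain):
    """Bernoulli's simple relationship: expected value = (odds of gain) x (value of gain)."""
    return p_gain * value_of_gain

# Invented example: a sure $40 versus an 80% chance at $50.
sure_thing = expected_value(1.0, 40)
gamble = expected_value(0.8, 50)
print(sure_thing, gamble)  # 40.0 40.0 -- equal on paper, yet most people take the sure thing
```

That the two options compute to the same number, while people strongly prefer one of them, is exactly why the simple formula is only a starting point.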

The video shows the many complexities of predicting a value in real world situations. It is usually difficult to properly assess the odds of some outcome. Beyond that, it is often harder still to evaluate the value of the gain, especially over time. Some impoverished people choose to steal in order to eat. If successful they eat as a result of their theft. If caught, they are fed in prison. Either way, they don't starve. Of course, employment after prison will likely be very difficult to achieve, and the negative outcome is likely to be both dramatic and lifelong.

One thing that is difficult for those of us steeped in the traditions of (hard) science is to acknowledge the "emotional" and "hormonal" aspects of decision making. A study of lap dancers showed that their tips were significantly higher when they were ovulating. Somehow their male customers were able to take in the otherwise hidden signals. This result has to be buried deeply in human evolutionary history, and we are not conscious of it. Men react more positively to women whose pupils are dilated — another signal transmitted and received outside the range of consciousness.

Those of us teaching hard sciences really don't care much about lap dancer studies. But there are other so-called "ethical" questions. An out-of-control car running down some tracks looks as if it will kill four people, but will kill only one if you pull a switch. What do you do? Most people pull the switch. Next, an out-of-control car running down some tracks looks as if it will kill four people, but if you push a large man near you onto the tracks, only he will die. What do you do? Most people choose not to intervene. If you are asked to become involved in a one-for-four choice, a lot depends on what you are asked to do.

Even when cognitive logic systems appear to be working, when they become disconnected from emotion (say through brain injury) ordinary decisions become very difficult. Persons afflicted with such problems become very dysfunctional, as in the case of Tammy Myers described by Eagleman. Subsequent to a motorcycle accident, this engineer went from being able to make "ten decisions an hour to ten decisions a week."

It happens that our physiological systems signal emotional content to us, enabling decisions. When we see a picture of someone smiling, our facial muscles tend to smile. This can be detected with instruments even when we ourselves don't perceive this mirroring.

Finally, there is the now versus later issue. Studying circuits or synthesis may be hard work, but work that must be done if we are to become an electrical engineer or a chemist. When invited to join with friends for a beer, that choice portends a soon-to-be-had pleasant experience; little about studying circuits or synthesis is fun, at least in early stages of a professional trajectory. We have a model of what it is like to become an engineer and one of enjoying a few beers with friends. The two may compete; at a given moment only one will prevail. That's the way all decisions are — a competition among neural systems, each advocating its own model, and only one of which can become the winner.

When you are a teacher, it is very hard to say that someday you may need to know that K stands for potassium. Yes, some of your students may need to know that. But when? So getting students to invest in things distant-in-time almost always presents challenges. Most often we do this by changing the current value such as reflecting the outcome of their effort in a test score. The issue there is that we are externalizing something that most agree works better when it comes from within. Working for a test score usually does not work as well as working because you want to learn something. The most successful students marry these two — they do want to learn, but they also want their work to be recognized with good grades.

In teaching complex subjects, we often try to "entice" students into learning often tedious information by pointing to how much it will, at a later date, help them make sense of something they are actually interested in, or help them do something they want to do. Perhaps this strategy is wrong and we should try and see if there are some more immediate rewards available. In end-of-semester evaluations, students are remarkably positive about daily quizzes (something all three authors have experienced).

      

 

   Somehow I Know That

We often have functioning mental models that we are unable to describe using words. Sometimes we can demonstrate them even though our words fail us. Sometimes models operate even though we can neither describe nor demonstrate them.

Split brain experiments have shown that there are often things we know but cannot verbalize. To confine the definition of knowledge to only things that can be verbalized makes it easy to test for knowledge but may be highly misleading.

Thorndike

Edward Thorndike, 1874-1949, Teachers College, Columbia University

Some powerful experiments were performed decades ago. Thorndike reported an experiment about learning in 1934. Words were read to each subject and s/he was expected to respond as quickly as possible with words that came to mind. Each subject heard 320 to 640 words read, one at a time. There was a money reward based on the average speed of response. Having responded, the subject was told "right" or "wrong" based on a predetermined list of possible answers. As it happened, the "right" list was generated using rules that were not shared with the subject. The majority of subjects ended the sessions behaving as if they were following the rules even if few of them could articulate what the rules were.

Then there is this often-cited report for which the original paper is no longer available:

The example took place in a college psychology class where most of the students had decided to test the principles of reinforcement on their own professor. The students had noticed that the professor had the annoying and distracting habit of pacing back and forth in the front of the classroom during his lectures. Using the principles of Reinforcement Theory, they set out to end this habit. To do this, the students who were in on the experiment sat in the first two or three rows of the classroom as the lecture began. When the professor stood in the center of the classroom, the students in the experiment would act as though they were immensely involved in the lecture, eyes locked in. When the professor would wander off center, they acted disinterested and uninvolved. A quarter of the way through the semester the professor's habit was gone and he lectured only in the center of the classroom.

wikispaces.psu.edu, Reinforcement Theory, (Accessed November 5, 2015)

It certainly is true that teachers who handle student questions harshly experience very few questions when they call for them. Students quickly pick up on this teacher behavior and choose not to risk exposing themselves to any undesirable consequence.

porpoise

Porpoise trick

On the other hand, there is an old yet remarkable study of porpoises. Trainers were working on getting porpoises to do "tricks." Each time a porpoise performed a trick, it was fed a fish. One day, the trainer decided to give a fish only for the first time the porpoise performed a trick. On the second and subsequent tries, no fish. The outcome was startling. The porpoises started doing things the trainers had no idea they could do! It is almost as if one were training an artificial intelligence network, with giving the fish representing a communication of right (and withholding a fish meaning wrong). Obviously there was no explicit learning. The rule was implied through experience. Who knew the porpoises were statisticians just as we humans are?

Another recent study shows that pigeons can be trained to 'read' slides and mammograms prepared to determine whether tissues show evidence of malignancy. The authors write: " … we have shown that pigeons can be effective, tractable, relevant, informative, statistically interpretable, and cost-effective medical image observers. … "

Sometimes we know things that we cannot verbalize. Just what words do you use to inform someone as to how to play the French horn? At the same time, instruction almost always is more successful when instructions can be explicit. For commercial reasons, it is important to be able to determine the sex of newborn chickens. Experienced persons can "sex" 100 chicks with near perfect accuracy (98%). Novices given general instructions fail miserably (r = 0.21). After explicit instruction based on images of the more difficult cases, however, performance quickly improved to r = 0.82.

We also seem to have a built-in system for knowing how well we know something. A recent study found " … a role for read-out of confidence on memory in the prefrontal cortex … ".

Sometimes we have no basis for knowing something. Also, as Gazzaniga has shown (especially through so-called split-brain experiments) we human creatures make up something that seems to fit the data or context.

There are many things we know implicitly. That is, the voice in our heads cannot find or has not yet found words to describe what it is that we know. When we know something explicitly, our voice has found those words. Teachers need to keep in mind that explicit instructions are always more likely to lead to student learning and behavior change. Whenever possible while teaching, be explicit.

      

 

   If It Looks Like a Duck

Nearly all of us have the sense that we are usually "data driven" and that we use the information available to us to make decisions that are — for want of a better word — logical.

Homo economicus

In economics, homo economicus, or economic man, is the concept in many economic theories portraying humans as consistently rational and narrowly self-interested agents who usually pursue their subjectively-defined ends optimally. Generally, homo economicus attempts to maximize utility as a consumer and profit as a producer.

The reality is that we are often far from logical in our approaches to decisions, and this has led to the emergence of behavioral economics as a field of study, for which Daniel Kahneman received the Nobel Prize in 2002.

Tversky & Kahneman

Amos Tversky & Daniel Kahneman

Amos Tversky & Daniel Kahneman had a remarkable collaborative relationship that has been described by Michael Lewis in his book The Undoing Project: A Friendship That Changed Our Minds. There are numerous topics that this pair touched upon. Here is but one example taken from their 1973 paper:

A problem of training. The instructors in a flight school adopted a policy of consistent positive reinforcement recommended by psychologists. They verbally reinforced each successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that contrary to psychological doctrine, high praise for good execution of complex maneuvers typically results in a decrement of performance on the next try. What should the psychologist say in response?

Regression is inevitable in flight maneuvers because performance is not perfectly reliable and progress between successive maneuvers is slow. Hence, pilots who did exceptionally well on one trial are likely to deteriorate on the next, regardless of the instructors' reaction to the initial success. The experienced flight instructors actually discovered the regression but attributed it to the detrimental effect of positive reinforcement. This true story illustrates a saddening aspect of the human condition. We normally reinforce others when their behavior is good and punish them when their behavior is bad. By regression alone, therefore, they are most likely to improve after being punished and most likely to deteriorate after being rewarded. Consequently, we are exposed to a lifetime schedule in which we are most often rewarded for punishing others, and punished for rewarding.

Psychological Review, 1973

Maybe it's not a duck after all.
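The flight instructors' experience can be reproduced by a small simulation of regression to the mean, in which each score is a fixed skill plus random luck and praise has no effect whatsoever. The distributions and group sizes are invented:

```python
import random

# Minimal simulation of regression to the mean: each trial's score is a
# fixed skill plus independent luck; nothing the instructor says matters.
random.seed(1)
skills = [random.gauss(0, 1) for _ in range(10_000)]
trial1 = [s + random.gauss(0, 1) for s in skills]
trial2 = [s + random.gauss(0, 1) for s in skills]

# Pilots in the top 10% on trial 1 (the ones who would earn high praise):
cutoff = sorted(trial1)[int(0.9 * len(trial1))]
best = [i for i, t in enumerate(trial1) if t >= cutoff]

mean1 = sum(trial1[i] for i in best) / len(best)
mean2 = sum(trial2[i] for i in best) / len(best)
print(mean1 > mean2)  # True: the stars of trial 1 do worse on trial 2, praise or not
```

The decline after praise is built into the statistics, not into the reinforcement, which is precisely Tversky and Kahneman's point.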

      

 

   They're All Like That

Stereotype

Cartoon model of stereotypes

In social psychology, a stereotype is a thought that can be adopted about specific types of individuals or certain ways of doing things.

Wikipedia, October 29, 2015

For us to work efficiently, we tend to fill in blanks in our knowledge using prototypes and stereotypes. These are models. We take some attributes of the thing under consideration, use them to identify a prototype or stereotype, and then add on the remaining attributes from our chosen model. While stereotypes often lead us to predict behaviors correctly, they also often come with attached explanations that are fallacious. Stereotyping is really good when we talk about roses (the flowers, that is) or cats (generally very independent critters who "own" their owners). However, some things we call roses, such as the Rose of Sharon, aren't roses at all. Some house cats are really friendly and could hardly be described as aloof.

As Dan Gilbert points out, when we lived in small groups of people very much like ourselves that competed with other different groups, stereotypes about others were a great mental shortcut. In the modern, diverse, and populated world these shortcuts can often bring more harm than good.

In the same way that walking on icy Nebraska sidewalks using only automatic processes is a risky business, dealing with people on the basis of a few (outward) stereotypical labels is risky business. You're often wrong — very wrong.

The way in which stereotyping happens can be thought of as an example of quantization. Consider these two images:

TwoImage

Left, 256 shades of gray; right, 8 shades of gray

To get the image on the right, each run of 32 close shades of gray in the image on the left is reduced to just one. You can see both that the image on the right conveys much of the same information as that on the left, and that the loss of information between the two images is striking.
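This reduction from 256 shades to 8 can be sketched directly. This minimal version keeps the lowest shade in each bin as the representative; real image software might use the bin midpoint instead:

```python
def quantize(level, n_levels=8, max_level=255):
    """Collapse a 0-255 gray value into n_levels evenly spaced bins,
    keeping the lowest shade in each bin as the representative."""
    bin_size = (max_level + 1) // n_levels   # 256 // 8 = 32 shades per bin
    return (level // bin_size) * bin_size

# 32 adjacent input shades all collapse to one output shade:
print(quantize(0), quantize(31), quantize(32))  # 0 0 32
```

Stereotyping works the same way: many distinct individuals are mapped onto a single representative, which is efficient but discards the differences among them.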

      

 

   That's Crazy

Straight Jacket

Straitjacket

Mental diseases (schizophrenia, bipolar disorder, etc.) are often discussed differently than other diseases (cancer, diabetes, etc.). Persons with mental disorders are often stigmatized.

A main thrust of our book is that humans create models of their worlds and process information based on those operant models. So the glances of another human may lead us to infer a great deal about that person, forming a guess that we then use in our model of them.

We humans use Bayesian reasoning to build our models. That is, our models are probabilistic and informed by our past experience. The differences in processing between those with some mental conditions (schizophrenia, autism) and "normal" persons have led to the establishment of a new journal from MIT Press, Computational Psychiatry.
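Bayesian updating itself is compact. Here is a minimal sketch; the prior and likelihoods are invented numbers, not estimates from any study:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | evidence) from Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Invented example: prior belief that a glance is hostile is 10%; a frown
# accompanies hostility 80% of the time, and non-hostility 20% of the time.
posterior = bayes_update(0.10, 0.80, 0.20)
print(round(posterior, 3))  # 0.308
```

The same evidence moves different priors by different amounts, which is one way to think about why people with different experiences (or different processing styles) can reach very different conclusions from identical observations.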

      

 

   Faces

Pretty Face.

Pretty face

We humans are prewired neurally in certain ways. It appears, for example, that we are prewired with brain regions dedicated to recognizing faces:

These findings show that working memory is specific for facial features that interact with a general cognitive load component to produce significant activations in prefrontal regions of the brain.

Beneventi et al., 2007

In the same way that humans recognize faces, chimpanzees (Pan troglodytes) seem to be wired to recognize one another's rear ends, and dogs to have extremely sensitive olfactory systems.

The bias for recognizing one's ethnic (race) group is related to personal experiences with other racial groups. It is not surprising, therefore, that humans interact more favorably with robots designed with humanoid characteristics similar to faces. Consider the conclusion of a recent study:

First, the robot with the more humanlike face display was preferred to the robots with the no-face or the silver face displays and was seen as less eerie than the robot with the silver face display. Second, people attributed more mind and more sociability to the robot with the humanlike face display than to the robot with the no-face display, and a more amiable personality to the robot with the humanlike face display than to the robots with either the silver face and no-face displays. Third, perceptions of sociability and amiability were negatively correlated with perceptions of eeriness in the robot with the humanlike face display.

Broadbent et al., 2013

About 2% of the population suffers from prosopagnosia, the problem of not recognizing faces. On the other hand, a few among us are exceptionally good at recognizing faces, even over the passage of time. They have been dubbed "super recognizers." We seem to be better at recognizing the faces of people of our own race.

Facial Asymmetry.

Model of facial asymmetry (Ronald Reagan)

Facial asymmetry does not portend ill health. We do seem to prefer those whose faces are mostly symmetric, however. In reality, very few of us have completely symmetric faces.

A recent study of face recognition in monkeys may provide us the best yet understanding of how our brains are able to store so much information.

FacialAsymmetry.

Facial Identity. See video.

Small groups of cells — called patches — respond to facial features in specific ways. The way in which these cells respond to inputs has been studied in macaque monkeys. For example, when the center of a patch is stimulated using electrodes, the response can alter the perception of an image. It also is possible to simply show the monkey an image and record how each patch responds.

Chang & Tsao have studied the responses of face patches to images in macaques. What they found is that the patches respond to features that might not stand out to typical human observers. For example, we might notice the shape of someone's lips. Instead, these patches respond to features like "the hairline rising while the face becomes wider and the eyes move higher up." Chang & Tsao identified 50 such dimensions. It turned out that the neural activity of each patch responded as if it were signaling the location along one of the 50 "face" dimensions. That is, each face is represented by one coordinate on each of the 50 dimensions, a single point in a 50-dimensional space.

Tsao and Chang also were able to recreate the process in reverse using an algorithm. When they plugged in the activity patterns of the 205 recorded neurons, the computer spat out an image that looked almost exactly like what the monkeys had seen.

Hamers, Science News, 2017.

Faces shown & Predicted

Faces shown and predicted.

In a supplement to their Cell paper, Tsao and Chang have posted a video on the Caltech faculty website.

      

 

   Libraries

Premack has summarized ways in which humans differ cognitively from other animals. We are prewired to learn to speak. Most of us have learned to use spoken language to communicate by age three years. In fact, we seem to be able to learn two or even three languages during that same time and to change fluidly from one to another.

Homer.

Homer (Greek poet)

Before written texts, what we learned from our ancestors was communicated by spoken language. Homer "was believed by the ancient Greeks to have been the first and greatest of the epic poets." Many Christian churches include images depicting the Stations of the Cross. Before print made texts widespread and most people could read, these images allowed clergy to tell some of the stories of Jesus Christ.

The precursors of written language appeared around 8000 BC. Initially, they were used to facilitate business — as in keeping track of one's goats or slaves. As writing developed, the substrates for holding and preserving it emerged. Producing written material was extremely labor intensive.

Gutenberg Press

Gutenberg Press

The development of the printing press by Gutenberg (around 1440) changed many things. Written materials became less labor intensive. People learned to read — something not prewired in humans — and all sorts of changes followed. For example, reading required adequate vision and led to the development of reading glasses (which in turn led to microscopes and telescopes).

Ultimately we developed libraries for storing and sharing books. Today this has largely given way to electronic formats and screen-based reading. In fact, we may have come full circle in that today the written texts can be read to us — we return to the oral language methods that dominated the time before the printing press.

Perhaps whales and porpoises, animals with enormous brains, use those brains to store language information akin to what the Greeks did with epic tales like the Iliad. In the absence of libraries, this might afford these animals a library-like means of passing down information. It's hard to imagine, however, how effectively such oral transmission could compete with print. Also, wouldn't their memories be susceptible to the same kinds of distortions and evolutions that human memories are? Even if other organisms start with the same oral-communication capabilities as we humans, it is clear that libraries give us an enormous advantage in accumulating and transmitting knowledge.

We teach small children not to play with electric outlets. Before they can learn this, we childproof our homes by placing plastic inserts into outlets — making them far less convenient but far safer at the same time. We don't let them shock themselves to learn the consequences of playing with outlets. There are many ways in which we transmit knowledge from generation to generation. It is silly to say that any of these compare in magnitude with written text. We even use texts to teach one another how to love — and how to hate.

Those of us who teach also need to come to grips with the reality that world knowledge is much more accessible than was the case just one or two decades ago. Today what might have been an arduous trip to a library can be accomplished in seconds with a smartphone search. This means that we need to emphasize effective processes for retrieval in our instruction.

Much of the information on Web sites is unedited. On the one hand, a study published in the acclaimed scientific journal Nature suggested that Wikipedia is about as accurate as the Encyclopaedia Britannica. On the other, there is good evidence that a half-billion dollars was spent between 2003 and 2010 to deny man's role in climate change. Separating accurate information from advocacy-based information has become a great challenge for all of us.

Today nearly anyone can become a creator of information. This book is an example of information created by three people. Nearly anyone can create a blog or post to Twitter.

Today's teachers thus have two goals for nearly all students: helping them become critical consumers of information, and helping them become creators of reliable information.

      

 

   More Stories

Grit

Truly Grit

Humans are prewired for speech. Before printing, knowledge was largely passed from generation to generation through spoken stories. This book has been developed as an assemblage of stories.

For many reasons, some stories are more believable than others. When we say that those voices in our heads are reporters rather than executives, that's a story that does not ring true. After all, you likely believed your inner voice was an executive until you read this book, which may be the first time that notion has ever been challenged for you.

Among the stories that are very believable, one is that we can multitask. We are surrounded by statements about our successful multitasking. For example, those gadgets that enable phone conversations while driving are being built into our cars — when they should be made illegal. The evidence that humans can't multitask is compelling. If exceptions are discovered, they certainly won't include driving while having a phone conversation.

One very believable story is that we are individually prewired with certain "styles." For example, we hear people say they are "visual learners," and that they need to draw a picture — at least a picture in their mind — to facilitate learning. Some companies purport to form working groups based on matching styles. In fact, most of us can learn to adapt ourselves to any so-called style. The notion of learning styles swept through education about three decades ago and has only really been downplayed in the last decade.

… We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. …

from abstract of Pashler et al., 2009

Sometimes a "new" property or characteristic emerges that amounts to a relabeling of an earlier one. The term "working memory" emerged in the early 1970s, and there were disputes as to whether this was different from an earlier phenomenon called short-term memory. After decades of minor skirmishes, the equivalence of these terms has been more or less accepted — although working memory has largely supplanted short-term memory as the accepted label.

A term that has recently emerged as some "new" dimension determining whether one persists in achieving long-term goals is "grit." We three authors are pretty sure we have this "new" grit. In fact, we also have strong aspects of an older usage of this term — abrasive. You don't want to mess with us unless you have your ducks lined up, because at least one of us has probably read and remembered the literature behind what you are talking about if it pertains to learning. If you can teach children to wait for "the second marshmallow" — which you can — you can teach them to have grit (whatever that is). Maybe we're thinking about perseverance? If there is one thing this book advocates for learners, it is perseverance.

We also are reasonably certain that you can't teach someone to have passion. Yes, the Chicago Cubs did win the 2016 Baseball World Series, and yes, during the time elapsed between the last Cubs Series victory and the recent one, many passionate Cubs fans were born and died without ever savoring such an event. Most of those now dead fans likely learned about the Cubs from their parents or other relatives, and one could easily take this as evidence that passion can be taught and learned. Usually, however, there was little price to pay for acquiring this passion. It's not the same as the effort in becoming a surgeon or an engineer. Passion is like self-esteem. You can create an environment to make it more likely, but you can't teach it as you might teach arithmetic.

At the end of the day, we humans like stories we can believe in, whatever their truthfulness or ability to increase our effectiveness.

      

 

   (Modern) Telepathy

We are using this book to try to convince teachers about the ways brains function. We could describe teaching in terms of neuron modification technology. As teachers, the tools we most often have available to us are instructional messages — sights and sounds. What about telepathy?

Telepathy … is the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction. …
There is no scientific evidence that telepathy is a real phenomenon. …

Wikipedia November 17, 2014

Experiments show that, if we have access to someone's head, we can tell something about what's going on in their minds.

Brain-to-brain communication is finally possible. It's just very clunky. …

To do this, the researchers used existing technology in novel ways. On the message-sending side, they took a mind-reading electroencephalograph, or EEG (which has previously been used to harness a person's thoughts to make a living rat's tail move). On the message-receiving side, the scientists used transcranial magnetic stimulation (which has been used to, among other things, make people's memories stronger).

Vox.com, 2014

Telepathy

The "telepathic" sender

Okay, so maybe that's not what we mean when we refer to mental telepathy, but it's not the only experiment. There are at least two takeaways from these experiments.

(1) It is possible to detect changes from EEG devices that contain information about what a brain is thinking.

(2) It is possible to energize a brain using external sources that correspond to processing information in some useful way, such as in making a muscle move.

These early experiments are not very sensitive. That is, only gross effects are measured; we're not talking about reproducing a Michelangelo. Most of us humans do think we can read others' minds, however. No wires; just by looking and listening.

      

 

   Mind Reading

Given that we can use electroencephalographic data to garner at least some information about what someone is thinking, modern mind reading is not that far from reality. The term mind reading often conjures up images of mysticism.

MindRead

Cartoon model of mind reading

Most of us don't depend on electrodes or brain waves to attempt to decide what someone else is thinking. Instead, we use their expressions and what they say to make our decisions about what's on their mind. Two things about this are true. First, we are pretty good at it. But second, we are not nearly as good at it as we think we are.

… These experiments suggested that people are pretty good, overall, at guessing how a group of others evaluates them, on average. The overall correlation in these experiments between predicted impressions and the average actual impression of the group was quite high (0.55, if you are quantitatively inclined).

… Although you might have some sense of how smart your coworkers think you are, you appear to have no clue about which coworkers in particular find you smart and which do not.

… the confidence you have in knowing the mind of a close friend or romantic partner far outstrips your actual accuracy.

Epley, 2014

Epley best sums up the situation:

The problem with our sixth sense about the minds of others is not that it is horribly flawed. It falls short of perfection when we test it under challenging circumstances, but it generally performs far better than chance guessing. And compared to the mental abilities of other species on this planet, our ability to think about the minds of others is what truly makes our brains superpowered. The problem is that the confidence we have in this sense far outstrips our actual ability, and the confidence we have in our judgment rarely gives us a good sense of how accurate we actually are.

Epley, 2014

It is foolish to overlook the consequences of humans being social animals. We do many things just to "belong." It happens that there are animal studies showing several things related to this. The size of one's social network has been linked to neural networks in monkeys. Synaptic efficacy appears to be related to social dominance in some animals. The volume of the tissue mass of the human amygdala has been reported to correlate with the extent of the person's social network.

Among those best served by really knowing what goes on in someone else's mind are physicians, police officers, and especially teachers. We create models of what we expect that we are going to observe based on experience. We process the difference between this predicted outcome and our sensory reality of the interaction. Any error in prediction is used to modify our model. As circumstances become more complex, our models must grow to accommodate them together with the attendant increasing amount of information. Predicting the behavior of a patient or suspect or student can be very complex. With this as background, it becomes clear that training physicians and police and teachers about social networks and social interactions (i.e., teaching them aspects of mind-reading) makes a great deal of sense.

Although truthful responses may not always be forthcoming, the best way to find out what someone is thinking is to ask them what they are thinking.

What we learn from the studies of networks can point to the importance of attending to the social skills and networks of students (and teachers). That means that classroom organization should encourage peer interaction, changing groupings to allow networks to grow. Of special interest is the support for students who are naturally shy in expanding their networks without creating distress.

Much of the core science underlying mind reading and how we influence one another is described in the readable book The Influential Mind by Tali Sharot. From the Google books description of that book:

A cutting-edge, research-based inquiry into how we influence those around us, and how understanding the brain can help us change minds for the better. Part of our daily job as humans is to influence others; we teach our children, guide our patients, advise our clients, help our friends and inform our online followers. We do this because we each have unique experiences and knowledge that others may not. But how good are we at this role? It turns out we systematically fall back on suboptimal habits when trying to change other's beliefs and behaviors. …

Google Books, 2017

      

 

   Imagine This and That

We hear a great deal about entrepreneurism and creativity. Nearly all "great" ideas are built on pre-existing ideas of others; they do not start off de novo.

Jobs

Steve Jobs, co-founder of Apple

When asked by Walter Isaacson (his biographer) about what his greatest creation was (Macintosh, iPod, iPhone), Jobs indicated that it had been his development of the team at Apple, and that each of these innovations had come from the team.

An understanding about how our brains work allows us to say a great deal about the creative process. We almost never invent something that is new. Instead, we hook together things that already exist. It is nearly impossible to find things that are "new" that don't succumb to an analysis of being constructs from things that existed previously.

It happens that there is a simple way to "prove" this.

NeckerCube

Graphic model of Necker cube

Examine the Necker cube for several moments. Sometimes it seems to be the figure at the top right and then, seconds later, the figure at the bottom right. Once it starts switching, try as you might, you can't get your mind to settle on one or the other of those interpretations.

These days we can show images to people with very carefully controlled, short-duration periods. If we take the Necker cube and show it to someone for a short enough period, their mind settles on one or the other of the two interpretations. It just about never comes up with the alternative interpretations. Psychologists have developed scores of equivalent images.

Lady Illusion

Graphic model of young-old lady illusion.

Is this an old lady with a crooked nose or a young lady with a feathered hat? Once you see both, you can't stop them from switching. Again, shorten the period enough, and observers will see only one or the other but never report both.

The point is that, for us to see two things in our mind's eye, both need to be there. Given one, we just don't ever seem to come up with the other.

… The imagination is utterly uncreative. …

Frith, 2007

Why is this outcome found? When we first "see" an image, we take the data coming in and make sense of it; we create a model for the data. What we store is not the data itself but the model. So, what we recall is the model, not the data. In the absence of the data, we are unable to create the alternative model (the other cube; the young or the old lady). That's why Frith calls us "utterly uncreative."

What happens when we show different images to the left and right eyes? If we show a face to the left eye and a house to the right, we end up seeing one or the other, but we can't stop our minds from switching between the two. This is a different problem. Now one eye's input comports with one model, the other eye's with a different model. At a given moment, one of these "wins"; you see a face, or you see a house. While the face model is winning, the incoming traffic from the left eye fits it nicely (a small error signal), but the incoming traffic from the right eye doesn't fit that model at all; its error signal is large. After a few moments, your brain realizes that it does have a model that fits the right-eye input, it switches, and that error problem goes away; now the left-eye error problem is reintroduced. The neural traffic just before the moment of switching is high. This happens over and over; you can't stop it.
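The switching account above can be caricatured in a few lines of code. The threshold and the rate at which error accumulates here are invented, purely to show the endless oscillation:

```python
# Toy simulation of binocular rivalry as error accumulation (all
# parameters invented): the eye whose input mismatches the current
# model builds up an error signal until the brain switches models,
# which resets the error and starts the other eye's error building.
def rivalry(steps, error_per_step=1.0, threshold=5.0):
    percepts = []
    current = "face"        # start by perceiving the face
    error = 0.0             # accumulated mismatch from the other eye
    for _ in range(steps):
        percepts.append(current)
        error += error_per_step
        if error >= threshold:          # mismatch too large: switch
            current = "house" if current == "face" else "face"
            error = 0.0
    return percepts

seen = rivalry(20)
print(seen[:6], "...", seen[-3:])
```

Run for any number of steps, the percept alternates forever; nothing in the loop lets it settle, which is the point.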

All of us have had what are called "Aha!" moments (Eureka, I've got it!). When strands of knowledge do exist in our minds but are not yet connected to one another, when we seek connection and find it after some time in deliberation, those moments do become marked in a special way for us.

For example, you might present someone with three words such as pine, sauce, and pie, and ask them for a word that connects them. So, apple would be a possible solution with pineapple, applesauce, and apple pie. If that happens immediately or quickly, no special neural events are connected to the solution. If there is a "struggle" and the person comes up with this result through what commonly is called insight, some attendant physiological neural correlates are detectable.
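This remote-associates task is easy to state as a program, which makes plain what a solution requires: a single candidate word that forms a familiar compound with every cue. The word list below is our own tiny stand-in for a real lexicon:

```python
# A toy remote-associates checker (the compound list is invented and
# far from complete; a real version would use a large lexicon).
compounds = {"pineapple", "applesauce", "apple pie", "pinecone", "hot sauce"}

def connects(candidate, cues):
    """True if the candidate forms a known compound with every cue word."""
    return all(
        (cue + candidate) in compounds
        or (candidate + " " + cue) in compounds
        or (candidate + cue) in compounds
        for cue in cues
    )

print(connects("apple", ["pine", "sauce", "pie"]))  # True
```

Notice that the program finds the answer without any "struggle"; the special neural marking of insight belongs to the slow human route, not to the search itself.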

Dalmatian

Graphic model: what do you see?

The "Aha!" moment is one in which a seemingly random set of concepts gets absorbed into a single concept. We may start with seeing a set of dots in a picture, and at some point the dots resolve into a dog. The dots are necessary — without them there would be no dog — but it is not necessary to know the individual location of each dot. In terms of information, when we just see the dots we are dealing with a great deal of it: each dot has a location and relationships to neighboring dots. When we see the dog, the amount of information is suddenly reduced. Perhaps the joy of the "Aha!" moment is simply relief that we do not have to hold on to a huge amount of information.
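The information-reduction idea can be made concrete with a back-of-the-envelope count. Every number below is invented for illustration:

```python
# Rough illustration of the information reduction when dots resolve into
# a recognized shape (all numbers invented): spelling out every dot's
# coordinates versus storing "a dog, posed like so" as a few parameters.
n_dots = 500
bits_per_coordinate = 10                 # say, 1024 x 1024 possible positions

dots_only = n_dots * 2 * bits_per_coordinate   # every x and every y spelled out
as_a_dog = 200                                 # a handful of model parameters

print(dots_only, "bits as raw dots vs", as_a_dog, "bits as 'a dog'")
```

The exact figures don't matter; the two-orders-of-magnitude drop is what the "Aha!" moment buys us.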

We cannot teach the "Aha!" moment, but we can provide opportunities for the moment to arise. We do not know which dots will trigger the coalescence into a dog. We don't know which ways of looking at something will elucidate a concept. In teaching, we should provide as many alternative paths as we can so the student can achieve her own "Aha!" moment.

The teacher might say, "Find the dog." This increases the chance of finding the dog but weakens the special marking of that moment, so later recall might not be as enhanced as with the "discovery" approach.

Pink Dog

Graphic model offering BIG clues

We have a resting mode in our brains when we are not tasked with something specific. Often "creative ideas" emerge for us during such moments. This occurs when two heretofore unrelated notions end up activated shortly after one another. Teachers plan to bring up different things in sequence during lessons so that students can connect them (form chunks) and thereby create what for a student is new knowledge. Creativity amounts to connecting heretofore separate notions, and this requires that the two follow one another in either consecutive or very closely spaced conscious accesses.

Sleep allows us to run many things through our brains at speeds that exceed those during periods of wakefulness. It is not surprising that we often report solving a problem after a night of "sleeping on it."

A technique often suggested for "creativity" is brainstorming. In spite of its extensive use at College of Education faculty meetings, brainstorming has poor research support, and the conditions under which it is most likely to succeed are quite narrowly circumscribed.

Can we teach creativity? We almost certainly cannot. What we can do is increase student knowledge and encourage students to find ways in which they can hook together seemingly disparate pieces of knowledge.

      

 

   Imagine Now and Then

We humans are not creative at imagining things we have not previously experienced. Also, we are extremely poor at imagining our own futures.

Past & Future

Dan Gilbert and the Prudential Past and Future Experiment

In one of a series of videos produced for Prudential, Harvard psychologist Dan Gilbert asked people to recall significant good and bad events from their most recent 5 years and to predict good and bad things likely in their future. The good things were in yellow, and the unpleasant things in blue. As you can see from the colors in the image above, we tend to predict good things in our future regardless of our past experience of both good and bad. See the video.

We measured the personalities, values, and preferences of more than 19,000 people who ranged in age from 18 to 68 and asked them to report how much they had changed in the past decade and/or to predict how much they would change in the next decade. Young people, middle-aged people, and older people all believed they had changed a lot in the past but would change relatively little in the future. People, it seems, regard the present as a watershed moment at which they have finally become the person they will be for the rest of their lives. This "end of history illusion" had practical consequences, leading people to overpay for future opportunities to indulge their current preferences.

Quoidbach et al., 2013

This obviously means a great deal for teachers. Those of us engaged in teaching STEM disciplines, for example, often are laying the intellectual groundwork for subsequent learning that may not take place for months or years, if ever. A chemistry teacher covers the fact that K is the chemical symbol for the element potassium. One student in the class may go on to become an internist who is very concerned about maintaining potassium balance in a patient and must know this. The neighboring student may become a mechanical engineer, never explicitly concerned with potassium.

The student who today aims to become a mechanical engineer may become the internist, and the one aiming toward a medical career may become the mechanical engineer. The balance of what to cover is often, if not always, a tricky one because we don't know what the future might hold. Also, we simply can't cover everything that anyone in a class might someday need. At the same time, some things are too basic to be left for just-in-time learning.

      

 

   Mind the Load

The last three decades have seen the emergence of cognitive load theory originally introduced by Sweller.

John Sweller

John Sweller

Cognitive load theory has developed over the years and can be summarized in this way. Instruction places a demand on working memory capacity. In order for instruction to be successful, working memory capacity must not be exceeded by the total load brought about by the instruction (and the instructional materials), which is the sum of three components: the load inherent in learning the new material (dubbed intrinsic load), any load added through the design of instructional materials (dubbed extraneous load), and the load brought on by integrating the new knowledge with the learner's prior knowledge. The latter has been named germane load and was added about two decades after the introduction of the theory.
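The summary above reduces to simple bookkeeping. The sketch below is ours, with invented numeric loads; the theory itself does not assign numbers this crudely:

```python
# A minimal bookkeeping sketch of cognitive load theory. The three load
# names come from the theory; the numeric values are invented.
def lesson_feasible(intrinsic, extraneous, germane, wm_capacity):
    """Instruction can succeed only if total load fits working memory."""
    return intrinsic + extraneous + germane <= wm_capacity

# Same material and integration effort, but a cluttered design (high
# extraneous load) pushes the total past capacity; a cleaner design fits.
print(lesson_feasible(intrinsic=5, extraneous=4, germane=2, wm_capacity=10))  # False
print(lesson_feasible(intrinsic=5, extraneous=1, germane=2, wm_capacity=10))  # True
```

The practical lever is the extraneous term: intrinsic load comes with the material, so design work goes into shrinking everything else.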

Cognitive load theory (CLT) is a way of looking at how Vygotsky might have attacked the authoring of instructional materials with the zone of proximal development in mind. What is really important about CLT is the impact it has had on using experimental information to determine extraneous aspects of load introduced by instruction. For example, having two screens or a book and a screen instead of just one increases load — unnecessarily. It is not at all unusual today to have instructional materials in which allowing a pointing device to linger over a word pops up a definition of the word — one that appears in some sort of ephemeral text bubble on a screen. Also, you need to learn to "linger" since just popping up text fields for each word would be massively distracting (i.e., load increasing). Hundreds of papers have been published with the term "cognitive load" in the title. Searching Google Scholar for the exact phrase "cognitive load" in the title and with the year 2014 yielded 248 hits on 8/5/15.

In terms of processors and models, instruction should not require the success of a process that is not directly related to the target material. If you need to switch screens, you need to engage your screen-switching processor (your model for switching screens). That's not a part of what you are trying to learn.

Among the things that would emerge from CLT is that extraneous content should be removed. That is, any information not directly and importantly related to the intended learning makes the success of learning less likely. For example, what has the image of John Sweller or the fact that he started CLT got to do with getting you to learn about CLT? Little or nothing. So, in a book tightly following CLT his image would not have been included. Why is it there? This is not a course textbook. The goal of this book is to get teachers thinking about the neural bases of learning. Well-trained teachers have been told about CLT. Therefore, we should mention it and try to place it in the context of M3. We've decided to try to introduce M3 using chaplets, ones in which there are many images that would have little value were this a textbook. Our goal is to keep you reading in the hope that, once you've read enough of these chaplets, you'll have a condensed but reasonably accurate mental model of how your mind works, how our understanding of your mind's workings has come about and how it relates to teaching and mentoring.

      

 

   Load the Mind

Cognitive load theory tells us that, when preparing instructional materials, we should minimize anything that requires mental effort unrelated to the learning task. So, for example, avoid using two screens. Studies of disfluency come to the opposite conclusion: adding elements that reduce fluency often leads to better learning outcomes. The title of one such paper is:

Journal Article Title

Copy of Journal Article Title

We think that both notions can be brought together in an understandable way, one that can be translated into effective instruction. The core of our explanation comes from the Under the Hood Chaplet.

Automatic processing

After time, metarepresentations lead to automated shortcuts

As we learn, we go through a phase where we may have some conscious control of the steps taken in the process of learning. An example is decoding alphabetic words. With time, we lose that conscious control over the component subprocesses and the process becomes automatic. This is because we develop (learn) metarepresentations of the process and we automate the various component subprocesses through these metarepresentations. We never gain conscious control of these metarepresentations. We can, however, sometimes focus on one of the component subprocesses in such a way as to modify its metarepresentation.

Early in the learning of anything, any process that competes with the learning is likely to impede the learning. So, lowering the load is a goal of developing appropriate materials. As processes start to become automatic, metarepresentations start taking over. At that time, introducing elements of disfluency will increase attention to the subprocesses in a way that is likely to lead to more effective learning; we won't tend to gloss over material.

This is one way to account for the reported "expertise reversal effect" in which experts stumble over simple problems that have some small wrinkle. In these cases, the experts are using well-developed metarepresentations and fail to note small details that might affect the outcome of their predictions.

      

 

   System-1, System-2

We humans try to make those models (schema) that we follow automatic. We walk. We chew gum. We drive automobiles. We make cell phone calls. The models we follow are well worn, so much so that changing them can be very difficult.

Kahneman

Daniel Kahneman, Nobel Prize 2002

In his book Thinking, Fast and Slow, Daniel Kahneman distinguishes two types of thinking. There is automatic thinking that integrates all that you know about something and pops out in conscious access. (This is similar to what was described by Malcolm Gladwell in his book Blink.) Then there is slow, effortful thinking, during which we try to look at the outputs of constituent processes separately from one another and evaluate their veracity.

These differences have been summarized as follows:

Compare 1 & 2

Graphic summary, Systems 1 & 2

While we very much agree that humans have the possibility of deliberate processing, we question the use of the label "conscious." Generally, when people speak of conscious processing and the attendant self-talk, they are talking about situations where one awareness event is purposefully held in working memory to compare with some immediate subsequent awareness event. The details of the processing always escape us. The reports, however, though late and sometimes incomplete, are sensed by us as palpable, storable, and able to be subjected to subsequent processing (which almost always has started and may even be "over" by the time we perceive the initiating report).

Students and teachers both use System-1 for most ordinary things. Excellent teachers with experience teach most of the time using System-1. In his review of expert teacher characteristics, Berliner includes multiple conceptions of automaticity in routine teaching tasks. For example he mentions: "expert teachers develop automaticity for the repetitive operations that are needed to accomplish their goals."

The regulation of emotion appears to involve what could be called System-2.

Can one memory chunk serve both systems, or does each system have its own set of memories? Some experiments suggest that a single source is able to serve the needs of both implicit and explicit processing.

Kahneman's explanations are rooted in so-called dual-process theories — differentiating unconscious, automatic processes from conscious, deliberate processes. In an attempt to relate individual differences in one's capacity to control attention to so-called working memory capacity, the authors assert:

Ironically, our consideration of WMC [working memory capacity] has led us to depart from the standard dual-process theories … in two ways. First, controlled processing allows people to flexibly interface with their environment, and the source of this flexibility is the ability to control attention in a goal-directed manner, whether or not those goals are represented in conscious awareness. Thus, we have defined controlled processing not by the phenomenology of control, but by the extent to which goal-directed attention is at play. Second, our brief discussion about the dynamics of attention makes clear that goal-directed attention is often the precondition that allows more automatic forms of attention deployment to occur. The interplay between these two types of attention allocation, especially when considered at the neuroanatomical level, may obfuscate the need for the distinction between automatic and controlled processing whatsoever, thereby drastically revising the dual-process story as we now know it.

Feldman-Barrett et al., 2004

Perhaps the numbering system used by Kahneman was poorly conceived. System-1 is the first one we turn to for nearly everything in our lives. In fact, it's not so much that we turn to it as that it reports out automatically. For most things, however, the path to System-1 is via System-2. You once struggled to decode the words you just read; keep that in mind. In school learning, System-2 precedes System-1. In school behavior, as in all human behavior, System-1 precedes System-2. System-1 reflects your past; System-2 most often is the gateway to a new and possibly different future. That is, to change System-1, you need to engage System-2 first.

For teachers, changing System-1 means more than changing just the content knowledge; you must deal with all of the potential components of the metarepresentations that have been learned and that automate the existing actions. This includes addressing student emotions and goals. Activating System-2 requires considerable effort and attentional focus, something that teachers must convince students is worth doing. Schools want retrieval to take a specific form, one that results in high test scores. It is more important, however, that engaging System-2 will lead to critical thinkers, ones who first react with System-1 (the past) and then proceed to challenge the result when the task is important (say, voting or making a life-altering decision).

      

 

   Applications

We are entering an era in which studies will include direct neurophysiological data acquisition during instruction. This will be brought about by the availability of wireless EEG devices, ones that measure students' brain function without the need to wire students directly to recording devices. Once this happens, it will be possible to replace instruction based upon speculation about how brains work in classrooms with instruction that actually is based on brain data. What follows are our takes on how this is likely to turn out.

      

 

   Teaching

Apple

Teacher's apple

M3 is more about learning than teaching. We wouldn't have been interested in writing it were we not teachers seeking ways to improve our teaching. The principal ideas about learning and its relationship to teaching were set forth in an earlier book, the Unified Learning Model. Learning involves neuronal changes.

In this work, we seek to take special account of how our minds work. Our brains encompass systems from which our minds emerge, and our minds generate models. The models are what we "store" as memories. We do not store sensory input. When we process sensory input information, we ultimately process the difference between what we perceive and what our current mental model of the context predicts that we should perceive. When this difference is small, we pay no conscious attention to it but simply adjust what we are doing to make an accommodation. When you reach for the fork next to your plate, once you've started the reach you likely pay no further attention to that act, and your fork ends up in your hand. If you've had a bit too much wine, well, maybe things don't go well, and you need to spend time thinking about grasping that darned fork.

As authors, we also take into account that, in spite of all of our internal beliefs that our decisions are real-time, those decisions follow the processes that make them by scores or hundreds of milliseconds. Those voices in our heads, for the most part, tell us what we want to hear. Most of the time they serve us well. Sometimes their misrepresentations of the information in our contexts are outrageously distorted.

WirelessEEG

Wireless EEG

How do these two features of how our minds work impact our teaching? These ideas are too new for anyone to be sure. In the past, valuable brain studies have led to authors "jumping the gun." For example, when it became clear that our hemispheres divide tasks such that one or the other might play a dominant role in a particular process, the education literature was flooded with left-brain/right-brain speculation. In fact, most of this was pure hokum. The predictions we are making in M3 are testable, and likely will be tested before too long. The remote-monitoring EEG systems will get better and better, and their use in classrooms and other learning contexts will become more commonplace.

There are two things we can do at this point. We can try to interpret some of the well-studied aspects of teaching in terms of the newly available brain model. Also, we can try to use our experience to engage in some speculation about how new studies are likely to turn out.

      

 

   Teachers

Who are the teachers? Just about every parent is a teacher. Coaches, preachers, and politicians are teachers.

There are dozens of ways to become a teacher. This graphic sets forth three different career paths. We think these paths describe each of us regarding when we decided to "become something." Note that none of us started out with the goal of becoming a teacher even though that is a label that each of us currently embraces.

Careers

Graphic representation of authors' career paths

In Finland, a country whose students perform well based on international criteria, K-12 teachers are drawn from the top 10% or so of college students. Their college tuition is paid for by the state. Finnish teacher salaries are slightly above the median salary for Finnish college graduates. Persons in the teaching profession are highly regarded in society.

In the United States, a country whose students perform at about the middle of nations based on international criteria, K-12 teachers are drawn from the bottom half of college students. Students pay their own tuition. Teacher salaries run at about 90% of the median for U. S. college graduates. While individuals in the teaching profession often are highly regarded, the profession as a whole is often disparaged in society. Attrition subsequent to recruiting new teachers from among the "best and brightest" has been higher than for those otherwise typically recruited.

If you are reading this, the chances are that you are a teacher or a prospective teacher. As teachers, we wish we had started out with the understandings of learning provided in M3.

      

 

   The Yellow Brick Road

YellowBrickRoad

The yellow brick road

The yellow brick road leads to the "Wizard." There are two ways to look at learning in schools. One would simply say there is no wizard. The other would say there are n + 1 wizards, one for each student and one more for the teacher. The wizards are the minds of these participants in learning. Those minds are dynamic; the participants cannot experience an event that leaves their mind unchanged. And almost for sure, no two of those students' minds will be very much alike.

Minds are complicated things. Much of their work is implicit. Each has a goal of trying to create models that help their owners to survive if not thrive in whatever context they find themselves. They try to automate the tasks repeatedly found in the environment so as to save mental resources for other work that might come up.

While teachers clearly affect both the consciously and unconsciously learned aspects of their students' minds, they really try to focus on those things that are learned consciously. Kahneman called this System-2 learning. Recall Cleeremans's portrayal of Quality of Representation versus "Availability."

Quality of Representation

Cleeremans's Quality of Representation

We teachers tend to work on the curve that reflects availability to control. An elementary school reading teacher will spend a great deal of time on decoding skills and trying to get students to automate those skills. The authors of this book work with college undergraduate and graduate students. We don't deal with teaching decoding skills to our students; we might try to teach them how to teach those skills to their students if they are elementary school teachers. We all have different jobs. We will tell you that one of the hardest things for us to do is to work with a student who has automated some learned process that is flawed — or even incorrect.

This is not a book about magic. There is no Wizard. There is just the work — often very hard work — of bringing about learning. What we offer here is a top-down view of how minds work. We try to do this while focusing on those challenges faced by teachers.

      

 

   Formal Instruction

Instruction (noun)

  • a statement that describes how to do something
  • an order or command
  • the action or process of teaching

Merriam-Webster, November 9, 2015

We are distinguishing formal instruction — the sort of thing that takes place in schools — from informal instruction that might take place at home or while just going about and living life. We believe that what is done during formal instruction matters, and matters a great deal. Successful instruction must take into account two broad issues. First, it must account for how brains work. Just as important, it must consider the current status of the brains receiving instruction.

For example, you might think delusionally that reciting the Declaration of Independence 100 times to a one-year-old will lead to learning. There are excellent brain-based reasons to believe that none of the content of the Declaration of Independence will be a part of whatever is learned. While that example may be way off, there are excellent brain-based reasons to question many aspects of current practice.

The objective of instruction is to create or alter some model within intended learners. These learners must construct that model — as of today there are no known ways to import learning models as pre-existing neural packages. The learners construct the new models starting with the pre-existing models that the learners have brought with them. The learner's response to an instructional message is based on the differences between that message and what the learner expects the message to be. Nothing in school can be "brand new." For example, the message almost always has some language content that is familiar to the learner. Constructing a model means rewiring the brain. That rewiring process is biologically limited — it depends on what's already there, and the rate of rewiring is limited. No one becomes a brain surgeon in first grade. No one becomes a brain surgeon on the first day they hold a scalpel.

      

 

   Lectures

The lecture model is based on the idea of information dissemination. The lecturer is thought of as a (relative) content expert. The messages are aural and visual — the latter often coming from a board or projected screen. Before the printing press, this was the only practical method of delivery. Textbooks became common during the 20th century, and the Xerox machine (Xerox 914 in 1959) introduced still another content source.

If you are reading this book, you almost certainly have experienced lecture instruction. It is remarkable how often lecture instruction is decried by teacher educators, yet experienced by high school students, college students, and professionals. The truth is that lecture is likely to be the most efficient way to transmit knowledge to an audience with an appropriate level of expertise.

JohnBurmeister

Professor John Burmeister (using molecular models)

John Burmeister, an outstanding general chemistry lecturer at the University of Delaware, once said that "lecture is where we tell them what parts of the book they are not responsible for." How true, even today.

The potential problem with lectures is simple. If the presentation does not match your Vygotskian ZPD (zone of proximal development), you probably won't or can't be actively engaged with the content. Active engagement is required to learn something. You can attend a lecture, but a mismatched ZPD can leave you as if you were living on a different planet. The main problem is that in most classrooms from elementary all the way to college it is rare to have students sharing the same ZPD, hence rendering lecture fairly ineffective for a large swath of students.

Recently, there has been much ado about flipped instruction. The flipped instruction model relies on a lecture to deliver the content. The only difference is that, because of technology, lecture happens outside the classroom and can be repeated and paced. Instruction on video is still lecture; it just leaves classroom time for discussion, experiments, and practice with feedback.

      

 

   Buddies

CooperativeLearning

Cooperative learning

Cooperative learning has been heralded as a major model for instruction. Johnson and his colleagues conducted a meta-analysis of the effects of cooperative learning showing that it is considerably more effective than competition or standard instruction. It turns out to have a long history, with some of the earliest efforts being in the teaching of immunology at the University of Florida Medical School by Parker Small. Students were given about one-fourth of the required information, and their success depended upon learning to share and teach one another.

When well-managed, cooperative learning can support two features of effective learning. First, students tend to be more engaged than they are when listening to a teacher. Also, they have more opportunities for performance-related behaviors for which they can receive performance-based feedback. Thus attention, repetition, and connection opportunities all are afforded, usually more so than in traditional "direct" (explicit, lecture-like) instruction.

This strategy has been applied in college level large class instruction by having students work in pairs during small segments of traditional lecture-theater classes. These methods often are associated with Eric Mazur of the Harvard Physics Department.

      

 

   Scripted vs Extensively Planned

Shakespearean Play

The term 'script' suggests the model of a theatrical play

Some of the deepest struggles in education revolve around the issue of "scripted" instruction. A large part of the argument hinges on what one means by the term "scripted." The success of students participating in Montessori programs is well-documented. Naive observers entering a Montessori classroom may perceive chaos. Every child might be working at something different, and there would be little obvious supervision.

Montessori.

Montessori classroom

Why is it, then, that Montessori is considered scripted instruction? It is because a Montessori-trained teacher, observing this classroom, would likely be able to predict interactions between each child and the teacher. Since the materials employed are "standardized," that prediction might include the options as to what would come next.

If script means having the teacher read a script as in playing a part in a Shakespearean play, then we want no part of it. If it means each student is actively engaged in a prescribed activity in which the teacher's responses to a student question or behavior are predictable, then we are for it.

One of the best works describing "scripted" instruction is Simply Better: Doing What Matters Most to Change the Odds for Student Success. The author describes tracking down a foul odor (a dead mouse) which led to his discovery of notes developed by his predecessor that had been developed over her 30-year teaching career. He regarded this find as miraculous. Several researchers report that new teachers embrace rather than reject opportunities for scripted instruction.

When we taught our first classes, no one handed us a script. Today we would resent being handed a script that we had not played a role in developing. At the end of the day, however, all early-year teachers would benefit from engaging in scripted instruction. Further, for experienced teachers, all off-script instruction should be accomplished in a setting wherein outcomes are documented such that any enhancements might be used to modify future course scripts.

In an age of abundant classroom technology, delivering materials and assessments can be done in a straightforward and economical fashion. Creating appropriate scripts is a different matter, however. This requires the work of experienced, successful teachers and needs to be crafted to fit the various extant student populations. In the end, however, just how many high school chemistry scripts might we need? Ten?? Five??

Time spent in developing lesson plans in colleges of education is poorly spent. Students with no experience are asked to speculate about what real-world classroom situations they might encounter, and then have these conform to the notions of a "methods" instructor. This time would be much better spent in teaching prospective teachers about finding and employing scripts (something likely to be done by their district employers), and getting them to diagnose when a learner is failing to achieve the desired learning outcomes. In a comprehensive system of scripted instruction, appropriate materials would be built-in both for lagging and accelerating learners.

An external script helps novice teachers routinize instruction so they can attend to students and individual differences in their learning. Once these scripts become internalized, however, imposing outside scripts may reduce a teacher's ability to teach to the best of her abilities. We would argue that for teachers past the novice stage, the introduction of new ideas must be approached differently, reactivating System-2 thinking to consider how new ideas for instruction can change routinized scripts to alter outcomes for students.

Finally, much if not most education research, when replicated, fails to achieve the same results as presented in the original report. Consider, however, the following from a recently published replication study.

Large-scale replication studies typically produce smaller effects than efficacy trials, but in this case, the magnitude of the effect was similar. It is likely that the program's scripted nature (emphasis added) more easily allowed for large-scale implementation with minimal supervision and support.

Gersten et al., 2015

      

 

   Building Chunks

Links

Links

Teachers help students to build "chunks." An efficient teacher can lay out what s/he believes to be a complete version of his/her own chunk connected to some topic in a short time. Transferring this to a student is essentially never successful. Chunk building is a slow process accomplished within the mind of the learner. For students in school — even for preschoolers — almost nothing is ever "brand new." Almost always whatever is presented through sensory input (instructional messages) connects in one way or another to extant material. This implies that memories always are being activated, and some fundamental aspects of the way brains work always come into play, among them the malleability of memory.

When we sleep and dream, memories can become active. Of course, this means they can change. If something comes up in conversation, it can change. In the absence of some documentation (written text, video, etc.), a memory can drift or evolve. Most of us have learned to forgive ourselves for such "errors." Even memory experts have cited such cases. While presenting a paper on the subject, Marcia Johnson described an event from her own life: she had been certain the event transpired one way, only to learn later that the memory was false. Most of what we expect to be learned in school is documented. For example, the chemical symbol for the element gold is Au.

Schools depend upon repetition supported by documentation to help students acquire memories of school subjects.

Two aspects of chunking should be kept in mind — one related to what is called the 'transfer' problem, and the other to the sequencing within instruction messages.

Transfer of learning is the dependency of human conduct, learning, or performance on prior experience. Knowledge transfer refers to sharing or disseminating of knowledge and providing inputs to problem-solving.

When we teach something in school, especially in an area like mathematics, we expect students to be able to use what is taught in some practical context encountered later in life. So, if you learn about volumes in geometry and you end up being a cement worker, the notion is that you ought to be able to figure out how much concrete you need to fill some space given that you know the dimensions of that space. In terms of how memory works, the "transfer" problem is that one learns about calculating volumes long before encountering the practical problem on the job. These memories, therefore, are not linked, and the chunk is not formed. In the broadest sense, many things taught in school are expected to lead to patterns of thinking. Again, mathematics is an example. A recent book by Inglis and Attridge describing this issue comes to several conclusions, among them:

The evidence for transfer of mathematical thinking as the result of studying mathematics is spotty at best.

Schooling is expected to lead to the acquisition of some skills that do generalize and transfer — such as reading. Again, success is less broad than you might think. As just noted, the transfer of thinking depends upon the real world problem being couched "in the manner most commonly used in mathematical texts."

Patio

Soon to be patio

If you are a teacher helping learners to build chunks, there are at least some things you can do to help address the transfer problem. For example, you might pose the problem: How many cubic yards of concrete are needed to make a patio that is 22 feet by 16 feet and 6 inches thick? That's different from: Find the volume in cubic yards of an object that is 22' x 16' x 6".
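For the record, the arithmetic behind the patio problem can be sketched in a few lines of Python (the variable names are ours, not from any textbook): convert the thickness to feet, multiply out the volume in cubic feet, and divide by 27 to convert cubic feet to cubic yards.

```python
# Patio problem: 22 ft x 16 ft, 6 inches thick.
length_ft = 22.0
width_ft = 16.0
thickness_ft = 6.0 / 12.0        # 6 inches expressed in feet

volume_ft3 = length_ft * width_ft * thickness_ft  # cubic feet
volume_yd3 = volume_ft3 / 27.0                    # 27 cubic feet per cubic yard

print(round(volume_yd3, 2))  # prints 6.52
```

A practicing cement worker would, of course, order somewhat more than the computed 6.5 cubic yards to allow for spillage and uneven ground.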

Starting a segment of instruction with an advance organizer is considered helpful:

There are two main types of advance organizer. First, an advance organizer can be an introduction to a new topic, with the goals of giving students an overview, connecting new information to what the students already know, and illustrating the organization of the new concept or information to be processed and learned. Second, an advance organizer can be a task planner designed to orient the learner to a task by providing organizational cues, like a sequence of steps to complete the task or a list of components of the task, or by showing what a product (i.e., the learning outcome) should look like (e.g., what a well-organized story or description looks like).

Advance Organizers

Tower of Pisa

Leaning Tower of Pisa (poor foundation)

The other problem is with sequencing, or poor foundation. If you are teaching in the middle of Manhattan, New York, there are few patios. A typical geometry student from that area may have no notion of what is meant by the word patio. Sequencing instruction almost always involves finding some starting point, and that is not a given. The geometric volume example given above doesn't work if the learner has no notion of what a patio is.

      

 

   Inquiry

inquiry

Inquiry

We authors are researchers and committed to research. Inquiry is what we are about. We also are teachers, and we are interested in teaching about the latest available knowledge. That's why we would choose to write a book like this, and to make it available in the way we have.

Needless to say, high on each of our lists of personal goals is training students to become researchers. We do this in many ways, but for us the ultimate form is the supervision of graduate students and postdoctoral researchers, along with collaborations with peer researchers, especially during sabbatical leaves.

Enquiring minds want to know.

This catchphrase was used by a U. S. tabloid, the National Enquirer. It appeals to an inner sense that we all want to know. Of course, there are many things to know and many ways of knowing. A good way to learn something is to practice it. We expect surgeons to practice in many ways before they take their first scalpel — as a student — to a living human patient. We expect engineers to design and test systems before they are brought into the public. We expect scientists to train before they go out into the world to do their thing. Practice nearly always is important.

Practice, however, is rarely the best way to begin learning something. We almost always are better off starting with something involving learning ground rules, for example. It probably is better to learn the moves of chess pieces before you start playing chess. Before we engage in inquiry, perhaps we should learn something about inquiry. What is a variable? How does something become classified as a variable? How do we control variables? In elementary school science instruction, we talk about the notion of a "fair test."

Inquiry instruction has become quite a fad in STEM teaching. There is considerable research supporting the use of inquiry strategies. However, instruction in which we ask students to "figure something out" has proven to be inefficient. Scaffolded instruction where we begin by offering a great deal of support and then start removing aspects of the support has proven far more efficient. One of the best examples of the latter kind of instruction is the so-called guided design approach developed by Wales and Stager among others at West Virginia University.

To anyone who takes this story to mean that we advocate replacing inquiry (laboratory) with lecture, our point has been missed. Instead, our point is this: study before you go into the laboratory. A rather amusing outcome of much qualitative research in education is that studies often are undertaken before a library search is completed so as not to "bias" the researcher's view. While this may sound good, it's really nuts. It is far better to learn how to read the details of a study with the intent of uncovering flaws in the design or in thinking of improvements to the design.

The key issue related to designing inquiry instruction is the same issue that underpinned whole language instruction of the 1990s. In reading, you improve more when you have the opportunity to pick your reading material, especially when your selection has been constrained to material that is at an appropriate level to stretch your skills. On the other hand, truly open choice of reading materials, especially in the absence of decoding skills, is a disaster. An oversimplification of the whole language successes (of which there were many) is that motivation is much higher when you choose what you read. The point we are trying to make about inquiry is the same one that could be made for whole language: discovering something yourself works for the same underpinning motivational reasons, but only when you have the decoding skills you need.

      

 

   Machines & The Challenge

iPad

iPad® in teaching

We remain surprised at the low penetration of machine-based learning in the United States. One might argue that the first device really set up for this learning format was the iPad®, first released in April 2010. That view overlooks the fact that videodiscs with high-quality video could be run with computers in 1984, more than 30 years ago.

Much if not most of the computer-assisted instruction developed to date has had a serious and obvious flaw: the object of the instruction is to get the learner to learn to do, without the computer, something that would be much easier to accomplish with the computer.

Donald Norman.

Donald Norman

Donald Norman is well-known for many of his contributions. His book, The Design of Everyday Things, is a sort of primer of applied psychology. In this book he speaks of "knowledge in the world" and "knowledge in the head," the point being to use design to minimize required knowledge in the head by placing it appropriately somewhere in the world — as in the design of stovetop burners, for example. Another of Norman's works was the paper, "Cognitive Artifacts," in which he asserts:

… As we shall see, however, artifacts do not actually change an individual's capabilities. Rather, they change the nature of the task performed by the person. …

We haven't caught up with the fact that Internet access has changed the nature of most tasks. We still tend to use machines to teach the "old" curriculum, the one that served us well before Internet access became ubiquitous.

The first book about Web teaching (1997) used solid learning theory and laid out a plan for putting the 1990 curriculum on the Web. It made no mention of commerce (for example, Amazon or eBay), immediate access to information (Wikipedia or Siri on a phone), or cybersecurity versus cybereducation. It took no account of how the interpersonal aspects of the teaching/learning environment could be maintained during Web-based instruction.

While all of these issues are recognized today as important, the one that stands out as a challenge for educators is Internet access. To deny this access today is silly. Why should we ever give any student a "test" during which there is no access to Internet information and tools?

These sentences, which appeared in an early edition of this book, brought out a strong negative reaction from a reviewer and led to long and often heated discussions among the authors. Perhaps the best analogy to the problem is the so-called open-book test, something that all three authors use — extensively. It is acknowledged widely that open-book tests tend to be more difficult than closed-book tests. Sometimes it is deemed necessary to use closed-book tests, however. For example, in the early stages of teaching complex problem solving, it is necessary to break the problem into parts and assess just those parts. For some things, just-in-time learning is not appropriate.

What also needs to be acknowledged is that it is very difficult to adjust to a world where much of what we once taught is now achievable using just a few keystrokes. Changing instruction will not be easy for any of us. Most of us were raised in a pencil-and-paper world. Little is known about the "psychometrics" of a world of assessment that includes minimally-bridled Internet access. Many tests will be developed that actually reward keyboarding skill. Some will reward speed. While important, these are likely the most trivial aspects of modern tool-use skills. How we assess tool use with open access to the powerful tools of today remains a challenge for all of us.

      

 

   Designing Instruction

Instructional Design

Dick & Carey model of an instructional design process

Instruction always consists of messages — aural, visual, tactile — received by the learner. If we were teaching about cooking or wines, taste- and odor-based messages would be included. The stream of messages sent by the instructor to her students is intended for those students to create or change some model in their brain.

The late 1970s saw the emergence of cognitive load theory (CLT). In CLT, an emphasis is placed on adjusting instructional messages such that the content does not exceed the learner's working memory capacity. This notion was expressed in general terms by Vygotsky in his description of the Zone of Proximal Development (ZPD). What CLT does is add detail, often explicit detail, to this notion.

For example, when learners must use two screens, some working memory load is taken up by screen management. So designing instruction for a single screen leads to better results. Worked examples usually fit well into approaches following CLT. It is not surprising that, before the Internet, many supplementary books for college courses consisted almost entirely of worked out examples.

Recent research suggests that instructional materials might consider including content aimed at teachers and increasing teacher knowledge.

      

 

   My Pace

Pacing always is an issue. In a class of 25 students, the "pace" is probably the best fit for no one but an adequate fit for many. This is rather a silly situation since, using machines, the pace can be set by the learner regardless of the curriculum. In criterion-based systems, teachers are expected to "re-teach" material to those students who are not successful at first. Many students are left bored by the pace, one they perceive as slow.


Cartoon illustrating tracking (ability grouping)

Three schemes are in existence to deal with differences in ability: tracking (ability grouping), enrichment, and acceleration.

Tracking involves dividing students into groups according to their learning success. While the labels are rarely revealed, when a 3rd grader tells you that "the bluebirds are the poor readers," it almost certainly is the case that the bluebirds are the students in the slow track. There is considerable evidence that tracking is not an effective strategy for improving achievement for most students, except very high-achieving students. In practice, schools and teachers often make grouping decisions based on factors other than ability (including parental pressures and stereotypes), thus creating a potential mismatch between instruction and student ability that may lead to even more negative outcomes.

Enrichment involves offering additional and/or alternative materials to students who are learning quickly.

In acceleration, students move as fast as they are able. The stereotype of the accelerated student is portrayed by the character Sheldon Cooper in the TV series The Big Bang Theory. The research support for acceleration is strong, much stronger than for either tracking or enrichment. There is also little research support for the Cooper stereotype; quite to the contrary, accelerated students appear to do well.


Cartoon illustrating self-pacing

In a modern system, acceleration essentially means self-pacing and use of technology. It also is clear that not all students perform well under conditions of self-pacing.

      

 

   Scaffolding


Scaffolding during Nebraska Capitol Restoration

The term scaffolding is used in education in accord with notions suggested by Lev Vygotsky in his zone of proximal development. Starting from M3, it is straightforward to see how Vygotsky's philosophical notion would come about. The number of awareness events we can store temporarily is limited, perhaps three to five. Each awareness event represents a chunk. If the chunk we are heading for is much larger than those we start with, then putting them together is going to come up against this very real, biologically-based limit. We authors happen to be in businesses where the chunks are large. Especially in chemistry and electrical engineering, they tend to be sequential. For us there are many times when A → B → C makes sense, but A → C → B does not.

Learning to Read video

To help our students develop these large chunks, we try two different things. First, we try to break big chunks into smaller ones and then try to assemble the large from the small. An example would be early reading instruction related to decoding. We also try to support the various pieces of a big chunk with some scaffolding that we can remove as learner skills grow.

A favorite study practice of good students is the use of worked examples. Because tests are the best criteria for assessing what is learned in a course, files of worked-out exams from a previous course and instructor become prized possessions. Some instructors place these in libraries. When the microfiche format was in vogue, one of us provided every course student with worked-out exams for each exam given during the previous six semesters.

We are at or nearing a very interesting crossroads. Much of what we taught 10-20 years ago has been automated. If we know how to access them, we can have Internet tools do what we used to do ourselves. It is clear that we still waste a great deal of student learning time building skill sets that have been or soon will be automated. It is not at all clear how this works out in the future.


Robert Solow

Robert Solow, a wonderful person and renowned scholar, won the Nobel Prize in Economics in 1987 for his analysis of economic growth. He famously framed the productivity paradox:

You can see the computer age everywhere but in the productivity statistics

Solow, 1987

On the other hand, VisiCalc, the acknowledged first computer spreadsheet program, was followed by Lotus 1-2-3 in 1983. It can be successfully argued that, once managers learned how to use spreadsheets, they could manage more people, and the need for middle managers diminished. What about Mathematica? By absorbing the tedium of mathematical computation, Mathematica enables us to do more. What does this mean? It means that an engineer trained today must understand mathematics much more deeply than one trained two decades ago. This is a Rubicon we have not yet fully crossed in education; we insist on building skill levels for skills already subsumed by machines. A modern curriculum might present a learner with problems in which the goal is for that learner to use tools to solve problems once solved "by hand."

      

 

   Learning Goals


Goal

The most productive goal for a student to have in an educational setting is a learning goal. This goal will direct successive conscious access events (awareness events). Effective instruction is rooted in helping students develop a learning goal orientation. Schools often set goals for students in terms of getting high scores on tests. Some children set goals of besting others. Learning goals involve deciding to know and understand the content at hand.

Humans don't come prepackaged with the kinds of goals that relate to schooling. Those goals are shaped by the learner's environment (family, home, school, community, etc.). They change with age: the goals of a five-year-old are very different from those of a fifteen-year-old. Were a reader to choose to set and strive for some new goal, the nature of that process would be quite different from that of a six- or ten-year-old. It is different for junior faculty versus those approaching retirement.

It seems to matter how the achievement of mastery goals is presented to students:

Recent research has shown that, in a university context, mastery goals are highly valued and that students may endorse these goals either because they believe in their utility (i.e., social utility), in which case mastery goals are positively linked to achievement, or to create a positive image of themselves (i.e., social desirability), in which case mastery goals do not predict academic achievement. …

Dompnier et al., 2015

What students study in school matters. Curriculum matters! A systematic review of coursework taken in Florida showed consistently better outcomes for those students who took rigorous coursework. Early curriculum choices do end up channeling lives. For example, girls who scored well in 8th-grade mathematics took more high school mathematics courses, which in turn led to more STEM careers. For boys, taking high school physics had a significant impact on subsequently choosing a STEM career. The mathematics trajectories for successful completion of associate and bachelor degrees differ, but such trajectories remain a measure of "readiness."

Here is another example showing that what is taught matters:

Standard business training programs aim to boost the incomes of the millions of self-employed business owners in developing countries by teaching basic financial and marketing practices, yet the impacts of such programs are mixed. We tested whether a psychology-based personal initiative training approach, which teaches a proactive mindset and focuses on entrepreneurial behaviors, could have more success. A randomized controlled trial in Togo assigned microenterprise owners to a control group (n= 500), a leading business training program (n= 500), or a personal initiative training program (n= 500). Four follow-up surveys tracked outcomes for firms over 2 years and showed that personal initiative training increased firm profits by 30%, compared with a statistically insignificant 11% for traditional training. The training is cost-effective, paying for itself within 1 year.

Campos et al., 2017

      

 

   Prior Knowledge


Cartoon illustrating prior knowledge

Far and away, the best predictor of success in future learning is current knowledge. The more you know, the more you can learn. Many courses have prerequisites. Rarely would a high school freshman succeed in a college mathematics course at the senior level. For that matter, taking Calculus III without having Calculus I & II under your belt would be an extremely unwise undertaking. On the other hand, a college senior taking a graduate seminar in some history area would not be dead-on-arrival (but would likely be seriously disadvantaged).

In general, the average information associated with a larger set of concepts is greater than that associated with a smaller set. Prior knowledge reduces the size of the set of concepts that must be brought into awareness, and hence the average information, making the material easier to process. For example, knowing a language's alphabet restricts the phonemes one must consider to a relatively small set. English has 12 vowels, 13 diphthongs, 24 consonants, and a total of 36 phonemes. By comparison, Greek, a language much easier to pronounce from text than English, has 5 vowels, 5 diphthongs, 18 consonants, and 23 phonemes.
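The information argument can be sketched quantitatively. Under the simplifying assumption that the alternatives are equally likely (real phoneme frequencies are not), Shannon's measure says one choice among N alternatives carries log2 N bits:

```python
import math

def avg_information(n_alternatives: int) -> float:
    """Average information, in bits, of one choice among
    n equally likely alternatives: H = log2(n)."""
    return math.log2(n_alternatives)

# Shrinking the candidate set lowers the bits per choice;
# the phoneme counts are those cited in the text.
for n in (36, 23):
    print(f"{n} phonemes -> {avg_information(n):.2f} bits per choice")
```

By this rough measure, each Greek phoneme choice carries about 0.6 fewer bits than an English one; any prior knowledge that further shrinks the candidate set lowers the processing load further.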

Enormous differences appear at the outset of schooling. Some children arrive either reading or ready to read, while others have no real experience with the concept of reading. Children who are read to at home will have a better understanding of implicit syntactical rules and thus will be able to partition the set of possible words more efficiently when presented with a sentence during a reading or writing exercise. Prior knowledge differences tend to track socioeconomic status. A single parent with two jobs has little if any time to read to their child, and often few resources to buy reading material.

Following M3, if two learners arrive at school with differences in their knowledge — gaps — and both work at school learning, the gaps should persist. This is observed; gaps do persist. As noted, gaps correlate with socioeconomic status (SES) and ethnicity. One driver of these gaps appears to be self-efficacy. That is, when measures of self-efficacy are available together with those for SES and ethnicity, the differences based upon SES and ethnicity are minimized. A recent notion of "academic optimism" essentially ascribes environmental features to schools such that self-efficacy is likely to increase. For schools then, M3 suggests that eliminating gaps is an unrealistic goal, but creating atmospheres in which those gaps are minimized is realistic.
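The persistence of gaps can be illustrated with a toy model. The numbers and the growth rule below are our own illustrative assumptions, not data: we simply suppose that a year of schooling multiplies each learner's knowledge by the same factor, since the more you know, the more you can learn.

```python
# Toy model of persistent knowledge gaps. The starting values,
# rate, and growth rule are illustrative assumptions, not data.
def school_year(knowledge: float, rate: float = 0.2) -> float:
    """One year of schooling: growth proportional to prior knowledge."""
    return knowledge * (1 + rate)

a, b = 100.0, 60.0          # two learners arriving with a 40-point gap
for _ in range(6):
    a, b = school_year(a), school_year(b)

# Both learners improved, yet the absolute gap did not close.
print(f"after 6 years: a={a:.0f}, b={b:.0f}, gap={a - b:.0f}")
```

Under proportional growth the absolute gap actually widens; even under equal additive growth it merely stays constant. Either way, schooling alone does not close it, which is why minimizing rather than eliminating gaps is the realistic goal.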


Free and reduced lunch

Keep in mind that, in the current environment of frequent high-stakes testing across many states, the best predictor of lower school performance is almost universally the fraction of a school's students who participate in a free or reduced-price lunch program. This is also the best single quantitative indicator of the socioeconomic status of the students in a particular school. While mailing ZIP codes are good predictors of school performance, percent free and reduced-price lunch is even better. Socioeconomic status ultimately translates into prior knowledge for school children.

      

 

   Effort


Cartoon illustrating effort

For some time, Cattell's notion of two categories of intelligence has been accepted. Fluid intelligence is our raw computing power, so to speak, and can't be changed much; crystallized intelligence is based on chunk sizes and can be changed. The bottom line is that untutored geniuses are not good at solving complex but well-known problems, while those who deal with such problems routinely are. So it's probably unwise to ask a cardiac surgeon to fix the radiator in your car, and equally unwise to have a car mechanic insert a stent into one of your coronary arteries.

Belief in effort is a repeating theme. Some people believe that ability is an entity and cannot be changed by effort. Others believe that ability is incremental and can be changed. Outcomes are better when students believe their effort will pay off. Simply labeling students as "gifted and talented" generates the outcomes one might expect by reinforcing beliefs that abilities are innate rather than the result of effort.

Belief in the incremental view can be changed through instruction. Students, many of whom were "at-risk," who were taught or did computer-based lessons about how the brain is changed by learning, had more incremental views of intelligence and ability, and subsequently greater growth in their school achievement than control groups taught only good learning strategies.

The notion of what is fixed and what can be changed is nurtured in many places: the home, the school, the community, and the country. It's a good idea for parents to encourage their children, and for teachers to remind parents to do so. Schools should nurture an incremental view — change what you can, and don't assume you can't without really trying first. Communities should aspire to have their citizens exert effort, and so forth.


Graphic from the Facebook page of the Metro Montessori School in Manhattan

Effort pays for most students. There are a very small number for whom little effort is required. A larger number, but still a relatively small fraction, find that almost no level of effort leads to success. For most of us, effort pays. Think of the practice put in by the likes of Rory McIlroy and Michael Jordan.

      

 

   Studying


Quote regarding practice

The notion of practice is raised often in this book. It is well known that virtually all athletes practice. For some of them, like Tiger Woods and Michael Jordan, their efforts at practice are legendary. A famous paper by Ericsson spoke to deliberate practice, the planned and careful practice engaged in by experts, such as an instrumental musician practicing scales.

In Make It Stick, an excellent book about learning, Brown et al. (2014) assert that "learning is an acquired skill, and the most effective strategies are often counterintuitive." That's not really correct. Learning happens whether we want it to or not. We can do little to manage the details as these are controlled by neurological and chemical factors beyond our access. For a moment, let's replace just one word in that assertion:

Studying is an acquired skill, and the most effective strategies are often counterintuitive.

The revised assertion is one we embrace fully. Studying, of course, is what we talk about when we try to acquire the knowledge and skills that are taught in schools. It is in this context that "practice, practice, practice" implies a less than optimal strategy for study success.

Approaches to practice vary. Consider the recommendations for music practice set forth in this piece on National Public Radio, noting in particular that each session should involve setting goals and mapping out a plan.

Schaum's Outlines, started in the 1930s, are books containing outlines of course materials coupled with all sorts of worked-out practice problems. In the days before the Internet, one of the authors provided microfiche (very small film copies of documents, read using specially designed devices routinely found in libraries) of the previous six semesters of worked-out course examinations. Today we would do that online, with feedback offered only after the student had responded. Even then, it probably would have been better to provide just the tests with separate answer keys rather than have the answers on the tests themselves.


Microfiche (worked-out past examinations)

It is not surprising, therefore, that the use of worked-out examples has been the subject of research as well-summarized in an abstract by the late Roxana Moreno:

Recent research has demonstrated that worked example based instructional designs can effectively foster learning of engineering concepts and are supported by contemporary educational theories, including cognitive load theory. However, a number of interrelated fundamental questions, which have neither been addressed in the educational psychology nor in the engineering education literature, remain open including: (A) What is the impact of means-ends practice? (B) What is the effect of backward vs. forward fading of worked example steps? and (C) What is the effect of adaptivity to learner performance? The goal of the present study was to answer these questions by comparing the learning and perceptions about learning of engineering college freshman who learned how to solve electrical circuit problems in five different computer-based learning conditions: (1) problem solving with step-by-step feedback, (2) means-ends problem solving with total feedback, (3) backward fading, (4) forward fading, and (5) adaptive feedback. Forward fading and adaptive feedback practice promoted more students' near problem solving transfer ability than backward fading practice. Furthermore, the adaptive feedback practice group outperformed students in the backward fading practice group on measures of far problem solving transfer.

Moreno et al., 2006.

Another study-related research area has " … demonstrated that practicing retrieval is a powerful way to enhance learning." This work has included efforts by Roediger and has continued with his student, Karpicke. While much of the work has been done using college students as subjects, Karpicke reported a study completed with learners whose mean age was ten years.

As noted above, reading Make It Stick will help teachers, trainers, coaches, parents, and anyone for whom the efficient acquisition of new knowledge and/or skills is important. From that book:

• Some kinds of difficulties during learning help to make the learning stronger and better remembered.

• When learning is easy, it is often superficial and soon forgotten.

• Not all of our intellectual abilities are hardwired. In fact, when learning is effortful, it changes the brain, making new connections and increasing intellectual ability.

• You learn better when you wrestle with new problems before being shown the solution, rather than the other way around.

• To achieve excellence in any sphere, you must strive to surpass your current level of ability.

• Striving, by its nature, often results in setbacks, and setbacks are often what provide the essential information needed to adjust strategies to achieve mastery.

From Make It Stick, Brown, Roediger & McDaniel, 2014, p. 225

Finally, we add a note of caution. Frey, Cahill, & McDaniel speak to "individual differences in concept building." So-called abstraction learners focus on learning functional relationships. Exemplar learners, on the other hand, "develop conceptual representations based on memory of studied examples and algorithms rather than abstractions that summarize and relate particular examples." In other words, some students approach practice with examples as a task of memorizing (learning to recall) those examples, while others try to learn the principles from which the answers are derived. One of us provides prior tests for student use and posts answer keys after those tests have been available for a while. It is important to try the problems before consulting the answer keys, something we suspect was not done by the student who wrote this comment on Rate My Professors:

Whenever I come into a test, I understand each practice test (previous 8 semesters tests) inside and out just to bomb it. If I took this class literally any semester before fall 2017, these tests would be an easy A, but he amps each test x10 for us. It is easy to ace every homework and quiz and then bomb a test because of this.

When the student says "understand," s/he probably means s/he saw the examples and believes s/he can recall anything that looks just like the examples posted. Even when we teach advanced courses, it may be worth spending some time pointing out that practicing problems without the benefit of the answers, and only then looking up those answers, is usually a better way to study than taking in a problem concurrently with its provided solution.

      

 

   Games


Role Playing

In these application chaplets we've always tried to establish some basis for our claims in the earlier description chaplets, but not in this case. Suffice it to say that researchers distinguish between personality and temperament and that each of these constructs has several domains. Personality is measured by such instruments as the lengthy, well-documented Minnesota Multiphasic Personality Inventory (MMPI) as well as brief instruments such as the Ten-Item Personality Inventory (TIPI). Because TIPI is very brief, it can be introduced into experiments without generating resistance from subjects. Using TIPI, video game players have been studied. The abstract from that study summarizes the results:

Video games constitute a popular form of entertainment that allows millions of people to adopt virtual identities. In our research, we explored the idea that the appeal of games is due in part to their ability to provide players with novel experiences that let them "try on" ideal aspects of their selves that might not find expression in everyday life. We found that video games were most intrinsically motivating and had the greatest influence on emotions when players' experiences of themselves during play were congruent with players' conceptions of their ideal selves. Additionally, we found that high levels of immersion in gaming environments, as well as large discrepancies between players' actual-self and ideal-self characteristics, magnified the link between intrinsic motivation and the experience of ideal-self characteristics during play.

Przybylski et al., 2012

This book is concerned with more than the aspects of learning tied to increasing content skills; we are also interested in mentoring — looking broadly at the learner's interests. We infer from studies of game playing that a key question to get students to ask themselves is "Where do I fit in?" Games tell us that players play so as to be able to "be all they can be." Mentors can (and we think should) act to have students try to envision themselves in life roles.

To the degree that is possible, we should permit students to role play. This can run the gamut and includes activities such as playing a musical instrument, using computer-assisted design software, constructing an electrical circuit from parts, and student teaching.

Generally, however, the often-heard promise that games would improve learning performance has not borne fruit. Likewise, the notion that games can improve either fluid intelligence or working memory capacity has not panned out. For those reasons, the recent report from Mayer and co-workers deserves special notice:

… Results show the effectiveness of playing a custom-made game that focuses on a specific executive function skill for sufficient time at an appropriate level of challenge. Results support the specific transfer of general skills theory, in which practice of a cognitive skill in a game context transferred to performance on the same skill in a non-game context.

Parong et al., 2017

      

 

   Choices


Cartoon illustrating career choices

Most careers involve setting goals. It takes years to become a licensed electrician. You don't decide to be a physician and then become one a year later; many challenging steps are involved, and these are not clear unless you have first-hand knowledge of the career (say, your mother is a physician). The bias of personal experience can be seen in Norwegian higher education, which, despite being free, is far more likely to attract students whose parents went to college themselves.

Deciding early on careers is not unusual. For example, many kids express interest in becoming firefighters. Some will eventually become firefighters. Examples like Rory McIlroy are often pointed to in terms both of early commitment and an early demonstration of ability. Michael Jordan failed to make the varsity high school team as a sophomore (too short) but ended up on an All-American team as a senior. The work ethic of both of these athletes is legendary, with countless hours devoted to practice and learning to increase the crystallized intelligence in their respective sports.

People often report that teachers, especially high school teachers, have played a major role in their selection of a career.

Most people don't decide what they "want to be" until much later. It is easy to find conflicting advice. For example, compare:

Economist Neil Howe says that only 5% of people pick the right job on the first try. He calls those people "fast starters" and in general, they are less creative, less adventurous, and less innovative, which makes a conventional, common path work well for them. So it's questionable whether you should even aspire to be one of those people who picks right the first try. But, that said, we all still want to be good at choosing paths for ourselves. So, here are some guidelines to think about — whether it's our first career or our fifth career. …

Look at the lives you see people having, and ask yourself whose life you would want. That's easy, right? But now look deeper. You can't just have the life they have now. You have to have the life they lead to get there. So, Taylor Swift has had great success, and now she gets to pretty much do whatever she wants. But could you do what she did to get there? She had her whole family relocate so she could pursue her dreams in Nashville. Do you want a life of such high-stakes, singular commitment? …

Gilbert says you need to try stuff to see what will make you happy. Do that. It's scary, because it's hard to find out that what you thought would make you happy will not make you happy. But then, it's true that being a realist is not particularly useful to human evolution either.

How to pick a career (Accessed November 21, 2015)

with:

Forget the old thinking that kids could wait until college to decide a major. Today, they really ought to be making this decision before their junior year of high school.

Choosing a College Major (Accessed November 21, 2015)

The reality is that a large fraction of these choices change during the college years:

About 80 percent of students in the United States end up changing their major at least once, according to the National Center for Education Statistics. On average, college students change their major at least three times over the course of their college career.

Changing majors (Accessed November 21, 2015)

One of our children zipped along in mathematics being able to take junior level college courses as a freshman. As a result, practically no doors were closed to her, and she ended up with a Ph.D. in economics and an interest in international trade where understanding statistical models must be a sort of second nature. What happens early in life often ends up enabling outcomes much later.

Blattman recently posited ten suggestions "kids [should] know before going to college." We find three of them particularly meritorious:

   Minds, Models, and Mentors

An excellent general reference is Eagleman, D. (2015), The Brain: The Story of You, New York, Penguin Random House. There is a related series of 6 hours on DVD available from pbs.org.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press. (The William James Lectures)

Shell, D. F., et al. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht, Springer.

emergent property (Accessed December 8, 2015)


   Minds

Chesney, T., "An empirical examination of Wikipedia's credibility," Firstmonday (Accessed February 9, 2016).

Dehaene, S. (2014), Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts New York, Penguin.


   Models

Shell, D. F., et al. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht, Springer.

emergent property (Accessed December 8, 2015)


   Mentors

Linus Pauling (Accessed November 18, 2015)

Instructional design (Accessed November 18, 2015)

Life coaching (Accessed November 18, 2015)

Expert (Accessed November 18, 2015)

Michael Jordan and baseball (Accessed November 18, 2015)


   Brains

Tunicates eat their brains. (Accessed November 16, 2015)


   Information

Lossy compression (Accessed November 18, 2015)


   Learning

Edelman, G. M. (1987). Neural Darwinism. The Theory of Neuronal Group Selection. New York, NY: Basic Books.

Newborn zebra (Accessed November 16, 2015)

Tao Te Ching (Accessed November 18, 2015)


   Neurons

Sanders, L. (2017). "See these first-of-a-kind living human nerve cells." Science News.

Allen Brain Atlas. (2017). Retrieved November 14, 2017, from http://www.brain-map.org/

Cell (biology) (Accessed November 18, 2015)

Underwood, E. (2015). "The brain's identity crisis." Science 349(6248): 575-577.

Neuron (Accessed November 18, 2015)

Synapse (Accessed November 18, 2015)

Myelin (Accessed November 18, 2015)

Hawrylycz, M. J., Lein, E. S., Guillozet-Bongaarts, A. L., Shen, E. H., Ng, L., Miller, J. A., . . . Jones, A. R. (2012). "An anatomically comprehensive atlas of the adult human brain transcriptome." Nature, 489(7416), 391-399.

Marcus, G. F. (2004). The birth of mind: How a tiny number of genes creates the complexities of human thought. New York: Basic Books.

Wake, H., et al. (2015). "Nonsynaptic junctions on myelinating glia promote preferential myelination of electrically active axons." Nature Communications, 6(7844).

McKenzie, I. A., Ohayon, D., Li, H., Paes de Faria, J., Emery, B., Tohyama, K., & Richardson, W. D. (2014). "Motor skill learning requires active central myelination." Science, 346(6207), 318-322.

Franklin, R. J. and T. J. Bussey (2013). "Do your glial cells make you clever?" Cell Stem Cell, 12(3): 265-266.

Guzman, S. J., Schlögl, A., Frotscher, M., & Jonas, P. (2016). "Synaptic mechanisms of pattern completion in the hippocampal CA3 network." Science, 353(6304), 1117-1123.

Bittner, K. C., Milstein, A. D., Grienberger, C., Romani, S., & Magee, J. C. (2017). Behavioral time scale synaptic plasticity underlies CA1 place fields. Science, 357(6355), 1033-1036.


   Time Goes By

Pockett, S. (2003). "How long is 'now'? Phenomenology and the specious present." Phenomenology and the Cognitive Sciences, 2(1), 55-68.

The Perceptual Factors in Reading, F. M. Hamilton, 1907 from Google Books (Accessed November 17, 2015).

Motion pictures (Accessed November 17, 2015)

Klatt, E. C., & Klatt, C. A. (2011). "How much is too much reading for medical students? Assigned reading and reading rates at one medical school." Academic Medicine, 86(9), 1079-1083.


   Awareness

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Penguin.

Dehaene, S. (2009). Reading in the brain: The new science of how we read. New York, Penguin Group.

Dehaene, S. (2011). The number sense: How the mind creates mathematics. New York, Oxford University Press.

Graziano, M. (2013). Consciousness and the Social Brain. New York: Oxford University Press.

Graziano, M. S., & Kastner, S. (2011). "Human consciousness and its relationship to social neuroscience: a novel hypothesis." Cognitive Neuroscience, 2(2), 98-113.

Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J., & Sergent, C. (2006). "Conscious, preconscious, and subliminal processing: a testable taxonomy." Trends in Cognitive Sciences, 10(5), 204-211.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA, Harvard University Press.


   Shannon and Information

Claude Shannon (Accessed November 17, 2015)

A Mathematical Theory of Communication (Accessed November 17, 2015)

Information theory (Accessed November 17, 2015)


   Learning and Information

Cowan, N. (2005). Working memory capacity. New York, Psychology Press.

Ericsson, K. A., et al. (1980). "Acquisition of a memory skill." Science, 208: 1181-1182.

Balduzzi, D. and G. Tononi (2008). "Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework." PLoS Computational Biology, 4(6 e1000091): 1-18.

Young, R. M. and T. O'Shea (1981). "Errors in children's subtraction." Cognitive Science, 5(2): 153-177.


   Remembering

Rinehart remembering cards (Accessed November 17, 2015)

Ericsson, K. A., Chase, W. G., & Faloon, S. (1980). "Acquisition of a memory skill." Science, 208, 1181-1182.

Miller, G. A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review, 63, 81-97.

Eidetic memory (Accessed November 17, 2015)


   Quantization

Quantization (signal processing) (Accessed November 17, 2015)

Quantization (image processing) (Accessed November 17, 2015)


   Stories

Iliad (Accessed November 17, 2015)

Odyssey (Accessed November 17, 2015)

Writing (Accessed November 17, 2015)

Printing (Accessed November 17, 2015)

Reading glasses (Accessed November 17, 2015)

Stained glass (Accessed November 17, 2015)

Shell, D. F., et al. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht, Springer.


   Smell and Context

Schab, F. R. (1990). "Odors and the remembrance of things past." Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(4), 648-655.

Taste (Accessed November 17, 2015)

Oleogustus (Accessed November 17, 2015)

Herz, R. (2008). The Scent of Desire: Discovering Our Enigmatic Sense of Smell. New York: William Morrow (Harper Collins Publishers).

Olfaction (Accessed November 17, 2015)

Olfactory bulb (Accessed November 17, 2015)

Tsai, L., & Barnea, G. (2014). "A Critical Period Defined by Axon-Targeting Mechanisms in the Murine Olfactory Bulb." Science, 344(6180), 197-200.

Can't smell smoke while sleeping (Accessed November 17, 2015)

Arzi, A., et al. (2012). "Humans can learn new information during sleep." Nature Neuroscience 15: 1460–1465.

Dogs' sense of smell (Accessed November 17, 2015)


   Emergence of Consciousness

Emergence (Accessed November 18, 2015)

Snowflake (Accessed November 18, 2015)


   Mind's Eye

Glasser, M., Coalson, T., Robinson, E., Hacker, C., Harwell, J., Yacoub, E., . . . Jenkinson, M. (2016). "A multi-modal parcellation of human cerebral cortex." Nature. doi:10.1038/nature18933

Clark, A. (2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and Brain Sciences, 36(03), 181-204.

Macknik, S. L., Martinez-Conde, S., & Blakeslee, S. (2011). Sleights of mind: What the neuroscience of magic reveals about our everyday deceptions. New York: Picador/Macmillan.

Tablecloth trick (Accessed November 18, 2015)

Frith, C. (2007). Making up the mind: How the mind creates our mental world. New York: Wiley-Blackwell.


   Welcome to My World

Clark, A. (2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and Brain Sciences, 36(03), 181-204.

Cleeremans, A. (2014). "Prediction as a Computational Correlate of Consciousness." International Journal of Computing Anticipatory Systems, 29, 3-12.


   Under the Hood

Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press. ISBN 978-0262680530.

Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking innateness: A connectionist perspective on development. Cambridge, MA: MIT Press.

Cleeremans, A. (2011). "The radical plasticity thesis: how the brain learns to be conscious." Frontiers in psychology, 2, 59-70.


   Waves Keep Coming

Neural oscillations (Accessed December 1, 2016)

Kim, T., Thankachan, S., McKenna, J. T., McNally, J. M., Yang, C., Choi, J. H., . . . McCarley, R. W. (2015). "Cortically projecting basal forebrain parvalbumin neurons regulate cortical gamma band oscillations." Proceedings of the National Academy of Sciences, 112(11), 3535-3540.

Engel, T. A., Steinmetz, N. A., Gieselmann, M. A., Thiele, A., Moore, T., & Boahen, K. (2016). "Selective modulation of cortical state during spatial attention." Science, 354(6316), 1140-1144.


   Autopilot

The neurobiological mechanism underlying automaticity is not established. It may involve myelination of the neural paths activated during whatever process is being automated. For example, see Fields, R. D. (2014). "Myelin — More than Insulation." Science 344(6181): 264-266.

Bassett, D. S., Yang, M., Wymbs, N. F., & Grafton, S. T. (2015). "Learning-induced autonomy of sensorimotor systems." Nature Neuroscience, 18(6), 744-751.

Bassett, D. S., & Mattar, M. G. (2017). "A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior." Trends in Cognitive Sciences, 21(4), 250-264.

Reichow, A. W., Garchow, K. E., & Baird, R. Y. (2011). "Do scores on a tachistoscope test correlate with baseball batting averages?" Eye & Contact Lens, 37(3), 123-126.

Eagleman, D. (2011). Incognito: The secret lives of the brain. New York: Pantheon Books.

Duhigg, C. (2012). The Power of Habit. Why we do what we do in life and business. New York: Random House Publishing Group.

Stroop effect (Accessed November 18, 2015)

MacLeod, C. M. (1991). "Half a century of research on the Stroop effect: An integrative review." Psychological Bulletin, 109(2): 163.


   Temptation

Forbidden fruit (Accessed December 17, 2016)

Hofmann, W., Baumeister, R. F., Förster, G., & Vohs, K. D. (2012). "Everyday temptations: an experience sampling study of desire, conflict, and self-control." Journal of Personality and Social Psychology, 102(6), 1318.

Milyavskaya, M., Inzlicht, M., Hope, N., & Koestner, R. (2015). "Saying 'no' to temptation: Want-to motivation improves self-regulation by reducing temptation rather than by increasing self-control." Journal of Personality and Social Psychology, 109(4), 677.

Galla, B. M., & Duckworth, A. L. (2015). "More than resisting temptation: Beneficial habits mediate the relationship between self-control and positive life outcomes." Journal of Personality and Social Psychology, 109(3), 508.

Hofstadter, D. (2007). I am a strange loop. New York: Basic Books.


   Self-Talk

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York: HarperCollins Publishers Inc. p. 40.

Gazzaniga on free will (Accessed November 18, 2015)


   Who Says I'm Not the Boss?

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York: HarperCollins Publishers Inc. p. 40.

Physarum polycephalum (Accessed December 17, 2016)

Habituation (Accessed December 17, 2016)

Boisseau, R. P., Vogel, D., & Dussutour, A. (2016). "Habituation in non-neural organisms: evidence from slime moulds." Proceedings of the Royal Society B, 283: 20160446. http://dx.doi.org/10.1098/rspb.2016.0446

Scott, W. C., Kaiser, D., Othmer, S., & Sideroff, S. I. (2005). "Effects of an EEG biofeedback protocol on a mixed substance abusing population." The American Journal of Drug and Alcohol Abuse, 31(3), 455-469.

Shibata, K., Watanabe, T., Kawato, M., & Sasaki, Y. (2016). "Differential Activation Patterns in the Same Brain Region Led to Opposite Emotional States." PLoS Biol, 14(9), e1002546.

"… Subjects were randomly assigned to either a higher-preference (n = 12) or lower-preference (n = 12) group but were not informed of their assigned group. Each trial consisted of face, induction, fixation, feedback, and inter-trial periods (Fig 1C). During the face period, subjects were presented with one of the induction faces. In the induction period, subjects were instructed to somehow regulate their brain activity to make the size of a solid green disk (presented in the subsequent feedback period) as large as possible. Subjects were encouraged to enlarge the disk size so that they would receive a payment bonus proportional to the mean disk size. Subjects were given no further instructions. The size of the disk presented in the feedback period served as a feedback signal and reflected an estimated preference rating from the CC, which was calculated by applying the preference decoder to the activation pattern of the CC [cingulate cortex] obtained in the preceding induction period of the trial (see Main Experiment in Materials and Methods for details). However, the computation of the disk size was opposite in its direction between the two groups, although the instructions given to the two groups were exactly the same. For the higher-preference group, the disk size was proportional to the estimated rating from the CC activation pattern. That is, if the CC activation became more similar to the patterns corresponding to higher preference, the disk size became larger. In contrast, for the lower-preference group, a lower estimated rating made the disk larger. This made the instruction and the range of feedback signals to both groups identical. Note that all other information, including the intended preference direction, the purpose of the induction stage, and the meaning of the disk size, was withheld from subjects so that knowledge of the purpose of the experiment would not influence subjects' rating criteria in the post-test stage. … " (from Shibata et al.)


   Why Now?

Libet, B., et al. (1979). "Subjective referral of the timing for a conscious sensory experience." Brain, 102: 193-224.

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York, HarperCollins Publishers Inc.


   The Eyes Have It

Tong, F., et al. (2006). "Neural bases of binocular rivalry." Trends in Cognitive Sciences 10(11): 502-511.

Wheatstone's discovery (Accessed November 18, 2015)

Einhauser, W., et al. (2008). "Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry." Proceedings of the National Academy of Sciences 105(5): 1704.


   Deception

Visual capture (Accessed November 18, 2015)

Four touch tricks (Accessed November 18, 2015)

Segall, M. H., Campbell, D. T., & Herskovits, M. J. (1963). "Cultural Differences in the Perception of Geometric Illusions." Science, 139(3556), 769-771.

Gregory, R. L., Eye and Brain, McGraw Hill, 1966.

McGurk effect (Accessed November 18, 2015)


   I Didn't See That

Drew, T., et al. (2013). "The invisible gorilla strikes again: Sustained inattentional blindness in expert observers." Psychological Science, 24(9): 1848-1853.

Attentional blink (Accessed November 18, 2015)

Warm, J., Parasuraman, R., & Matthews, G. (2008). "Vigilance requires hard mental work and is stressful." Human Factors, 50(3), 433-441.

Ariga, A., & Lleras, A. (2011). "Brief and rare mental 'breaks' keep you focused: Deactivation and reactivation of task goals preempt vigilance decrements." Cognition, 118(3), 439-443.

Kim, Y. J., et al. (2006). "Attention induces synchronization-based response gain in steady-state visual evoked potentials." Nature Neuroscience 10(1): 117-125.


   It Seemed Like Forever

How extreme isolation warps minds (Accessed November 20, 2015)

Pockett, S. (2003). "How long is 'now'? Phenomenology and the specious present." Phenomenology and the Cognitive Sciences, 2(1), 55-68.

Frame rates (Accessed November 20, 2015)

Eagleman, D. (2015). The Brain: The Story of You. New York, Penguin Random House.

Stetson, C., Fiesta, M. P., & Eagleman, D. M. (2007). "Does time really slow down during a frightening event?" PLoS ONE, 2(12): e1295. Access (Accessed November 20, 2015)

Soares, S., Atallah, B. V., & Paton, J. J. (2016). "Midbrain dopamine neurons control judgment of time." Science, 354(6317), 1273-1277.


   Bingo

Sejnowski, T. and T. Delbruck (2012). "The Language of the Brain." Scientific American 307(4): 54-59.

Movie neuron (Accessed November 18, 2015)

Priming (and more) (Accessed November 18, 2015)

Priming (psychology) (Accessed November 18, 2015)

Takahashi, N., et al. (2012). "Locally Synchronized Synaptic Inputs." Science, 335(6066): 353-356.


   Beyond the Bounds

Boundary extension (Accessed November 23, 2016).


   Liar, Liar, Pants on Fire

Brian Williams (Accessed November 18, 2015)

Ben Carson (Accessed November 18, 2015)

Miller, G. (2012). "How are memories retrieved?" Science, 338(6103), 30-31.

Quran (Accessed November 18, 2015)

Tekcan, A. I., et al. (2003). "Autobiographical and event memory for 9/11: Changes across one year." Applied Cognitive Psychology, 17(9): 1057-1066.

Marcia K Johnson (Accessed November 18, 2015)

Elizabeth Loftus (Accessed November 18, 2015)

Loftus, E. and J. Palmer (1974). "Reconstruction of automobile destruction: An example of the interaction between language and memory." Journal of verbal learning and verbal behavior, 13(5): 585-589.

Printing press (Accessed November 18, 2015)

Homer (Accessed November 18, 2015)

Hafiz (Quran) (Accessed November 18, 2015)

Janssen, N., & Barber, H. A. (2012). "Phrase frequency effects in language production." PLoS ONE, 7(3), Paper (Accessed November 18, 2015)

Taconis, R., Ferguson-Hessler, M. G. M., & Broekkamp, H. (2001). "Teaching science problem solving: An overview of experimental work." Journal of Research in Science Teaching, 38(4), 442-468.

Shell, D. F., et al. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht, Springer.

Vladimir Putin (Accessed November 18, 2015)

Russian military intervention in Ukraine (Accessed November 18, 2015)

Garrett, N., Lazzaro, S. C., Ariely, D., & Sharot, T. (2016). "The brain adapts to dishonesty." Nature Neuroscience. doi:10.1038/nn.4426

Yokose, J., Okubo-Suzuki, R., Nomoto, M., Ohkawa, N., Nishizono, H., Suzuki, A., … Inokuchi, K. (2017). "Overlapping memory trace indispensable for linking, but not recalling, individual memories." Science, 355(6323), 398-403.

The Guardian


   Forgotten

Forget-me-not (Accessed November 19, 2015)

Roediger, H. L., & DeSoto, K. A. (2014). "Forgetting the presidents." Science, 346(6213), 1106-1109.

Marilu Henner (Accessed November 19, 2015)

Henner, M. (2012). Total memory makeover. Uncover your past, take charge of your future. New York: Gallery Books.

LePort, A. K. R., Mattfeld, A. T., Dickinson-Anson, H., Fallon, J. H., Stark, C. E. L., Kruggel, F. R., . . . McGaugh, J. L. (2012). "Behavioral and Neuroanatomical Investigation of Highly Superior Autobiographical Memory (HSAM)." Neurobiology of Learning and Memory, 98(1), 78-92.

Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S. J., & Frith, C. D. (2000). "Navigation-related structural change in the hippocampi of taxi drivers." Proceedings of the National Academy of Sciences, 97(8): 4398-4403.

Bekinschtein, T. A., Cardozo, J., & Manes, F. F. (2008). "Strategies of Buenos Aires Waiters to Enhance Memory Capacity in a Real-Life Setting." Behavioural Neurology, 20(3-4), 65-70.

Cao, X., Wang, H., Mei, B., An, S., Yin, L., Wang, L., & Tsien, J. (2008). "Inducible and selective erasure of memories in the mouse brain via chemical-genetic manipulation." Neuron, 60(2), 353-366.

Agren, T., Engman, J., Frick, A., Björkstrand, J., Larsson, E.-M., Furmark, T., & Fredrikson, M. (2012). "Disruption of Reconsolidation Erases a Fear Memory Trace in the Human Amygdala." Science, 337(6101), 1550-1552.

Karpova, N. N., Pickenhagen, A., Lindholm, J., Tiraboschi, E., Kulesskaya, N., Ágústsdóttir, A., … Castrén, E. (2011). "Fear Erasure in Mice Requires Synergy Between Antidepressant Drugs and Extinction Training." Science, 334(6063), 1731-1734.

PTSD (Accessed November 19, 2015)


   Sleep

Nichols, A. L. A., Eichler, T., Latham, R., & Zimmer, M. (2017). "A global brain state underlies C. elegans sleep behavior." Science, 356(6344).

Walker, M. P. and R. Stickgold (2006). "Sleep, memory, and plasticity." Annual Review of Psychology, 57: 139-166.

Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). "Breakdown of Cortical Effective Connectivity During Sleep." Science, 309(5744): 2228-2232.

TMS (Accessed November 19, 2015)

Sleepwalking (Accessed November 19, 2015)

Rasch, B., Buchel, C., Gais, S., & Born, J. (2007). "Odor Cues During Slow-Wave Sleep Prompt Declarative Memory Consolidation." Science, 315(5817), 1426-1429.

Xie, L., Kang, H., Xu, Q., Chen, M. J., Liao, Y., Thiyagarajan, M., . . . Nedergaard, M. (2013). "Sleep Drives Metabolite Clearance from the Adult Brain." Science, 342(6156), 373-377.

Diekelmann, S., & Born, J. (2010). "The memory function of sleep." Nature Reviews Neuroscience, 11(2), 114-126.

Yang, G., Lai, C. S. W., Cichon, J., Ma, L., Li, W., & Gan, W.-B. (2014). "Sleep promotes branch-specific formation of dendritic spines after learning." Science, 344(6188), 1173-1178.

de Vivo, L., Bellesi, M., Marshall, W., Bushong, E. A., Ellisman, M. H., Tononi, G., & Cirelli, C. (2017). "Ultrastructural evidence for synaptic scaling across the wake/sleep cycle." Science, 355(6324), 507-510.

Diering, G. H., Nirujogi, R. S., Roth, R. H., Worley, P. F., Pandey, A., & Huganir, R. L. (2017). "Homer1a drives homeostatic scaling-down of excitatory synapses during sleep." Science, 355(6324), 511-515.

Long-term potentiation (Accessed November 19, 2015)

Memory consolidation (Accessed November 19, 2015)

Ji, D., & Wilson, M. A. (2007). "Coordinated memory replay in the visual cortex and hippocampus during sleep." Nature Neuroscience, 10(1), 100-107.

Muto, V., Jaspar, M., Meyer, C., Kussé, C., Chellappa, S. L., Degueldre, C., . . . Maquet, P. (2016). "Local modulation of human brain responses by circadian rhythmicity and sleep debt." Science, 353(6300), 687-690.

Wagner, U., Gais, S., Haider, H., Verleger, R., & Born, J. (2004). "Sleep inspires insight." Nature, 427(6972), 352-355.

Landmann, N., Kuhn, M., Piosczyk, H., Feige, B., Baglioni, C., Spiegelhalder, K., . . . Nissen, C. (2014). "The reorganisation of memory during sleep." Sleep medicine reviews, 18(6), 531-541.


   Waking Up

Awakening (Accessed November 19, 2015)

Alarms Decide When You Should Really Wake Up (Accessed November 19, 2015)


   Mind Wandering

Levinson, D. B., Smallwood, J., & Davidson, R. J. (2012). "The Persistence of Thought: Evidence for a Role of Working Memory in the Maintenance of Task-Unrelated Thinking." Psychological Science, 23(4), 375-380.

Killingsworth, M. A., & Gilbert, D. T. (2010). "A Wandering Mind Is an Unhappy Mind." Science, 330(6006), 932.

Smallwood, J., Fishman, D., & Schooler, J. (2007). "Counting the cost of an absent mind: Mind wandering as an underrecognized influence on educational performance." Psychonomic bulletin & review, 14(2), 230.

Christoff, K., Gordon, A. M., Smallwood, J., Smith, R., & Schooler, J. W. (2009). "Experience sampling during fMRI reveals default network and executive system contributions to mind wandering." Proceedings of the National Academy of Sciences, 106(21), 8719-8724.

Smallwood, J., Beach, E., Schooler, J. W., & Handy, T. C. (2008). "Going AWOL in the brain: Mind wandering reduces cortical analysis of external events." Journal of Cognitive Neuroscience, 20(3), 458-469.

Goldberg, Y. K., Eastwood, J. D., LaGuardia, J., & Danckert, J. (2011). "Boredom: An emotional experience distinct from apathy, anhedonia, or depression." Journal of Social and Clinical Psychology, 30(6), 647-666.

Elpidorou, A. (2014). "The bright side of boredom." Frontiers in Psychology, 5, Article 1245, 1-4.


   I Believe

Sloman, S., & Fernbach, P. (2017). The knowledge illusion: Why we never think alone. New York: Riverhead Books.

Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). "Neural correlates of maintaining one's political beliefs in the face of counterevidence." Scientific reports, 6, 39589.

Kahan, D. M., Landrum, A. R., Carpenter, K., Helft, L., & Jamieson, K. H. (2016). "Science Curiosity and Political Information Processing." Advances in Political Psychology, forthcoming; Yale Law & Economics Research Paper No. 561. Available at SSRN: https://ssrn.com/abstract=2816803

Feinberg, M., & Willer, R. (2015). "From Gulf to Bridge: When Do Moral Arguments Facilitate Political Influence?" Personality and Social Psychology Bulletin, 41(12), 1665-1681.

'Who shared it?': How Americans decide what news to trust on social media


   Motivation

Pintrich, P. R. and D. H. Schunk (1996). Motivation in education: Theory, research, and applications. Englewood Cliffs, NJ, Prentice Hall.

Brooks, D. W. and D. F. Shell (2006). "Working Memory, Motivation, and Teacher-Initiated Learning." Journal of Science Education and Technology, 15(1): 17-30.

Crowdsourcing (Accessed November 19, 2015)

Jordan, T. C., Burnett, S. H., Carson, S., Caruso, S. M., Clase, K., DeJong, R. J., . . . Hatfull, G. F. (2013). "A Broadly Implementable Research Course in Phage Discovery and Genomics for First-Year Undergraduate Students." mBio, 4(6). Implementable Research (Accessed November 19, 2015)


   I'm "Good" Today

Pekrun, R. and E. J. Stephens (2010). "Achievement Emotions: A Control-Value Approach." Social and Personality Psychology Compass, 4(4): 238-255.

Grandjean, D., Sander, D., & Scherer, K. (2008). "Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization." Consciousness and Cognition, 17, 484-495. (p. 493)

LaBar, K. S., & Cabeza, R. (2006). "Cognitive neuroscience of emotional memory." Nature Reviews Neuroscience, 7(1), 54-64.

Cohen, N., Henik, A., & Mor, N. (2011). "Can emotion modulate attention? Evidence for reciprocal links in the attentional network test." Experimental Psychology, 58(3), 171-197.

Hirst, W., Phelps, E. A., Buckner, R. L., Budson, A. E., Cuc, A., Gabrieli, J. D. E., et al. (2009). "Long-term memory for the terrorist attack of September 11: flashbulb memories, event memories, and the factors that influence their retention." Journal of Experimental Psychology: General, 138(2), 161-176.

Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton: Princeton University Press

Jimenez-Ortega, L., Martin-Loeches, M., Casado, P., Sel, A., Fondevila, S., Tejada, P. H. d., … Sommer, W. (2012). "How the emotional content of discourse affects language comprehension." PLoS ONE, 7(3), Paper (Accessed February 28, 2015).

Pekrun, R., Götz, T., Frenzel, A. C., Barchfeld, P., & Perry, R. P. (2011). "Measuring emotions in students' learning and performance: The achievement emotions questionnaire (AEQ)." Contemporary Educational Psychology, 36(1), 36-48.

Pekrun, R., Goetz, T., Daniels, L. M., Stupnisky, R. H., & Perry, R. P. (2010). "Boredom in achievement settings: Exploring control–value antecedents and performance outcomes of a neglected emotion." Journal of Educational Psychology, 102(3): 531-549.

Ekman, P. and E. L. Rosenberg, Eds. (2005). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), New York, Oxford University Press.


   You'll Like This

Schraw, G. and S. Lehman (2001). "Situational Interest: A Review of the Literature and Directions for Future Research." Educational Psychology Review, 13(1): 23-52.

Magner, U. I., et al. (2014). "Triggering situational interest by decorative illustrations both fosters and hinders learning in computer-based learning environments." Learning and Instruction, 29: 141-152.

Takacs, Z. K., Swart, E. K., & Bus, A. G. (2014). "Can the computer replace the adult for storybook reading? A meta-analysis on the effects of multimedia stories as compared to sharing print stories with an adult." Frontiers in Psychology, 5, 1-12. pdf (Accessed November 19, 2015)

Dan Pink's video (Accessed November 19, 2015)


   I Liked That

Tiger Woods (Accessed November 19, 2015)

Bloom, B. and L. Sosniak (1985). Developing Talent in Young People, Ballantine Books.

Roger Kornberg (Accessed November 19, 2015)

Arthur Kornberg (Accessed November 19, 2015)

George, B., Wystrach, V. P., and Perkins, R. (1985). "Why do students choose chemistry as a major?" Journal of Chemical Education, 62(6): 501.


   I Loved That

Bloom, B. and L. Sosniak (1985). Developing Talent in Young People, Ballantine Books.

Colvin, G. (2008). Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else. New York, Penguin Group.

Early math superstars (Accessed November 19, 2015)


   I'll Do That

Kahneman on Charlie Rose (Accessed November 19, 2015)

Egner, T. (2009). "Prefrontal cortex and cognitive control: motivating functional hierarchies." Nature Neuroscience, 12(7), 821-822.

Krueger, F., Spampinato, M. V., Barbey, A. K., Huey, E. D., Morland, T., & Grafman, J. (2009). "The frontopolar cortex mediates event knowledge complexity: a parametric functional MRI study." NeuroReport, 20(12), 1093-1097.

Charron, S., & Koechlin, E. (2010). "Divided representation of concurrent goals in the human frontal lobes." Science, 328(5976), 360-363.

Carlisle, N. B., & Woodman, G. F. (2011). "When memory is not enough: Electrophysiological evidence for goal-dependent use of working memory representations in guiding visual attention." Journal of Cognitive Neuroscience, 23(10), 2650-2664.

Stanovich, K. E., & West, R. F. (1998). "Cognitive ability and variation in selection task performance." Thinking & Reasoning, 4(3), 193-230.

Stanovich, K. E., & West, R. F. (2000). "Individual differences in reasoning: Implications for the rationality debate?" Behavioral and Brain Sciences, 23(5), 645-665.

Iyengar, S. S., & Lepper, M. R. (2000). "When choice is demotivating: Can one desire too much of a good thing?" Journal of Personality and Social Psychology, 79(6), 995-1006.

Dijksterhuis, A., et al. (2006). "On making the right choice: The deliberation-without-attention effect." Science, 311(5763): 1005-1007.

Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., et al. (2004). "Neural activity when people solve verbal problems with insight." PLoS Biology, 2(4), e97. Paper (Accessed August 12, 2015).

Subramaniam, K., Kounios, J., Parrish, T., & Jung-Beeman, M. (2009). "A brain mechanism for facilitation of insight by positive affect." Journal of Cognitive Neuroscience, 21(3), 415-432.

Mather, M., Shafir, E., & Johnson, M. (2003). "Remembering chosen and assigned options." Memory & Cognition, 31(3), 422-433.

Sax, H., et al. (2013). "Implementation of infection control best practice in intensive care units throughout Europe: a mixed-method evaluation study." Implementation Science, 8(1): 24. Access (Accessed November 19, 2015)


   ... Two Marshmallows

Gnostic temptation (Accessed November 19, 2015)

Stanford marshmallow experiment (Accessed November 19, 2015)

Magen, E., Kim, B., Dweck, C. S., Gross, J. J., & McClure, S. M. (2014). "Behavioral and neural correlates of increased self-control in the absence of increased willpower." Proceedings of the National Academy of Sciences, 111(27), 9786-9791.

Turner, S., & Lapan, R. T. (2002). "Career self-efficacy and perceptions of parent support in adolescent career development." The Career Development Quarterly, 51(1), 44-55.

Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (2001). "Self-efficacy beliefs as shapers of children's aspirations and career trajectories." Child Development, 72(1), 187-206.

How people choose 'career paths' (Accessed August 6, 2015).

Krumboltz, J. D. (2009). "The happenstance learning theory." Journal of Career Assessment, 17(2), 135-154.

Krumboltz, J. D., & Vidalakis, N. K. (2000). "Expanding learning opportunities using career assessments." Journal of Career Assessment, 8(4), 315-327.

Brown, D. (2002). Career choice and development (4th ed.). San Francisco: Jossey-Bass.

Dan Gilbert on Happiness (Accessed November 19, 2015)

Conley, D. T. and E. M. French (2014). "Student ownership of learning as a key component of college readiness." American Behavioral Scientist, 58(8): 1018-1034.


   Happiness

TED: Gilbert on happiness (21 minutes) (Accessed November 21, 2015)


   Working Memory

Cowan, N. (2005). Working memory capacity. New York, Psychology Press.

Working memory (Accessed November 20, 2015)

Ericsson, K. A., Chase, W. G., & Faloon, S. (1980). "Acquisition of a memory skill." Science, 208, 1181-1182.

Rinehart remembering cards (Accessed November 17, 2015)

Balduzzi, D. and G. Tononi (2008). "Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework." PLoS Computational Biology, 4(6): e1000091, 1-18. Access

Rose, N. S., LaRocque, J. J., Riggall, A. C., Gosseries, O., Starrett, M. J., Meyering, E. E., & Postle, B. R. (2016). "Reactivation of latent working memories with transcranial magnetic stimulation." Science, 354(6316), 1136-1139.

Long-term potentiation (Accessed November 19, 2015)


   Chunking

Chase, W. G. and H. A. Simon (1973). "Perception in chess." Cognitive psychology, 4(1): 55-81.

Simon, H. A. (1974). "How big is a chunk?" Science, 183, 482-488.

Fluid and crystallized intelligence


   Fixed Ability

Spearman, C. (1904). "General intelligence objectively determined and measured." American Journal of Psychology, 15, 201-293.

Intelligence quotient (Accessed November 20, 2015)

Genius (Accessed August 11, 2015)

Fluid and crystallized intelligence (Accessed November 20, 2015)

Walter O'Brien (Accessed November 20, 2015)

Flynn effect (Accessed November 20, 2015)

Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). "Working memory is (almost) perfectly predicted by g." Intelligence, 32, 277-296.

Flynn, J. R. (1987). "Massive IQ gains in 14 nations: What IQ tests really measure." Psychological Bulletin, 101, 171-191.

Redick, T. S., et al. (2013). "No evidence of intelligence improvement after working memory training: a randomized, placebo-controlled study." Journal of Experimental Psychology: General, 142(2): 359.

Rouder, J. N., et al. (2008). "An assessment of fixed-capacity models of visual working memory." Proceedings of the National Academy of Sciences, 105(16): 5975-5979.

Truth about the Termites (Accessed November 20, 2015)

Terman, L. M. and M. H. Oden (1959). Genetic studies of genius. Vol. V. The gifted group at mid-life. Palo Alto, Stanford University Press.

Kern, M.L., & Friedman, H.S. (2008). "Early educational milestones as predictors of lifelong academic achievement, midlife adjustment, and longevity." Journal of Applied Developmental Psychology, 30, 419-430.

Hyde, J., Lindberg, S., Linn, M., Ellis, A., & Williams, C. (2008). "Diversity: Gender similarities characterize math performance." Science, 321(5888), 494-495.

Bian, L., Leslie, S.-J., & Cimpian, A. (2017). "Gender stereotypes about intellectual ability emerge early and influence children's interests." Science, 355(6323), 389-391.

Cimpian, A., & Leslie, S.-J. (2017). "The Brilliance Trap." Scientific American, 317(3), 60.

Leslie, S.-J., Cimpian, A., Meyer, M., & Freeland, E. (2015). "Expectations of brilliance underlie gender distributions across academic disciplines." Science, 347(6219), 262-265.

Kane, J. M., & Mertz, J. E. (2012). "Debunking myths about gender and mathematics performance." Notices of the American Mathematical Society, 59(1), 10-21.

Deary, I. J., Whalley, L. J., Lemmon, H., Crawford, J., & Starr, J. M. (2000). "The stability of individual differences in mental ability from childhood to old age: Follow-up of the 1932 Scottish mental survey." Intelligence, 28(1), 49-55.

Nature versus nurture

Dudbridge, F. (2013). "Power and predictive accuracy of polygenic risk scores." PLoS Genet, 9(3), e1003348.

Chabris, C. F., Lee, J. J., Cesarini, D., Benjamin, D. J., & Laibson, D. I. (2015). "The fourth law of behavior genetics." Current Directions in Psychological Science, 24(4), 304-312.

Belsky, D. W., Moffitt, T. E., Corcoran, D. L., Domingue, B., Harrington, H., Hogan, S., … Caspi, A. (2016). "The Genetics of Success: How Single-Nucleotide Polymorphisms Associated With Educational Attainment Relate to Life-Course Development." Psychological science, 27(7), 957-972.

Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. Journal of personality and social psychology, 77(6), 1121-1134.


   Curious Cat

Livio, M. (2017). Why? What makes us curious. New York: Simon & Schuster.

Baranes, A. F., Oudeyer, P.-Y., & Gottlieb, J. (2014). "The effects of task difficulty, novelty and the size of the search space on intrinsically motivated exploration." Frontiers in neuroscience, 8, 1-9.

Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). "The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex." PloS one, 7(5), e36399.


   Believe In Me

Gladwell, M. (2008). Outliers: The story of success. New York: Little, Brown, and Company.

Outliers (book) (Accessed November 20, 2015)

Colvin, G. (2008). Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else. New York: Penguin Group.

Elliott, E. S., & Dweck, C. S. (1988). "An approach to motivation and achievement." Journal of Personality and Social Psychology, 54, 5-12.

Schunk, D. H. (1983). "Ability versus effort attributional feedback: Differential effects on self-efficacy and achievement." Journal of Educational Psychology, 75, 848-856.

Treffert, D. A. (2010). Islands of Genius. London: Jessica Kingsley Publishers. ISBN-10: 1849058105.

Haimovitz, K., & Dweck, C. S. (2016). "What Predicts Children's Fixed and Growth Intelligence Mind-Sets? Not Their Parents' Views of Intelligence but Their Parents' Views of Failure." Psychological science, 27(6), 859-869.


   Multitasking

The Impact of Hand-Held and Hands-Free Cell Phone Use on Driving Performance and Safety-Critical Event Risk (Accessed November 20, 2015)

Anguera, J., Boccanfuso, J., Rintoul, J., Al-Hashimi, O., Faraji, F., Janowich, J., . . . Johnston, E. (2013). "Video game training enhances cognitive control in older adults." Nature, 501(7465), 97-101.

KC, D. S. (2013). "Does Multitasking Improve Performance? Evidence from the Emergency Department." Manufacturing & Service Operations Management, 16(2), 168-183.

Clapp, W. C., Rubens, M. T., Sabharwal, J., & Gazzaley, A. (2011). "Deficit in switching between functional brain networks underlies the impact of multitasking on working memory in older adults." Proceedings of the National Academy of Sciences, 108(17), 7212-7217.


   Rewiring

Dresler, M., Shirer, W. R., Konrad, B. N., Müller, N. C. J., Wagner, I. C., Fernández, G., … Greicius, M. D. (2017). "Mnemonic Training Reshapes Brain Networks to Support Superior Memory." Neuron, 93(5), 1227-1235.

Bickart, K. C., Wright, C. I., Dautoff, R. J., Dickerson, B. C., & Barrett, L. F. (2010). "Amygdala volume and social network size in humans." Nature Neuroscience, 14, 163-164.


   I Know That

Shapiro, A. (2004). "How including prior knowledge as a subject variable may change outcomes of learning research." American Educational Research Journal, 41(1), 159-189.

Lev Vygotsky (Accessed November 21, 2015)

Zone of Proximal Development (Accessed November 21, 2015)

Instructional scaffolding (Accessed November 21, 2015)

Millard Fillmore (Accessed November 21, 2015)

Gleevec (Accessed November 21, 2015)

Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). "The expertise reversal effect." Educational Psychologist, 38(1), 23-31.

Duncan, K., Sadanand, A., & Davachi, L. (2012). "Memory's penumbra: Episodic memory decisions induce lingering mnemonic biases." Science, 337(6093), 485-487.

Groopman, J. (2007). How Doctors Think. New York: Houghton Mifflin.

Wilson, K., Trainin, G., Laughridge, V., Brooks, D., & Wickless, M. (2011). "Our Zoo To You: The link between zoo animals in the classroom and science and literacy concepts in first-grade journal writing." Journal of Early Childhood Literacy, 11(3), 275-306.

Borge-Holthoefer, J., & Arenas, A. (July 29 - August 1, 2009). "Navigating word association norms to extract semantic information." Paper presented at the 31st Annual Conference of the Cognitive Science Society, Amsterdam.

Transfer of learning (Accessed November 21, 2015)

Simulation (Accessed November 21, 2015)


   By The Numbers

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). "Statistical Learning by 8-Month-Old Infants." Science, 274(5294), 1926-1928.

Saffran, J. R., Senghas, A., & Trueswell, J. C. (2001). "The acquisition of language by children." Proceedings of the National Academy of Sciences, 98(23): 12874-12875.

Lany, J. and J. Saffran (2010). "From Statistics to Meaning: Infants' Acquisition of Lexical Categories." Psychological science, 21(2): 284-291.

Motherese (Accessed November 21, 2015)

Arzi, A., Shedlesky, L., Ben-Shaul, M., Nasser, K., Oksenberg, A., Hairston, I. S., & Sobel, N. (2012). "Humans can learn new information during sleep." Nature Neuroscience, 15, 1460-1465.


   Predicting Values

Kolling, N., Behrens, T. E. J., Mars, R. B., & Rushworth, M. F. S. (2012). "Neural mechanisms of foraging." Science, 336(6077), 95-98.

TED: Gilbert on decisions (30 minutes) (Accessed November 21, 2015)

Miller, G., Tybur, J. M., & Jordan, B. D. (2007). "Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrus?" Evolution and Human Behavior, 28(6), 375-381.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). "The neural bases of cognitive conflict and control in moral judgment." Neuron, 44(2), 389-400.

Eagleman, D. (2015). The Brain: The Story of You. New York, Penguin Random House.


   Somehow I Know That

Edward Thorndike (Accessed November 21, 2015)

Thorndike, E., & Rock, R. (1934). "Learning without awareness of what is being learned or intent to learn it." Journal of Experimental Psychology, 17(1), 1-19.

Reinforcement Theory (see Research on Reinforcement Theory) (Accessed August 12, 2015.)

Pryor, K. W., Haag, R., & O'Reilly, J. (1969). "The creative porpoise: Training for novel behavior." Journal of the Experimental Analysis of Behavior, 12, 653-661.

Levenson, R. M., Krupinski, E. A., Navarro, V. M., Wasserman, E. A., & Coles, J. A. (2015). "Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images." PloS one, 10(11), e0141357.

Biederman, I., & Shiffrar, M. M. (1987). "Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task." Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4), 640-645.

Michael Gazzaniga: Your Storytelling Brain (Accessed November 21, 2015)

Miyamoto, K., Osada, T., Setsuie, R., Takeda, M., Tamura, K., Adachi, Y., & Miyashita, Y. (2017). "Causal neural network of metamemory for retrospection in primates." Science, 355(6321), 188-193.


   If It Looks Like a Duck

Homo economicus (Accessed December 1, 2016)

Lewis, M. (2017). The undoing project: A friendship that changed our minds. New York: W. W. Norton & Co.

Kahneman, D., & Tversky, A. (1973). "On the psychology of prediction." Psychological Review, 80(4), 237-251.


   They're All Like That

Stereotype (Accessed November 21, 2015)

Gilbert, D. T., & Hixon, J. G. (1991). "The trouble of thinking: activation and application of stereotypic beliefs." Journal of personality and social psychology, 60(4), 509.

Reyna, C. (2000). "Lazy, Dumb, or Industrious: When Stereotypes Convey Attribution Information in the Classroom." Educational Psychology Review, 12(1): 85-110.

Quantization (image processing) (Accessed November 21, 2015)


   That's Crazy

Computational Psychiatry.


   Faces

Ekman, P., & Rosenberg, E. L. (Eds.). (2005). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). New York: Oxford University Press.

Beneventi, H., Barndon, R., Ersland, L., & Hugdahl, K. (2007). "An fMRI study of working memory for schematic facial expressions." Scandinavian Journal of Psychology, 48(2), 81-86.

Kret, M. E., & Tomonaga, M. (2016). "Getting to the Bottom of Face Processing. Species-Specific Inversion Effects for Faces and Behinds in Humans and Chimpanzees (Pan Troglodytes)." PloS one, 11(11), e0165357. doi:10.1371/journal.pone.0165357

Dogs' sense of smell

Broadbent, E., et al. (2013). "Robots with display screens: a robot with a more humanlike face display is perceived to have more mind and a better personality." PloS one, 8(8): e72589. (Access) (Accessed November 21, 2015)

Russell, R., et al. (2009). "Super-recognizers: People with extraordinary face recognition ability." Psychonomic Bulletin & Review, 16(2): 252-257.

Chiroro, P. M., et al. (2008). "Recognizing faces across continents: The effect of within-race variations on the own-race bias in face recognition." Psychonomic Bulletin & Review, 15(6): 1089-1092.

Grammer, K. and R. Thornhill (1994). "Human (Homo sapiens) facial attractiveness and sexual selection: the role of symmetry and averageness." Journal of Comparative Psychology, 108(3): 233.

Hamers, L. (2017). Brains encode faces piece by piece. Science News, July 8, 2017, p. 9.

Chang, L., & Tsao, D. Y. (2017). The Code for Facial Identity in the Primate Brain. Cell, 169(6), 1013-1028.


   Libraries

Premack, D. (2007). "Human and animal cognition: Continuity and discontinuity." Proceedings of the National Academy of Sciences, 104(35), 13861-13867.

Homer (Accessed December 17, 2016)

Printing press (Accessed December 17, 2016)

Tennie, C., Call, J., & Tomasello, M. (2009). "Ratcheting up the ratchet: on the evolution of cumulative culture." Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1528), 2405-2415.

Giles, J. (2005). "Internet encyclopaedias go head to head." Nature, 438, 900-901.

Brulle, R. J. (2014). "Institutionalizing delay: foundation funding and the creation of US climate change counter-movement organizations." Climatic Change, 122(4), 681-694.


   More Stories

Gazzaley, A., & Rosen, L. D. (2016). The distracted mind: Ancient brains in a high-tech world. Cambridge, MA: MIT Press.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). "Learning Styles." Psychological Science in the Public Interest, 9(3), 105.

Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. Bower (Ed.), The Psychology of Learning and Motivation. (Vol. VIII, pp. 47-90). New York: Academic Press.

Aben, B., Stapert, S., & Blokland, A. (2012). "About the distinction between working memory and short-term memory." Frontiers in psychology, 3, 301. doi: 10.3389/fpsyg.2012.00301

Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). "Grit: perseverance and passion for long-term goals." Journal of personality and social psychology, 92(6), 1087-1101.


   (Modern) Telepathy

Telepathy (Accessed November 21, 2015)

Grau, C., Ginhoux, R., Riera, A., Nguyen, T. L., Chauvat, H., Berg, M., … Ruffini, G. (2014). "Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies." PLoS ONE, 9(8), e105225. Access (Accessed November 21, 2015)

Brain-to-brain communication is finally possible. It's just very clunky. (Accessed November 22, 2016)


   Mind Reading

Misunderstanding others (Accessed November 21, 2015)

Epley, N. (2014). Mindwise: How We Understand What Others Think, Believe, Feel, and Want. New York: Random House.

Kenny, D. A., & DePaulo, B. M. (1993). "Do people know how others view them? An empirical and theoretical account." Psychological Bulletin, 114(1), 145.

Sallet, J., Mars, R. B., Noonan, M. P., Andersson, J. L., O'Reilly, J. X., Jbabdi, S., … Rushworth, M. F. S. (2011). "Social network size affects neural circuits in macaques." Science, 334(6056), 697-700. Science podcast. (Accessed January 1, 2014)

Wang, F., Zhu, J., Zhu, H., Zhang, Q., Lin, Z., & Hu, H. (2011). "Bidirectional control of social hierarchy by synaptic efficacy in medial prefrontal cortex." Science, 334(6056), 693-697.

Bickart, K. C., et al. (2010). "Amygdala volume and social network size in humans." Nature Neuroscience 14, 163-164.

Coplan (shyness) (Accessed November 21, 2015)

The Influential Mind, Google Books.

Sharot, T. (2017). The Influential Mind. New York: Henry Holt and Company.


   Imagine This and That

Isaacson, W. (2013). Steve Jobs. New York: Simon and Schuster.

Necker cube (Accessed November 21, 2015)

Frith, C. (2007). Making up the mind: How the mind creates our mental world. New York: Wiley-Blackwell.

Eureka effect (Accessed November 21, 2015)

Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., . . . Kounios, J. (2004). "Neural Activity When People Solve Verbal Problems with Insight." PLoS Biol, 2(4). doi:10.1371/journal.pbio.0020097 Access (Accessed December 16, 2016)

Sleeping helps (Accessed November 21, 2015)

Stroebe, W., Nijstad, B. A., & Rietzschel, E. F. (2010). "Beyond productivity loss in brainstorming groups: The evolution of a question." Advances in Experimental Social Psychology, 43, 157-203.


   Imagine Now and Then

TED: The psychology of your future self (Accessed November 21, 2015)

Quoidbach, J., Gilbert, D. T., & Wilson, T. D. (2013). "The End of History Illusion." Science, 339(6115), 96-98.


   Mind the Load

Cooper, G. (1998). "Research into Cognitive load theory and instructional design at UNSW." (Accessed August 4, 2015).

Clark, R. C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco: Pfeiffer.

van Merriënboer, J., Kester, L., & Paas, F. (2006). "Teaching complex rather than simple tasks: Balancing intrinsic and germane load to enhance transfer of learning." Applied Cognitive Psychology, 20(3), 343-352.

Van Merriënboer, J. J. G., & Sweller, J. (2005). "Cognitive load theory and complex learning: Recent developments and future directions." Educational Psychology Review, 17(2), 147-177.


   Load the Mind

Alter, A. L. (2013). "The benefits of cognitive disfluency." Current Directions in Psychological Science, 22(6), 437-442.

Diemand-Yauman, C., Oppenheimer, D. M., & Vaughan, E. B. (2011). "Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes." Cognition, 118(1), 111-115.

Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). "The expertise reversal effect." Educational Psychologist, 38(1), 23-31.


   System-1, System-2

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus & Giroux (Macmillan).

Gladwell, M. (2005). Blink: the power of thinking without thinking. New York: Little, Brown & Co.

Berliner, D. C. (2001). "Learning about and learning from expert teachers." International Journal of Educational Research, 35(5), 463-482.

Ilg, R., Vogeley, K., Goschke, T., Bolte, A., Shah, J. N., Poppel, E., & Fink, G. R. (2007). "Neural processes underlying intuitive coherence judgments as revealed by fMRI on a semantic judgment task." NeuroImage, 38(1), 228-238.

Rameson, L. T., Satpute, A. B., & Lieberman, M. D. (2010). "The neural correlates of implicit and explicit self-relevant processing." NeuroImage, 50(2), 701-708.

Koole, S. L. (2009). "The psychology of emotion regulation: An integrative review." Cognition and Emotion, 23(1), 4-41.

Feldman Barrett, L., Tugade, M. M., & Engle, R. W. (2004). "Individual differences in working memory capacity and dual-process theories of the mind." Psychological Bulletin, 130(4), 553-573.


   Teaching

Shell, D. F., et al. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht: Springer.

Zich, C., et al. (2015). "Wireless EEG with individualized channel layout enables efficient motor imagery training." Clinical Neurophysiology, 126(4): 698-710.


   Teachers

Kelly, S. and L. Northrop (2015). "Early Career Outcomes for the 'Best and the Brightest': Selectivity, Satisfaction, and Attrition in the Beginning Teacher Longitudinal Survey." American Educational Research Journal, 52(4): 624-656.

Worldwide comparison of teaching profession (Accessed August 12, 2015)


   The Yellow Brick Road

Cleeremans, A. (2014). "Prediction as a Computational Correlate of Consciousness." International Journal of Computing Anticipatory Systems, 29, 3-12.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus & Giroux (Macmillan).


   Formal Instruction

Formal vs. Informal Education (Accessed November 21, 2015)


   Lectures

John Burmeister (Accessed November 21, 2015)

Zone of Proximal Development (Accessed November 21, 2015)

TED: Khan on flipped instruction (Accessed November 21, 2015)


   Buddies

Johnson, D. W., et al. (1981). "Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis." Psychological Bulletin, 89(1): 47-62.

Parker Small, Jr. (Accessed November 21, 2015)

Mazur, E. (1996). Peer Instruction: A User's Manual. Boston: Addison-Wesley.


   Scripted vs Extensively Planned

Whitescarver, K., & Cossentino, J. (2008). "Montessori and the mainstream: A century of reform on the margins." The Teachers College Record, 110(12), 2571-2600.

Montessori Classroom (Accessed March 3, 2015).

Lillard, A., & Else-Quest, N. (2006). "The Early Years: Evaluating Montessori." Science, 313(5795), 1893-1894.

Reeves, J. (2010). "Teacher learning by script." Language Teaching Research, 14(3): 241-258.

Goodwin, B. (2011). Simply Better: Doing What Matters Most to Change the Odds for Student Success. Alexandria, VA: Association for Supervision & Curriculum Development.

Gersten, R., et al. (2015). "Intervention for First Graders With Limited Number Knowledge: Large-Scale Replication of a Randomized Controlled Trial." American Educational Research Journal, 52(3): 516-546.


   Building Chunks

Johnson, M. K., Raye, C. L., Mitchell, K. J., & Ankudowich, E. (2012). "The cognitive neuroscience of true and false memories." In True and false recovered memories (pp. 15-52). Springer.

Transfer of learning

Knowledge transfer

Inglis, M., & Attridge, N. (2017). Does mathematical study develop logical thinking?: Testing the theory of formal discipline. London: World Scientific Publishing Europe Ltd.

Luiten, J., Ames, W., & Ackerson, G. (1980). A meta-analysis of the effects of advance organizers on learning and retention. American Educational Research Journal, 17(2), 211-218.

Advance Organizers


   Inquiry

Kirschner, P. A., et al. (2006). "Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching." Educational Psychologist, 41(2): 75-86.

Wales, C., & Stager, R. A. (1978). The Guided Design Approach. Englewood Cliffs, NJ: Educational Technology Publications.


   Machines & The Challenge

Norman, D. A. (1991). "Cognitive artifacts." In J. M. Carroll (Ed.), Designing Interaction (pp. 17-38). Cambridge: Cambridge University Press.

Norman, D. A. (1990). The Design of Everyday Things. Garden City, NY: Doubleday.

Open book tests (Accessed November 21, 2015)

Brooks, D. W. (1997). Web-Teaching. New York: Plenum Press.


   Designing Instruction

Dick & Carey model (Accessed November 21, 2015)

Schaum's Outlines (Accessed November 21, 2015)

Davis, E. A., Palincsar, A. S., Smith, P. S., Arias, A. M., & Kademian, S. M. (2017). "Educative Curriculum Materials: Uptake, Impact, and Implications for Research and Design." Educational Researcher, 46, 293-304.


   My Pace

Colangelo, N., Assouline, S. G., & Gross, M. U. M. (2004). A nation deceived: How schools hold back America's brightest students. (Accessed March 6, 2015).

Lee, S. Y., Olszewski-Kubilius, P., & Thomson, D. T. (2012). "Academically gifted students' perceived interpersonal competence and peer relationships." Gifted Child Quarterly, 56(2), 90-104.

Lee, S. Y., Olszewski-Kubilius, P., & Peternel, G. (2010). "The efficacy of academic acceleration for gifted minority students." Gifted Child Quarterly, 54(3), 189-208.

Olszewski-Kubilius, P. "Talent searches and accelerated programming for gifted students." In A Nation Deceived, Vol. 2, Chapter 7.

Assouline, S. G., & Lupkowski-Shoplik, A. (2012). "The talent search model of gifted identification." Journal of Psychoeducational Assessment, 30(1), 45-59.

Schofield, J. (2010). "International evidence on ability grouping with curriculum differentiation and the achievement gap in secondary schools." The Teachers College Record, 112(5), 1492-1528.

Kulik, C. L. C., & Kulik, J. A. (1982). "Effects of ability grouping on secondary school students: A meta-analysis of evaluation findings." American Educational Research Journal, 19(3), 415-428.

Nelson, T. L. (2001). "Tracking, parental education, and child literacy development: How ability grouping perpetuates poor education attainment within minority communities." Georgetown Journal on Poverty, Law, & Policy, 8, 363.

Slavin, R. E. (1987). "Ability grouping and student achievement in elementary schools: A best-evidence synthesis." Review of Educational Research, 57(3), 293-336.

Sheldon Cooper (Accessed November 21, 2015)


   Scaffolding

The Early Reading Skills Builder (Accessed November 21, 2015)

Instructional scaffolding (Accessed November 21, 2015)

Robert Solow (Accessed November 21, 2015)

Mathematica (Accessed November 21, 2015)


   Learning Goals

Dweck, C. S., & Master, A. (2008). "Self-theories motivate self-regulated learning." In D. H. Schunk & B. J. Zimmerman (Eds.). Motivation and self-regulated learning: Theory, research, and applications (pp. 31-52). New York: Erlbaum/Taylor & Francis Group.

Ng, E., & Bereiter, C. (1991). "Three levels of goal orientation in learning." Journal of the Learning Sciences, 1, 243-271.

Meece, J., Anderman, E., & Anderman, L. (2006). "Classroom goal structure, student motivation, and academic achievement." Annual Review of Psychology, 57, 487-503.

Long, M. C., Conger, D., & Iatarola, P. (2012). "Effects of High School Course-Taking on Secondary and Postsecondary Success." American Educational Research Journal, 49(2), 285-322.

Trusty, J. (2002). "Effects of high school course-taking and other variables on choice of science and mathematics college majors." Journal of Counseling & Development, 80(4), 464-474.

Lee, J. (2012). "College for all: Gaps between desirable and actual p-12 math achievement trajectories for college readiness." Educational Researcher, 41(2), 43-55.

Dompnier, B., et al. (2015). "Improving Low Achievers' Academic Performance at University by Changing the Social Value of Mastery Goals." American Educational Research Journal, 52(4): 720-749.

Campos, F., Frese, M., Goldstein, M., Iacovone, L., Johnson, H. C., McKenzie, D., & Mensmann, M. (2017). Teaching personal initiative beats traditional training in boosting small business in West Africa. Science, 357(6357), 1287-1290.


   Prior Knowledge

Tyson, W., & Roksa, J. (2017). Importance of Grades and Placement for Math Attainment. Educational Researcher, 46(3), 140-142.

Shapiro, A. (2004). "How including prior knowledge as a subject variable may change outcomes of learning research." American Educational Research Journal, 41(1), 159-189.

European Language Phonemes (Accessed November 21, 2015)

Britton, B. K. and A. Tesser (1982). "Effects of prior knowledge on use of cognitive capacity in three complex cognitive tasks." Journal of Verbal Learning and Verbal Behavior, 21(4): 421-436.

State School Rankings Mostly Measuring Race and Income Rather Than Performance (Accessed August 12, 2015)


   Effort

James Cattell (Accessed November 21, 2015)

Fluid and crystallized intelligence (Accessed November 21, 2015)

Ziegler, A., & Stoeger, H. (2010). "Research on a modified framework of implicit personality theories." Learning and Individual Differences, 20, 318-326.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). "Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention." Child Development, 78(1), 246-263.

Dweck, C. S., & Master, A. (2008). "Self-theories motivate self-regulated learning." In D. H. Schunk & B. J. Zimmerman (Eds.). Motivation and self-regulated learning: Theory, research, and applications (pp. 31-52). New York: Erlbaum/Taylor & Francis Group.

Rory McIlroy (Accessed November 21, 2015)

Michael Jordan (Accessed November 21, 2015)


   Studying

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). "The Role of Deliberate Practice in the Acquisition of Expert Performance." Psychological Review, 100(3), 363-406.

Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683-703). New York: Cambridge University Press.

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make It Stick. Cambridge, MA: Belknap Press of Harvard University Press.

NPR on music practice

Moreno, R., Reisslein, M., & Delgoda, G. M. (2006). Toward a fundamental understanding of worked example instruction: Impact of means-ends practice, backward/forward fading, and adaptivity. Paper presented at the 36th ASEE/IEEE Frontiers in Education Conference, San Diego, CA.

Frey, R. F., Cahill, M. J., & McDaniel, M. A. (2017). Students’ Concept-Building Approaches: A Novel Predictor of Success in Chemistry Courses. Journal of Chemical Education, 94(9), 1185-1194.

Karpicke, J. D., & Roediger, H. L. (2008). The Critical Importance of Retrieval for Learning. Science, 319, 966-968.

Karpicke, J. D., Blunt, J. R., & Smith, M. A. (2016). Retrieval-Based Learning: Positive Effects of Retrieval Practice in Elementary School Children. Frontiers in psychology, 7. doi.org/10.3389/fpsyg.2016.00350


   Games

Przybylski, A. K., Weinstein, N., Murayama, K., Lynch, M. F., & Ryan, R. M. (2012). "The ideal self at play: The appeal of video games that let you be all you can be." Psychological science, 23(1), 69-76.

Gosling, S. D., Rentfrow, P. J., & Swann Jr, W. B. (2003). "A very brief measure of the Big-Five personality domains." Journal of Research in Personality, 37, 504-528.

TIPI

Alexander, G., Eaton, I., & Egan, K. (2010). Cracking the Code of Electronic Games: Some Lessons for Educators. Teachers College Record, 112(7), 1830-1850.

Parong, J., Mayer, R. E., Fiorella, L., MacNamara, A., Homer, B. D., & Plass, J. L. (2017). Learning executive function skills by playing focused video games. Contemporary Educational Psychology, 51, 141-151.


   Choices

Norway and free college

Rory McIlroy (Accessed November 21, 2015)

Michael Jordan (Accessed November 21, 2015)

George, B., Wystrach, V., & Perkins, R. (1985). "Why do students choose chemistry as a major?" Journal of Chemical Education, 62(6), 501.

See Cecilia Gaposchkin at: Major in what you love (Accessed November 21, 2015)

How to pick a career (Accessed November 21, 2015)

Choosing a college major (Accessed November 21, 2015)

Changing majors (Accessed November 21, 2015)

Blattman 10 things not enough kids know before going to college.


   Schools

Emeril Recipe (Accessed March 4, 2015).

Wilkinson, I. A. G., Parr, J. M., Fung, I. Y. Y., Hattie, J. A. C., & Townsend, M. A. R. (2002). "Discussion: Modeling and maximizing peer effects in school." International Journal of Educational Research, 37(5), 521-535.

Wang, M.-T., & Holcombe, R. (2010). "Adolescents' perceptions of school environment, engagement, and academic achievement in middle school." American Educational Research Journal, 47(3), 633-662.

Newton, X. (2010). "End-of-high-school mathematics attainment: How did students get there?" The Teachers College Record, 112(4), 1064-1095.

Warren, J. R. (2012). "First- through eighth-grade retention rates for all 50 states. A new method and initial results." Educational Researcher, 41(8), 320-329.

Hoy, W. K., Tarter, C. J., & Hoy, A. W. (2006). "Academic optimism of schools: A force for student achievement." American Educational Research Journal, 43(3), 425-446.

Else-Quest, N. M. and O. Peterca (2015). "Academic Attitudes and Achievement in Students of Urban Public Single-Sex and Mixed-Sex High Schools." American Educational Research Journal, 52(4): 693-718.


   Museums

Museum Definition (Accessed March 4, 2015.)


   High Stakes

NCLB


   Rating Teachers

Hill, H. C., Kapitula, L., & Umland, K. (2010). "A Validity Argument Approach to Evaluating Teacher Value-Added Scores." American Educational Research Journal, 48(3), 794-831.

McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). "The Intertemporal Variability of Teacher Effect Estimates." Education Finance and Policy, 4(4), 572-606.

Goe, L., Bell, C., & Little, O. (2008). Approaches to Evaluating Teacher Effectiveness: A Research Synthesis. Washington, D. C.: National Comprehensive Center for Teacher Quality.

Beleche, T., Fairris, D., & Marks, M. (2010). "Do Course Evaluations Truly Reflect Student Learning?: Evidence from an Objectively Graded Post-test." Economics of Education Review, 31(5), 709-719.

Dean, L. G., Kendal, R. L., Schapiro, S. J., Thierry, B., & Laland, K. N. (2012). "Identification of the Social and Cognitive Processes Underlying Human Cumulative Culture." Science, 335(6072), 1114-1118.


   Improving Teachers

Board certification (Accessed November 22, 2015)

Mathur, P. (2011). "Hand hygiene: Back to the basics of infection control." The Indian Journal of Medical Research, 134(5), 611-620.

Shell, D. F., Brooks, D. W., Trainin, G., Wilson, K. M., Kauffman, D. F., & Herr, L. M. (2010). The Unified Learning Model: How Motivational, Cognitive, and Neurobiological Sciences Inform Best Teaching Practices. Dordrecht: Springer.


   Empowering Learners

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York: HarperCollins Publishers Inc. p. 40.

Self-regulated learning (Accessed November 22, 2015)

Developing Young Children's Self-Regulation through Everyday Experiences (Accessed November 23, 2015)

General resources for self-regulation (Accessed November 23, 2015)

Helping Students Self-Regulate in Math and Sciences Courses: Improving the Will and the Skill (Accessed November 23, 2015)

Pressley, M., Kersey, S. E. D., Bogaert, L. R., Mohan, L., Roehrig, A. D., & Warzon, K. B. (2003). Motivating primary-grade students. New York: Guilford Press.


   Pay Attention!

Gazzaley, A., & Rosen, L. D. (2016). The distracted mind: Ancient brains in a high-tech world. Cambridge, MA: MIT Press.

Eastwood, J. D., Frischen, A., Fenske, M. J., & Smilek, D. (2012). "The unengaged mind: Defining boredom in terms of attention." Perspectives on Psychological Science, 7(5), 482-495.

20-20-20 rule (Accessed November 26, 2016).


   (Pretend to) Think Out Loud

Stork, G., Niu, D., Fujimoto, A., Koft, E. R., Balkovec, J. M., Tata, J. R., & Dake, G. R. (2001). "The first stereoselective total synthesis of quinine." Journal of the American Chemical Society, 123(14), 3239-3242.

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York: HarperCollins Publishers Inc.


   Appendices

Blind men and an elephant (Accessed December 17, 2016)


   Old-fashioned Psych

William James Talks

Thorndike, E., & Rock, R. (1934). "Learning without awareness of what is being learned or intent to learn it." Journal of Experimental Psychology, 17(1), 1-19.

Loftus, E. F., & Palmer, J. C. (1974). "Reconstruction of automobile destruction: An example of the interaction between language and memory." Journal of Verbal Learning and Verbal Behavior, 13(5), 585-589.


   Invertebrate Animal Models

California sea hare

Eric Kandel

Animal model

Antonov, I., Antonova, I., Kandel, E. R., & Hawkins, R. D. (2003). "Activity-Dependent Presynaptic Facilitation and Hebbian LTP Are Both Required and Interact during Classical Conditioning in Aplysia." Neuron, 37(1), 135-147.

Drosophila melanogaster

Caenorhabditis elegans

White, J. G., et al. (1976). "The structure of the ventral nerve cord of Caenorhabditis elegans." Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 275(938), 327-348.

Robinson, G. E., et al. (2008). "Genes and social behavior." Science, 322(5903), 896-900.

Beneventi, H., Barndon, R., Ersland, L., & Hugdahl, K. (2007). "An fMRI study of working memory for schematic facial expressions." Scandinavian Journal of Psychology, 48(2), 81-86.

Dupre, C., & Yuste, R. (2017). Non-overlapping neural networks in Hydra vulgaris. Current Biology, 27(8), 1085-1097.

Bee learning and communication

Loukola, O. J., Perry, C. J., Coscos, L., & Chittka, L. (2017). "Bumblebees show cognitive flexibility by improving on an observed complex behavior." Science, 355, 833-836.


   Vertebrate Animal Models

Montague, M. J., Li, G., Gandolfi, B., Khan, R., Aken, B. L., Searle, S. M., … Davis, B. W. (2014). "Comparative analysis of the domestic cat genome reveals genetic signatures underlying feline biology and domestication." Proceedings of the National Academy of Sciences, 111(48), 17230-17235.

Why they won Nobel prizes (Accessed November 28, 2015)

Hubel & Wiesel's demonstration of simple, complex and hypercomplex cells in the cat's visual cortex (Accessed November 28, 2015)

Van Praag, H., Kempermann, G., & Gage, F. H. (2000). "Neural consequences of environmental enrichment." Nature Reviews Neuroscience, 1(3), 191-198.

Gong, Y., Wagner, M. J., Zhong Li, J., & Schnitzer, M. J. (2014). "Imaging neural spiking in brain tissue using FRET-opsin protein voltage sensors." Nature Communications, 5. doi: 10.1038/ncomms4674 (11 pp).

Jiang, X., Shen, S., Cadwell, C. R., Berens, P., Sinz, F., Ecker, A. S., . . . Tolias, A. S. (2015). "Principles of connectivity among morphologically defined cell types in adult neocortex." Science, 350(6264), 1055. (This is a research article summary.)

Carrillo-Reid, L., Yang, W., Bando, Y., Peterka, D. S., & Yuste, R. (2016). "Imprinting and recalling cortical ensembles." Science, 353(6300), 691-694.

Han, X., et al. (2013). "Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice." Cell Stem Cell, 12(3), 342-353.

Florence, S. L., Jain, N., & Kaas, J. H. (1997). "Plasticity of somatosensory cortex in primates." Seminars in Neuroscience, 9(1-2), 3-12.

O'Malley, R. C., Wallauer, W., Murray, C. M., & Goodall, J. (2012). "The Appearance and Spread of Ant Fishing among the Kasekela Chimpanzees of Gombe." Current Anthropology, 53(5), 650-663.

Dean, L. G., Kendal, R. L., Schapiro, S. J., Thierry, B., & Laland, K. N. (2012). "Identification of the Social and Cognitive Processes Underlying Human Cumulative Culture." Science, 335(6072), 1114-1118.


   Neurons and Light

Ten thousand neurons linked to behaviors in fly

Vogelstein, J. T., Park, Y., Ohyama, T., Kerr, R. A., Truman, J. W., Priebe, C. E., & Zlatic, M. (2014). "Discovery of Brainwide Neural-Behavioral Maps via Multiscale Unsupervised Structure Learning." Science, 344(6182), 386-392.

Gibson, E. M., Purger, D., Mount, C. W., Goldstein, A. K., Lin, G. L., Wood, L. S., … Monje, M. (2014). "Neuronal Activity Promotes Oligodendrogenesis and Adaptive Myelination in the Mammalian Brain." Science, 344(6183).

Cai, D. J., Aharoni, D., Shuman, T., Shobe, J., Biane, J., Song, W., . . . Lou, J. (2016). A shared neural ensemble links distinct contextual memories encoded close in time. Nature, 534(7605), 115-118.

Schlichting, M. L., & Preston, A. R. (2015). Memory integration: neural mechanisms and implications for behavior. Current opinion in behavioral sciences, 1, 1-8.

McKenzie, S., Frank, A. J., Kinsky, N. R., Porter, B., Rivière, P. D., & Eichenbaum, H. (2014). Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron, 83(1), 202-215.

Stern, P. (2017). The brain circuits of a winner. Science, 357(6347), 159.

Zhou, T., Zhu, H., Fan, Z., Wang, F., Chen, Y., Liang, H., . . . Hu, H. (2017). History of winning remodels thalamo-PFC circuit to reinforce social dominance. Science, 357(6347), 162-168.

Gizowski, C., & Bourque, C. W. (2017). Neurons that drive and quench thirst. Science, 357(6356), 1092-1093.

Hung, L. W., Neuner, S., Polepalli, J. S., Beier, K. T., Wright, M., Walsh, J. J., . . . Malenka, R. C. (2017). Gating of social reward by oxytocin in the ventral tegmental area. Science, 357(6358), 1406-1411.


   Brain Surgery

Henry Molaison (Accessed November 28, 2015)

Brenda Milner (Accessed November 28, 2015)

Xu, Y., & Corkin, S. (2001). "HM revisits the Tower of Hanoi Puzzle." Neuropsychology, 15(1), 69-79.

Corpus callosum (Accessed November 28, 2015)

Split brain experiments (Accessed November 28, 2015)

Gazzaniga, M. (2011). Who's in Charge? Free Will and the Science of the Brain. New York: HarperCollins Publishers Inc.


   Specialized Neurons

Herculano-Houzel, S. (2013). What is so special about the human brain? [TED]

Herculano-Houzel, S. (2016). The Paradox of the Elephant Brain. With three times as many neurons, why doesn't the elephant brain outperform ours? Nautilus (April 7, 2016).

Neurogenesis (Accessed November 28, 2015)

Spindle neuron (Accessed November 28, 2015)

The Life and Death of a Neuron (Accessed November 28, 2015)

Jiang, X., Shen, S., Cadwell, C. R., Berens, P., Sinz, F., Ecker, A. S., . . . Tolias, A. S. (2015). "Principles of connectivity among morphologically defined cell types in adult neocortex." Science, 350(6264), 1055. (This is a research article summary.)

Mirror neuron (Accessed November 28, 2015)

Hickok, G. (2014). The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition. New York, W. W. Norton & Co.

Synaptic pruning

Blundon, J. A., Roy, N. C., Teubner, B. J. W., Yu, J., Eom, T.-Y., Sample, K. J., . . . Zakharenko, S. S. (2017). Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling. Science, 356(6345), 1352-1356.

Gomez, J., Barnett, M. A., Natu, V., Mezer, A., Palomero-Gallagher, N., Weiner, K. S., . . . Grill-Spector, K. (2017). "Microstructural proliferation in human cortex is coupled with the development of face processing." Science, 355(6320), 68-71.


   Pins and Needles

Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). "Phonetic Feature Encoding in Human Superior Temporal Gyrus." Science, 343(6174), 1006-1010.

Long, N. M., Sperling, M. R., Worrell, G. A., Davis, K. A., Gross, R. E., Lega, B. C., . . . Stein, J. M. (2017). Contextually Mediated Spontaneous Retrieval Is Specific to the Hippocampus. Current Biology, 27(7), 1074-1079.


   Brainspeak

Eagleman, D. (2015). The Brain: The Story of You. New York: Penguin Random House.

Cochlear implant (Accessed November 28, 2015)

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., . . . van der Smagt, P. (2012). "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm." Nature, 485(7398), 372-375.


   Broken Brains

Chuck Close (Accessed December 1, 2016)

prosopagnosia (Accessed November 21, 2015)

Henry Molaison (Accessed December 1, 2016)

Phineas Gage (Accessed December 1, 2016)

Caramazza, A., & Shelton, J. R. (1998). "Domain-specific knowledge systems in the brain: The animate-inanimate distinction." Journal of Cognitive Neuroscience, 10(1), 1-34.

For an excellent, readable review of all sorts of actual cases that have informed our understanding of brain functions, see Kean, S. (2014). The Tale of the Dueling Neurosurgeons: The History of the Human Brain as Revealed by True Stories of Trauma, Madness, and Recovery. New York: Back Bay Books.


   Strokes

Taylor, J. B. (2009). My stroke of insight. New York: Penguin Books.

Marks, L. (2017). A stitch of time: The year brain injury changed my language. New York: Simon & Schuster.

Jill Bolte-Taylor's TED talk.


   Don't Stare

Saccade (Accessed November 28, 2015)

Einhauser, W., et al. (2010). "Pupil dilation betrays the timing of decisions." Frontiers in Human Neuroscience, 4. doi: 10.3389/fnhum.2010.00018

Conversion lessons learned from eye tracking (Accessed November 28, 2015)

Holsanova, J. (2011). "How we focus attention in picture viewing, picture description, and during mental imagery." In K. Sachs-Hombach & R. Totzke (Eds.), Bilder - sehen - denken: zum Verhältnis von begrifflich-philosophischen und empirisch-psychologischen Ansätzen in der bildwissenschaftlichen Forschung, (pp. 291-313).

Johansson, R., Holsanova, J., Dewhurst, R., & Holmqvist, K. (2012). "Eye movements during scene recollection have a functional role, but they are not reinstatements of those produced during encoding." Journal of Experimental Psychology: Human Perception and Performance, 38(5), 1289-1314.


   fMRI

Magnetic resonance imaging (Accessed November 28, 2015)

Neurons in a voxel according to Aguirre. (Accessed November 28, 2016)

McClure, S. M., et al. (2004). "Neural correlates of behavioral preference for culturally familiar drinks." Neuron, 44(2), 379-387.

Soon, C. S., et al. (2008). "Unconscious determinants of free decisions in the human brain." Nature Neuroscience, 11(5), 543-545.

Gerraty, R. T., Davidow, J. Y., Wimmer, G. E., Kahn, I., & Shohamy, D. (2014). Transfer of Learning Relates to Intrinsic Connectivity between Hippocampus, Ventromedial Prefrontal Cortex, and Large-Scale Networks. The Journal of Neuroscience, 34(34), 11297-11303.

Rosenberg-Lee, M., Ashkenazi, S., Chen, T., Young, C. B., Geary, D. C., & Menon, V. (2015). Brain hyper-connectivity and operation-specific deficits during arithmetic problem solving in children with developmental dyscalculia. Developmental Science, 18(3), 351.

Huth, A. G., de Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016). "Natural speech reveals the semantic maps that tile human cerebral cortex." Nature, 532(7600), 453-458.

LeWinn, K. Z., Sheridan, M. A., Keyes, K. M., Hamilton, A., & McLaughlin, K. A. (2017). Sample composition alters associations between age and brain structure. Nature Communications, 8(1), 874.


   EEG

Electroencephalography (Accessed November 28, 2015)

Benjamin Libet (Accessed November 28, 2015)

Who's the decider? (Accessed December 8, 2014)

EEG Summary (Accessed November 28, 2015)


   Magnetic Personality

Magnetoencephalography

Goldenholz, D. M., et al. (2009). "Mapping the signal-to-noise-ratios of cortical sources in magnetoencephalography and electroencephalography." Human Brain Mapping, 30(4), 1077-1086.


   Shocking

Electroconvulsive therapy

Transcranial magnetic stimulation

Gutchess, A. (2014). "Plasticity of the aging brain: New directions in cognitive neuroscience." Science, 346(6209), 579-582.

John Elder Robison, accessed November 12, 2015.

Casarotto, S., Comanducci, A., Rosanova, M., Sarasso, S., Fecchio, M., Napolitani, M., . . . Boly, M. (2016). Stratification of unresponsive patients by an independently validated index of brain complexity. Annals of neurology, 80(5), 718-729.


   Artificial Intelligence and Neural Networks

https://en.wikipedia.org/wiki/Artificial_neural_network


   December 20, 2016

The following references appear in the current edition but were not in previous editions.

Books

Lewis, M. (2017). The undoing project: A friendship that changed our minds. New York: W. W. Norton & Co.

Gazzaley, A., & Rosen, L. D. (2016). The distracted mind: Ancient brains in a high-tech world. Cambridge, MA: MIT Press.

For an excellent, readable review of all sorts of actual cases that have informed our understanding of brain functions, see Kean, S. (2014). The Tale of the Dueling Neurosurgeons: The History of the Human Brain as Revealed by True Stories of Trauma, Madness, and Recovery. New York: Back Bay Books.

Hofstadter, D. (2007). I am a strange loop. New York: Basic Books.

Papers not previously cited

Boisseau, R. P., Vogel, D., & Dussutour, A. (2016). "Habituation in non-neural organisms: Evidence from slime moulds." Proceedings of the Royal Society B, 283, 20160446. dx.doi.org/10.1098/rspb.2016.0446

Engel, T. A., Steinmetz, N. A., Gieselmann, M. A., Thiele, A., Moore, T., & Boahen, K. (2016). "Selective modulation of cortical state during spatial attention." Science, 354(6316), 1140-1144.

Garrett, N., Lazzaro, S. C., Ariely, D., & Sharot, T. (2016). "The brain adapts to dishonesty." Nature Neuroscience. doi:10.1038/nn.4426

Rose, N. S., LaRocque, J. J., Riggall, A. C., Gosseries, O., Starrett, M. J., Meyering, E. E., & Postle, B. R. (2016). "Reactivation of latent working memories with transcranial magnetic stimulation." Science, 354(6316), 1136-1139.

Soares, S., Atallah, B. V., & Paton, J. J. (2016). "Midbrain dopamine neurons control judgment of time." Science, 354(6317), 1273-1277.

Galla, B. M., & Duckworth, A. L. (2015). "More than resisting temptation: Beneficial habits mediate the relationship between self-control and positive life outcomes." Journal of personality and social psychology, 109(3), 508-525.

Kim, T., Thankachan, S., McKenna, J. T., McNally, J. M., Yang, C., Choi, J. H., . . . McCarley, R. W. (2015). "Cortically projecting basal forebrain parvalbumin neurons regulate cortical gamma band oscillations." Proceedings of the National Academy of Sciences, 112(11), 3535-3540.

Milyavskaya, M., Inzlicht, M., Hope, N., & Koestner, R. (2015). "Saying "no" to temptation: Want-to motivation improves self-regulation by reducing temptation rather than by increasing self-control." Journal of personality and social psychology, 109(4), 677-693.

Brulle, R. J. (2014). "Institutionalizing delay: foundation funding and the creation of US climate change counter-movement organizations." Climatic Change, 122(4), 681-694.

Landmann, N., Kuhn, M., Piosczyk, H., Feige, B., Baglioni, C., Spiegelhalder, K., . . . Nissen, C. (2014). "The reorganisation of memory during sleep." Sleep medicine reviews, 18(6), 531-541.

Eastwood, J. D., Frischen, A., Fenske, M. J., & Smilek, D. (2012). "The unengaged mind: Defining boredom in terms of attention." Perspectives on Psychological Science, 7(5), 482-495.

Hofmann, W., Baumeister, R. F., Förster, G., & Vohs, K. D. (2012). "Everyday temptations: an experience sampling study of desire, conflict, and self-control." Journal of personality and social psychology, 102(6), 1318.

Tennie, C., Call, J., & Tomasello, M. (2009). "Ratcheting up the ratchet: on the evolution of cumulative culture." Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1528), 2405-2415.

Premack, D. (2007). "Human and animal cognition: Continuity and discontinuity." Proceedings of the National Academy of Sciences, 104(35), 13861-13867.

Giles, J. (2005). "Internet encyclopaedias go head to head." Nature, 438, 900-901.

Scott, W. C., Kaiser, D., Othmer, S., & Sideroff, S. I. (2005). "Effects of an EEG biofeedback protocol on a mixed substance abusing population." The American Journal of Drug and Alcohol Abuse, 31(3), 455-469.

Caramazza, A., & Shelton, J. R. (1998). "Domain-specific knowledge systems in the brain: The animate-inanimate distinction." Journal of Cognitive Neuroscience, 10(1), 1-34.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). "Statistical Learning by 8-Month-Old Infants." Science, 274(5294), 1926-1928.

Kahneman, D., & Tversky, A. (1973). "On the psychology of prediction." Psychological Review, 80(4), 237-251.

Links not previously used

20-20-20 rule (Accessed November 26, 2016).

Blind men and an elephant (Accessed December 17, 2016)

Boundary extension (Accessed November 23, 2016).

Brain-to-brain communication is finally possible. It's just very clunky. (Accessed November 22, 2016)

Chuck Close (Accessed December 1, 2016)

Forbidden fruit (Accessed December 17, 2016)

Habituation (Accessed December 17, 2016)

Henry Molaison (Accessed December 1, 2016)

Homer (Accessed December 17, 2016)

Homo economicus (Accessed December 1, 2016)

Neural oscillations (Accessed December 1, 2016)

Neurons in a voxel according to Aguirre. (Accessed November 28, 2016)

Phineas Gage (Accessed December 1, 2016)

Physarum polycephalum (Accessed December 17, 2016)

Printing press (Accessed December 17, 2016)


   March 23, 2017

References not in previous editions.

Books

Sloman, S., & Fernbach, P. (2017). The knowledge illusion: Why we never think alone. New York: Riverhead Books.

Papers

Bian, L., Leslie, S.-J., & Cimpian, A. (2017). "Gender stereotypes about intellectual ability emerge early and influence children's interests." Science, 355(6323), 389-391.

de Vivo, L., Bellesi, M., Marshall, W., Bushong, E. A., Ellisman, M. H., Tononi, G., & Cirelli, C. (2017). "Ultrastructural evidence for synaptic scaling across the wake/sleep cycle." Science, 355(6324), 507-510.

Diering, G. H., Nirujogi, R. S., Roth, R. H., Worley, P. F., Pandey, A., & Huganir, R. L. (2017). "Homer1a drives homeostatic scaling-down of excitatory synapses during sleep." Science, 355(6324), 511-515.

Dresler, M., Shirer, W. R., Konrad, B. N., Müller, N. C. J., Wagner, I. C., Fernández, G., . . . Greicius, M. D. (2017). "Mnemonic Training Reshapes Brain Networks to Support Superior Memory." Neuron, 93(5), 1227-1235.e1226.

Feinberg, M., & Willer, R. (2015). "From Gulf to Bridge: When Do Moral Arguments Facilitate Political Influence?" Personality and Social Psychology Bulletin, 41(12), 1665-1681.

Gomez, J., Barnett, M. A., Natu, V., Mezer, A., Palomero-Gallagher, N., Weiner, K. S., . . . Grill-Spector, K. (2017). "Microstructural proliferation in human cortex is coupled with the development of face processing." Science, 355(6320), 68-71.

Gosling, S. D., Rentfrow, P. J., & Swann Jr, W. B. (2003). "A very brief measure of the Big-Five personality domains." Journal of Research in Personality, 37, 504-528.

Kahan, D. M., Landrum, A. R., Carpenter, K., Helft, L., & Jamieson, K. H. (2016). "Science Curiosity and Political Information Processing." Advances in Political Psychology, Forthcoming; Yale Law & Economics Research Paper No. 561. Available at SSRN: https://ssrn.com/abstract=2816803

Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). "Neural correlates of maintaining one's political beliefs in the face of counterevidence." Scientific reports, 6, 39589.

Kret, M. E., & Tomonaga, M. (2016). "Getting to the Bottom of Face Processing. Species-Specific Inversion Effects for Faces and Behinds in Humans and Chimpanzees (Pan Troglodytes)." PloS one, 11(11), e0165357. doi: 10.1371/journal.pone.0165357

Loukola, O. J., Perry, C. J., Coscos, L., & Chittka, L. (2017). "Bumblebees show cognitive flexibility by improving on an observed complex behavior." Science, 355, 833-836.

Magen, E., Kim, B., Dweck, C. S., Gross, J. J., & McClure, S. M. (2014). "Behavioral and neural correlates of increased self-control in the absence of increased willpower." Proceedings of the National Academy of Sciences, 111(27), 9786-9791.

Miyamoto, K., Osada, T., Setsuie, R., Takeda, M., Tamura, K., Adachi, Y., & Miyashita, Y. (2017). "Causal neural network of metamemory for retrospection in primates." Science, 355(6321), 188-193.

Przybylski, A. K., Weinstein, N., Murayama, K., Lynch, M. F., & Ryan, R. M. (2012). "The ideal self at play: The appeal of video games that let you be all you can be." Psychological science, 23(1), 69-76.

Yokose, J., Okubo-Suzuki, R., Nomoto, M., Ohkawa, N., Nishizono, H., Suzuki, A., . . . Inokuchi, K. (2017). "Overlapping memory trace indispensable for linking, but not recalling, individual memories." Science, 355(6323), 398-403.

Links

Bee learning and communication

Fake news

'Who shared it?': How Americans decide what news to trust on social media

Information deficit model

Inventor of Light Bulb

Minnesota Multiphasic Personality Inventory

Replication crisis

Synaptic pruning


   August 8, 2017

Books

Livio, M. (2017). Why? What makes us curious. New York: Simon & Schuster.

Marks, L. (2017). A stitch of time: The year brain injury changed my language. New York: Simon & Schuster.

Taylor, J. B. (2009). My stroke of insight. New York: Penguin Books.

Papers

Baranes, A. F., Oudeyer, P.-Y., & Gottlieb, J. (2014). "The effects of task difficulty, novelty and the size of the search space on intrinsically motivated exploration." Frontiers in neuroscience, 8, 1-9.

Blundon, J. A., Roy, N. C., Teubner, B. J. W., Yu, J., Eom, T.-Y., Sample, K. J., . . . Zakharenko, S. S. (2017). Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling. Science, 356(6345), 1352-1356.

Cai, D. J., Aharoni, D., Shuman, T., Shobe, J., Biane, J., Song, W., . . . Lou, J. (2016). A shared neural ensemble links distinct contextual memories encoded close in time. Nature, 534(7605), 115-118.

Chang, L., & Tsao, D. Y. (2017). The Code for Facial Identity in the Primate Brain. Cell, 169(6), 1013-1028.

Hamers, L. (2017). Brains encode faces piece by piece. Science News, July 8, 2017, p. 9.

Herculano-Houzel, S. (2016). The Paradox of the Elephant Brain. With three times as many neurons, why doesn't the elephant brain outperform ours? Nautilus (April 7, 2016).

Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). "The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex." PloS one, 7(5), e36399.

Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. Journal of personality and social psychology, 77(6), 1121-1134.

Long, N. M., Sperling, M. R., Worrell, G. A., Davis, K. A., Gross, R. E., Lega, B. C., . . . Stein, J. M. (2017). Contextually Mediated Spontaneous Retrieval Is Specific to the Hippocampus. Current Biology, 27(7), 1074-1079.

McKenzie, S., Frank, A. J., Kinsky, N. R., Porter, B., Rivière, P. D., & Eichenbaum, H. (2014). Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron, 83(1), 202-215.

Nichols, A. L. A., Eichler, T., Latham, R., & Zimmer, M. (2017). A global brain state underlies C. elegans sleep behavior. Science, 356(6344).

Schlichting, M. L., & Preston, A. R. (2015). Memory integration: neural mechanisms and implications for behavior. Current opinion in behavioral sciences, 1, 1-8.

Stern, P. (2017). The brain circuits of a winner. Science, 357(6347), 159.

Tyson, W., & Roksa, J. (2017). Importance of Grades and Placement for Math Attainment. Educational Researcher, 46(3), 140-142.

Zhou, T., Zhu, H., Fan, Z., Wang, F., Chen, Y., Liang, H., . . . Hu, H. (2017). History of winning remodels thalamo-PFC circuit to reinforce social dominance. Science, 357(6347), 162-168.

Links

Dunning-Kruger Effect

Herculano-Houzel, S. (2013). What is so special about the human brain? [TED]

Jill Bolte-Taylor's TED talk.


   December 30, 2017

Books

Brown, P. C., Roedinger, H. L., & McDaniel, M. A. (2014). Make It Stick. Cambridge, MA: Belknap Press of Harvard University Press.

Sharot, T. (2017). The Influential Mind. New York: Henry Holt and Company.

Papers

Alexander, G., Eaton, I., & Egan, K. (2010). Cracking the Code of Electronic Games: Some Lessons for Educators. Teachers College Record, 112(7), 1830-1850.

Bittner, K. C., Milstein, A. D., Grienberger, C., Romani, S., & Magee, J. C. (2017). Behavioral time scale synaptic plasticity underlies CA1 place fields. Science, 357(6355), 1033-1036.

Bassett, D. S., Yang, M., Wymbs, N. F., & Grafton, S. T. (2015). Learning-induced autonomy of sensorimotor systems. Nature Neuroscience, 18(6), 744-751.

Bassett, D. S., & Mattar, M. G. (2017). A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior. Trends in cognitive sciences, 21(4), 250-264.

Campos, F., Frese, M., Goldstein, M., Iacovone, L., Johnson, H. C., McKenzie, D., & Mensmann, M. (2017). Teaching personal initiative beats traditional training in boosting small business in West Africa. Science, 357(6357), 1287-1290.

Casarotto, S., Comanducci, A., Rosanova, M., Sarasso, S., Fecchio, M., Napolitani, M., . . . Boly, M. (2016). Stratification of unresponsive patients by an independently validated index of brain complexity. Annals of neurology, 80(5), 718-729.

Cimpian, A., & Leslie, S-J. (2017). The Brilliance Trap. Scientific American, 317(3), 60.

Davis, E. A., Palincsar, A. S., Smith, P. S., Arias, A. M., & Kademian, S. M. (2017). "Educative Curriculum Materials: Uptake, Impact, and Implications for Research and Design." Educational Researcher, 46, 293-304.

Dupre, C., & Yuste, R. (2017). Non-overlapping neural networks in Hydra vulgaris. Current Biology, 27(8), 1085-1097.

Elpidorou, A. (2014). The bright side of boredom. Frontiers in psychology, 5, Article 1245, 1-4.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review, 100(3), 363-406.

Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683-703). New York: Cambridge University Press.

Frey, R. F., Cahill, M. J., & McDaniel, M. A. (2017). Students’ Concept-Building Approaches: A Novel Predictor of Success in Chemistry Courses. Journal of Chemical Education, 94(9), 1185-1194.

Gerraty, R. T., Davidow, J. Y., Wimmer, G. E., Kahn, I., & Shohamy, D. (2014). Transfer of Learning Relates to Intrinsic Connectivity between Hippocampus, Ventromedial Prefrontal Cortex, and Large-Scale Networks. The Journal of Neuroscience, 34(34), 11297-11303.

Gizowski, C., & Bourque, C. W. (2017). Neurons that drive and quench thirst. Science, 357(6356), 1092-1093.

Goldberg, Y. K., Eastwood, J. D., LaGuardia, J., & Danckert, J. (2011). Boredom: An emotional experience distinct from apathy, anhedonia, or depression. Journal of social and clinical psychology, 30(6), 647-666.

Hung, L. W., Neuner, S., Polepalli, J. S., Beier, K. T., Wright, M., Walsh, J. J., . . . Malenka, R. C. (2017). Gating of social reward by oxytocin in the ventral tegmental area. Science, 357(6358), 1406-1411.

Karpicke, J. D., & Roediger, H. L. (2008). The Critical Importance of Retrieval for Learning. Science, 319, 966-968.

Karpicke, J. D., Blunt, J. R., & Smith, M. A. (2016). Retrieval-Based Learning: Positive Effects of Retrieval Practice in Elementary School Children. Frontiers in psychology, 7. doi.org/10.3389/fpsyg.2016.00350

Leslie, S.-J., Cimpian, A., Meyer, M., & Freeland, E. (2015). Expectations of brilliance underlie gender distributions across academic disciplines. Science, 347(6219), 262-265.

LeWinn, K. Z., Sheridan, M. A., Keyes, K. M., Hamilton, A., & McLaughlin, K. A. (2017). Sample composition alters associations between age and brain structure. Nature Communications, 8(1), 874.

Moreno, R., Reisslein, M., & Delgoda, G. M. (2006). Toward a fundamental understanding of worked example instruction: Impact of means-ends practice, backward/forward fading, and adaptivity. Paper presented at the 36th ASEE/IEEE Frontiers in Education Conference, San Diego, CA.

Parong, J., Mayer, R. E., Fiorella, L., MacNamara, A., Homer, B. D., & Plass, J. L. (2017). Learning executive function skills by playing focused video games. Contemporary Educational Psychology, 51, 141-151.

Rosenberg-Lee, M., Ashkenazi, S., Chen, T., Young, C. B., Geary, D. C., & Menon, V. (2015). Brain hyper-connectivity and operation-specific deficits during arithmetic problem solving in children with developmental dyscalculia. Developmental Science, 18(3), 351.

Trübutschek, D., Marti, S., Ojeda, A., King, J.-R., Mi, Y., Tsodyks, M., & Dehaene, S. (2017). A theory of working memory without consciousness or sustained activity. Elife, 6, e23871. doi: 10.7554/eLife.23871

Links

Allen Brain Atlas. (2017). Retrieved November 14, 2017, from http://www.brain-map.org/

John Elder Robison. Retrieved December 15, 2017

NPR on music practice

Sanders, L. (2017). See these first-of-a-kind living human nerve cells. Science News.