Introduction.
We have previously discussed numerous aspects of problem-solving, usually from the general perspective of applying logic and its many avenues. Yet some of the most difficult problems come from the everyday category. If you’re a CEO managing the subtleties and irregularities of your company, you have everyday problems that require vast experience carefully tuned to your operations. Many such problems are nearly impossible to define precisely, yet solving them requires a vast superstructure of information, much of it tangential. Similarly, if you are a homemaker, managing your home, partner, and children, you have entirely similar problems, though perhaps different in scope. In this brief essay, we consider everyday problems. Since we now have an alien race living among us, we can examine the problems it faces. Of course, we created these aliens. They are us, but called AI.
Problem-solving
is one of the defining features of intelligence. Both humans and artificial
intelligence (AI) systems engage in problem-solving behavior all the time, yet
they do so for profoundly different reasons. Human beings solve everyday
problems to satisfy needs, express emotions, uphold relationships, and create
meaning. AI systems, by contrast, solve problems because they are programmed or
trained to do so, optimizing outcomes defined by human designers. This contrast
illuminates a fundamental difference between intentional intelligence and
instrumental computation. Understanding these distinctions helps clarify both
the capabilities and the limitations of AI in replicating human thought.
Everyday
Human Problem-Solving
Human
problem-solving emerges from the complexity of daily life. Everyday problems
are typically ill-defined, context-dependent, and socially embedded (Simon,
1973). They involve emotional, moral, and practical considerations that
transcend pure logic. Unlike structured scientific or mathematical problems,
they lack single correct answers and instead demand judgment, flexibility, and
empathy.
According
to Maslow’s (1943) hierarchy of needs, human motivation originates in the drive
to satisfy basic physiological and safety requirements before advancing toward
higher-level goals such as belonging, esteem, and self-actualization. Everyday
problem-solving often operates across these levels simultaneously. Deciding
what to eat, how to balance work and family, or how to respond to a friend’s
frustration involves both material and emotional reasoning.
Bandura’s
(1997) theory of self-efficacy emphasizes the psychological reward in mastering
challenges. Humans solve problems partly to affirm their agency—to feel capable
of shaping outcomes. This emotional feedback loop of effort and success
reinforces motivation and learning. Similarly, Damasio (1994) argues that
emotion is not merely an accompaniment to reasoning but a vital component that
guides decisions through what he terms “somatic markers,” bodily signals that
shape judgments in uncertain conditions.
Beyond
personal needs, problem-solving serves social and moral purposes. Humans act
within networks of relationships where cooperation, empathy, and reciprocity
are essential. Haidt’s (2012) theory of moral foundations explains how moral
intuitions, such as fairness and care, underpin social decision-making.
Everyday problems, such as negotiating disagreements or comforting others, thus
involve moral reasoning as much as pragmatic calculation.
Finally,
human problem-solving is driven by the search for meaning. Frankl (1959)
proposed that the will to meaning is a fundamental human motivation: people
solve problems not merely to survive but to define themselves and their values.
Everyday reasoning, therefore, is an expression of identity as well as
intellect. Dewey (1933) similarly viewed reflective thought as a moral and creative process, one that transforms experience into understanding.
Big Trouble
for AI and Sometimes People.
This short section previews the types of everyday activities that cause endless problems for AI, while also causing trouble for some people. The reasons are that AI is not (yet) programmed for these peculiar problem types, and that many humans simply don’t have the capacity to think in such terms, whether from fundamental brain capacity or because there was no need to learn these skills in childhood.
The issue is that everyday problems can be more difficult for AI than chess. The reason is that everyday problems are less well-defined, often expressed in vague language, with a greater variety of solutions, and without clear criteria for the “best solution.” Also, for any given everyday problem, the person having it may hold an entirely different concept of what it means. Basically,
“When
you are attempting to solve vague problems expressed in vague language with
high precision and logical tools, you are bound to have trouble.”
Here’s the
list. AI has immense trouble in the following scenarios.
a. Situational awareness, such as understanding physical context, social cues, and unspoken intentions, things that humans process subconsciously.

b. Recognizing humor or sarcasm. Humor is difficult for many of us and virtually impossible for AI.

c. Common-sense physical reasoning. Many problems require intuitive physics: not formal equations, but fuzzy, experience-based predictions that humans acquire through years of sensory interaction.

d. AI lacks curiosity. When AI answers a question, that concludes its task. Humans, however, may dwell on the problem, seeking better solutions and other ideas that apply.

e. Planning a day or sequence of errands efficiently. This involves dynamic decision-making, multi-objective optimization, and fuzzy goals, not just data processing (see the sketch following this list).

f. Emotional problems. Emotion is multimodal (voice, face, posture, timing, culture), and AI lacks the empathy and emotional memory needed to interpret it meaningfully.

g. The Jump Shift. This is an interesting and important ability of the human mind that allows it to bring an entirely different body of information or thought to bear on a problem. AI normally sifts and winnows information available within the scope of the stated problem, while humans can take an enlightened perspective.
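To give item (e) a concrete flavor, here is a minimal sketch in Python of errand planning treated as multi-objective optimization. Everything in it is hypothetical: the errand names, travel times, priorities, and weights are invented for illustration, and an exhaustive search stands in for the fuzzy, dynamic reasoning a person actually performs.

```python
from itertools import permutations

# Hypothetical errands and pairwise travel times (minutes); all data invented.
ERRANDS = ["bank", "pharmacy", "groceries", "post office"]
TRAVEL_MIN = {
    ("bank", "pharmacy"): 10, ("bank", "groceries"): 20,
    ("bank", "post office"): 15, ("pharmacy", "groceries"): 5,
    ("pharmacy", "post office"): 25, ("groceries", "post office"): 8,
}
PRIORITY = {"pharmacy": 3, "bank": 2, "groceries": 1, "post office": 1}

def travel(a, b):
    # Travel times are symmetric; look up either ordering.
    return TRAVEL_MIN.get((a, b)) or TRAVEL_MIN[(b, a)]

def cost(route, w_time=1.0, w_priority=2.0):
    # Two competing objectives: total travel time, and how late
    # high-priority errands finish. The weights are arbitrary guesses,
    # which is precisely the point: a person never states them.
    t, weighted_finish = 0.0, 0.0
    for a, b in zip(route, route[1:]):
        t += travel(a, b)                  # clock time on arrival at b
        weighted_finish += PRIORITY[b] * t
    return w_time * t + w_priority * weighted_finish

best = min(permutations(ERRANDS), key=cost)
print(best, round(cost(best), 1))
```

Even this toy version exposes the difficulty: change the weights and the “best” route changes with them, and real errand-running adds traffic, store hours, and shifting moods that no fixed cost function captures.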
In the next section, we’ll see it often comes down to intuition and intrinsic motivation, the great equalizers for humans. They allow someone with little analytic capacity to stand equal to a player with very strong logical skills. The first may be able to solve a committee squabble with just the right words, while our analytical colleague and AI wouldn’t know where to begin.
Artificial
Intelligence and the Absence of Intrinsic Motivation
Artificial intelligence, despite its growing sophistication, operates on entirely different motivational principles, marked by the absence of intrinsic motivation.
AI systems do not possess desires, emotions, or goals of their own. They
execute tasks according to predefined objectives or reward functions designed
by humans (Russell & Norvig, 2021). Their “motivation” is an engineered
simulation, a mathematical representation of preference devoid of experience or
meaning.
1.
Externally Defined Goals.
AI systems
act to optimize performance metrics, such as accuracy, efficiency, or reward maximization, rather than pursuing self-generated purposes. These metrics substitute for
intention, but they are externally imposed (Lake et al., 2017). For example, a
reinforcement learning agent may appear “motivated” to win a game, but its
behavior is driven by statistical adjustment, not desire or curiosity.
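As a minimal sketch of this point, consider an epsilon-greedy bandit agent, one of the simplest forms of reinforcement learning. The payoff probabilities below are invented for illustration; the agent’s entire “motivation” is a running average nudged toward observed rewards.

```python
import random

ARMS = 3                        # hypothetical actions available to the agent
TRUE_PAYOFFS = [0.2, 0.5, 0.8]  # invented reward probabilities, unknown to it

values = [0.0] * ARMS           # the agent's estimated value of each action
counts = [0] * ARMS
EPSILON = 0.1                   # how often it tries a random action

for step in range(10_000):
    # Explore occasionally; otherwise exploit the best current estimate.
    if random.random() < EPSILON:
        a = random.randrange(ARMS)
    else:
        a = max(range(ARMS), key=lambda i: values[i])

    # A weighted coin flip stands in for "winning."
    reward = 1.0 if random.random() < TRUE_PAYOFFS[a] else 0.0

    # Learning is a purely statistical adjustment toward the sample mean.
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]

print(values)  # drifts toward TRUE_PAYOFFS; the agent "wants" nothing
```

The agent reliably comes to favor the best action, and an observer might call it motivated. Inside, there is only arithmetic.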
2. Absence
of Emotional and Embodied Grounding.
Where
human cognition is embodied and emotional, AI cognition is abstract and
disembodied. Merleau-Ponty (1962) argued that perception and understanding
arise from bodily engagement with the world. AI lacks this sensorimotor
grounding, learning instead from symbolic or textual data. Consequently, it
cannot experience frustration, relief, curiosity, or satisfaction—emotions
that, in humans, signal progress and guide persistence in problem-solving.
3.
Optimization Without Meaning.
AI
“solves” problems by minimizing loss functions or maximizing rewards. These
processes lack awareness of purpose or consequence. The system cannot ask why a
goal matters or whether it should be pursued. As Bostrom (2014) warns, such
optimization without intrinsic purpose can yield misaligned outcomes: a system
might achieve its task efficiently while violating ethical or social norms.
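A one-variable caricature makes the point. The quadratic loss below is hypothetical, but the loop has the shape of all such optimization: step downhill on a number, with no representation anywhere of why the minimum matters.

```python
# Minimize loss(w) = (w - 3)^2 by plain gradient descent.
def loss(w):
    return (w - 3.0) ** 2       # invented target: w = 3 counts as "success"

def grad(w):
    return 2.0 * (w - 3.0)      # derivative of the loss

w = 0.0                         # arbitrary starting point
LR = 0.1                        # learning rate
for _ in range(100):
    w -= LR * grad(w)           # step downhill; nothing more

print(round(w, 4), round(loss(w), 10))  # w ~ 3.0, loss ~ 0
```

Swap in a loss that rewards some harmful shortcut and the loop descends just as contentedly, which is the misalignment worry in miniature.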
Part and parcel of intrinsic motivation is the human method of working with uncertainty, even when its precise nature is unknown. An experienced human solver eventually becomes comfortable with uncertainty, but AI remains in continuing conflict with it.
Dependence
on Data and Design
Human
problem-solving adapts dynamically to new and unforeseen challenges. AI systems
remain bound by the scope of their data and the assumptions of their
architecture. Without explicit reprogramming or retraining, AI cannot
autonomously redefine its goals or recognize the moral dimension of a
situation. It has, at best, a limited ability to “sense” or determine a problem’s more important factors. Its behavior is mechanical rather than reflective.
Comparing human intentionality with artificial instrumentality, the differences between human and artificial problem-solving can be summarized across several key dimensions.
| Dimension | Humans | Artificial Intelligence |
| --- | --- | --- |
| Source of Motivation | Intrinsic, driven by biological, emotional, and existential needs | Extrinsic, driven by programmed objectives or rewards |
| Emotional Feedback | Affects reasoning and persistence (Damasio, 1994) | Absent; feedback purely mathematical |
| Learning Basis | Experience, embodiment, and social interaction | Data patterns and optimization algorithms |
| Ethical Awareness | Grounded in empathy and moral reasoning (Haidt, 2012) | Externalized ethics; follows explicit constraints only |
| Goal Adaptation | Flexible, context-sensitive, value-driven | Fixed within defined parameters |
| Sense of Meaning | Tied to identity and self-actualization (Frankl, 1959) | None; lacks self-awareness or purpose |
To encapsulate all this, we note that human problem-solving is teleological, that is, oriented toward goals that express meaning and value. AI problem-solving is instrumental, focused on achieving outputs efficiently. The first is existential and experiential; the second is computational and formal.
Conclusions.
Humans
solve everyday problems because doing so sustains life, expresses emotion,
builds relationships, and creates meaning. AI systems, in contrast, solve
problems because they are engineered to perform functions. The human process is
intentional, emotional, and moral; the artificial process is statistical,
algorithmic, and indifferent.
This difference is more than technical; it is philosophical. It reveals that
intelligence, in its richest form, is not just the ability to calculate or
predict but the capacity to care, to choose, to comprehend what is nonverbal, and
to find meaning. Until AI systems integrate models of intrinsic motivation,
emotional regulation, and moral reasoning, their problem-solving will remain
powerful but fundamentally instrumental, and therefore an imitation of
intelligence without its inner life. A serious indictment, this is.
All this
said, many, too many, humans apply strictly automatic thinking to everyday
problems. They apply past experiences and internal algorithms to every problem
that comes along. They don’t pursue any deeper thinking or alternative
solutions. Their idea of optimization is simply the quickest solution.
Humans clearly have the edge with intuition and processing within low-information environments, but unless I were truly an expert, I would not challenge AI to a game of chess. It may become a new subject in warfare to compound battle strategies with everyday-style components, giving the opponent’s AI general staff conundrums.
P.S. If
real aliens ever do visit, a working plan of action is to examine how they
solve problems, particularly those seemingly simple everyday examples.
References
· Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman.

· Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

· Damasio, A. (1994). Descartes’ error: Emotion, reason, and the human brain. Putnam.

· Dewey, J. (1933). How we think. D. C. Heath.

· Frankl, V. E. (1959). Man’s search for meaning. Beacon Press.

· Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon.

· Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253.

· Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370–396.

· Merleau-Ponty, M. (1962). Phenomenology of perception. Routledge.

· Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

· Simon, H. A. (1973). The structure of ill-structured problems. Artificial Intelligence, 4(3–4), 181–201.
©2025 G
Donald Allen
Please Comment.