
Footnotes2Plato Podcast: The Problem with Frames: Metaphysics, Ethics, and Agency
Timothy Jackson, Darius Parvizi-Wayne, and I got together to discuss Darius's recent article, “What Active Inference Still Can’t Do: The (Frame) Problem that Just Won’t Go Away” (2025). The frame problem was our intended target, but the subject kept widening. Front and center was the relation between the everyday world of care and significance and the sub-personal mechanisms sought by cognitive scientists, or between the manifest image and the scientific image, as Sellars called them. We explored whether the Active Inference Framework really explains our experience as conscious agents acting in a meaningful world, or whether it merely redescribes that experience in its formal terms while quietly taking the realization of relevance for granted.
Darius’s paper provides an excellent internal critique of Active Inference, showing how it has not—at least not yet—provided a satisfactory solution to the frame problem. In its original classical AI form, the frame problem concerned how to encode the effects of an action without having to encode all the irrelevant consequences that follow from it. The broader issue is the problem of relevance: how agents like us somehow manage to zero in on what matters in perception and action despite the indefinite overabundance of what does not. Why this affordance, this threat, this invitation, this face, this concern, now? Darius’s argument, as I understand it, is that active inference can at best describe the encoding of relevance once relevance-realizing behavior is already underway. But it does not explain how relevance is realized in the first place, especially in genuinely novel situations.
The distinction between description and explanation recurred several times. One can always preserve a formalism by making it more elastic. One can say that if only the hierarchy of priors were rich enough, if only the precision-weighting were dynamic enough, if only the generative model were fine-grained enough, then relevance would be accounted for. Darius put pressure on the appeal to hierarchical motivated control, on the question of how highly weighted preferences get weighted in the first place. If every precision weighting requires some higher-order preference structure to authorize it, then one faces an infinite regress. If one halts the regress by positing a “first prior” or ground norm, one pays for that stability by rendering the system too rigid, too overdetermined, too cramped to account for the flexibility and novelty of actual life. So the theory gets caught between regress and rigidity.
Tim asked whether this is really just a problem in one corner of cognitive science, or whether it is a generic problem that appears everywhere once one begins asking about the genesis of order, the origin of norms, the emergence of stable forms from fields of variation. The frame problem is not merely a technical puzzle for AI engineers or Bayesian theorists but a particular way of expressing a much older and broader question about the origins of order and the intelligibility of the world.
Tim kept returning to Darwin. Darwin’s genius was to take variation seriously, to refuse to treat species or forms as primary givens, and instead to ask how relatively stable forms arise from processes that do not begin with fixed types. It is Tim’s Darwinism that leads him to resist any metaphysical move that front-loads the possibility landscape, any appeal to pre-given frames, a priori attractors, transcendent priors, or what Dennett called skyhooks. For him, speculative philosophy, at its best, is the critique of postulation. It should strip back the first principles as far as possible, not to deny the reality of individuals, teloi, meanings, and norms, but to show how their reality depends upon an immanent genealogy rather than a transcendent guarantee.
This is where Tim and I always begin our tango about Whitehead’s Philosophy of Organism. Whitehead is in one sense as radical an evolutionist as one could hope for. He historicizes the laws of nature, the very geometry of spacetime, the atomic elements and all forms of enduring organization. And yet he also introduces eternal objects and the primordial nature of God, which for Tim sound like precisely the sort of metaphysical posit that compromises a fully constructivist, ontogenetic account. I am sympathetic to the concern, but I think Whitehead is doing something subtler and more interesting than installing a fixed cosmic prior. The primordial nature of God is not God’s Bayesian belief-state about future outcomes. It is not a probability distribution over a pre-given state space. It is not a hidden optimization function or Maxwellian demon secretly running the universe from above. It is Whitehead’s way of biting the bullet to solve the meta-frame problem. Each actual occasion of experience feels some gradient of relevance, some lure toward forms of novelty appropriate to its concrete situation.
Culturally freighted phrases like “the primordial nature of God” are not the sort of thing most cognitive scientists are trained to take seriously. One of the things I wish we had had more time to discuss in the dialogue is Whitehead’s distinction in Process and Reality between statistical and non-statistical judgment.
The statistical ground is numerical. It depends upon a finite set of cases, upon ratios, frequencies, measurable likelihoods. If one wants to model cognition in Bayesian terms, this is the territory one occupies. There is nothing wrong with that as far as it goes. Whitehead is not anti-statistics. But he explicitly introduces a non-statistical ground because statistical induction, by itself, cannot explain how organisms navigate genuinely novel situations in which the relevant possibilities are not already laid out in a countable state-space. Here the ordering of possibilities is not statistical but intensive, qualitative. It concerns suitability, relevance, appetition, felt importance, in Whitehead’s terms. It is graduated, but not by ratios. Not everything that comes in degrees is numerical in the extensional sense.
This is where my half-serious contrast between error minimization and Eros maximization comes in. I am not proposing a rival optimization theory, as if to swap the statistical for the libidinal. Tim is right to be suspicious of optimization language when it starts pretending to be ontology. My phrase is meant as a polemical device to mark a difference between what can be calculated and what cannot. Error minimization, as formalized in Predictive Processing or Active Inference, operates within a pre-specified, extensional structure of more or less likely states. It is powerful precisely because it works within a mathematizable space. But that very strength is also its limit. Eros maximization, as I’ve been trying to think it, names not another formal objective function but an organism’s lure toward richer aesthetic contrasts, deeper satisfaction, more intense realization of value in situations where the possibility space cannot be pre-stated. It concerns what Whitehead calls “intensive relevance,” not merely likelihood. It is not the inverse of error minimization. It is an attempt to think what extensional models necessarily abstract away.
The Active Inference Framework can model organisms “as if” they are Bayesian calculators minimizing prediction error, and no doubt that “as if” can be instrumentally fruitful. The trouble begins when the model is ontologized, when the extensional formalism is taken to reveal what life and mind really are. Then the organism, and consciousness, purposiveness, valuation, and meaning along with it, start to look like mere appearances to be explained away by reference to more fundamental cybernetic machinery.
This brings us back to the manifest image/scientific image issue that I raised at the start of our conversation. The scientific image, especially in its computationalist forms, seeks to get behind the world of lived experience and conscious agency, to explain these in terms of sub-personal mechanisms, e.g., neurological functions, information processing, thermodynamic constraints, and the like. I would hope it was obvious that all of us, including cognitive scientists, begin always and inevitably embedded within the meaning-saturated world disclosed to consciousness, what Husserl called the “life-world.” Computationalist cogsci tends to forget that disclosure. It builds a formal apparatus that it hopes might account for the relevance-realizing behavior of beings stripped of such worldly embeddedness, and then declares that the formal apparatus is all that is needed to explain it. The frame problem thus exposes the tremendous costs of beginning from a picture of mind as somehow separate from the world, and thus as needing to construct internal models in order to infer what lies beyond it.
I mention Helmholtz in our dialogue, who takes the Kantian problem of how the mind contributes form to experience and naturalizes it into a theory of unconscious inference. Sensations become signs and cognition becomes the inferential reconstruction of their external causes. Contemporary Predictive Processing and Active Inference frameworks, especially in their more explicitly representational forms, inherit this philosophical move. The organism, somehow severed from its world, must re-establish indirect contact by building a self-world-model. The frame problem thus becomes inevitable. If the mind is first sealed off from the world, then it must carry a vast inner representation or mega-model for deciding which among its possible interpretations is relevant now. But the very need for such a mega-model is a sign that something has gone wrong upstream.
Kant thought the scandal of philosophy was that no proof of the external world had been given. Heidegger later said the real scandal is that one thinks such a proof is needed. Dasein is not first a private interiority needing to infer its way back to the world. We are always already in the world, already thrown into history, immersed in significance, and caught up in networks of concern. This was one reason I was drawn to enactivism as a student. Enactivism resists the temptation to treat cognitive science as the search for sub-personal machinery that will explain personhood away. It begins from embodied world-involvement rather than from inner representation. But as we also discussed, even enactivism can falter if it makes too much of homeostatic self-maintenance (i.e., autopoiesis) as the ground norm. Then one still struggles to explain the idiosyncrasy and occasional perversity of actual life.
And actual life, as Tim rightly insisted, is not some miraculous machine for solving the frame problem. Organisms are not unfailing virtuosos of relevance realization. They are thrown into ongoing processes of selection, distraction, error, improvisation. They get eaten because they sometimes miss what matters. They are lured by mimicry and fooled by camouflage. They seize on the wrong thing. They wander. This is not a bug in life but is constitutive of its openness. Whitehead says something like this in his theory of propositions. A proposition is not first of all a true or false sentence. It is a lure for feeling. And he says that it is more important that a proposition be interesting than that it be true. That line should be framed above the desk of every cognitive scientist tempted to reduce life to error minimization. The false, the irrelevant, the not-yet-validated, the interestingly wrong, are not merely noise to be filtered out. They are what drives the creative advance.
Darius has exposed a real weakness in Active Inference’s stronger explanatory ambitions. Priors, precision-weighting, and hierarchical policy control all seem to presuppose precisely the relevance they are meant to explain. Tim is right that the frame problem is one local manifestation of a deeper problem of origins. From my Whiteheadian point of view, the reason these optimization models keep running aground (at least as final explanations, if not as useful descriptions) is not merely that they need more technical refinement. It is that they remain trapped within a Cartesian, representationalist image of cognition. Whitehead gives us a way to think beyond that image, not by abandoning science, but by refusing to turn a successful formalism into a final ontology. In this context, Darius mentioned Hubert Dreyfus and Charles Taylor’s wonderful little book, Retrieving Realism (2015), which I reviewed almost a decade ago, including a Whitehead-inspired extension: “Retrieving Realism: A Whiteheadian Wager” (2017).
The frame problem is not just a headache for computationalists. It is a clue that reality is more participatory, relational, value-saturated, and improvised than the dominant modern scientific image has allowed. What is needed is not less metaphysics but better, more explicit metaphysics. Whitehead says that metaphysics is “descriptive generalization.” I take this as part of his effort to remind scientists, and really all theorists, that philosophy is not primarily in the business of offering yet another explanation of things. As Tim was suggesting, philosophy should function as a critic of the abstractions of the special sciences. Its task is to avoid allowing the world we actually inhabit to be explained away.
The danger is that certain scientific frameworks, when overextended, begin to explain away important aspects of the prosaic or manifest image. They forget that their own explanatory power depends upon abstractions drawn from experience, and then they start treating those abstractions as though they were the whole of reality. Whitehead’s idea of descriptive generalization is meant to resist this. He wants a metaphysical scheme broad and supple enough to apply across the special sciences, but also across the full range of human life: art, law, ethics, religion or spirituality, and the rest.
Such an approach to metaphysics would help us see when an explanation is genuinely illuminating and when it has become greedy, when it slides into reductive overreach. It keeps us from explaining away realities that the scientific practice itself already presupposes, or from arriving at a view of human life that is ethically and spiritually impoverished. Descriptive generalization thus seems to me a much better way of understanding the aim of metaphysics than explanation. Metaphysics is not trying to outcompete the sciences. It is trying to provide a critical and capacious matrix of general ideas within which their abstractions can be situated, assessed, and kept in proportion.
