Tuesday 19 July 2016

Reply to Hamlyn: In detail

This is a detailed reply to Jim Hamlyn’s critique of our Ecological Representation preprint (and the related blog post). If you aren’t him, then you might want to visit his blog to read the full review before proceeding…

First, Jim, thank you very much for providing such a thorough review of our paper. I hope this lengthy reply can be a part of an ongoing dialogue about these ideas. 

I’ve written this the same way I did the one for Sergio Graziosi (here) – replying to each point you make without reading ahead. 

Quotations are text taken directly from Hamlyn’s critique and are presented in the order in which they appear in his blog post.


“Hopefully they can be persuaded that this is only true if the required representations are of the fully public and intentional sort and not the neural and non-intentional sort that they seem to have embraced.”

As this sentence comes at the very beginning of your critique, let me take a second to say that I think what we propose here is consistent with a conception of public and intentional representations. Informational representations are public (they are external physical structures) and inherently intentional: they specify biologically relevant and frequently dispositional properties, so coordinating with respect to this information is fundamentally FOR or ABOUT something, which satisfies most people’s construal of intentionality.



“So when Gibson proposes that we perceive the affordances the “environment…offers… provides or furnishes”, he confuses practices of use attribution and/or meaning ascription with skills of perception. I think this is a very serious mistake that Golonka and Wilson only amplify with their new paper. Wittgenstein took the view that the meaning of a word is best determined by looking at the various ways in which it is used.”

I absolutely agree that “meaning” is only something we psychologists ascribe to things in virtue of the ways they are used. I do my best to avoid the word “meaning” and instead talk about the consequences information variables have on behavior (rather than saying that information variables “mean” or are “about” anything). But, while I agree with you, this is orthogonal to the question of whether EI (ecological information) can be considered a representation. We argue that EI meets the criteria for a representation because the consequences of EI on behavior are best understood through its ability to stand in for distal properties. That is, the assessment of whether EI is a representation depends entirely on “the various ways in which it is used.”


“Think of a fork. Would we immediately see its alleged inherent Heideggerian function if we were intelligent animals of a different shape and size? G&W are bound by the force of reason to say that we do not. So then, how can the many different ways of using a stick — its alleged affordances — be perceptible in the stick?”

Gibson was unhelpfully unclear about affordances. Luckily, Turvey et al. (1981) were not. Taking their lead, we construe affordance properties as dispositional properties of objects/events that lawfully structure energy media within an ecological scope. As we’ve worked out elsewhere, affordances aren’t everything. In particular, they don’t underpin action selection where the action bears only a conventional relation to the dynamics structuring the energy media (e.g., Golonka, 2015).

Anyway, the force of the intensional, ecological argument is that we don’t perceive a stick and then work out its properties and possible uses. We perceive properties via EI. Some of those properties are dispositions and thus have complements. We can perceive these dispositional properties whether or not we can do anything about them. The physical process of perception (the physical consequences of EI on a perception-action system) depends on our specific shape and size, so an intelligent alien without fingers might perceive that a fork can do a particular type of job for a particular type of organism without mistakenly trying to use the fork itself.


“Another serious issue that arises in Gibson’s theorisation, and that G&W further ramify, is his suggestion that light carries information; that there is information in it (I will return to this issue of “content” in a moment). The philosophy of information (as distinct from Information Theory which is an engineering term) was in its infancy in Gibson’s day (some say that it still is (Floridi 2011)), so it is unlikely that Gibson would have been aware of the dangers of his use of the word. “Information” is what Ryle (1954) might have called a “smother word”. For Ryle, terms like “depiction”, “description” and “illustration” often smother important conceptual distinctions and create otherwise avoidable philosophical dilemmas. It is the task of conceptual analysis to tease out these differences and to dispel conceptual confusion.”

If I could go back in time and prevent Gibson from using the word “information,” I certainly would, as I think this word choice underpins most disagreements between the ecological and the broader cog sci communities. The short answer to this objection is that it doesn’t matter, because we can objectively define ecological information variables. I don’t care whether we rename EI “Steven” or “Not Information”; the fact remains that we can objectively define and identify ecological informational structures, and that these structures bear a lawful relationship to the dynamics of the environment that create them. I also don’t mind whether we talk about information about or information for (I prefer the latter), or whether we talk about structures “carrying information.” It doesn’t matter to me because we can formally model how dynamics structure energy media and how this impacts nervous system activity.

It’s unfamiliar territory for cognitive scientists to be able to formally define real objects and processes, but, far from offering a “smother word,” we are actually offering a way to definitively say whether an information variable is present or not; whether an information variable causes a neural representation or not; whether behavior is organized with respect to this variable or not. This is the opposite of the kind of catch-all problem you are concerned about.


“According to a recent comment from Golonka on their blog, they “agree with content critiques regarding mental reps”, so they would probably reject at least some of Adams’ radical representationalism. Nonetheless, since they take Gibson’s ecological information to fit with ecological representations they have a job on their hands to reconcile their agreement with say Hutto and Myin (2013) on the question of content and their own representational “vehicles”. If, as I contend, the influence is merely causal, then no representation, no vehicles and no content need be imputed.”

For us (and many others before us), the key to distinguishing the kinds of examples that Adams offers from a representation is how the structure in question is used. Specifying structures in energy arrays only become information (and, consequently, informational representations) if their influence on behavior is best understood in terms of their role as a stand-in for the property that created them. That is, they can be understood as being used as representations of properties of objects and events in the world. The only reason animals evolved into the kinds of physical systems that respond to kinematic patterns in energy distributions is that those patterns specify biologically relevant properties of objects and events. Understanding the representational function of these structures, we think, is fundamental to understanding how nervous systems evolved to coordinate activity with structures in energy media (and I don’t think we’re arguing about whether animals do this). Similarly, within the lifespan of individuals with complex nervous systems like ours, why we coordinate our behavior in a particular way with novel structures in energy media is best understood via the ability of those structures to stand in for properties of the world.


“G&W are clearly aware that perhaps the greatest explanatory challenge for a theory of cognition is to give a coherent account of intentionality. In philosophy "intentionality" has a technical sense that I assume is the sense in which G&W are using it. Nonetheless, both senses are applicable here. They state that: “The need for intentionality therefore provided the first and primary motivation for treating cognition as necessarily representational.” What should be pointed out here is that this assumption is questionable on grounds of logical incoherence. In order for cognition to be intentional (in either sense), it must intended, but if it is intended this intention must be supplied by representations, then these must also be intended and must therefore be motivated by intentionally generated representations. This is a logical regress of the most vicious kind that is widely overlooked in much of the relevant literature. Perhaps it is this general lack of recognition that has led to G&W's overlooking this serious logical obstacle.”

I’ll front up and say that I think the philosophical literature on intentionality is a mess and I’d almost prefer to avoid the concept entirely. Our basis for evaluating intentionality (which aligns with most but not all formal conceptions) is whether behavior is directed toward biologically relevant properties of objects or events. Our behavior can be described by nosy scientists as being about something or for something. Whether we, in our hearts and minds, actually intend anything is not a concern of mine. This conception DOES NOT require the dreaded intending, which must then be accounted for. Importantly, this conception is also sufficient for us to say that EI can support intentional behavior in the sense that Newell, Barsalou, etc., want representations to support.


“In my view this is too narrow a definition of representation. Onomatopoeia does not designate the thing it represents and nor does a photograph, an enactment or a model. Designation is more akin to delegation, nomination or stipulation than it is to depiction or imitation. So, at best, Newell’s definition applies to symbolic representations only.”

Regarding our choice of Newell’s definition: partly, we piggy-backed on Bechtel, who critiqued van Gelder using this definition; partly, we liked the definition for being minimal and for eschewing some of the unnecessary assumptions built into other definitions (e.g., the need to be internal).

I disagree that Newell’s definition is limited to symbolic representations. The definition applies quite easily to instances of continuous control of action with respect to an information variable (e.g., continuous control of locomotion with respect to properties of optic flow designating environmental layout), and such cases do not involve symbols. The definition is likely limited to instances where there is a lawful relationship between X and Y, or where a constraint is imposed on the relationship between X and Y such that they behave sufficiently as if they are lawfully related in a particular context (conventions). As before, the thing that rescues this construal (what counts as “sufficiently”?) is the requirement that these structures have a consequence on behavior that is best understood by their role as stand-ins.


“So, on this basis:
A wind turbine is a thing that a lightbulb can use as if it were a battery. When it does, the lightbulb works as if it had access to a battery.
Or, better still:
Sugar syrup is a thing that a honeybee can use as if it were honey. When it does, the honeybee works as if it had access to honey.”


There is some misleading word substitution going on here. Going back to the original Newell definition:

“An entity X designates an entity Y relative to a process P, if, when P takes X as input, its behavior depends on Y.”

In your wind turbine example, there is no sense in which the light bulb is acting as if it had access to a battery. It is acting as if it has access to a wind turbine, which is a physical thing that can do the same work as a battery. You have two different physical entities that have similar consequences for the light bulb. The effect of a wind turbine on a light bulb is not best understood in terms of the similarity between a wind turbine and a battery; the physical consequences of a wind turbine on a light bulb can be understood all on their own, without needing to know anything about batteries.

The honey and sugar syrup example suffers from the same problem. Sugar syrup and honey are two different physical substances that have similar physiological effects on bees. You don’t need to know anything about honey to understand how sugar syrup affects a bee’s physiology. Just check this against the last part of Newell’s definition: is it true that when a bee takes sugar syrup as input, its behavior depends on honey? No, not in the least. Its behavior is entirely a function of the sugar syrup.

Compare this with our description of informational representations. It is impossible for me to understand the consequences of a particular pattern of optic flow on an animal’s behavior without understanding how that pattern relates to a property in the world. There is no reason a kinematic pattern in an energy distribution should have any consequences on nervous system activity except insofar as these patterns stand in for properties of the world. Unlike the bee and light bulb examples, information and world properties aren’t two things that have similar consequences on perceiving-acting systems. There is no other way for us to make contact with distal properties except via specifying information that represents those properties (and, as time goes on, those properties may cease to be distal and may, instead, be causing us bodily harm!).
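To make the contrast concrete, here is a minimal sketch of Newell’s test applied to both kinds of case. This is our illustration, not anything from the paper: the functions, thresholds, and the simplified optical geometry (visual angle treated as inversely proportional to distance, as in Lee’s tau) are all invented for the example.

```python
# A toy rendering of Newell's test (not from the paper; names and numbers
# are invented): X designates Y relative to process P if, when P takes X
# as input, P's behavior depends on Y.

def bee_behavior(sugar_concentration):
    # The bee's physiology responds to the sugar it actually ingests (X).
    return "forage more" if sugar_concentration > 0.3 else "forage less"

# Vary the honey (Y) while holding the syrup (X) fixed: behavior never
# changes, so the syrup does not designate honey.
for honey_quality in [0.0, 0.5, 1.0]:
    print(honey_quality, bee_behavior(sugar_concentration=0.4))

def braking_response(tau):
    # A controller whose only input is the optical variable tau = theta/theta_dot.
    return "brake hard" if tau < 1.0 else "coast"

# Vary the world (Y: obstacle distance and closing speed) and behavior
# follows, but only via the lawfully generated optics (X). Toy geometry:
# theta ~ 1/distance, so theta_dot = speed/distance**2 and tau works out
# to distance/speed, the actual time to contact.
for distance, speed in [(30.0, 10.0), (8.0, 10.0)]:
    theta = 1.0 / distance
    theta_dot = speed / distance ** 2
    print(distance, braking_response(theta / theta_dot))
```

Varying the honey changes nothing, so syrup fails the designation test; varying the world changes the braking response, but only because the optics are lawfully structured by the world — which is exactly the stand-in relation at issue.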


“I may be missing something important, but I fail to see how this qualifies the wind turbine as a representation of a battery or sugar syrup as a representation of honey. All of the paradigmatic cases of representation of which I am aware involve substitution for the purposes of communication between agents, not simple replacement of functional component A with alternative functional component B.” 

Yes, I agree; I hope that the discussion of your examples above clears this up.


“Leaving this objection aside for the moment, G&W focus their attention on what they see as “the gap” which X can “close” between P and Y (the bee and its honey). But this is merely an anomalous consequence of their turn of phrase (which I edited out of my reformulation). Sugar syrup does not close a gap between the bee and its honey; it simply replaces honey. Nonetheless G&W spend several sentences fleshing out the significance of this supposed “action at a distance”.”

Again, yes, I agree completely. No gap is closed when you’re subbing out similar components. Luckily, that’s not what we’re arguing. There is quite literally a physical distance in space between us and certain relevant properties of the world. This, I hope, is not controversial! We call this a gap, though, of course, it’s not a vacuum; it’s filled by energy media like light, sound, pressure, etc. When we speak of EI as closing a gap, we simply mean that EI exists in that physical space between us and the things in the world we want to know about. More importantly, EI doesn’t just hang out in this space; it is lawfully structured by the very biologically relevant properties we want to know about. In this way it designates those properties in a way that our nervous systems (and other bodily systems) can make use of.


“G&W turn next to a consideration of “ecological information [as] a representation”. They define ecological information as energy patterns of  “lawful interaction of the energy with the dynamics of the world [that] are used by organisms to perceive that world.” [My emphasis]. If organisms use energy patterns to perceive the world, then this form of usage needs to be sharply distinguished from intentional use, otherwise we have no means of distinguishing tool using creatures (humans mostly) from all the other creatures in the world who do not use tools.”

Now, this I hadn’t thought of! Again, I’m with you on our use of the word “use”. I generally try to avoid this kind of language and this slipped by me. Although it’s more awkward, I prefer what I’ve been saying in this post: EI is a kinematic projection of the dynamics that characterize a biologically relevant property of the world, and its consequences for the organism’s behavior are best explained by the fact that the kinematic pattern stands in for that property. So yes, if we insisted on the particular language that appears in the paper, then we’d need to do as you suggest, but my preference is just to get rid of the problematic language.


“Moreover, we also need this important distinction to distinguish between the intentional actions of purposeful creatures and the efficacious (but not intentionally directed) behaviours of their internal processes. My bone forming processes do not intend to use strontium as a replacement for calcium, but my dentist did intend to use gold as a crown for one of my teeth.”

Again, your example is of the form “two different things which serve the same or similar function in some system” (like the bees and the light bulb), which is not what we’re dealing with in ecological representations. The distinction you’re after is accomplished if you adhere to the definition of representation we’re working with. The bone-forming process is non-representational: strontium is not a stand-in for calcium, and its effect on your bones can be understood without reference to calcium.

“The reason such actions, as the replacement of a tooth, are intentional is because they are performed in pursuit of a goal that can be represented on demand.”

This is not an obligatory feature of intentional systems, as you explain below.


“If I might be allowed to go into a little technical detail, theorists often distinguish between teleological and teleonomic descriptions of behaviour. A telos is a goal, an aim or an envisaged end that an action is intentionally directed towards. Teleological behaviours are thus genuinely purposeful actions. Teleonomic behaviours, on the other hand, often have the appearance of purposefulness but are actually merely efficacious (some theorists use the word “purposive” here, as contrasted with genuinely purposeful activity), having been shaped by millions of years of evolution. When we say that a plant uses varying light intensities to find its way towards the sun, we do not mean to suggest that the plant is an intentionally directed agent: a perceiver. We are simply using a teleonomic description. Unfortunately I think both Gibson and G&W conflate teleonomic descriptions in which organisms and their inner processes “use” energy and genuinely teleological descriptions in which we human agents use energy—to illuminate a light bulb for instance.”

I agree that this is an important distinction and that many people confuse the two. One of my main goals is to purge the unintended implications of this kind of language from behavioral explanations, but, obviously, stuff sneaks through. Again, I’m happy to get rid of the word “use” and any hint that we’re implying that what we’re calling intentional behavior requires a telos. This is what I was getting at earlier in terms of construing intentionality as behavior that could be described as directed toward biologically relevant properties of the world. My bet is that the vast majority of everything we get up to is best described as teleonomic and that a much smaller fraction of what we do is truly goal-driven (and which fraction this is can be revealed with a proper task analysis).


“A lot of confusion can be cleared up in discussions of representation if we distinguish sharply between processes in which X is taken as Y and actions in which X is treated as Y. My bones will take strontium as calcium but only a performer of actions can treat an act of mock aggression as if it is merely playful as opposed to genuinely threatening.”

As before, you aren’t using parallel constructions in your examples, and this is causing problems. The strontium example is not of the same type as the world property → EI relation. It is not representational.


“G&W return to the notion of “a gap” when they state: “Most of the behaviorally relevant dynamics in the world are ‘over there’ and not in mechanical contact with the organism. They must therefore be perceived.” The fact that the keys of my keyboard are “over there” and not in “mechanical contact” with my fingers does not mean that they are not causally influential upon me by virtue of the light reflected from them. My perception certainly depends upon light but my perception is not of the light as information, it is of the keys as keys. Light is something we know about, not something we see. So, whilst it is true that we pretenders can act as if light is perceptible, the light reflected from my keyboard is simply taken by my sensory system not as information but as causal influence. When G&W say that: “Perception relies on information about dynamics” this is not true. Only knowledge (propositional knowledge that is) relies on information about dynamics."

It’s true that the keyboard has some causal influence on your behavior. The question psychologists have to answer is: how? Obviously, it has to be something about the light (since this is a vision example). You note that your perception is of the keys and not of the light. First, “perception” in this sense is a woolly concept. Do you mean what your experience is as you look at your keyboard? Accounting for the phenomenality of experience is important, but, man, one thing at a time. I think the more primary question is: what is behavior being driven by — the property of the world or the property of the light? Here, experiments show us that behavior is driven by the information. When information is experimentally disentangled from properties of the world, behavior follows the information (the classic “moving room” studies are a case in point: sway the walls and posture follows the manipulated optic flow, not the perfectly stable floor). Given the right kind of learning context, this breaking of a specification relationship can have behavioral consequences on the organism such that they are driven to coordinate with another information variable or abandon ship on the task altogether.
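A toy version of that dissociation (again ours, not from the paper): hand the little braking controller from the earlier sketch an optical expansion synthesized by a display, with no object approaching at all.

```python
def braking_response(tau):
    # Same toy controller as before: its only input is the optical variable.
    return "brake hard" if tau < 1.0 else "coast"

# A looming display: nothing is approaching, but we synthesize the optical
# expansion that an object 8 m away closing at 10 m/s would lawfully produce
# (toy geometry: theta ~ 1/distance).
theta = 1.0 / 8.0
theta_dot = 10.0 / 8.0 ** 2
print(braking_response(theta / theta_dot))  # prints "brake hard"
```

The controller brakes for an object that isn’t there, because its behavior is coupled to the information; this is the pattern the dissociation experiments exploit.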

“Gibsonian ecological information is only a kinematic projection of those dynamics into an energy array. […] This means that kinematic information cannot be identical to the dynamical world, and this fact is effectively a poverty of stimulus.
Kinematic information is quite clearly a culturally enabled ascription—indeed a “description”—of “units” of measure to the “dynamical world”. There is no possibility that such sophisticated cultural contrivances as units are to be found in nature.”

No one thinks the units are found in nature. But the mathematical language we use to model dynamical events and their kinematic projections tells us that there IS a real and important difference between these things in nature, just as the periodic table describes a real and important difference between, say, neon and argon.


“Their worries about “a gap”, “action at a distance” and “a poverty of stimulus” continue when they write:”

Just pausing here, because you make a few passing comments about our calling this a gap that suggest you have an alternative view, but I can’t figure out what you do think is going on. I’m guessing you don’t deny the physical distance between certain biologically relevant properties and the organisms that might benefit from acting with respect to those properties? If you agree with this, then why shouldn’t we label it a gap? The reason we talk about action at a distance is, as I said before, that it is impossible to understand the causal powers of kinematic projections on nervous system activity without invoking their relationship to distal properties of objects and events. The reason we talk about poverty of stimulus is that the equations that model dynamics are higher dimensional than the equations that model kinematics; lowering the dimensionality results in a description that is technically impoverished (although the 1:1 mapping between dynamics and kinematics means that the kinematic projections are still suitable stand-ins, rather than ambiguous stimulation that requires cognitive enrichment). Is there a disagreement about any of these points?
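To put some math on the dimensionality point, here is a standard worked example using Lee’s well-known tau variable (our choice of illustration; not an equation from the paper). An object of size r at distance z(t), closing at speed ż(t), projects a visual angle θ(t), and for small angles:

\[
\theta(t) \approx \frac{2r}{z(t)}, \qquad
\dot{\theta}(t) = -\frac{2r\,\dot{z}(t)}{z(t)^{2}}, \qquad
\tau(t) \equiv \frac{\theta(t)}{\dot{\theta}(t)} = -\frac{z(t)}{\dot{z}(t)}
\]

The dynamics involve at least r, z, and ż as separate quantities; the optic array hands you only θ and its rate of change, from which r and z cannot be individually recovered. That is the technical impoverishment. But the ratio τ equals time-to-contact exactly, whatever the object’s size, so the lower-dimensional projection is still a lawful, 1:1 stand-in for a behaviorally relevant dynamical property.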


“According to G&W’s theory of representation it is important that structure is carried through the nervous system because this qualifies the structure as a neural representation of the ecological information that caused it (recall that they take all forms of replacement to be representational).”

This isn’t especially important to us, but we think the evidence suggests it will be the case. We lay out a definition of what type of neural activity would count as a representation and suggest (1) that not all neural activity caused by informational representations will be an instance of neural representation, and (2) that the majority of informational representations we engage with during action control do not result in neural representations that can be re-instantiated in the absence of that information. These two statements put a big limit on the extent and power of hypothesized neural representations.

Also, I hope to have made it clear by now that not all forms of replacement are representational (see, e.g., the discussion of your bee and light bulb examples, which fail to meet Newell’s definition).


“At the risk of repeating myself, the fact that some pattern corresponds with an antecedent state of affairs does not mean that the pattern is a representation. Effects are not representations of their causes.”

Yep, agree completely. See above.


“I therefore think we have good reason to reject G&W’s proposal that “at least some of the neural activity caused by informational representations will qualify as a neural representation of that information.””

No. The critical issue is whether the behavior of the system is best explained by understanding the neural activity as a stand-in for the ecological information. If it is, then the neural activity is a representation. That last part of Newell’s definition is absolutely critical to making any of this useful.


“To be fair to G&W, they observe that: “These neural representations are… not implementing the mental representations of the standard cognitive approach.” because they do not “enrich, model or predict anything about that information.” If this is true, then it leaves these representations as representations in name only.”

This is an interesting point because it conveys how thoroughly the assumptions of cognitive science have infiltrated the broader notion of representation. What’s left in our neural representations is the ability of the structured neural activity to STAND IN FOR the ecological information, which itself stands in for a property of the world. The fundamental job of representations remains. What we’ve chucked out is the baggage that comes from assuming mental representations need to build a coherent picture of the world from chaotic stimulation.

““To be clear, the stipulation that knowledge systems must be conceptual and componential is so that knowledge systems can support counterfactual thinking, etc.” This is mistaken. Pretending that I am rocking a baby in my arms is a gesture that would be understood by humans the world over but, even though it is counterfactual (there is no baby after all) it is not a conceptual representation. Conceptualisation relies on the ability to manipulate abstractions and there is no other species on the planet that has the capacity to manipulate abstractions with anything more the most rudimentary competence.”

We don’t care whether conceptual and componential systems are required for counterfactual thinking; I think you’re right that there are other ways to explain how systems exhibit these types of behaviors. But this is a commonly held view in cognitive science (e.g., Barsalou, 1999), and we simply want to show that, if you are one of the many cognitive scientists who hold this view, then informational and neural representations would support this type of conceptual and componential system.


“On page 18 G&W write: “From the first person perspective of the organism, it is just interacting with information.” We commonly interact with others by means of information but it is somewhat confused to suggest without qualification that we interact with information. Our use of information forms part of our interactions with other intelligent agents:”

The purpose of saying that we interact with information is to acknowledge, as I said earlier, that behavior follows information, not the world (unless through mechanical contact). 


“G&W are also right to focus on behaviour that treats X as if it is Y. Nonetheless their Newell-derived definition of representation is inadequate to the task of distinguishing between behaviours in which X is taken for Y and actions in which X is treated as Y.”

I think this conclusion is based on a mistaken application of Newell’s definition in your counterexamples (which I’ve tried to show whenever they come up). I would be very interested to know your thoughts on this as it seems to be one of the main points of criticism. 

Thanks once again for taking the time to produce such a thorough critique. My reading is that we have failed to communicate some important features of our argument, but I think, with some work, we can address the majority of your concerns. The manuscript will certainly be improved as a result!
