Logic Naturalized


Publisher: Unknown
Category: Miscellaneous Books


This book is corrected and edited by Al-Hassanain (p) Institute for Islamic Heritage and Thought

Logic Naturalized

John Woods[1]

Contents

I- The Mathematical Turn in Logic

II- The Naturalistic Turn in Logic

Notes

I- The Mathematical Turn in Logic

It is no secret that classical logic and its mainstream variants aren’t much good for human inference as it actually plays out in the conditions of real life - in life on the ground, so to speak. This isn’t surprising. Human reasoning is not what the modern orthodox logics were meant for. The logics of Frege and Whitehead & Russell were purpose-built for the pacification of philosophical perturbation in the foundations of mathematics, notably but not limited to the troubles occasioned by the paradox of sets in their application to transfinite arithmetic. Logic from Aristotle until then had been differently conceived of, and would be decked out to serve different ends. The Western founder of systematic logic wanted his account of syllogisms to be the theoretical core of a general theory of everyday face-to-face argument in the courts and councils of Athens, and more broadly in the agora. Aristotle understood that in contexts such as these premiss-conclusion reasoning[2] is an essential component of competent case-making. He thinks that when a conclusion is correctly derived from a set of premisses there exists between it and them a truth-preserving relation of consequence. This is a distinctively Greek idea, and one that has resonated from then to now. It is the idea that even everyday case-making argument is deductively structured when good.[3] Deductivism is with us still, albeit often in rather watered-down ways. Even so, the nonmonotonic consequence relations of the 20th and 21st centuries, virtually all of them, are variations of variations of classical consequence. They are, so to say, classical consequence twice-removed.[4]

Whatever we might make of this lingering fondness for deductivism, the logic of premiss-conclusion inference assigns a pair of important tasks. One is to describe the conditions under which premisses have the consequences they do. Call this the logician’s “consequence-having” task. The second is related but different. It requires the logician to specify the conditions under which a consequence of a premiss-set is also a consequence that a human reasoner should actually draw. Call this the “consequence-drawing” task. The dichotomy between having and drawing is deep and significant. Consequence-having occurs in logical space. Consequence-drawing occurs in an inferer’s head and, when vocalized, mainly in public space.

This gives us an efficient way of capturing a distinctive feature of modern mainstream logics. They readily take on the consequence-having task, but they respond ambivalently to its consequence-drawing counterpart. This ambivalence plays out in two main ways. I’ll call these “rejectionism” and “idealization” respectively. In the first, the consequence-drawing task is refused outright as an unsuitable encumbrance on logic.[5] Such gaps as there may be between consequence-having and consequence-drawing are refused a hearing in rejectionist logics. According to the second, however, the consequence-drawing problem not only receives a hearing in logic but derives from it a positive solution. Logic would rule that any solution of the consequence-having problem would eo ipso be a solution to the consequence-drawing problem. The desired correspondence would be brought about by fiat, by the stipulation that the “ideally rational” consequence-drawer will find that his rules of inference are wholly provided for by the truth conditions on consequence itself. By a further stipulation, the conditions on inference-making would be declared to be normatively binding on human inference-making on the ground.[6] There is a sense, then, in which an idealized logic closes the gap between having and drawing. Even so, it is clear that on the idealization model the gap that actually does close is not the gap between consequence-having and consequence-drawing on the ground, but rather the gap between having and idealized drawing. In that regard, the idealization model is in its own right a kind of quasi-rejectionism. All it says about on-the-ground consequence-drawing is that the rules of idealized drawing are normative for it, notwithstanding its routine non-compliance with them. Beyond that, inference on the ground falls outside logic’s remit. It has no lawful domicile in the province of logic.

Aristotle is differently positioned. On a fair reading, what he seeks is a new purpose-built relation of deductive consequence-having - syllogistic consequence - whose satisfaction conditions would coincide with the rules of consequence-drawing not under idealized conditions but rather those actually in play when human beings reason about things. Accordingly, Aristotle’s is a genuinely gap-closing logic, but without the artifice of idealization. The nub of it all is that Aristotle’s constraints bite so deeply that for any arbitrary set of premisses the likelihood that there would be any syllogistic consequences is virtually nil; and yet when premisses do have syllogistic consequences, they are at most two.[7]

This is a considerable insight. Implicitly or otherwise, Aristotle sees that the way to close the gap between having and on-the-ground drawing is by reconstructing the relation of consequence-having, that is, by making consequence-having itself more inference-friendly. To modern eyes it is quite striking how Aristotle brought this about. He did it by taking a generic notion of consequence (he called it “necessitation”) and imposing additional conditions on it that would effect the desired transformation. This would produce - in my words, not his - the new relation of syllogistic consequence - a proper subrelation of necessitation - whose defining conditions would make it nonmonotonic and paraconsistent, and at least some adumbration of relevant and intuitionist in the modern senses of those terms.[8] It is well to note that these inference-friendly improvements derive entirely from readjustments to consequence-having, and they put to no definitional work any considerations definitive of face-to-face argumental engagement. In other words, although inference happens in the head, Aristotle’s provisions for inference-friendliness take hold in logical space.

When we turn from Aristotle’s to modern-day efforts to improve logic’s inference-friendliness, we continue to see similarities and differences. As with syllogistic consequence, the newer consequence relations trend strongly to the nonmonotonic, and many of them are in one way or another relevant and paraconsistent as well. Others still are overtly intuitionist. The differences are even more notable. I have already said that, unlike his theory of face-to-face argument-making, Aristotle’s syllogistic consequence is wholly provided for without the definitional[9] employment of considerations about inference-making or of the beings who bring them off. In the logic of syllogisms there is no role for agents, information flow, actions, times or resources. In contrast, modern attempts at inference-friendliness give all these parameters an official seat at the definitional table of consequence-having. Consequence-having is now defined for consequence relations expressly connected to agents, information flow, actions, times and resources. There is yet a further difference to respect. It is that although these modern logics give official admittance to agents, actions and the rest, they are admitted as idealizations, rather than as they are on the ground.

In our own day, a case in point is Hintikka’s agent-centred logics of belief and knowledge, in ground-breaking work of 1962.[10] Hintikka’s epistemic logic is an agent-centred adaptation of Lewis’ modal system S4, in which the box-operator for necessity is replaced with the epistemic operator for knowledge, relativized to agents a. The distinguishing axiom of S4 is ⌜□A → □□A⌝. Its epistemic counterpart is ⌜KaA → KaKaA⌝, where “Ka” is read as “It is known by agent a that …”. We have it straightaway that the epistemicized S4 endorses the KK-hypothesis, according to which it is strictly impossible to know something without realizing you do. Of course, this is miles away, and more, from the epistemic situation of real-life human agents; so we are left to conclude that Hintikka’s agents are idealizations of us. It is a gap-closing arrangement only in the sense that the behaviour of Hintikka’s people is advanced as normatively binding on us.
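
The frame condition behind the KK-hypothesis can be exhibited in a few lines of code. The toy Kripke semantics below is my own illustration, not anything from Hintikka's text; the frames, world names and checking routine are invented for the purpose. The point it sketches is the standard correspondence fact: on frames whose accessibility relation is transitive, ⌜KaA → KaKaA⌝ holds at every world under every valuation, and dropping transitivity lets it fail.

```python
# Toy Kripke semantics for a single-agent knowledge operator K.
# The frames and worlds here are illustrative inventions; only the S4
# point itself - transitivity validates KA -> KKA - is from the text.

def K(phi, R):
    """Evaluation function for K(phi): true at world w iff phi holds at
    every world v the agent considers possible from w, i.e. (w, v) in R."""
    return lambda w: all(phi(v) for (u, v) in R if u == w)

worlds = (1, 2, 3)

# A reflexive, TRANSITIVE accessibility relation (an S4 frame):
R_s4 = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}

# The same frame minus (1, 3): still reflexive, no longer transitive.
R_weak = R_s4 - {(1, 3)}

def kk_valid(R):
    """Does KA -> KKA hold at every world under every valuation of A?"""
    for bits in range(2 ** len(worlds)):
        A = lambda w, b=bits: bool(b & (1 << (w - 1)))  # one valuation of A
        KA, KKA = K(A, R), K(K(A, R), R)
        if not all((not KA(w)) or KKA(w) for w in worlds):
            return False
    return True

assert kk_valid(R_s4)        # the KK-hypothesis holds on the transitive frame
assert not kk_valid(R_weak)  # and fails once transitivity is dropped
```

The failure on `R_weak` comes from a valuation making A true at worlds 1 and 2 only: the agent knows A at world 1, but since world 2 can see the A-falsifying world 3, she does not there know that she knows it.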

Hintikka has an interesting idea about how to mitigate this alienation, and to make his logic more groundedly inference-friendly after all. Like Aristotle, Hintikka decides to make gap-closing adjustments to orthodox consequence-having, not just by way of specific constraints on it, but also by way of provisions that make definitional use of what agents say. That is, Hintikka decides to fit his consequence relation for greater inference-friendliness not just by imposition of additional semantic constraints, but also by application of pragmatic ones as well.

It is a radical departure. It effects the pragmaticization of consequence-having. I regard this as a turning point for most of the agent-based logics ever since. Logics of nonmonotonic, defeasible, autoepistemic and default reasoning also pragmatize their consequence relations.[11] Still, radical or not, it shouldn’t be all that surprising a departure. How could it be? What would be the point of inviting even idealized agents into one’s logic if there were nothing for them to do there?

Consider, for example, the Hintikkian treatment of logical truth. In the orthodox approaches a wff A is a truth of logic if and only if there is no model of any interpretation in which it fails to hold. In Hintikka’s pragmaticized logic, A is a truth of logic if and only if either it holds in every model of every interpretation or its negation would be a self-defeating statement for any agent to utter. Similarly, B is a consequence of A just when an agent’s joint affirmation of A and denial of B would be another self-defeating thing for him to say. The same provisions extend to Hintikkian belief logics. Not only do people (in the model) believe all logical truths, but they close their beliefs under consequence. There are no stronger idealizations than these in any of the agent-free orthodox predecessor-logics.

Closure under consequence is especially problematic. In the usual run of mainline approaches, there exist for any given A transfinitely many consequences of it. Think here of the chain A ⊧ B, A ⊧ B ∨ C, A ⊧ B ∨ C ∨ D, and so on - summing to aleph-null many in all. Take any population of living and breathing humans. Let Sarah be the person who has inferred from some reasonably manageable premiss-set the largest number of its consequences, and let Harry be the person to have inferred from those same premisses the fewest; let’s say exactly one. Then although Sarah considerably outdraws Harry, she is not a whit closer to the number of consequences-had than Harry is. They both fall short of the ideal inferrer’s count equally badly. Neither of them approaches or approximates to that ideal in any finite degree. Now that’s what I’d call a gap, a breach that is transfinitely wide. It is also an instructive gap. It tells us that giving (the formal representations of) agents, actions, etc. some load-bearing work to do under a pragmaticized relation of consequence-having is far from sufficient to close the gap between behaviour in the logic and behaviour on the ground.
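
The endlessness of that chain is easy to exhibit mechanically. The sketch below is my own toy construction - the brute-force entailment checker, the atom names and the particular premiss (a conjunction that settles B) are all illustrative: once a premiss yields B, disjunctive weakening yields B ∨ C, then B ∨ C ∨ D, and so on without terminus.

```python
# Brute-force propositional entailment over a tiny stock of atoms,
# illustrating the unbounded chain of disjunctive consequences.
from itertools import product

ATOMS = ("A", "B", "C", "D")

def entails(premise, conclusion):
    """premise ⊧ conclusion: every valuation satisfying the premise
    also satisfies the conclusion (checked over all 2^4 valuations)."""
    valuations = (dict(zip(ATOMS, bits))
                  for bits in product((False, True), repeat=len(ATOMS)))
    return all(conclusion(v) for v in valuations if premise(v))

# A premiss that settles B (the conjunction A-and-B, purely for illustration):
premise = lambda v: v["A"] and v["B"]

# The chain B, B ∨ C, B ∨ C ∨ D: each disjunctive weakening of a
# consequence is again a consequence, so the chain never terminates.
chain = [lambda v, k=k: any(v[p] for p in ("B", "C", "D")[:k]) for k in (1, 2, 3)]

assert all(entails(premise, c) for c in chain)
```

Only the first link of such a chain is anything a human reasoner would bother to draw; the rest are consequences had but never drawn, which is exactly the having/drawing gap at issue.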

Still, it is important to see what Hintikka had it in mind to do. The core idea was that, starting with some basic but gap-producing logic, the way to close it or anyhow narrow it to real advantage, is to do what Aristotle himself did to the everyday notion of necessitation. You would restructure your own base notion of consequence by subjecting it to additional requirements. In each case, gap-closure would be sought by making the base notion of consequence a more complex relation, as complex as may be needed for the objectives at hand. In other words,

The turn towards complexification: The complexification of consequence is the route of choice towards gap-closure and inference-friendliness.

One can see in retrospect that Hintikka’s complexifications were too slight.[12] There is, even so, an important methodological difference between Aristotle’s complexification and those of the present day. Aristotle’s constraints are worked out in everyday language. Syllogistic consequence would just be ordinary necessitation, except that premisses would be (1) non-redundant, (2) more than only one and (probably) no more than two, (3) none repeated as the conclusion or immediately equivalent to any other that is, (4) internally and jointly consistent, and (5) supportive of single conclusions only. These and others that derive from them would provide that the conclusion of any syllogism is either one that should obviously be drawn or is subject to brief, reliable step-by-step measures to make its drawability obvious. This is got by way of the “perfectability” proof of the Prior Analytics. (Even it is set out in everyday Greek, supplemented by some modest stipulation of technical meanings for ordinary words.)[13]
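
For concreteness, conditions (1)-(5) can be rendered as a schematic filter on candidate arguments. Everything in the sketch - the function name, the string encoding of premisses, the stubbed-out consistency test - is my own illustrative invention, not Aristotle's; a serious treatment would need a proper term-logic underneath, and condition (2) carries the text's "probably" hedge.

```python
# A schematic rendering of conditions (1)-(5) on syllogistic consequence.
# Premisses and conclusions are bare strings; "consistency" is a stub
# the caller may replace with a real test.

def syllogistic_candidate(premisses, conclusion, consistent=lambda ps: True):
    return all([
        len(set(premisses)) == len(premisses),  # (1) premisses non-redundant
        len(premisses) == 2,                    # (2) more than one, (probably) no more than two
        conclusion not in premisses,            # (3) conclusion repeats no premiss
        consistent(premisses),                  # (4) internal and joint consistency (stubbed)
        isinstance(conclusion, str),            # (5) a single conclusion only
    ])

assert syllogistic_candidate(
    ["All men are mortal", "Socrates is a man"], "Socrates is mortal")
assert not syllogistic_candidate(["Socrates is a man"], "Socrates is a man")
```

The filter's severity is the point: most arbitrary premiss-sets fail it outright, which is one way of seeing Woods' earlier remark that for an arbitrary set of premisses the likelihood of there being any syllogistic consequences is virtually nil.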

Modern gap-closers have quite different procedural sensibilities. They are the heirs of Frege and Russell, who in turn could hardly be called heirs of Aristotle. Frege and Russell were renegades. They sought a wholesale restructuring of logic, of what it would be for, and how it would be done. Those objectives and their attendant procedural sensibilities are mother’s milk for modern logicians. Logic pursues its objectives by way of mathematically expressible formal representations, subject in turn to the expositional and case-making discipline characteristic of theoretical mathematics. There flows from this a novel understanding of complexification. In the modern way, complexifications are best achieved by beefing up the mathematical formalizations of a base mathematical logic. Let’s give this a name. Let’s say that today’s preferred route to gap-closure is the building of more mathematically complex technical machinery. In briefer words, inference-friendly logics are heavy-equipment logics.

Johan van Benthem has recently written of an idea that gripped him in the late 1980s:

The idea had many sources, but what it amounted to was this: make actions of language use and inference first-class citizens of logical theory, instead of studying just their products and data, such as sentences or proofs. My programme then became to explore the systematic repercussions of this ‘dynamic turn’.[14]

In the ensuing thirty years, van Benthem and his colleagues have constructed a complex technology for the execution of this dynamic turn. It is an impressive instrument, an artful synthesis of many moving parts. Here is a close paraphrase of its principal author’s summary remarks: With the aid of categorical grammars and relational algebra we can develop a conception of natural language as a kind of cognitive programming language for transforming information. This could be linked in turn to modal logic and the dynamic logic of programs, prompting insights into process invariances and definability, dynamic inference and computational complexity logics. In further variations, logical dynamics would become a general theory of agents that produce, transform and convey information in contexts both social and solo. The result is a dynamic epistemic logic (DEL), which gives a unified theoretical framework for knowledge-update, inference, questions, belief revision, preference change and “complex social scenarios over time, such as games.” The creator of DEL also

would see argumentation with different players as a key notion of logic, with proof just a single-agent projection. This stance is a radical break with current habits, and I hope that it will gradually grow on the reader, the way it did on me. (p. ix)

Indeed, I would be happy if the viewpoints and techniques offered here would change received ideas about the scope of logic, and in particular, revitalize its interface with philosophy. (p. x; emphasis added)

Van Benthem notes with approval the suggestion in Gabbay’s and my 2002 paper, “Formal approaches to practical reasoning: A survey,”[15] that the interface with argument may be the last frontier where modern logic finds its proper generality and impact on human reasoning. Again I paraphrase: Over the last decade this insight has developed into a paradigm of attack-and-defend networks (ADNs) - from unconscious neural nets, to variations that adapt to several kinds of conscious reasoning. This, too, is a highly complex technology, a fusion of several moving parts. As provided for by Barringer, Gabbay and me,[16] the ADN paradigm unifies across several fields, from logic programs to dynamical systems. AD-networks have some interesting technical capacities. They give an equational algebraic analysis of connection strength, where stable states can be found by way of Brouwer’s fixed-point result. When network activity is made responsive to time, logic re-enters the picture, including the development of quite novel modal and temporal languages. “Clearly”, says van Benthem, “this is an immense intellectual space to consider.” He adds that he “totally agrees” with the ADN “vision, and am happy to support it.” (p. 84)

Here, then, are just two of a great many heavy-equipment technologies, specifically adapted to the requirements of argument. They are unifications of partner elements, some of their authors’ own contrivance, but in the main having an already established and well understood methodological presence in the several research communities from which they have been borrowed. Both the DEL and ADN approaches carry the same presupposition for the logic of argument, and underlying it the logic of inference too. It is that argument and inference won’t yield the mysteries of their deep structures unless excavated by heavy-equipment regimes capable of mathematically precise formulation and implementation. It is here that the fissure between modern logic and Aristotle’s is deepest and most intensely felt, at least by me.[17]

Why, then, it might well be asked, my own complicity in constructing ADN logics and the formal models on display in earlier work?[18] I am not opposed to heavy-equipment methodologies as such. I am perfectly happy to see formally contrived new ideas added to our conceptual inventories, for whatever good may be in them in due course, apart from their beauty as intellectual artifacts. I also concede the necessity of idealizations, formally wrought or not - even those that are transfinitely untrue to what happens on the ground - that are indispensable to the descriptive success of the empirical sciences; not excluding those of them that investigate empirically realized and normatively assessable human goings-on in terra firma. I also welcome the fact that thinking of things in ways they couldn’t possibly be sometimes gets us to thinking of things, even perhaps of other things, in ways they do turn out to be.[19] In this present section, I’ve been trying to set my course for the developments that lie ahead in section II. Part of what I want to say is how much I distrust our present compulsion to mathematicize everything in sight. Compulsions aren’t good for intellectual health. They are a drag on the market and a pathological impediment to open-minded enquiry.

A further reservation concerns the groundlessness of the pretensions of the heavy-equipment methodologies to a normative authority over human cognitive performance in London, Vancouver and Guangzhou. The two most noticeable explanations of the normative authority of ideal models are the reflective equilibrium defence, and what I’ll call the mathematico-analytic defence. According to the first, the correct procedures for action are those implicitly in play in the relevant community of agents. The trouble with this is the impossibility of finding credible candidates to qualify as relevant communities. If it is the human community on the ground - that is, all of us in general - then there is between how we perform and what the orthodox models require us to perform no equilibrium at all. On the other hand, if the authoritative community is the ideal-modelling research community, there will indeed be a nice concurrence between what their models demand and what they say should be demanded. Which prompts a good, if somewhat informal, question: “Who made these guys king of the normativity castle?” Besides, why would we think that saying is a salient consideration? What the experts say at the office is one thing. In all other respects, they are just like the rest of us. Don’t we all put on our trousers in the same way? Don’t we all - routinely so - violate the norms of trouser-manipulation at some or other idealized juncture of perfection?

The mathematico-analytic defence is even more of a muddle. In one version of it, an idealized norm is binding on the ground when it arises in the theory as a theorem. In another, the norm’s authority arises from the fact that it is analytic in the model (i.e. made true there by stipulation). The general idea is this: The proposition “2 + 3 = 5” is a theorem of number theory; and some people think that it is true by meanings alone. Its normative authority is straightforwardly clear. If someone in London, Vancouver or Guangzhou wants to add 2 and 3 in the way authorized by number theory, he should not identify their sum as any number that’s not 5. The same would be true for belief-closure. If someone on the ground wanted to close a belief in the way authorized by idealized closing, he would fire away transfinitely. Of course, this is absurd. No one on earth, except for the odd decision-theorist when at the office, has ever heard of the idealized closure-conditions, never mind aspiring to their fulfillment.[20]

From the very beginnings and most of the time thereafter, the logician had to be two things at once. He would be the setter of the targets for his theory, and he would be the creator of the tools to enable him to meet them. If we were speaking of cars rather than inferences, we could see this duality nicely captured by a quite common division of labour in Detroit. Cars would be sold by the sales staff, but they would be built by the engineers. Things are different in the logic business. Not many of Ford’s sales people know anything much of how cars are built, and engineers are notorious for their poor salesmanship. But in logic, it falls to the logician to build what he sells. He must be his own engineer. It is not at all surprising that Ford’s top salesman might know nothing of engineering. But the same thing in logic would be quite astonishing.

There is a further difference between the car business and logic. Logic’s modern machinery is put together in a quite particular way. Originally designed for expressly mathematical purposes, its creators, then and now, bring a generalized mathematical sensibility to their creative work. In due course it would become apparent that the technical objects of their machinery are themselves possessed of a mathematical character and are eligible for mathematical investigation in their own right. In the car business the work of the engineering division and the work of the sales division are harmonized by the biting discipline of the bottom line. No engineer will thrive in Dearborn if the company’s cars don’t sell, even if he’s more interested in new equipment than he is in new cars. Logic is different. By and large, the work of logicians is free of commercial expectation.[21]

When we put these two points together, we can see a quite considerable alienation between the mathematical study of logic’s machinery and the attainment of what the equipment might be good for. The factor of good-for recedes into the background, and technological self-study becomes sui generis, and withal the route to the upper echelons of academic achievement and repute.[22] The heavy-equipment logics of the day have put themselves in an awkward position. They say that their technical complexifications are wanted for inference-friendliness. But they construct their complexifications in ways that discourage if not outright preclude the accomplishment of those ends.

With complexification comes complexity, which is a well-known inhibitor of on-the-ground implementability.[23] This necessitates the reinstatement of idealizations, for two particular reasons among others. Idealizations would simplify and streamline procedures for theorem-proving; and they would explain the broadening gap between having and drawing occasioned by the idealization process itself. This would be brought about in the same old ways: by closing the gap between having and drawing in the heavy-equipment model, and by normativizing the model’s drawings in relation to those that play out in the world. This is seriously problematic. Heavy-equipment upgrades yield empirically false accounts of on-the-ground drawing, and do so in ways that exacerbate, not solve, the normativity problem. (As for my own involvements with the heavy-technology industry, I have never supposed that the ADN technology is normatively authoritative for anything apart from its own self-created objects. Similarly, I believe, for my ADN co-conspirators.)

One thing that could be done - and in some cases has been - to mitigate the gap-producing difficulties engendered by ideal models is to deny the transfinitely false ones a place at the table, and admit only those falsities for which an approximation relation is either definable or at least plausibly entertainable. With belief still our example, closure under consequence would not be permitted, but a sizeable gap could still remain between drawings in the model and drawings in human life. The difference would be that, where the original gap is transfinitely wide, the new gap is smaller - at any rate enough smaller to qualify the new closure-rule as approaching in some finite degree what actually happens here.

So adjusted, the heavy equipment approach to inference-friendliness could now be roughly summed up this way:

Complexity as gap-closing: The heavier the equipment, the less empirically unfaithful the machinery’s formal models, to the degree they approximate to what happens on the ground.

It is an interesting idea, animating another.

Approximation converges on normativity: The closer the approximation of a theoretical model of premiss-conclusion reasoning, the greater its descriptive adequacy; and the greater, too, its presumptions to normative sway.

As far as I can tell, nothing in the heavy-equipment literature puts things in just this way, or even close to it. And a good thing, too, readers may be thinking! Isn’t everyone still cringing at le scandale created by poor Mill’s gaffe in proposing in Utilitarianism, chapter 4, that “the sole evidence … that anything is desirable, is that people do actually desire it”? A not uncommon complaint can be found in Charles Hamblin’s observation that “[i]t was given to J.S. Mill to make the greatest of modern contributions to this Fallacy [= the ‘naturalistic’ fallacy] by perpetrating a serious example of it himself.”[24]

The mockery is misplaced. It is little more than name-calling, occasioned by the critics’ misconception that Mill is saying that “The desirable (F) is what’s normally desired by us all (G)” is true by meanings, supplemented by the further assumption that believing that something is F entails believing that it is G. The first of these assumptions is implausible on its face. The second owes its truth (if true it be) to the falsehood that believing that this thing a is F requires that there be some distinct term “G” that the believer in question believes to be semantically equivalent to “F”. Notwithstanding the stout resistance it provokes, the convergence of approximation on normativity is an extremely engaging idea, whatever its prior origins. It carries a suggestion of the first importance for the logic of consequence-drawing. It is that the normative authority of a logic converges on its descriptive adequacy. Should this prove to be so, it deserves acknowledgement as a foundational insight for a naturalized logic of inference.