Wednesday, 27 August 2014

CFP: SoTFoM II 'Competing Foundations?', 12-13 January 2015, London.

The focus of this conference is on different approaches to the foundations
of mathematics. The interaction between set-theoretic and category-theoretic
foundations has had significant philosophical impact, and represents a shift
in attitudes towards the philosophy of mathematics. This conference will
bring together leading scholars in these areas to showcase contemporary
philosophical research on different approaches to the foundations of
mathematics. To accomplish this, the conference has the following general
aims and objectives. First, to bring to a wider philosophical audience the
different approaches that one can take to the foundations of mathematics.
Second, to elucidate the pressing issues of meaning and truth that turn on
these different approaches. And third, to address philosophical questions
concerning the need for a foundation of mathematics, and whether or not
either of these approaches can provide the necessary foundation.

Date and Venue: 12-13 January 2015 - Senate House, University of London.

Confirmed Speakers: Sy David Friedman (Kurt Gödel Research Center, Vienna),
Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen).

Call for Papers: We welcome submissions from scholars (in particular, young
scholars, i.e. early career researchers or post-graduate students) on any
area of the foundations of mathematics (broadly construed). Particularly
desired are submissions that address the role of and compare different
foundational approaches. Applicants should prepare an extended abstract
(maximum 1,500 words) for blind review, and send it to sotfom [at] gmail
[dot] com, with subject ‘SOTFOM II Submission’.

Submission Deadline: 15 October 2014

Notification of Acceptance: Early November 2014

Scientific Committee: Philip Welch (University of Bristol), Sy-David
Friedman (Kurt Gödel Research Center), Ian Rumfitt (University of
Birmingham), John Wigglesworth (London School of Economics), Claudio Ternullo
(Kurt Gödel Research Center), Neil Barton (Birkbeck College), Chris Scambler
(Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni
(Università Vita-Salute S. Raffaele), Giorgio Venturi (Université de Paris
VII, “Denis Diderot” - Scuola Normale Superiore)

Organisers: Sy-David Friedman (Kurt Gödel Research Center), John
Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel
Research Center), Neil Barton (Birkbeck College), Carolin Antos-Kuby (Kurt
Gödel Research Center)

Conference Website: sotfom [dot] wordpress [dot] com

Further Inquiries: please contact
Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at)
Neil Barton (bartonna [at] gmail [dot] com)
Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at)
John Wigglesworth (jmwigglesworth [at] gmail [dot] com)

The conference is generously supported by the Mind Association, the Institute of Philosophy, and Birkbeck College.

Monday, 25 August 2014

What’s the big deal with consistency?

(Cross-posted at NewAPPS)

It is no news to anyone that the concept of consistency is a hotly debated topic in philosophy of logic and epistemology (as well as elsewhere). Indeed, a number of philosophers throughout history have defended the view that consistency, in particular in the form of the principle of non-contradiction (PNC), is the most fundamental principle governing human rationality – so much so that rational debate about PNC itself wouldn’t even be possible, as famously stated by David Lewis. It is also the presumed privileged status of consistency that seems to motivate the philosophical obsession with paradoxes across time; to be caught entertaining inconsistent beliefs/concepts is really bad, so blocking the emergence of paradoxes is a top priority. Moreover, in classical as well as other logical systems, inconsistency entails triviality, and that of course amounts to complete disaster.

Since the advent of dialetheism, and in particular under the powerful assaults of karateka Graham Priest, PNC has been under pressure. Priest is right to point out that there are very few arguments in favor of the principle of non-contradiction in the history of philosophy, and many of them are in fact rather unconvincing. According to him, this holds in particular of Aristotle’s elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we disagree on its exact structure. At any rate, it is worth noticing that, unlike David Lewis, Aristotle did think it was possible to debate with the opponent of PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of non-contradiction and consistency are still widely viewed as cornerstones of the very concept of rationality.

However, in the spirit of my genealogical approach to philosophical issues, I believe that an important question to be asked is: What’s the big deal with consistency in the first place? What does it do for us? Why do we want consistency so badly to start with? When and why did we start thinking that consistency was a good norm to be had for rational discourse? And this of course takes me back to the Greeks, and in particular the Greeks before Aristotle.

Variations of PNC can be found stated in a few authors before Aristotle, Plato in particular, but also Gorgias (I owe these passages to Benoît Castelnerac; emphasis mine in both):

You have accused me in the indictment we have heard of two most contradictory things, wisdom and madness, things which cannot exist in the same man. When you claim that I am artful and clever and resourceful, you are accusing me of wisdom, while when you claim that I betrayed Greece, you accused me of madness. For it is madness to attempt actions which are impossible, disadvantageous and disgraceful, the results of which would be such as to harm one’s friends, benefit one’s enemies and render one’s own life contemptible and precarious. And yet how can one have confidence in a man who in the course of the same speech to the same audience makes the most contradictory assertions about the same subjects? (Gorgias, Defence of Palamedes)
You cannot be believed, Meletus, even, I think, by yourself. The man appears to me, men of Athens, highly insolent and uncontrolled. He seems to have made his deposition out of insolence, violence and youthful zeal. He is like one who composed a riddle and is trying it out: “Will the wise Socrates realize that I am jesting and contradicting myself, or shall I deceive him and others?” I think he contradicts himself in the affidavit, as if he said: “Socrates is guilty of not believing in gods but believing in gods”, and surely that is the part of a jester. Examine with me, gentlemen, how he appears to contradict himself, and you, Meletus, answer us. (Plato, Apology 26e-27b)
What is particularly important for my purposes here is that these are dialectical contexts of debate; indeed, it seems that originally, PNC was to a great extent a dialectical principle. To lure the opponent into granting contradictory claims, and exposing him/her as such, is the very goal of dialectical disputations; granting contradictory claims would entail the opponent being discredited as a credible interlocutor. In this sense, consistency would be a derived norm for discourse: the ultimate goal of discourse is persuasion; now, to be able to persuade one must be credible; a person who makes inconsistent claims is not credible, and thus not persuasive.
As argued in a recent draft paper by my post-doc Matthew Duncombe, this general principle applies also to discursive thinking for Plato, not only for situations of debates with actual opponents. Indeed, Plato’s model of discursive thinking (dianoia) is of an internal dialogue with an imaginary opponent, as it were (as to be found in the Theaetetus and the Philebus). Here too, consistency will be related to persuasion: the agent herself will not be persuaded to hold beliefs which turn out to be contradictory, but realizing that they are contradictory may well come about only as a result of the process of discursive thinking (much as in the case of the actual refutations performed by Socrates on his opponents).
Now, as also argued by Matt in his paper, the status of consistency and PNC for Aristotle is very different: PNC is grounded ontologically, and then generalizes to doxastic as well as dialogical/discursive cases (although one of the main arguments offered by Aristotle in favor of PNC is essentially dialectical in nature, namely the so-called elenctic argument). But because Aristotle postulates the ontological version of PNC -- a thing cannot both be F and not be F at the same time, in the same way -- it is difficult to see how a fruitful debate can be had between him and the modern dialetheists, who maintain precisely that such a thing is after all possible in reality.
Instead, I find Plato’s motivation for adopting something like PNC much more plausible, and philosophically interesting in that it provides an answer to the genealogical questions I stated earlier on. What consistency does for us is to serve the ultimate goal of persuasion: an inconsistent discourse is prima facie implausible (or less plausible). And so, the idea that the importance of consistency is subsumed under another, more primitive dialogical norm (the norm of persuasion) somehow deflates the degree of importance typically attributed to consistency in the philosophical literature, as a norm an sich.
Besides dialetheists, other contemporary philosophical theories might benefit from the short ‘genealogy of consistency’ I’ve just outlined. I am now thinking in particular of work done in formal epistemology by e.g. Branden Fitelson, Kenny Easwaran (e.g. here), among others, contrasting the significance of consistency vs. accuracy. It seems to me that much of what is going on there is also a deflation of the significance of consistency as a norm for rational thought; their conclusion is thus quite similar to that of the historically-inspired analysis I’ve presented here, namely: consistency is over-rated.


Servus, New York! Invitation to the MCMP Workshop "Bridges" (2 and 3 Sept, 2014)

MCMP Workshop "Bridges 2014"

New York City, 2 and 3 Sept, 2014

www.lmu.de/bridges2014


The Munich Center for Mathematical Philosophy (MCMP) cordially invites you to "Bridges 2014" in the German House, New York City, on 2 and 3 September, 2014. The 2-day trans-continental meeting in mathematical philosophy will focus on inter-theoretical relations, thereby connecting the form and content of this philosophical exchange. The workshop will be accompanied by an open-to-public evening event with Stephan Hartmann and Branden Fitelson on 2 September, 2014 (6:30 pm).

Speakers

Lucas Champollion (NYU)
David Chalmers (NYU)
Branden Fitelson (Rutgers)
Alvin I. Goldman (Rutgers)
Stephan Hartmann (MCMP/LMU)
Hannes Leitgeb (MCMP/LMU)
Kristina Liefke (MCMP/LMU)
Sebastian Lutz (MCMP/LMU)
Tim Maudlin (NYU)
Thomas Meier (MCMP/LMU)
Roland Poellinger (MCMP/LMU)
Michael Strevens (NYU)

Idea and Motivation

We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are inter-theoretical relations at all, what should they look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Our spectrum of interest includes: reduction and emergence, mechanistic links between causal theories, belief vs. probability, mind and brain, relations between formal and informal accounts in the special sciences, cognition and the outer world.

Program and Registration

Due to security regulations at the German House, registration is required (separately for the workshop and the evening event). Details on how to register and the full schedule can be found on the official website:

www.lmu.de/bridges2014

Sunday, 24 August 2014

Extending a theory with the theory of mereological fusions

"Arithmetic with fusions" (draft) is a joint paper with my graduate student Thomas Schindler (MCMP).  The abstract is:
In this article, the relationship between second-order comprehension and unrestricted mereological fusion (over atoms) is clarified. An extension $\mathsf{PAF}$ of Peano arithmetic with a new binary mereological notion of "fusion", and a scheme of unrestricted fusion, is introduced. It is shown that $\mathsf{PAF}$ interprets full second-order arithmetic, $Z_2$.
Roughly this shows:
First-order arithmetic + mereology = second-order arithmetic.
This implies that adding the theory of mereological fusions can be a very powerful, non-conservative addition to a theory, perhaps casting doubt on the philosophical idea that once you have some objects, having their fusion as well is somehow "redundant". The additional fusions can in some cases behave like additional "infinite objects"; positing their existence allows one to prove more about the original objects.
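For readers who want the shape of the key axiom, here is one standard way of stating an unrestricted fusion scheme over atoms; this is my gloss, and the paper's official formulation, with its binary fusion notion, may differ in detail:
$\exists x \phi(x) \rightarrow \exists z \forall w (\mathrm{At}(w) \rightarrow (w \sqsubseteq z \leftrightarrow \phi(w))), \text{ for each formula } \phi(x) \text{ (parameters allowed)}.$
Read: if anything satisfies $\phi$, there is a whole $z$ whose atomic parts are exactly the atoms satisfying $\phi$. On this picture, the interpretation of $Z_2$ treats fusions as codes for sets of numbers: "$n \in X$" becomes "the atom $n$ is part of the fusion coding $X$", so the unrestricted fusion scheme does the work of full comprehension.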

Friday, 22 August 2014

L. A. Paul on transformative experience and decision theory II

In the first part of this post, I considered the challenge to decision theory from what L. A. Paul calls epistemically transformative experiences.  In this post, I'd like to turn to another challenge to standard decision theory that Paul considers.  This is the challenge from what she calls personally transformative experiences.  Unlike an epistemically transformative experience, a personally transformative experience need not teach you anything new, but it does change you in another way that is relevant to decision theory---it leads you to change your utility function.  To see why this is a problem for standard decision theory, consider my presentation of naive, non-causal, non-evidential decision theory in the previous post.

Tuesday, 19 August 2014

Is the human referee becoming expendable in mathematics?

(Cross-posted at NewAPPS)

Mathematics has been much in the news recently, especially with the announcement of the latest four Fields medalists (I am particularly pleased to see the first woman, and the first Latin-American, receiving the highest recognition in mathematics). But there was another remarkable recent event in the world of mathematics: Thomas Hales has announced the completion of the formalization of his proof of the Kepler conjecture. The conjecture: “what is the best way to stack a collection of spherical objects, such as a display of oranges for sale? In 1611 Johannes Kepler suggested that a pyramid arrangement was the most efficient, but couldn't prove it.” (New Scientist)

The official announcement goes as follows:
We are pleased to announce the completion of the Flyspeck project, which has constructed a formal proof of the Kepler conjecture. The Kepler conjecture asserts that no packing of congruent balls in Euclidean 3-space has density greater than the face-centered cubic packing. It is the oldest problem in discrete geometry. The proof of the Kepler conjecture was first obtained by Ferguson and Hales in 1998. The proof relies on about 300 pages of text and on a large number of computer calculations.
The formalization project covers both the text portion of the proof and the computer calculations. The blueprint for the project appears in the book "Dense Sphere Packings," published by Cambridge University Press. The formal proof takes the same general approach as the original proof, with modifications in the geometric partition of space that have been suggested by Marchal.
So far, nothing very new, philosophically speaking. Computer-assisted proofs (both at the level of formulation and at the level of verification) have attracted the interest of a number of philosophers in recent times (here’s a recent paper by John Symons and Jack Horner, and here is an older paper by Mark McEvoy, which I commented on at a conference back in 2005; there are many other papers on this topic by philosophers).  More generally, the question of the extent to which mathematical reasoning can be purely ‘mechanical’ remains a lively topic of philosophical discussion (here’s a 1994 paper by Wilfried Sieg on this topic that I like a lot). Moreover, this particular proof of the Kepler conjecture does not add anything substantially new (philosophically) to the practice of computer-verifying proofs (while being quite a feat mathematically!). It is rather something Hales said to the New Scientist that caught my attention (against the background of the 4 years and 12 referees it took to human-check the proof for errors): "This technology cuts the mathematical referees out of the verification process," says Hales. "Their opinion about the correctness of the proof no longer matters."

Now, I’m with Hales that ‘software intensive mathematics’ (to borrow Symons and Horner’s terminology) is of great help to offload some of the more tedious parts of mathematical practice such as proof-checking. But there are a number of reasons that suggest to me that Hales’ ‘optimism’ is a bit excessive, in particular with respect to the allegedly expendable role of the human referee (broadly construed) in mathematical practice, even if only for the verification process.

Indeed, and as I’ve been arguing in a number of posts, proof-checking is a major aspect of mathematical practice, basically corresponding to the role I attribute to the fictitious character ‘opponent’ in my dialogical conception of proof (see here). The main point is the issue of epistemic trust and objectivity: to be valid, a proof has to be ‘replicable’ by anyone with the relevant level of competence. This is why probabilistic proofs are still thought to be philosophically suspicious (as argued for example by Kenny Easwaran in terms of the notion of ‘transferability’). And so, automated proof-checking will most likely never replace completely human proof-checking, if nothing else because the automated proof-checkers themselves must be kept ‘in check’ (lame pun, I know). (Though I am happy to grant that the role of opponent can be at least partially played by computers, and that our degree of certainty in the correctness of Hales’ proof has been increased by its computer-verification.)

Moreover, mathematics remains a human activity, and mathematical proofs essentially involve epistemic and pragmatic notions such as explanation and persuasion, which cannot be taken over by purely automated proof-checking. (Which does not mean that the burden of verification cannot be at least partially transferred to automata!) In effect, a good proof is not only one that shows that the conclusion is true, but also why the conclusion is true, and this explanatory component is not obviously captured by automata. In other words, a proof may be deemed correct by computer-checking, and yet fail to be persuasive in the sense of having true explanatory value. (Recall that Smale’s proof of the possibility of sphere eversion was viewed with a certain amount of suspicion until models of actual processes of eversion were discovered.)

Finally, turning an ‘ordinary’ mathematical proof* into something that can be computer-checked is itself a highly theoretical, non-trivial, and essentially informal endeavor that must itself receive a ‘seal of approval’ from the mathematical community. While mathematicians hardly ever disagree on whether a given proof is or is not valid once it is properly scrutinized, there can be (and has been, as once vividly described to me by Jesse Alama) substantive disagreement on whether a given formalized version of a proof is indeed an adequate formalization of that particular proof. (This is also related to thorny issues in the metaphysics of proofs, e.g. criteria of individuation for proofs, which I will leave aside for now.) 

A particular informal proof can only be said to have been computer-verified if the formal counterpart in question really is (deemed to be) sufficiently similar to the original proof. (Again, the formalized proof may have the same conclusion as the original informal proof, in which case we may agree that the theorem they both purport to prove is true, but this is no guarantee that the original informal proof itself is valid. There are many invalid proofs of true statements.) Now, evaluating whether a particular informal proof is accurately rendered in a given formalized form is not a task that can be delegated to a computer (precisely because one of the relata of the comparison is itself an informal construct), and for this task the human referee remains indispensable.

And so, I conclude that, pace Hales, the human mathematical referee is not going to be completely cut out of the verification process any time soon. Nevertheless, it is a welcome (though not entirely new) development that computers can increasingly share the burden of some of the more tedious aspects of mathematical practice: it’s a matter of teamwork rather than the total replacement of a particular approach to proof-verification by another (which may well be what Hales meant in the first place).

-----------------------------
* In some research programs, mathematical proofs are written directly in computer-verifiable form, such as in the newly created research program of homotopy type theory.

Sunday, 17 August 2014

Bohemian gravity

Tim Blais, a McGill University physics student, made this really great a cappella version of "Bohemian Rhapsody", called "Bohemian Gravity", with physics lyrics explaining superstring theory, like "Manifolds must be Kähler!" (lyrics here).


Another article on this.

Saturday, 16 August 2014

Reinstatement

M-Phi readers may or may not have paid attention to the bizarre events of the last twelve months, during which I was initially dismissed by the University of Oxford, and smeared with lies in the national press. I described some of this in a statement Prof. Leiter put up on 26 March 2014:
From late 2013, Oxford proceeded with a prosecution, involving failures of due process and proportionality, despite the support I received from my College and several members of the Faculty. The prosecution ignored my evidence, detailed email documentation, a police incident note concerning an assault against me, application records, and eleven witness statements, covering the period November 2008 up to the present. As of mid April 2014, I am terminated from Oxford. The reasons stated amount to this: that I told a student to stay away from me and then responded to her refusal to do so; that I pointed out to a witness at Oxford her harassment of me while it was happening; and that I complained to Oxford of false allegations being made against me.
Subsequently, I have now been reinstated, after an external appeal. It would have been better had others not forced all this into the public domain. This was not of my choosing.

Regarding the false allegations in general, the Senior Coroner for Oxfordshire wrote to me, in a 5 June 2014 letter, that the contents of the statements by the two witnesses are "not to be regarded as findings of fact by the Coroner", but rather "the opinions of the witnesses", which "could only have been based on what the witnesses were told by Miss Coursier". In response to the time-travel allegation against me, endorsed by Oxford philosopher Dr. Paula Boddington (and, apparently, a "witness"), the Senior Coroner wrote:
"I note that some press reports suggest that you followed Miss Coursier to Oxford but I know from the documents that I have seen that in fact you came to Oxford first." 
The facts themselves are documented and there are multiple eyewitnesses. It was Ms. Coursier who had harassed, stalked and assaulted me in the past. Infatuated, she later followed me and my family to Oxford in 2012, after I had helped her considerably years before, something she acknowledged over several years -- but which she later tried to hide, while making truly bizarre claims about me. In Oxford, she was treated at the Warneford psychiatric hospital. She stalked my wife in Oxford, for several months.

As her relationship with her boyfriend Mr. Fardell was deteriorating, after an abortion she described as having "murdered her child" and other events, it was Ms. Coursier who was repeatedly making unwelcome contact with me in Oxford, including stalking me at my College in May 2013, not the other way round. She was doing this in the expectation that I would somehow come to her rescue again, now that she was in so much emotional trouble again (which I didn't know about).

I did make efforts in Oxford on her behalf to protect her welfare; a friend of mine notified the Faculty of Philosophy of the possible risk, but this was ignored. As the Senior Coroner noted in his letter, "I have seen evidence that you were in contact with the Police afterwards and did raise concerns about Miss Coursier’s mental wellbeing". No such efforts to protect her welfare were made by others around her. I tried to alert the police on 5 June 2013. In London, on 10 June 2013, her boyfriend, Mr. Fardell, ended their relationship and she returned to Oxford and took her own life.

Thursday, 14 August 2014

L. A. Paul on transformative experience and decision theory I

I have never eaten Vegemite---should I try it?  I currently have no children---should I apply to adopt a child?  In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory.  Not according to L. A. Paul.  In her rich and fascinating new book, Transformative Experience, Paul issues two challenges to orthodox decision theory---they are based upon examples such as these.

(In this post and the next, I'd like to try out some ideas concerning Paul's challenges to orthodox decision theory.  The idea is that some of them will make it into my contribution to the Philosophy and Phenomenological Research book symposium on Transformative Experience.)

Worlds Without Domain

An article "Worlds Without Domain" arguing against the idea that possible worlds have domains. The abstract is: "A modal analogue to the "hole argument" in the foundations of spacetime is given against the conception of possible worlds having their own special domains".

Thursday, 24 July 2014

Mathematicians' intuitions - a survey

I'm passing this on from Mark Zelcer (CUNY):

A group of researchers in philosophy, psychology and mathematics are requesting the assistance of the mathematical community by participating in a survey about mathematicians' philosophical intuitions. The survey is here: http://goo.gl/Gu5S4E. It would really help them if many mathematicians participated. Thanks!

Tuesday, 15 July 2014

Abstract Structure

Draft of a paper, "Abstract Structure", cleverly called that because it aims to explicate the notion of "abstract structure", bringing together some things I mentioned a few times previously.

Friday, 11 July 2014

Interview at 3am magazine

Here is the shameless self-promotion moment of the day: the interview with me at 3am magazine is online. I mostly talk about the contents of my book Formal Languages in Logic, and so cover a number of topics that may be of interest to M-Phi readers: the history of mathematical and logical notation, 'math infatuation', history of logic in general, and some more. Comments are welcome!

Thursday, 10 July 2014

Methodology in the Philosophy of Logic and Language

This M-Phi post is an idea Catarina and I hatched, after a post Catarina did a couple of weeks back at NewAPPS, "Searle on formal methods in philosophy of language", commenting on a recent interview of John Searle, in which Searle remarks that
"what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight".
I commented a bit underneath Catarina's post, as this is one thing that interests me. I'm writing a more worked-out discussion. But because I tend to reject the terminology of "formal modelling" (note, British English spelling!), I have to formulate Searle's objection a bit differently. Going ahead a bit, his view is that:
the abstract study of languages as free-standing entities has replaced study of the psychology of actual speakers and hearers.
This is an interesting claim, impinging on the methodology of the philosophy of logic and language. I think the clue to seeing what the central issues are can be found in David Lewis's 1975 article, "Languages and Language" and in his earlier "General Semantics", 1970.

1. Searle

To begin, I explain problems (maybe idiosyncratic ones) I have with both of these words "formal" and "modelling".

1.a "formal"
By "formal", I normally mean simply "uninterpreted". So, for example, the uninterpreted first-order language $L_A$ of arithmetic is a formal language, and indeed a mathematical object. Mathematically speaking, it is a set $\mathcal{E}$ of expressions (finite strings from a vocabulary), with several distinguished operations (concatenation and substitution) and subsets (the set of terms, formulas, etc). But it has no interpretation at all. It is therefore formal. On the other hand, the interpreted language $(L_A, \mathbb{N})$ of arithmetic is not a "formal" language. It is an interpreted language, some of whose strings have referents and truth values! Suppose that $v$ is a valuation (a function from the variables of $L_A$ to the domain of $\mathbb{N}$), that $t$ is a term of this language and $\phi$ is a formula of this language. Then $t$ has a denotation $t^{\mathbb{N},v}$ and $\phi$ has a truth value $\mid \mid \phi \mid \mid_{\mathbb{N},v}$.

This distinction corresponds to what Catarina calls "de-semantification" in her article "The Different Ways in which Logic is (said to be) Formal" (History and Philosophy of Logic, 2011). My use of "formal" is always "uninterpreted". So, $L_A$ is a formal language, while $(L_A, \mathbb{N})$ is not a "formal" language, but is rather an interpreted language, whose intended interpretation is $\mathbb{N}$. (The intended interpretation of an interpreted language is built into the language by definition. There is no philosophical problem of what it means to talk about the intended interpretation of an interpreted language. It is no more conceptually complicated than talking about the distinguished order $<$ in a structure $(X,<)$.)
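To make the contrast concrete, here is a small worked instance (my example):
$\text{Let } t \text{ be the term } x + S(0) \text{, and let } v(x) = 2. \text{ Then } t^{\mathbb{N},v} = v(x) + 1 = 3.$
$\text{Let } \phi \text{ be } x + S(0) = S(S(S(0))). \text{ Then } \| \phi \|_{\mathbb{N},v} = \text{True}.$
Considered merely as expressions of the formal language $L_A$, the strings $t$ and $\phi$ have no values at all; the number $3$ and the truth value only appear once the interpretation $\mathbb{N}$ and the valuation $v$ are in play.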

1.b "modelling"
But my main problem is with this Americanism, "modelling", which I seem to notice all over the place. It seems to me that there is no "modelling" involved here, unless it is being used to involve a translation relation. For modelling itself, in physics, one might, for example, model the Earth as an oblate spheroid $\mathcal{S}$ embedded in $\mathbb{R}^3$. That is modelling. Or one might model a Starbucks coffee cup as a truncated cone embedded in $\mathbb{R}^3$. Etc. But, in the philosophy of logic and language, I don't think we are "modelling": languages are languages, are languages, are languages ... That is, languages are not "models" in the sense used by physicists and others -- for if they are "models", what are they models of?

A model $\mathcal{A} = (A, \dots)$ is a mathematical structure, with a domain $A$ and some bunch of defined functions and relations on the domain. One can probably make this precise for the case of an oblate spheroid or a truncated cone; this is part of modelling in science. But in the philosophy of logic and language, when describing or defining a language, we are not modelling.

But: I need to add that Catarina has rightly reminded me that some authors do often talk about logic and language in terms of "modelling" (now I should say "modeling" I suppose), and think of logic as being some sort of "model" of the "practice" of, e.g., the "working mathematician". A view like this has been expressed by John Burgess, Stewart Shapiro and Roy Cook. I am sceptical. What is a "practice"? It seems to be some kind of supra-human "normative pattern", concerning how "suitably qualified experts would reason", in certain "idealized circumstances". Personally, I find these notions obscure and unhelpful; and it all seems motivated by a crypto-naturalistic desire to remain in contact with "practice"; whereas, when I look, the "practice" is all over the place. When I work on a mathematics problem, the room ends up full of paper, and most of the squiggles are, in fact, wrong.

So, I don't think a putative logic is somehow to be thought of as "modelling" (or perhaps to be tested by comparing it with) some kind of "practice". For example, consider the inference,
$\forall x \phi \vdash \phi^x_t$
Is this meant to "model" a "practice"? If so, it must be something like this:
The practice wherein certain humans $h_1, \dots$ tend to "consider" a string $\forall x \phi$ and then "emit" a string $\phi^x_t$
And I don't believe there is such a "practice". This may all be a reflection of my instinctive rationalism and methodological individualism. If there are such "practices", then these are surely produced by our inner cognition. Otherwise, I have no idea what the scientifically plausible mechanism behind a "practice" is.

Noam Chomsky of course long ago distinguished performance and competence (and before him, Ferdinand de Saussure distinguished parole and langue), and has always insisted that generative grammars somehow correspond to competence. If what is meant by "practice" is competence, in something like the Chomskyan sense, then perhaps that is the way to proceed in this direction. But in the end, I suspect that brings one back to the question of what it means to "speak/cognize a language", which is discussed below.

1.c Über-language 
On the other hand, when Searle mentions modelling, it is likely that he has the following notion in mind:
A defined language $L$ models (part of) English.
In other words, the idea is that English is basic and $L$ is a "tool" used to "model" English. But is English basic? I am sceptical of this, because there is a good argument whose conclusion denies the existence of English. Rather, there is an uncountable infinity of languages; many tens of millions of them, $L_1, L_2, \dots, L_{1000,000}, \dots$, are mutually similar, albeit heterogeneous, idiolects, spoken by speakers, who succeed to a high degree in mutual communication. Not any of these $L_1, L_2, \dots, L_{1000,000}, \dots$ spoken by individual speakers is English. If one of these is English, then which one? The idiolect spoken by The Queen? Maybe the idiolect spoken by President Barack Obama? Michelle Obama? Maybe the idiolect spoken by the deceased Christopher Hitchens? Etc. The conclusion is that, strictly speaking, there is no such thing as English.

It seems the opposite is true: there is a heterogeneous speech community $C$ of speakers, whose members speak overlapping and similar idiolects, and these are to a high degree mutually interpretable. But there is no single "über-language" they all speak. By the same reasoning, one may deny altogether the existence of so-called "natural" languages. (Cf., methodological individualism in social sciences; also Chomsky's distinction between I-languages and E-languages.) There are no "natural" languages. There are languages; and there are speakers; and speakers speak a vast heterogeneous array of varying and overlapping languages, called idiolects.

1.d Methodology
Next Searle moves on to his central methodological point:
Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn’t happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight. … 
The point of disagreement here is again with the phrase "formal model", as the languages we study aren't formal models! The entities involved when we work in these areas are sometimes pairs of languages $L_1$ and  $L_2$ and the connection is not that $L_1$ is a "model" of $L_2$ but rather that "$L_1$ has certain translational relations with $L_2$". And translation is not "modelling". A translation is a function from the strings of $L_1$ to the strings of $L_2$ preserving certain properties. Searle illustrates his line of thinking by saying:
And this goes back to Russell’s Theory of Descriptions. … I think this was a fatal move to think that you’ve got to get these intuitive ideas mapped on to a calculus like, in this case, the predicate calculus, which has its own requirements. It is a disastrously inadequate conception of language.
But this seems to me an inadequate description of Russell's 1905 essay. Russell was studying the semantic properties of the string "the" in a certain language, English. (The talk of a "calculus" loads the deck in Searle's favour.) Russell does indeed translate between languages. For example, the string
(1) The king of France is bald
is translated to the string
(2) $\exists x(\text{king-of-Fr.}(x) \wedge \text{Bald}(x) \wedge \forall y(\text{king-of-Fr.}(y) \to y = x)).$
But this latter string (2) is not a "model", either of the first string (1), or of some underlying "psychological mechanism".
… That’s my main objection to contemporary philosophy: they’ve lost sight of the questions. It sounds ridiculous to say this because this was the objection that all the old fogeys made to us when I was a kid in Oxford and we were investigating language. But that is why I’m really out of sympathy. And I’m going to write a book on the philosophy of language in which I will say how I think it ought to be done, and how we really should try to stay very close to the psychological reality of what it is to actually talk about things.
Having got this far, we reach a quite serious problem. There is, currently, no scientific understanding of "the psychological reality of what it is to actually talk about things". A cognitive system $C$ may speak a language $L$. How this happens, though, is anyone's guess. No one knows how it can be that
Prof. Gowers uses the string "number" to refer to the abstract object $\mathbb{N}$.
Prof. Dutilh Novaes uses the string "Aristotle" to refer to Aristotle.
SK uses the string "casa" to refer to his home.
Mr. Salmond uses the string "the referendum" to refer to the future referendum on Scottish independence.
etc.
The problem here is that there is no causal connection between Prof. Gowers and $\mathbb{N}$! Similarly, a (currently) future referendum (18 Sept 2014) cannot causally influence Mr. Salmond's present (10 July 2014) mental states. So, it is quite a serious puzzle.

2. Lewis

Methodologically, on such issues -- that is, in the philosophy of logic and language -- the outlook I adhere to is the same as Lewis's, whose view echoes that of Russell, Carnap, Tarski, Montague and Kripke. Lewis draws a crucial distinction:
(A) Languages (a language is an "abstract semantic system whereby symbols are associated with aspects of the world").
(B) Language as a social-psychological phenomenon.
With Lewis, I think it's important not to confuse these. In an M-Phi post last year (March 2013), I quoted Lewis's summary from his "General Semantics" (1970):
My proposals will also not conform to the expectations of those who, in analyzing meaning, turn immediately to the psychology and sociology of language users: to intentions, sense-experience, and mental ideas, or to social rules, conventions, and regularities. I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics.
I will just call them (A) and (B). See also Lewis's "Languages and Language" (1975) for this distinction. Most work in what is called "formal semantics" is (A)-work. One defines a language $L$ and proves some results about it; or one defines two languages $L_1, L_2$ and proves results about how they're related. But this is (A)-work, not (B)-work.

3. (Syntactic-)Semantic Theory and Conservativeness

For example, suppose I decide that I am interested in the following language $\mathcal{L}$: this language $\mathcal{L}$ has strings $s_1, s_2$, and a meaning function $\mu_{\mathcal{L}}$ such that,
$\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
$\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
Then this is in a deep sense logically independent of (B)-things. And one can, in fact, prove this!

First, let $L_O$ be an "empirical language", containing no terms for syntactical entities or semantic properties and relations. $L_O$ may contain terms and predicates for rocks, atoms, people, mental states, verbal behaviour, etc. But no terms for syntactical entities or semantic relations.

Second, we extend this observation language $L_O$ by adding:
  • the unary predicate "$x$ is a string in $\mathcal{L}$" (here "$\mathcal{L}$" is not treated as a variable), 
  • the constants "$s_1$", "$s_2$", 
  • the unary function symbol "$\mu_{\mathcal{L}}(-)$", 
  • the constants "the proposition that Oxford is north of Cambridge" and "the proposition that Oxford is north of Birmingham". 
Third, consider the following six axioms of semantic theory $ST$ for $\mathcal{L}$:
(i) $s_1$ is a string in $\mathcal{L}$.
(ii) $s_2$ is a string in $\mathcal{L}$.
(iii) $s_1 \neq s_2$.
(iv) the only strings in $\mathcal{L}$ are $s_1$ and $s_2$.
(v) $\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
(vi) $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
Then, assuming the theory $O$ (formulated in the language $L_O$) is not too weak ($O$ must prove that there are at least two objects), for almost any choice of $O$ whatsoever,
$O+ST$ is a conservative extension of $O$.
To prove this, I consider any interpretation $\mathcal{I}$ for $L_O$, and I expand it to a model $\mathcal{I}^+ \models ST$. There are some minor technicalities, which I skirt over.
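Here is a minimal reconstruction of the skipped step (my sketch of the expansion):
$\text{Suppose } O \nvdash \phi \text{ for some } L_O\text{-sentence } \phi \text{, and take } \mathcal{I} \models O + \neg\phi \text{ with domain } D.$
$\text{Pick distinct } a, b \in D \text{ (possible, since } O \text{ proves there are at least two objects), and set}$
$s_1^{\mathcal{I}^+} = a, \quad s_2^{\mathcal{I}^+} = b, \quad (\text{"is a string in } \mathcal{L}\text{"})^{\mathcal{I}^+} = \{a, b\}, \quad \mu_{\mathcal{L}}^{\mathcal{I}^+} = \text{the identity function on } D,$
$\text{and interpret the two proposition-constants as } a \text{ and } b \text{ respectively.}$
One checks directly that $\mathcal{I}^+$ satisfies axioms (i)-(vi). Since the $L_O$-vocabulary is left untouched, $\mathcal{I}^+ \models \neg\phi$ as well; hence $O + ST \nvdash \phi$, and so any $L_O$-sentence provable in $O + ST$ was already provable in $O$.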

Consequently, the semantic theory $ST$ is neutral with respect to any observation claim: the semantic description of a language $\mathcal{L}$ is consistent with (almost) any observation claim. That is, the semantic description of a language $\mathcal{L}$ cannot be empirically tested, because it has no observable consequences.

(There are some further caveats. If the strings actually are physical objects, already referred to in $L_O$, then this result may not quite hold in the form stated. Cf., the guitar language.)

4. The Wittgensteinian View

Lewis's view can be contrasted with a Wittgensteinian view, which aims to identify $(A)$ and $(B)$ very closely. But, since this is a form of reductionism, there must be "bridge laws" connecting the (A)-things and the (B)-things. But what are they? They play a crucial methodological role. I come back to this below.

Catarina formulates the view like this:
I am largely in agreement with Searle both on what the ultimate goals of philosophy of language should be, and on the failure of much (though not all!) of the work currently done with formal methods to achieve this goal. Firstly, I agree that “any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers”. Language should not be seen as a freestanding entity, as a collection of structures to be investigated with no connection to the most basic fact about human languages, namely that they are used by humans, and an absolutely crucial component of human life. (I take this to be a general Wittgensteinian point, but one which can be endorsed even if one does not feel inclined to buy the whole Wittgenstein package.)
In short, I think this is a deep (but very constructive!) disagreement about ontology: what a language is.

On the Lewisian view, a language is, roughly, "a bunch of syntax and meaning functions"; and, in that sense, it is indeed a "free-standing entity".

(Analogously, the Lie group $SU(3)$ is a free-standing entity and can be studied independently of its connection to quantum particles called gluons (gluons are the "colour gauge field" of an $SU(3)$-gauge theory, which explains how quarks interact together). So, e.g., one can study Latin despite there being no speakers of the language; one can study infinitary languages, despite their having no speakers. One can study strings (e.g., proofs) of length $>2^{1000}$ despite their having no physical tokens. The contingent existence of one, or fewer, or more, speakers of a language $L$ has no bearing at all on the properties of $L$. Similarly, the contingent existence or non-existence of a set of physical objects of cardinality $2^{1000}$ has no bearing on the properties of $2^{1000}$. It makes no difference to the ontological status of numbers.)

Catarina continues by noting the usual way that workers in the (A)-field generally keep (A)-issues separate from (B)-issues:
I also agree that much of what is done under the banner of ‘formal semantics’ does not satisfy the requirement of sticking as closely as possible to the psychology of actual human speakers and hearers. In my four years working at the Institute for Logic, Language and Computation (ILLC) in Amsterdam, I’ve attended (and even chaired!) countless talks where speakers presented a sophisticated formal machinery to account for a particular feature of a given language, but the machinery was not intended in any way to be a description of the psychological phenomena underlying the relevant linguistic phenomena.
I agree - this is because when such a language $L$ is described, it is being considered as a free-standing entity, and so is not intended to be a "description". Catarina continues then:
It became one of my standard questions at such talks: “Do you intend your formal model to correspond to actual cognitive processes in language users?” More often than not, the answer was simply “No”, often accompanied by a puzzled look that basically meant “Why would I even want that?”. My general response to this kind of research is very much along the lines of what Searle says.
I think that the person working in the (A)-field sees that (A)-work and (B)-work are separate, and may not have any good idea about how they might even be related. Finally, Catarina turns to a positive note:
However, there is much work currently being done, broadly within the formal semantics tradition, that does not display this lack of connection with the ‘psychological reality’ of language users. Some of the people I could mention here are (full disclosure: these are all colleagues or former colleagues!) Petra Hendriks, Jakub Szymanik, Katrin Schulz, and surely many others. (Further pointers in comments are welcome.) In particular, many of these researchers combine formal methods with empirical methods, for example conducting experiments of different kinds to test the predictions of their theories. 
In this body of research, formalisms are used to formulate theories in a precise way, leading to the design of new experiments and the interpretation of results. Formal models are thus producing new insights into the nature of language use (pace Searle), which are then put to test empirically. 
The methodological issue comes alive precisely at this point.
How are (A)-issues related to (B)-issues? 
The logical point I argued for above was that a semantic theory $ST$ for a fixed well-defined language $L$ makes no empirical predictions, since the theory $ST$ is consistent with any empirical statement $\phi$. I.e., if $\phi$ is consistent, then $ST + \phi$ is consistent.

5. Cognizing a Language

On the other hand, there is a different empirical claim:
(C) a speaker $S$ speaks/cognizes $L$. 
This is not a claim about $L$ per se. It is a cognizing claim, about how the speaker $S$ and the language $L$ are related. This is something I have given some talks about, and written about a few times here before (e.g., "Cognizing a Language"), as well as in a paper, "There’s Glory for You!" (actually a dialogue, based on a different Lewis - Lewis Carroll) that appeared earlier this year. A cognizing claim like (C) might yield a prediction. Such a claim uses the predicate "$x$ speaks/cognizes $y$", which links together the agent and the language. But without this, there are no predictions.

The methodological point is then this: any such prediction from (C) can only be obtained by bridge laws, invoking this predicate linking the agent and language. But these bridge laws have not been stated at all. Such a bridge law might take the generic form:
Psycho-Semantic Bridge Law
If $S$ speaks $L$ and $L$ has property $P$, then $S$ will display (verbal) behaviour $B$.
Typically, such psycho-semantic laws are left implicit. But, in the end, to understand how the (A)-issues are connected to the (B)-issues, such putative laws need to be made explicit. Methodologically, then, I say that all of the interest lies in the bridge laws.
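To illustrate, here is a toy instance (mine, offered only for concreteness), using the two-string language $\mathcal{L}$ from section 3:
$\text{If } S \text{ speaks } \mathcal{L} \text{ and } S \text{ sincerely assents to } s_1 \text{, then } S \text{ believes that Oxford is north of Cambridge.}$
Even this simple candidate law does real work: it is what allows the purely abstract fact that $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$ to issue in a testable claim about the speaker $S$.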

6. Summary

So, that's it. I summarize the three main points:
1. Against Searle and with Lewis: languages are free-standing entities, with their own properties, and these properties aren't dependent on whether there are, or aren't, speakers of the language.
2. The semantic description of a language $L$ is empirically neutral (indeed, the properties of a language are in some sense modally intrinsic).
3. To connect together the properties of a language $L$ and the psychological states or verbal behaviour of an agent $S$ who "speaks/cognizes" $L$, one must introduce bridge laws. Usually they are assumed implicitly, but from the point of view of methodology, they need to be stated clearly.

7. Update: Addendum

I hadn't totally forgotten -- I sort of semi-forgot. But Catarina wrote about these topics before in several M-Phi posts, so I should include them too:
Logic and the External Target Phenomena (2 May 2011)
van Benthem and System Imprisonment (5 Sept 2011)
Book draft: Formal Languages in Logic (19 Sept 2011) 
(Probably some more, that I actually did forget...) And these raise many questions related to the methodological one here.

Tuesday, 24 June 2014

Sean Carroll: "Physicists should stop saying silly things about philosophy"

Readers probably saw this already, but I mention it anyhow. Physicist Sean Carroll has a 23 June 2014 post, "Physicists should stop saying silly things about philosophy", on his blog gently criticizing some recent anti-philosophy remarks by some well-known physicists, and trying to emphasize some of the ways physicists and philosophers of physics might interact constructively on foundational/conceptual issues. Interesting comments underneath too.

Saturday, 21 June 2014

Trends in Logic XIV, rough schedule

We now have a rough version of the conference schedule, including all the speakers and their titles. Here.

Rafal

Friday, 20 June 2014

Preferential logics, supraclassicality, and human reasoning

(Cross-posted at NewAPPS)

Some time ago, I wrote a blog post defending the idea that a particular family of non-monotonic logics, called preferential logics, offered the resources to explain a number of empirical findings about human reasoning, as experimentally established. (To be clear: I am here adopting a purely descriptive perspective and leaving thorny normative questions aside. Naturally, formal models of rationality also typically include normative claims about human cognition.)  

In particular, I claimed that preferential logics could explain what is known as the modus ponens-modus tollens asymmetry, i.e. the fact that in experiments, participants will readily reason following the modus ponens principle, but tend to ‘fail’ quite miserably with modus tollens reasoning – even though these are equivalent according to classical as well as many non-classical logics. I also defended (e.g. at a number of talks, including one at the Munich Center for Mathematical Philosophy, which is immortalized in video here and here) the claim that preferential logics could be applied to another well-known, robust psychological phenomenon, namely what is known as belief bias. Belief bias is the tendency that human reasoners seem to have to let the believability of a conclusion guide both their evaluation and production of arguments, rather than the validity of the argument as such.

Well, I am now officially taking most of it back (and mostly thanks to working on these issues with my student Herman Veluwenkamp).

Already at the Q&A of my talk at the MCMP, it became obvious that preferential logics would not work, at least not in a straightforward way, to explain the modus ponens-modus tollens asymmetry (in other words: Hannes Leitgeb tore this claim to pieces at Q&A, which luckily for me is not included in the video!). As it turns out, it is not even obvious how to conceptualize modus ponens and modus tollens in preferential logics, but in any case a big red flag is the fact that preferential logics are supraclassical, i.e. they validate all inferences validated by classical logic, and a few more (i.e. there are arguments that are valid according to preferential logics but not according to classical logic, but not the other way round). And so, since classical logic sanctions modus tollens, preferential logics will sanction at least something that looks very much like modus tollens. (But contraposition still fails.)
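To see why supraclassicality is immediate on a minimal-models semantics, here is my gloss, using the standard Shoham-style definition of preferential consequence rather than any particular system discussed above:
$\Gamma \mathrel{\mid\approx} \phi \quad \text{iff} \quad \phi \text{ holds in all } \prec\text{-minimal models of } \Gamma.$
Every $\prec$-minimal model of $\Gamma$ is in particular a model of $\Gamma$, so if $\Gamma \vDash \phi$ classically then $\Gamma \mathrel{\mid\approx} \phi$; the converse fails, since $\phi$ may hold in the minimal models only. Hence anything classically valid, modus tollens included, is preserved.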

In fact, I later discovered that this is only the tip of the iceberg: the supraclassicality of preferential logics (and other non-monotonic systems) becomes a real obstacle when it comes to explaining a very large and significant portion of experimental results on human reasoning. In effect, we can distinguish two main tendencies in these results:
  •       Overgeneration: participants endorse or produce arguments that are not valid according to classical logic.
  •       Undergeneration: participants fail to endorse or produce arguments that are valid according to classical logic.

For example, participants tend to endorse arguments that are not valid according to classical logic, but which have a highly believable conclusion (overgeneration). But they also tend to reject arguments that are valid according to classical logic, but which have a highly unbelievable conclusion (undergeneration). (Another example of undergeneration would be the tendency to ‘fail’ modus tollens-like arguments.) And yet, overgeneration and undergeneration related to (un)believability of the conclusion are arguably two phenomena stemming from the same source, so to speak: our tendency towards what I call ‘doxastic conservativeness’, or less pedantically, our aversion to changing our minds and revising our beliefs.

Now, if we want to explain both undergeneration and overgeneration within one and the same formal system, we seem to have a real problem with the logics available on the market. Logics that are strictly subclassical, i.e. which do not sanction some classically valid arguments but also do not sanction anything classically invalid (such as intuitionistic or relevant logics), will be unable to account for overgeneration. Logics that are strictly supraclassical, i.e. which sanction everything that classical logic sanctions and some more (such as preferential logics), will be unable to account for undergeneration. (To be fair, preferential logics do work quite well to account for overgeneration.)

So it seems that something quite radically different would be required, a system which both undergenerates and overgenerates with respect to classical logic. At this point, my best bet (and here, thanks again to my student Herman) is on some specific versions of belief revision theory, more specifically what is known as non-prioritized belief revision. The idea is that incoming new information does not automatically get added to one’s belief set; it may be rejected if it conflicts too much with prior beliefs (whereas the original AGM belief revision theory includes the postulate of Success, i.e. new information is always accepted). This is a powerful insight, and in my opinion precisely what goes on in the cases of belief bias-induced undergeneration: participants in fact do not really take the false premises as if they were true, which then leads them to reject the counterintuitive conclusions that do follow deductively from the premises offered. (See also this paper of mine which discusses the cognitive challenges with accepting premises ‘at face value’ for the purposes of reasoning.)
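In standard AGM notation (my addition: $K$ is the belief set, $\ast$ the revision operator), Success is the postulate:
$\phi \in K \ast \phi \quad \text{(the new input is always accepted)}$
Non-prioritized revision drops exactly this postulate: the agent may leave $K$ untouched, or incorporate only part of the new input, when $\phi$ conflicts too strongly with entrenched prior beliefs. That is the formal counterpart of participants refusing to take unbelievable premises at face value.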


In other words, what needs to be conceptualized when discussing human reasoning is not only how reasoners infer conclusions from prior belief, but also how reasoners accept new beliefs and revise (or not!) their prior beliefs. Now, the issue seems to be that logics, as they are typically understood (and not only classical logic), do not have the resources to conceptualize this crucial aspect of reasoning processes – a point already made almost 30 years ago by Gilbert Harman in Change in View. And thus (much as it pains me to say so, being a logically-trained person and all), it does look like we are better off adopting alternative general frameworks to analyze human reasoning and cognition, namely frameworks that are able to problematize what happens when new information arrives. (Belief revision is a possible candidate, as is Bayesian probabilistic theory.)