Monday, 22 September 2014

Winter School on Paradoxes and Dilemmas -- Groningen

On January 26th-27th 2015, the Faculty of Philosophy of the University of Groningen will host a short Winter School aimed at advanced undergraduate students and early-stage graduate students. The theme of the winter school is Paradoxes and Dilemmas, and it will consist of 6 tutorials where the topic will be discussed from different viewpoints: theoretical philosophy, practical philosophy, and the history of philosophy. As such, the Winter School may be of interest to at least some of the M-Phi readers; for further details, check the site of the Winter School.

Lectures: 

  • Catarina Dutilh Novaes: ‘Paradoxes: at the heart of philosophy’
  • Barteld Kooi: ‘Epistemic paradoxes: is the concept of knowledge incoherent?’
  • Han Thomas Adriaenssen: ‘Divine foreknowledge versus free will? Theology and modality in the Middle Ages’
  • Sander de Boer: 'So what were these Aristotelian forms supposed to do again? Late Medieval and Early Modern metaphysics'
  • Frank Hindriks: ‘Trolleyology: The Philosophy and Psychology of a Moral Dilemma’
  • Marc Pauly: ‘Philosophical Dilemmas in Public Policy: Ontology meets Ethics'

Scholarships: 
The Faculty is offering up to three EUR 300 scholarships for the best students who enroll in the winter school and express serious interest in later applying for the Research Master's program. Moreover, participants who are subsequently accepted into the Research Master's program for the year 2015/2016 will have their registration fee for the winter school reimbursed.
To apply for the scholarships, send a short CV (max 2 pages) and a letter (max 1 page) stating your interest in the Faculty of Philosophy in Groningen, and in the Research Master's program in particular, to winterschoolphilosophy 'at' rug.nl with 'Application for winter school scholarship' as the subject. Deadline to apply for the scholarships: December 1st 2014. Preference will be given to members of underrepresented groups in philosophy (women, people of color, persons with disabilities, etc.).

Registration:
To register, send an email with your name, affiliation and status (undergraduate, graduate) to winterschoolphilosophy 'at' rug.nl with 'Registration for winter school' as subject, no later than December 15th 2014. As the number of spots is limited, you are encouraged to register early.

Friday, 19 September 2014

Review of T. Parsons' Articulating Medieval Logic

By Catarina Dutilh Novaes
(Cross-posted at NewAPPS)

I was asked to write a review of Terry Parsons' Articulating Medieval Logic for the Australasian Journal of Philosophy. This is what I've come up with so far. Comments welcome!
=======================

Scholars working on (Latin) medieval logic can be viewed as populating a spectrum. At one extremity are those who adopt a purely historical and textual approach to the material: they are the ones who produce the invaluable modern editions of important texts, without which the field would to a great extent simply not exist; they also typically seek to place the doctrines presented in the texts in a broader historical context. At the other extremity are those who study the medieval theories first and foremost from the point of view of modern philosophical and logical concerns; various techniques of formalization are then employed to ‘translate’ the medieval theories into something more intelligible to the modern non-historian philosopher. Between the two extremes one encounters a variety of positions. (Notice that one and the same scholar can at times wear the historian’s hat, and at other times the systematic philosopher’s hat.) For those adopting one of the many intermediary positions, life can be hard at times: when trying to combine the two paradigms, these scholars sometimes end up displeasing everyone (speaking from personal experience).

Terence Parsons’ Articulating Medieval Logic occupies one of these intermediate positions, but very close to the second extremity; indeed, it represents the daring attempt to combine the author’s expertise in natural language semantics, linguistics, and modern philosophy with his interest in medieval logical theories (which arose in particular from his decade-long collaboration with Calvin Normore, to whom the book is dedicated). For scholars of Latin medieval logic, the fact that such a distinguished expert in contemporary philosophy and linguistics became interested in these medieval theories only confirms what we’ve known all along: medieval logical theories have intrinsic systematic interest; they are not only curious museum pieces.

Though Parsons is not the first to employ modern logical techniques to analyze medieval theories, his approach is quite distinctive (one might even say idiosyncratic). It seems fair to say that nobody has ever before attempted to achieve what he wants to achieve with this book. A passage from the book’s Introduction is quite revealing with respect to its goals:

Tuesday, 16 September 2014

What makes a mathematical proof beautiful?

(Cross-posted at NewAPPS)

In December, I will be presenting at the Aesthetics in Mathematics conference in Norwich. The title of my talk is Beauty, explanation, and persuasion in mathematical proofs, and to be honest, at this point there is not much more to it than the title… However, the idea I will try to develop is that many, perhaps even most, of the features we associate with beauty in mathematical proofs can be subsumed under the ideal of explanatory persuasion, which I take to be the essence of mathematical proofs.

As some readers may recall, in my current research I adopt a dialogical perspective to raise a functionalist question: what is the point of mathematical proofs? Why do we bother formulating mathematical proofs at all? The general hypothesis is that most of the defining criteria for what counts as a mathematical proof – and in particular, a good mathematical proof – can be explained in terms of the (presumed) ultimate function of a mathematical proof, namely that of convincing an interlocutor that the conclusion of the proof is true (given the truth of the premises) by showing why that is the case. (See also this recent edited volume on argumentation in mathematics.) Thus, a proof seeks not only to force the interlocutor to grant the conclusion if she has granted the premises; it seeks also to reveal to the interlocutor something about the mathematical concepts involved, so that she also apprehends what makes the conclusion true – its causes, as it were. On this conception of proof, beauty may well play an important role, but its role will be subsumed under the ideal of explanatory persuasion.

There is a small but very interesting literature on the aesthetics of mathematical proof – see for example this 2005 paper by my former colleague James McAllister, and a more recent paper on Kant’s conception of beauty in mathematics applied to proof by Angela Breitenbach, one of the organizers of the meeting in Norwich. (If readers have additional literature suggestions, please share them in comments.) But perhaps the locus classicus for the discussion of what makes a mathematical proof beautiful is G. H. Hardy’s splendid A Mathematician’s Apology (a text that is itself very beautiful!). In it, Hardy identifies and discusses a number of features that should be present for a proof to be considered beautiful: seriousness, generality, depth, unexpectedness, inevitability, and economy. And so, one way for me to test my dialogical hypothesis would be to see whether it is possible to provide a dialogical rationale for each of these features that Hardy discusses. My prediction is that most of them can receive compelling dialogical explanations, but that there will be a residue of properties related to beauty in a mathematical proof that cannot be reduced to the ideal of explanatory persuasion. (What this residue will be, I do not yet know.)

As I mentioned, this is still very much work in progress, but for now I would like to sketch what a dialogical account of beauty in a mathematical demonstration might look like for a specific feature. Now, a fascinating desideratum for a mathematical proof, which has been discussed in detail recently by Detlefsen and Arana, is the ideal of purity:
Throughout history, mathematicians have expressed preference for solutions to problems that avoid introducing concepts that are in one sense or another “foreign” or “alien” to the problem under investigation. (Detlefsen & Arana 2011, 1)
A mathematical proof is said to be pure if it does not rely on concepts that are not present in the statement of the conclusion of the proof (the theorem). Many famous mathematical proofs are not pure in this sense, such as Wiles’ proof of Fermat’s Last Theorem, which utilizes incredibly sophisticated and complex mathematical machinery to prove a theorem whose statement can be understood with standard high-school mathematics. (The impurity of Wiles’ proof is one of the motivations often given to seek alternative proofs of FLT, as described in this guest post by Colin McLarty.) Now, I take it to be fairly obvious that purity concerns can be readily understood as aesthetic concerns, in particular as related to simplicity (which is one of the features widely associated with beauty).

What would a dialogical account of the purity desideratum look like? Going back to the idea that the function of a proof is that of eliciting persuasion by means of understanding in an interlocutor (hence the stress on the explanatory dimension), it is clear that, in general, the less complex the mathematical machinery of a proof, the less cognitive investment it demands of the interlocutor being persuaded. Moreover, if it relies on simpler machinery, the proof will most likely reach a larger audience, i.e. be persuasive for a larger number of people (those possessing mastery of the concepts used in it). In particular, a proof that only uses concepts already contained in the formulation of the theorem will be, at least in theory, comprehensible to anyone who can understand the statement of the conclusion. Thus, a pure proof maximizes its penetration among potential audiences, as it only excludes those who do not even grasp the statement of the theorem in the first place. In other words, purity sets the lower bound of cognitive sophistication required from an interlocutor precisely at the right place. (Naturally, I can also be convinced of the truth of a theorem even if I do not understand the proof myself, i.e. by relying on the expertise of the mathematical community as a whole.)

As I said, these are only tentative ideas at this point, so I look forward to feedback from readers. In particular, I would like to hear from practicing mathematicians their answers to the question in the title: what makes a mathematical proof beautiful? Do you agree with Hardy's list? (I could definitely use some input so as to render my investigation more in sync with actual practices!)

Wednesday, 10 September 2014

Apologies

On behalf of the M-Phi contributors, I want to sincerely apologize to our readers for the misguided and inappropriate post that was online at M-Phi for four days (now taken down, as well as all other posts referencing the Oxford events). The moderation structure of the blog was such that none of us could do anything to take it down, except for pleading with the author to do so.

[UPDATE (Sep. 12th): It has been brought to my attention that we owe an apology not only for the most recent post, but also for at least some of the content of the other posts pertaining to the Oxford events, which had been posted a few months ago (now also deleted). So, for those too, our apologies. We are also looking into additional ways to make amends with the people negatively affected.]

The structure and moderation of the blog will change completely now; Jeffrey Ketland will no longer be a contributor (of his own initiative). The exact details still need to be discussed, but we hope to come back with something more concrete within a week or so.

Again, our apologies, to our readers and to those who were negatively affected by the post.

(And thank you Jeff, for all your otherwise very good work here at M-Phi over the years.)

UPDATE: the opinions of those who felt negatively affected by the posts are most welcome in comments below (or in private to me by email).

Tuesday, 9 September 2014

A break

This is a short note just to say that I will not be contributing posts to M-Phi for the time being.

UPDATE: In view of recent events here at M-Phi, some important changes will take place regarding the management of the blog. We will talk more concretely about them in the near future, but for now let me say that we will do our utmost to restore the readers' trust in the blog, which may have been affected by recent developments.

Wednesday, 27 August 2014

CFP: SoTFoM II 'Competing Foundations?', 12-13 January 2015, London.

The focus of this conference is on different approaches to the foundations
of mathematics. The interaction between set-theoretic and category-theoretic
foundations has had significant philosophical impact, and represents a shift
in attitudes towards the philosophy of mathematics. This conference will
bring together leading scholars in these areas to showcase contemporary
philosophical research on different approaches to the foundations of
mathematics. To accomplish this, the conference has the following general
aims and objectives. First, to bring to a wider philosophical audience the
different approaches that one can take to the foundations of mathematics.
Second, to elucidate the pressing issues of meaning and truth that turn on
these different approaches. And third, to address philosophical questions
concerning the need for a foundation of mathematics, and whether or not
either of these approaches can provide the necessary foundation.

Date and Venue: 12-13 January 2015 - Senate House, University of London.

Confirmed Speakers: Sy David Friedman (Kurt Gödel Research Center, Vienna),
Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen).

Call for Papers: We welcome submissions from scholars (in particular, young
scholars, i.e. early career researchers or post-graduate students) on any
area of the foundations of mathematics (broadly construed). Particularly
desired are submissions that address the role of and compare different
foundational approaches. Applicants should prepare an extended abstract
(maximum 1,500 words) for blind review, and send it to sotfom [at] gmail
[dot] com, with subject `SOTFOM II Submission'.

Submission Deadline: 15 October 2014

Notification of Acceptance: Early November 2014

Scientific Committee: Philip Welch (University of Bristol), Sy-David
Friedman (Kurt Gödel Research Center), Ian Rumfitt (University of
Birmingham), John Wigglesworth (London School of Economics), Claudio Ternullo
(Kurt Gödel Research Center), Neil Barton (Birkbeck College), Chris Scambler
(Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni
(Università Vita-Salute S. Raffaele), Giorgio Venturi (Université de Paris
VII, “Denis Diderot” - Scuola Normale Superiore)

Organisers: Sy-David Friedman (Kurt Gödel Research Center), John
Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel
Research Center), Neil Barton (Birkbeck College), Carolin Antos-Kuby (Kurt
Gödel Research Center)

Conference Website: sotfom [dot] wordpress [dot] com

Further Inquiries: please contact
Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at)
Neil Barton (bartonna [at] gmail [dot] com)
Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at)
John Wigglesworth (jmwigglesworth [at] gmail [dot] com)

The conference is generously supported by the Mind Association, the Institute of Philosophy, and Birkbeck College.

Monday, 25 August 2014

What’s the big deal with consistency?

(Cross-posted at NewAPPS)

It is no news to anyone that the concept of consistency is a hotly debated topic in philosophy of logic and epistemology (as well as elsewhere). Indeed, a number of philosophers throughout history have defended the view that consistency, in particular in the form of the principle of non-contradiction (PNC), is the most fundamental principle governing human rationality – so much so that rational debate about PNC itself wouldn’t even be possible, as famously stated by David Lewis. It is also the presumed privileged status of consistency that seems to motivate the philosophical obsession with paradoxes across time; to be caught entertaining inconsistent beliefs/concepts is really bad, so blocking the emergence of paradoxes is a top priority. Moreover, in classical as well as other logical systems, inconsistency entails triviality, and that of course amounts to complete disaster.

Since the advent of dialetheism, and in particular under the powerful assaults of karateka Graham Priest, PNC has been under pressure. Priest is right to point out that there are very few arguments in favor of the principle of non-contradiction in the history of philosophy, and many of them are in fact rather unconvincing. According to him, this holds in particular of Aristotle’s elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we disagree on its exact structure. At any rate, it is worth noticing that, unlike David Lewis, Aristotle did think it was possible to debate with the opponent of PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of non-contradiction and consistency are still widely viewed as cornerstones of the very concept of rationality.

However, in the spirit of my genealogical approach to philosophical issues, I believe that an important question to be asked is: What’s the big deal with consistency in the first place? What does it do for us? Why do we want consistency so badly to start with? When and why did we start thinking that consistency was a good norm to be had for rational discourse? And this of course takes me back to the Greeks, and in particular the Greeks before Aristotle.

Variations of PNC can be found stated in a few authors before Aristotle, Plato in particular, but also Gorgias (I owe these passages to Benoît Castelnerac; emphasis mine in both):

You have accused me in the indictment we have heard of two most contradictory things, wisdom and madness, things which cannot exist in the same man. When you claim that I am artful and clever and resourceful, you are accusing me of wisdom, while when you claim that I betrayed Greece, you accused me of madness. For it is madness to attempt actions which are impossible, disadvantageous and disgraceful, the results of which would be such as to harm one’s friends, benefit one’s enemies and render one’s own life contemptible and precarious. And yet how can one have confidence in a man who in the course of the same speech to the same audience makes the most contradictory assertions about the same subjects? (Gorgias, Defence of Palamedes)
You cannot be believed, Meletus, even, I think, by yourself. The man appears to me, men of Athens, highly insolent and uncontrolled. He seems to have made his deposition out of insolence, violence and youthful zeal. He is like one who composed a riddle and is trying it out: “Will the wise Socrates realize that I am jesting and contradicting myself, or shall I deceive him and others?” I think he contradicts himself in the affidavit, as if he said: “Socrates is guilty of not believing in gods but believing in gods”, and surely that is the part of a jester. Examine with me, gentlemen, how he appears to contradict himself, and you, Meletus, answer us. (Plato, Apology 26e- 27b)
What is particularly important for my purposes here is that these are dialectical contexts of debate; indeed, it seems that originally, PNC was to a great extent a dialectical principle. To lure the opponent into granting contradictory claims, and to expose him/her as having done so, is the very goal of dialectical disputations; granting contradictory claims would entail the opponent being discredited as a credible interlocutor. In this sense, consistency would be a derived norm for discourse: the ultimate goal of discourse is persuasion; now, to be able to persuade one must be credible; a person who makes inconsistent claims is not credible, and thus not persuasive.

As argued in a recent draft paper by my post-doc Matthew Duncombe, this general principle applies for Plato also to discursive thinking, not only to situations of debate with actual opponents. Indeed, Plato’s model of discursive thinking (dianoia) is that of an internal dialogue with an imaginary opponent, as it were (as found in the Theaetetus and the Philebus). Here too, consistency will be related to persuasion: the agent herself will not be persuaded to hold beliefs which turn out to be contradictory, but realizing that they are contradictory may well come about only as a result of the process of discursive thinking (much as in the case of the actual refutations performed by Socrates on his opponents).

Now, as also argued by Matt in his paper, the status of consistency and PNC for Aristotle is very different: PNC is grounded ontologically, and then generalizes to doxastic as well as dialogical/discursive cases (although one of the main arguments offered by Aristotle in favor of PNC is essentially dialectical in nature, namely the so-called elenctic argument). But because Aristotle postulates the ontological version of PNC -- a thing cannot both be F and not be F at the same time, in the same way -- it is difficult to see how a fruitful debate can be had between him and the modern dialetheists, who maintain precisely that such a thing is after all possible in reality.

Instead, I find Plato’s motivation for adopting something like PNC much more plausible, and philosophically interesting in that it provides an answer to the genealogical questions I stated earlier on. What consistency does for us is to serve the ultimate goal of persuasion: an inconsistent discourse is prima facie implausible (or less plausible). And so, the idea that the importance of consistency is subsumed under another, more primitive dialogical norm (the norm of persuasion) somehow deflates the degree of importance typically attributed to consistency in the philosophical literature, as a norm an sich.

Besides dialetheists, other contemporary philosophical theories might benefit from the short ‘genealogy of consistency’ I’ve just outlined. I am now thinking in particular of work done in formal epistemology by e.g. Branden Fitelson and Kenny Easwaran (e.g. here), among others, contrasting the significance of consistency vs. accuracy. It seems to me that much of what is going on there is also a deflation of the significance of consistency as a norm for rational thought; their conclusion is thus quite similar to that of the historically inspired analysis I’ve presented here, namely: consistency is overrated.


Servus, New York! Invitation to the MCMP Workshop "Bridges" (2 and 3 Sept, 2014)

MCMP Workshop "Bridges 2014"

New York City, 2 and 3 Sept, 2014

www.lmu.de/bridges2014


The Munich Center for Mathematical Philosophy (MCMP) cordially invites you to "Bridges 2014" in the German House, New York City, on 2 and 3 September, 2014. The two-day trans-continental meeting in mathematical philosophy will focus on inter-theoretical relations, thereby connecting the form and content of this philosophical exchange. The workshop will be accompanied by an open-to-the-public evening event with Stephan Hartmann and Branden Fitelson on 2 September, 2014 (6:30 pm).

Speakers

Lucas Champollion (NYU)
David Chalmers (NYU)
Branden Fitelson (Rutgers)
Alvin I. Goldman (Rutgers)
Stephan Hartmann (MCMP/LMU)
Hannes Leitgeb (MCMP/LMU)
Kristina Liefke (MCMP/LMU)
Sebastian Lutz (MCMP/LMU)
Tim Maudlin (NYU)
Thomas Meier (MCMP/LMU)
Roland Poellinger (MCMP/LMU)
Michael Strevens (NYU)

Idea and Motivation

We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are any, what should inter-theoretical relations look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Our spectrum of interest includes: reduction and emergence, mechanistic links between causal theories, belief vs. probability, mind and brain, relations between formal and informal accounts in the special sciences, and cognition and the outer world.

Program and Registration

Due to security regulations at the German House, registration is required (separately for the workshop and the evening event). Details on how to register and the full schedule can be found on the official website:

www.lmu.de/bridges2014

Sunday, 24 August 2014

Extending a theory with the theory of mereological fusions

"Arithmetic with fusions" (draft) is a joint paper with my graduate student Thomas Schindler (MCMP).  The abstract is:
In this article, the relationship between second-order comprehension and unrestricted mereological fusion (over atoms) is clarified. An extension $\mathsf{PAF}$ of Peano arithmetic with a new binary mereological notion of "fusion", and a scheme of unrestricted fusion, is introduced. It is shown that $\mathsf{PAF}$ interprets full second-order arithmetic, $Z_2$.
Roughly this shows:
First-order arithmetic + mereology = second-order arithmetic.
This implies that adding the theory of mereological fusions can be a very powerful, non-conservative addition to a theory, perhaps casting doubt on the philosophical idea that once you have some objects, having their fusion as well is somehow "redundant". The additional fusions can in some cases behave like additional "infinite objects"; positing their existence allows one to prove more about the original objects.
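
To make the shape of such a principle concrete, here is one schematic way an unrestricted fusion scheme over atoms is often rendered; the notation (an atom predicate $\mathrm{At}$ and a parthood relation $\sqsubseteq$) is illustrative only, and not necessarily the paper's own formulation:

```latex
% Unrestricted fusion over atoms (schematic; notation illustrative, not the
% paper's): whenever some atom satisfies \varphi, there is an object y whose
% atomic parts are exactly the atoms satisfying \varphi.
\[
\exists x\,\varphi(x) \;\rightarrow\;
\exists y\,\forall x\,\bigl(\mathrm{At}(x) \rightarrow
  (x \sqsubseteq y \leftrightarrow \varphi(x))\bigr)
\]
```

Read schematically, with one instance for each formula $\varphi(x)$ of the extended language; it is this unrestricted, formula-indexed character that lets fusions of atoms play a role analogous to sets of numbers, which is presumably what underwrites the interpretation of $Z_2$.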

Friday, 22 August 2014

L. A. Paul on transformative experience and decision theory II

In the first part of this post, I considered the challenge to decision theory from what L. A. Paul calls epistemically transformative experiences.  In this post, I'd like to turn to another challenge to standard decision theory that Paul considers.  This is the challenge from what she calls personally transformative experiences.  Unlike an epistemically transformative experience, a personally transformative experience need not teach you anything new, but it does change you in another way that is relevant to decision theory---it leads you to change your utility function.  To see why this is a problem for standard decision theory, consider my presentation of naive, non-causal, non-evidential decision theory in the previous post.

Tuesday, 19 August 2014

Is the human referee becoming expendable in mathematics?

(Cross-posted at NewAPPS)

Mathematics has been much in the news recently, especially with the announcement of the latest four Fields medalists (I am particularly pleased to see the first woman, and the first Latin-American, receiving the highest recognition in mathematics). But there was another remarkable recent event in the world of mathematics: Thomas Hales has announced the completion of the formalization of his proof of the Kepler conjecture. The conjecture: “what is the best way to stack a collection of spherical objects, such as a display of oranges for sale? In 1611 Johannes Kepler suggested that a pyramid arrangement was the most efficient, but couldn't prove it.” (New Scientist)

The official announcement goes as follows:
We are pleased to announce the completion of the Flyspeck project, which has constructed a formal proof of the Kepler conjecture. The Kepler conjecture asserts that no packing of congruent balls in Euclidean 3-space has density greater than the face-centered cubic packing. It is the oldest problem in discrete geometry. The proof of the Kepler conjecture was first obtained by Ferguson and Hales in 1998. The proof relies on about 300 pages of text and on a large number of computer calculations.
The formalization project covers both the text portion of the proof and the computer calculations. The blueprint for the project appears in the book "Dense Sphere Packings," published by Cambridge University Press. The formal proof takes the same general approach as the original proof, with modifications in the geometric partition of space that have been suggested by Marchal.
So far, nothing very new, philosophically speaking. Computer-assisted proofs (both at the level of formulation and at the level of verification) have attracted the interest of a number of philosophers in recent times (here’s a recent paper by John Symons and Jack Horner, and here is an older paper by Mark McEvoy, which I commented on at a conference back in 2005; there are many other papers on this topic by philosophers).  More generally, the question of the extent to which mathematical reasoning can be purely ‘mechanical’ remains a lively topic of philosophical discussion (here’s a 1994 paper by Wilfried Sieg on this topic that I like a lot). Moreover, this particular proof of the Kepler conjecture does not add anything substantially new (philosophically) to the practice of computer-verifying proofs (while being quite a feat mathematically!). It is rather something Hales said to the New Scientist that caught my attention (against the background of the 4 years and 12 referees it took to human-check the proof for errors): "This technology cuts the mathematical referees out of the verification process," says Hales. "Their opinion about the correctness of the proof no longer matters."

Now, I’m with Hales that ‘software intensive mathematics’ (to borrow Symons and Horner’s terminology) is of great help to offload some of the more tedious parts of mathematical practice such as proof-checking. But there are a number of reasons that suggest to me that Hales’ ‘optimism’ is a bit excessive, in particular with respect to the allegedly expendable role of the human referee (broadly construed) in mathematical practice, even if only for the verification process.

Indeed, and as I’ve been arguing in a number of posts, proof-checking is a major aspect of mathematical practice, basically corresponding to the role I attribute to the fictitious character ‘opponent’ in my dialogical conception of proof (see here). The main point is the issue of epistemic trust and objectivity: to be valid, a proof has to be ‘replicable’ by anyone with the relevant level of competence. This is why probabilistic proofs are still thought to be philosophically suspicious (as argued for example by Kenny Easwaran in terms of the notion of ‘transferability’). And so, automated proof-checking will most likely never replace completely human proof-checking, if nothing else because the automated proof-checkers themselves must be kept ‘in check’ (lame pun, I know). (Though I am happy to grant that the role of opponent can be at least partially played by computers, and that our degree of certainty in the correctness of Hales’ proof has been increased by its computer-verification.)

Moreover, mathematics remains a human activity, and mathematical proofs essentially involve epistemic and pragmatic notions such as explanation and persuasion, which cannot be taken over by purely automated proof-checking. (Which does not mean that the burden of verification cannot be at least partially transferred to automata!) In effect, a good proof is not only one that shows that the conclusion is true, but also why the conclusion is true, and this explanatory component is not obviously captured by automata. In other words, a proof may be deemed correct by computer-checking, and yet fail to be persuasive in the sense of having true explanatory value. (Recall that Smale’s proof of the possibility of sphere eversion was viewed with a certain amount of suspicion until models of actual processes of eversion were discovered.)

Finally, turning an ‘ordinary’ mathematical proof* into something that can be computer-checked is itself a highly theoretical, non-trivial, and essentially informal endeavor that must itself receive a ‘seal of approval’ from the mathematical community. While mathematicians hardly ever disagree on whether a given proof is or is not valid once it is properly scrutinized, there can be (and has been, as once vividly described to me by Jesse Alama) substantive disagreement on whether a given formalized version of a proof is indeed an adequate formalization of that particular proof. (This is also related to thorny issues in the metaphysics of proofs, e.g. criteria of individuation for proofs, which I will leave aside for now.) 

A particular informal proof can only be said to have been computer-verified if the formal counterpart in question really is (deemed to be) sufficiently similar to the original proof. (Again, the formalized proof may have the same conclusion as the original informal proof, in which case we may agree that the theorem they both purport to prove is true, but this is no guarantee that the original informal proof itself is valid. There are many invalid proofs of true statements.) Now, evaluating whether a particular informal proof is accurately rendered in a given formalized form is not a task that can be delegated to a computer (precisely because one of the relata of the comparison is itself an informal construct), and for this task the human referee remains indispensable.

And so, I conclude that, pace Hales, the human mathematical referee is not going to be completely cut out of the verification process any time soon. Nevertheless, it is a welcome (though not entirely new) development that computers can increasingly share the burden of some of the more tedious aspects of mathematical practice: it’s a matter of teamwork rather than the total replacement of a particular approach to proof-verification by another (which may well be what Hales meant in the first place).

-----------------------------
* In some research programs, mathematical proofs are written directly in computer-verifiable form, as in the newly created program of homotopy type theory.

Sunday, 17 August 2014

Bohemian gravity

Tim Blais, a McGill University physics student, made this really great a cappella version of "Bohemian Rhapsody", called "Bohemian Gravity", with physics lyrics explaining superstring theory, like "Manifolds must be Kähler!" (lyrics here).


Another article on this.

Thursday, 14 August 2014

L. A. Paul on transformative experience and decision theory I

I have never eaten Vegemite---should I try it?  I currently have no children---should I apply to adopt a child?  In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory.  Not according to L. A. Paul.  In her rich and fascinating new book, Transformative Experience, Paul issues two challenges to orthodox decision theory---they are based upon examples such as these.

(In this post and the next, I'd like to try out some ideas concerning Paul's challenges to orthodox decision theory. The idea is that some of them will make it into my contribution to the Philosophy and Phenomenological Research book symposium on Transformative Experience.)

Worlds Without Domain

An article, "Worlds Without Domain", arguing against the idea that possible worlds have domains. The abstract is: "A modal analogue to the 'hole argument' in the foundations of spacetime is given against the conception of possible worlds having their own special domains".

Thursday, 24 July 2014

Mathematicians' intuitions - a survey

I'm passing this on from Mark Zelcer (CUNY):

A group of researchers in philosophy, psychology and mathematics are requesting the assistance of the mathematical community by participating in a survey about mathematicians' philosophical intuitions. The survey is here: http://goo.gl/Gu5S4E. It would really help them if many mathematicians participated. Thanks!

Tuesday, 15 July 2014

Abstract Structure

Draft of a paper, "Abstract Structure", cleverly called that because it aims to explicate the notion of "abstract structure", bringing together some things I mentioned a few times previously.

Friday, 11 July 2014

Interview at 3am magazine

Here is the shameless self-promotion moment of the day: the interview with me at 3am magazine is online. I mostly talk about the contents of my book Formal Languages in Logic, and so cover a number of topics that may be of interest to M-Phi readers: the history of mathematical and logical notation, 'math infatuation', history of logic in general, and some more. Comments are welcome!

Thursday, 10 July 2014

Methodology in the Philosophy of Logic and Language

This M-Phi post is an idea Catarina and I hatched, after a post Catarina did a couple of weeks back at NewAPPS, "Searle on formal methods in philosophy of language", commenting on a recent interview with John Searle, in which Searle remarks that
"what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight".
I commented a bit underneath Catarina's post, as this is one thing that interests me. I'm writing a more worked-out discussion. But because I tend to reject the terminology of "formal modelling" (note, British English spelling!), I have to formulate Searle's objection a bit differently. Going ahead a bit, his view is that:
the abstract study of languages as free-standing entities has replaced study of the psychology of actual speakers and hearers.
This is an interesting claim, impinging on the methodology of the philosophy of logic and language. I think the clue to seeing what the central issues are can be found in David Lewis's 1975 article, "Languages and Language" and in his earlier "General Semantics", 1970.

1. Searle

To begin, I explain problems (maybe idiosyncratic ones) I have with both of these words "formal" and "modelling".

1.a "formal"
By "formal", I normally mean simply "uninterpreted". So, for example, the uninterpreted first-order language $L_A$ of arithmetic is a formal language, and indeed a mathematical object. Mathematically speaking, it is a set $\mathcal{E}$ of expressions (finite strings from a vocabulary), with several distinguished operations (concatenation and substitution) and subsets (the set of terms, formulas, etc). But it has no interpretation at all. It is therefore formal. On the other hand, the interpreted language $(L_A, \mathbb{N})$ of arithmetic is not a "formal" language. It is an interpreted language, some of whose strings have referents and truth values! Suppose that $v$ is a valuation (a function from the variables of $L_A$ to the domain of $\mathbb{N}$), that $t$ is a term of this language and $\phi$ is a formula of this language. Then $t$ has a denotation $t^{\mathbb{N},v}$ and $\phi$ has a truth value $\|\phi\|_{\mathbb{N},v}$.

This distinction corresponds to what Catarina calls "de-semantification" in her article "The Different Ways in which Logic is (said to be) Formal" (History and Philosophy of Logic, 2011). My use of "formal" is always "uninterpreted". So, $L_A$ is a formal language, while $(L_A, \mathbb{N})$ is not a "formal" language, but is rather an interpreted language, whose intended interpretation is $\mathbb{N}$. (The intended interpretation of an interpreted language is built-into the language by definition. There is no philosophical problem of what it means to talk about the intended interpretation of an interpreted language. It is no more conceptually complicated than talking about the distinguished order $<$ in a structure $(X,<)$.)
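The distinction can be made concrete in a few lines of code. Here is a minimal sketch of my own (not from the post; the tuple representation of terms is an illustrative assumption): the "formal" language is bare syntax trees with no meaning, and pairing them with a denotation function over $\mathbb{N}$ is what yields the interpreted language $(L_A, \mathbb{N})$.

```python
# A sketch: an uninterpreted ("formal") language is just syntax;
# interpretation is an extra piece of data laid on top of it.
# Terms: a variable name 'x', zero '0', successor ('S', t), sum ('+', t1, t2).

def denote(term, valuation):
    """Denotation t^{N,v} of a term in the standard model N of arithmetic."""
    if isinstance(term, str):
        if term == '0':
            return 0
        return valuation[term]          # a variable, looked up in v
    op, *args = term
    if op == 'S':
        return denote(args[0], valuation) + 1
    if op == '+':
        return denote(args[0], valuation) + denote(args[1], valuation)
    raise ValueError(f"unknown operator {op!r}")

def truth_value(formula, valuation):
    """Truth value ||phi||_{N,v} for an atomic equation ('=', t1, t2)."""
    op, t1, t2 = formula
    assert op == '='
    return denote(t1, valuation) == denote(t2, valuation)

# The bare tree ('+', 'x', ('S', '0')) is "formal": a mathematical object
# with no referents. Adding `denote` (i.e., the model N) interprets it.
term = ('+', 'x', ('S', '0'))           # the term x + S(0)
print(denote(term, {'x': 2}))           # → 3
print(truth_value(('=', term, ('S', ('S', ('S', '0')))), {'x': 2}))  # → True
```

The point of the sketch is just that the syntax trees can be defined, compared, and manipulated without `denote` ever being mentioned, which is the sense in which the formal language is a free-standing object.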

1.b "modelling"
But my main problem is with this Americanism, "modelling", which I seem to notice all over the place. It seems to me that there is no "modelling" involved here, unless it is being used to mean a translation relation. For modelling itself, in physics, one might, for example, model the Earth as an oblate spheroid $\mathcal{S}$ embedded in $\mathbb{R}^3$. That is modelling. Or one might model a Starbucks coffee cup as a truncated cone embedded in $\mathbb{R}^3$. Etc. But, in the philosophy of logic and language, I don't think we are "modelling": languages are languages, are languages, are languages ... That is, languages are not "models" in the sense used by physicists and others -- for if they are "models", what are they models of?

A model $\mathcal{A} = (A, \dots)$ is a mathematical structure, with a domain $A$ and some bunch of defined functions and relations on the domain. One can probably make this precise for the case of an oblate spheroid or a truncated cone; this is part of modelling in science. But in the philosophy of logic and language, when describing or defining a language, we are not modelling.

But: I need to add that Catarina has rightly reminded me that some authors do often talk about logic and language in terms of "modelling" (now I should say "modeling" I suppose), and think of logic as being some sort of "model" of the "practice" of, e.g., the "working mathematician". A view like this has been expressed by John Burgess, Stewart Shapiro and Roy Cook. I am sceptical. What is a "practice"? It seems to be some kind of supra-human "normative pattern", concerning how "suitably qualified experts would reason", in certain "idealized circumstances". Personally, I find these notions obscure and unhelpful; and it all seems motivated by a crypto-naturalistic desire to remain in contact with "practice"; whereas, when I look, the "practice" is all over the place. When I work on a mathematics problem, the room ends up full of paper, and most of the squiggles are, in fact, wrong.

So, I don't think a putative logic is somehow to be thought of as "modelling" (or perhaps to be tested by comparing it with) some kind of "practice". For example, consider the inference,
$\forall x \phi \vdash \phi^x_t$
Is this meant to "model" a "practice"? If so, it must be something like this:
The practice wherein certain humans $h_1, \dots$ tend to "consider" a string $\forall x \phi$ and then "emit" a string $\phi^x_t$
And I don't believe there is such a "practice". This may all be a reflection of my instinctive rationalism and methodological individualism. If there are such "practices", then these are surely produced by our inner cognition. Otherwise, I have no idea what the scientifically plausible mechanism behind a "practice" is.

Noam Chomsky of course long ago distinguished performance and competence (and before him, Ferdinand de Saussure distinguished parole and langue), and has always insisted that generative grammars somehow correspond to competence. If what is meant by "practice" is competence, in something like the Chomskyan sense, then perhaps that is the way to proceed in this direction. But in the end, I suspect that brings one back to the question of what it means to "speak/cognize a language", which is discussed below.

1.c Über-language 
On the other hand, when Searle mentions modelling, it is likely that he has the following notion in mind:
A defined language $L$ models (part of) English.
In other words, the idea is that English is basic and $L$ is a "tool" used to "model" English. But is English basic? I am sceptical of this, because there is a good argument whose conclusion denies the existence of English. Rather, there is an uncountable infinity of languages; many tens of millions of them, $L_1, L_2, \dots, L_{1000,000}, \dots$, are mutually similar, albeit heterogeneous, idiolects, spoken by speakers, who succeed to a high degree in mutual communication. None of these $L_1, L_2, \dots, L_{1000,000}, \dots$ spoken by individual speakers is English. If one of these is English, then which one? The idiolect spoken by The Queen? Maybe the idiolect spoken by President Barack Obama? Michelle Obama? Maybe the idiolect spoken by the deceased Christopher Hitchens? Etc. The conclusion is that, strictly speaking, there is no such thing as English.

It seems the opposite is true: there is a heterogeneous speech community $C$ of speakers, whose members speak overlapping and similar idiolects, and these are to a high degree mutually interpretable. But there is no single "über-language" they all speak. By the same reasoning, one may deny altogether the existence of so-called "natural" languages. (Cf., methodological individualism in social sciences; also Chomsky's distinction between I-languages and E-languages.) There are no "natural" languages. There are languages; and there are speakers; and speakers speak a vast heterogeneous array of varying and overlapping languages, called idiolects.

1.d Methodology
Next Searle moves on to his central methodological point:
Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn’t happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight. … 
The point of disagreement here is again with the phrase "formal model", as the languages we study aren't formal models! The entities involved when we work in these areas are sometimes pairs of languages $L_1$ and $L_2$, and the connection is not that $L_1$ is a "model" of $L_2$ but rather that "$L_1$ has certain translational relations with $L_2$". And translation is not "modelling". A translation is a function from the strings of $L_1$ to the strings of $L_2$ preserving certain properties. Searle illustrates his line of thinking by saying:
And this goes back to Russell’s Theory of Descriptions. … I think this was a fatal move to think that you’ve got to get these intuitive ideas mapped on to a calculus like, in this case, the predicate calculus, which has its own requirements. It is a disastrously inadequate conception of language.
But this seems to me an inadequate description of Russell's 1905 essay. Russell was studying the semantic properties of the string "the" in a certain language, English. (The talk of a "calculus" loads the deck in Searle's favour.) Russell does indeed translate between languages. For example, the string
(1) The king of France is bald
is translated to the string
(2) $\exists x(\text{king-of-Fr.}(x) \wedge \text{Bald}(x) \wedge \forall y(\text{king-of-Fr.}(y) \to y = x)).$
But this latter string (2) is not a "model", either of the first string (1), or of some underlying "psychological mechanism".
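The claim that a translation is a string-to-string function can itself be put in code. The following toy sketch is my own illustration, not Russell's procedure: the function `russell_translate` and its rigid "The F is G" input format are assumptions made purely for the example.

```python
# A toy translation function: it maps strings of one language (a fragment
# of English of the form "The <F> is <G>") to strings of another (a
# quantified formula), preserving the Russellian truth conditions.

def russell_translate(sentence):
    """Map 'The <F> is <G>' to its Russellian expansion, as a string."""
    words = sentence.rstrip('.').split()
    assert words[0].lower() == 'the' and 'is' in words, "expected 'The F is G'"
    i = words.index('is')               # naive split; breaks if 'is' occurs in F
    F = '-'.join(words[1:i])            # the description, e.g. 'king-of-France'
    G = '-'.join(words[i + 1:])         # the predicate, e.g. 'bald'
    return f"∃x({F}(x) ∧ {G}(x) ∧ ∀y({F}(y) → y = x))"

print(russell_translate("The king of France is bald"))
# ∃x(king-of-France(x) ∧ bald(x) ∧ ∀y(king-of-France(y) → y = x))
```

Nothing in this function is a "model" of anything: it is simply a mapping between two sets of strings, which is the sense of "translation" at issue.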
… That’s my main objection to contemporary philosophy: they’ve lost sight of the questions. It sounds ridiculous to say this because this was the objection that all the old fogeys made to us when I was a kid in Oxford and we were investigating language. But that is why I’m really out of sympathy. And I’m going to write a book on the philosophy of language in which I will say how I think it ought to be done, and how we really should try to stay very close to the psychological reality of what it is to actually talk about things.
Having got this far, we reach a quite serious problem. There is, currently, no scientific understanding of "the psychological reality of what it is to actually talk about things". A cognitive system $C$ may speak a language $L$. How this happens, though, is anyone's guess. No one knows how it can be that
Prof. Gowers uses the string "number" to refer to the abstract object $\mathbb{N}$.
Prof. Dutilh Novaes uses the string "Aristotle" to refer to Aristotle.
SK uses the string "casa" to refer to his home.
Mr. Salmond uses the string "the referendum" to refer to the future referendum on Scottish independence.
etc.
The problem here is that there is no causal connection between Prof. Gowers and $\mathbb{N}$! Similarly, a (currently) future referendum (18 Sept 2014) cannot causally influence Mr. Salmond's present (10 July 2014) mental states. So, it is quite a serious puzzle.

2. Lewis

Methodologically, on such issues -- that is, in the philosophy of logic and language -- the outlook I adhere to is the same as Lewis's, whose view echoes that of Russell, Carnap, Tarski, Montague and Kripke. Lewis draws a crucial distinction:
(A) Languages (a language is an "abstract semantic system whereby symbols are associated with aspects of the world").
(B) Language as a social-psychological phenomenon.
With Lewis, I think it's important not to confuse these. In an M-Phi post last year (March 2013), I quoted Lewis's summary from his "General Semantics" (1970):
My proposals will also not conform to the expectations of those who, in analyzing meaning, turn immediately to the psychology and sociology of language users: to intentions, sense-experience, and mental ideas, or to social rules, conventions, and regularities. I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics.
I will just call them (A) and (B). See also Lewis's "Languages and Language" (1975) for this distinction. Most work in what is called "formal semantics" is (A)-work. One defines a language $L$ and proves some results about it; or one defines two languages $L_1, L_2$ and proves results about how they're related. But this is (A)-work, not (B)-work.

3. (Syntactic-)Semantic Theory and Conservativeness

For example, suppose I am interested in the following language $\mathcal{L}$: it has exactly two strings $s_1, s_2$, and a meaning function $\mu_{\mathcal{L}}$ such that,
$\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
$\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
Then this is in a deep sense logically independent of (B)-things. And one can, in fact, prove this!

First, let $L_O$ be an "empirical language", containing no terms for syntactical entities or semantic properties and relations. $L_O$ may contain terms and predicates for rocks, atoms, people, mental states, verbal behaviour, etc. But no terms for syntactical entities or semantic relations.

Second, we extend this observation language $L_O$ by adding:
  • the unary predicate "$x$ is a string in $\mathcal{L}$" (here "$\mathcal{L}$" is not treated as a variable), 
  • the constants "$s_1$", "$s_2$", 
  • the unary function symbol "$\mu_{\mathcal{L}}(-)$", 
  • the constants "the proposition that Oxford is north of Cambridge" and "the proposition that Oxford is north of Birmingham". 
Third, consider the following six axioms of semantic theory $ST$ for $\mathcal{L}$:
(i) $s_1$ is a string in $\mathcal{L}$.
(ii) $s_2$ is a string in $\mathcal{L}$.
(iii) $s_1 \neq s_2$.
(iv) the only strings in $\mathcal{L}$ are $s_1$ and $s_2$.
(v) $\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
(vi) $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
Then, assuming the empirical theory $O$ (formulated in $L_O$) is not too weak ($O$ must prove that there are at least two objects), for almost any choice of $O$ whatsoever,
$O+ST$ is a conservative extension of $O$.
To prove this, I consider any interpretation $\mathcal{I}$ for $L_O$, and I expand it to a model $\mathcal{I}^+ \models ST$. There are some minor technicalities, which I skirt over.

Consequently, the semantic theory $ST$ is neutral with respect to any observation claim: the semantic description of a language $\mathcal{L}$ is consistent with (almost) any observation claim. That is, the semantic description of a language $\mathcal{L}$ cannot be empirically tested, because it has no observable consequences.

(There are some further caveats. If the strings actually are physical objects, already referred to in $L_O$, then this result may not quite hold in the form stated. Cf., the guitar language.)
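The toy language $\mathcal{L}$ of this section can be written down as a free-standing object in a few lines. This sketch is my own gloss, not part of the proof above; its point is that the description of $\mathcal{L}$ mentions only strings and the meaning function, never speakers, behaviour, or observations.

```python
# The language L from the example above: a set of strings plus a meaning
# function mu. No speaker or empirical fact appears anywhere in the data.

L_strings = {'s1', 's2'}

mu = {
    's1': 'the proposition that Oxford is north of Cambridge',
    's2': 'the proposition that Oxford is north of Birmingham',
}

# Axioms (i)-(vi) of the semantic theory ST, checked directly:
assert 's1' in L_strings and 's2' in L_strings           # (i), (ii)
assert 's1' != 's2'                                       # (iii)
assert L_strings == {'s1', 's2'}                          # (iv)
assert mu['s2'] == 'the proposition that Oxford is north of Birmingham'  # (v)
assert mu['s1'] == 'the proposition that Oxford is north of Cambridge'   # (vi)

# Since nothing here constrains the non-linguistic world, ST is consistent
# with any empirical statement: the conservativeness point of section 3.
print("ST holds of L")
```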

4. The Wittgensteinian View

Lewis's view can be contrasted with a Wittgensteinian view, which aims to identify $(A)$ and $(B)$ very closely. But, since this is a form of reductionism, there must be "bridge laws" connecting the (A)-things and the (B)-things. But what are they? They play a crucial methodological role. I come back to this below.

Catarina formulates the view like this:
I am largely in agreement with Searle both on what the ultimate goals of philosophy of language should be, and on the failure of much (though not all!) of the work currently done with formal methods to achieve this goal. Firstly, I agree that “any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers”. Language should not be seen as a freestanding entity, as a collection of structures to be investigated with no connection to the most basic fact about human languages, namely that they are used by humans, and an absolutely crucial component of human life. (I take this to be a general Wittgensteinian point, but one which can be endorsed even if one does not feel inclined to buy the whole Wittgenstein package.)
In short, I think this is a deep (but very constructive!) disagreement about ontology: what a language is.

On the Lewisian view, a language is, roughly, "a bunch of syntax and meaning functions"; and, in that sense, it is indeed a "free-standing entity".

(Analogously, the Lie group $SU(3)$ is a free-standing entity and can be studied independently of its connection to quantum particles called gluons (gluons are the "colour gauge field" of an $SU(3)$-gauge theory, which explains how quarks interact together). So, e.g., one can study Latin despite there being no speakers of the language; one can study infinitary languages, despite their having no speakers. One can study strings (e.g., proofs) of length $>2^{1000}$ despite their having no physical tokens. The contingent existence of one, or fewer, or more, speakers of a language $L$ has no bearing at all on the properties of $L$. Similarly, the contingent existence or non-existence of a set of physical objects of cardinality $2^{1000}$ has no bearing on the properties of $2^{1000}$. It makes no difference to the ontological status of numbers.)

Catarina continues by noting the usual way that workers in the (A)-field generally keep (A)-issues separate from (B)-issues:
I also agree that much of what is done under the banner of ‘formal semantics’ does not satisfy the requirement of sticking as closely as possible to the psychology of actual human speakers and hearers. In my four years working at the Institute for Logic, Language and Computation (ILLC) in Amsterdam, I’ve attended (and even chaired!) countless talks where speakers presented a sophisticated formal machinery to account for a particular feature of a given language, but the machinery was not intended in any way to be a description of the psychological phenomena underlying the relevant linguistic phenomena.
I agree - this is because when such a language $L$ is described, it is being considered as a free-standing entity, and so is not intended to be a "description". Catarina continues then:
It became one of my standard questions at such talks: “Do you intend your formal model to correspond to actual cognitive processes in language users?” More often than not, the answer was simply “No”, often accompanied by a puzzled look that basically meant “Why would I even want that?”. My general response to this kind of research is very much along the lines of what Searle says.
I think that the person working in the (A)-field sees that (A)-work and (B)-work are separate, and may not have any good idea about how they might even be related. Finally, Catarina turns to a positive note:
However, there is much work currently being done, broadly within the formal semantics tradition, that does not display this lack of connection with the ‘psychological reality’ of language users. Some of the people I could mention here are (full disclosure: these are all colleagues or former colleagues!) Petra Hendriks, Jakub Szymanik, Katrin Schulz, and surely many others. (Further pointers in comments are welcome.) In particular, many of these researchers combine formal methods with empirical methods, for example conducting experiments of different kinds to test the predictions of their theories. 
In this body of research, formalisms are used to formulate theories in a precise way, leading to the design of new experiments and the interpretation of results. Formal models are thus producing new insights into the nature of language use (pace Searle), which are then put to test empirically. 
The methodological issue comes alive precisely at this point.
How are (A)-issues related to (B)-issues? 
The logical point I argued for above was that a semantic theory $ST$ for a fixed well-defined language $L$ makes no empirical predictions, since the theory $ST$ is consistent with any empirical statement $\phi$. I.e., if $\phi$ is consistent, then $ST + \phi$ is consistent.

5. Cognizing a Language

On the other hand, there is a different empirical claim:
(C) a speaker $S$ speaks/cognizes $L$. 
This is not a claim about $L$ per se. It is a cognizing claim about how the speaker $S$ and $L$ are related. This is something I gave some talks about before, and also wrote about a few times here (e.g., "Cognizing a Language"), and in a paper, "There's Glory for You!" (actually a dialogue, based on a different Lewis, namely Lewis Carroll) that appeared earlier this year. A cognizing claim like (C) might yield a prediction. Such a claim uses the predicate "$x$ speaks/cognizes $y$", which links together the agent and the language. But without this, there are no predictions.

The methodological point is then this: any such prediction from (C) can only be obtained by bridge laws, invoking this predicate linking the agent and language. But these bridge laws have not been stated at all. Such a bridge law might take the generic form:
Psycho-Semantic Bridge Law
If $S$ speaks $L$ and $L$ has property P, then $S$ will display (verbal) behaviour B.
Typically, such psycho-semantic laws are left implicit. But, in the end, to understand how the (A)-issues are connected to the (B)-issues, such putative laws need to be made explicit. Methodologically, then, I say that all of the interest lies in the bridge laws.

6. Summary

So, that's it. I summarize the three main points:
1. Against Searle and with Lewis: languages are free-standing entities, with their own properties, and these properties aren't dependent on whether there are, or aren't, speakers of the language.
2. The semantic description of a language $L$ is empirically neutral (indeed, the properties of a language are in some sense modally intrinsic).
3. To connect together the properties of a language $L$ and the psychological states or verbal behaviour of an agent $S$ who "speaks/cognizes" $L$, one must introduce bridge laws. Usually they are assumed implicitly, but from the point of view of methodology, they need to be stated clearly.
7. Update: Addendum 

I hadn't totally forgotten -- I sort of semi-forgot. But Catarina wrote about these topics before in several M-Phi posts, so I should include them too:
Logic and the External Target Phenomena (2 May 2011)
van Benthem and System Imprisonment (5 Sept 2011)
Book draft: Formal Languages in Logic (19 Sept 2011) 
(Probably some more, that I actually did forget...) And these raise many questions related to the methodological one here.

Tuesday, 24 June 2014

Sean Carroll: "Physicists should stop saying silly things about philosophy"

Readers probably saw this already, but I mention it anyhow. Physicist Sean Carroll has a 23 June 2014 post, "Physicists should stop saying silly things about philosophy", on his blog gently criticizing some recent anti-philosophy remarks by some well-known physicists, and trying to emphasize some of the ways physicists and philosophers of physics might interact constructively on foundational/conceptual issues. Interesting comments underneath too.

Saturday, 21 June 2014

Trends in Logic XIV, rough schedule

We now have a rough version of the conference schedule, including all the speakers and their titles. Here.

Rafal