Announcements: Project Checkpoint due Tuesday, October 25th. Want to see: a three-page status report.
First year PhD students: apply for PLMW travel scholarships
Today's Speaker: Pritam C. Topic: Constructive Logic
Constructive logic: the right to say "I don't know". Constructivity, the "Science of How"
Brouwer's example: P = "There exists a consecutive sequence of 100 zeros somewhere in the decimal expansion of pi."
Is P or not P true? We can't possibly show not P, but no one has proven P. Someday we might prove P. Brouwer: it would be true at that time, not now.
How can the law of excluded middle not hold? I'm either eating or not eating. Sleeping or not sleeping etc.
But, every moment we are creating a world by our abstractions. Meaning is then given by our abstractions.
What is truth?
- Classical: something God knows (God is omniscient)
- Constructive: something that I know (I don't know everything); something that I have seen and have evidence for
Proposition vs. Judgement
- "2 = 2" is a proposition
- "2 = 2 is a proposition" is a judgement (knowledge what)
- "2 = 2 is true" is a judgement (knowledge how)
What gives us the right to make a judgement? We need to have evidence. To show that something is a proposition means that we know what counts as a proof of it. We must be able to show that something is a proposition before showing that it is true.
Hypothetical judgements:
- phi1 is true, phi2 is true, ..., phin is true |- phi is a proposition
- phi1 is true, phi2 is true, ..., phin is true |- phi is true
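For instance (a concrete sketch, in the notation above): from the hypotheses A is true and B is true, the conjunction rule lets us judge

  A is true, B is true |- A /\ B is true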
-
How do we show that a proposition is refutable? In particular, can we (and how can we) show that a proposition is refutable using the proof terms given in 12.2.2?
[SCW: here is a refutable statement: A /\ not A, i.e. A /\ (A -> 0). To prove that it is refutable, we need to show that it is false, i.e. derive (A /\ (A -> 0)) -> 0. Using the proof rules of 12.2.2, the derivation looks like this:
x : A /\ (A -> 0) |- x.r : A -> 0    x : A /\ (A -> 0) |- x.l : A
------------------------------------------------------------------
            x : A /\ (A -> 0) |- (x.r)(x.l) : 0
------------------------------------------------------------------
         |- \x. (x.r)(x.l) : (A /\ (A -> 0)) -> 0
]
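Rendered as a Haskell program (my own sketch, not from the chapter), with Void playing the role of the false proposition 0, the same refutation type checks directly:

    import Data.Void (Void)

    -- A proof of (A /\ (A -> 0)) -> 0: project out the function x.r
    -- and the value x.l, and apply one to the other.
    refute :: (a, a -> Void) -> Void
    refute x = snd x (fst x)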
-
There are logical systems in which /\, \/, and exists can be defined in terms of False, ->, and forall. Why are the /\, \/, and exists connectives retained in the constructive logic system instead of being removed to form a smaller system?
[SCW: My guess is that Bob is trying to build intuition about the Curry-Howard isomorphism. So he includes redundant connectives so that you have more examples of how propositions and types correspond. Also, this chapter comes immediately after the one on products and sums, so it makes sense to include them here.]
-
If we look at the proof system, all proofs are just built up from true-I. This makes proof terms isomorphic to their types (propositions); there is just one proof for every type.
Without Gentzen's principles, it is possible to prove this and to equip the system with the same proof dynamics, in which the proof terms, their inversions, and so on are definitionally equal (identical) to the ones we intend.
Can I say that things only get more interesting when there are other proof terms introduced by additional axioms, so that we really can have different proofs for the same type? Or are we really only interested in having a single true-I to represent all proofs?
[SCW: this is actually incorrect. The proposition A -> A -> A has two different proofs: \x.\y. x and \x.\y. y. Furthermore, any proof involving sums needs to choose which side of the sum to prove. These proofs are not equivalent to true-I, even with Gentzen's rules. ]
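To make this concrete, here is a small Haskell sketch (my illustration, assuming the usual reading of propositions as types): both lambda terms inhabit the same type, and a proof of a disjunction must commit to a side.

    -- Two genuinely different proofs of A -> A -> A.
    proof1 :: a -> a -> a
    proof1 x _ = x

    proof2 :: a -> a -> a
    proof2 _ y = y

    -- Proving A -> (A \/ B) requires choosing the left injection.
    inLeft :: a -> Either a b
    inLeft = Left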
-
Can we talk a bit about reversibility of proof? (12.3) It seems like this implies that the computation system derived from the logic herein is reversible, but that seems strange. What's going on here? Doesn't information get lost in beta-reduction (namely, what the form of the original redex was)? Conversely, what would it take to make a lambda calculus reversible in the sense that Bob seems to be getting at here?
[SCW: reversibility of proof doesn't mean that the computation is always reversible. It just marks the situations where it sometimes is reversible. For example, the term \x. p(x) contains a beta-reduction, but no information is lost here --- the beta-reduction is trivial in some sense.
I think in this section, Bob is trying to motivate a second class of equivalence rules (called eta-equivalence rules) that are not directly defined by computation. It would be useful to work out what the reversibility rule for sums means, for example. ]
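As a sketch of what that rule might look like (my guess, not quoted from the book): the eta rule for sums identifies a case analysis that immediately rebuilds each branch with the scrutinee itself; the two sides are equal even though no computation step relates them. In Haskell:

    -- etaSum e is definitionally equal to e under the eta rule for sums,
    -- although neither side beta-reduces to the other.
    etaSum :: Either a b -> Either a b
    etaSum e = case e of
      Left x  -> Left x
      Right y -> Right y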
-
I am wondering why the book introduces proof dynamics in this chapter. It doesn't seem that we would be running the proofs; moreover, proof dynamics is introduced using definitional equations, which don't seem to show us how to run the proofs either (it looks like it's talking about proof equivalence?). What is the purpose of this?
[SCW: see below]
-
After reading chapter 12, I feel confused about the representation of proof dynamics. First, why do we use definitional equations? Does that imply the relation is bidirectional?
[SCW: There is a fine point that I think Bob wants to make. On one hand, "meaning" is determined by computation, even for these logics. On the other hand, often the most important thing that we care about is not what value a program will produce, but more generally, what other programs (including values) it is equal to. Computation must be directed, but it still generates a symmetric equational system. That said, I don't know why Bob chooses to focus on equational dynamics here.]
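To illustrate (my own example, in the usual proof-term notation): computation is the directed beta rule

    (\x. M) N |--> [N/x] M

while definitional equality is its reflexive, symmetric, transitive, congruence closure. So both (\x. x) 2 == 2 and 2 == (\x. x) 2 hold as equations, even though only the first direction is a computation step.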
Moreover, in the disjunction rule, why should we have those premises?
[SCW: Which disjunction rule? What premises?]
Moreover, languages E and T have safety properties. Is there an analogous property for constructive logic?
[SCW: we could define a safety property for this logic, but since we are talking about logic and not programming languages, it would not be very useful. Our method of proving type safety is weak --- it doesn't also prove that evaluation terminates. In this setting, it corresponds to showing that proof normalization is safe, but nothing more. What we care about in logic is "consistency" --- the fact that we cannot prove false things. We need a stronger proof technique to show this result though.]
Another thing: in a programming language, we type check by applying static rules. In constructive logic, a type is a proposition and a proof is a value of that proposition. Does that mean that checking the correctness of a proof is a process of type checking?
[SCW: Yes!]
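As a concrete (hypothetical) illustration: handing the following term to GHC is checking a proof of ((A -> B) /\ A) -> B, and the type checker accepts it exactly when the term really is such a proof.

    -- Modus ponens; type checking this definition is proof checking.
    modusPonens :: (a -> b, a) -> b
    modusPonens (f, x) = f x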
Last but not least, it would be great if you could talk about your ideas on the relation between computation and logical proof (including similarities and differences).
[SCW: ???]
-
Is there any way to use the propositions as types principle to extract a small-step operational semantics, rather than the equational semantics presented in the chapter? Ideally we would like to be able to say something about how the language evaluates.
[SCW: Yes, we could define such a relation. See above though. ]
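As a sketch of what such a dynamics could look like (my own illustration, not the book's), here is a tiny call-by-name stepper in Haskell for the implication/conjunction fragment of the proof-term language:

    -- Proof terms for implication and conjunction.
    data Term
      = Var String
      | Lam String Term   -- \x. M : proof of an implication
      | App Term Term     -- application (modus ponens)
      | Pair Term Term    -- <M, N> : proof of a conjunction
      | Fst Term          -- M.l
      | Snd Term          -- M.r
      deriving Show

    -- Naive substitution; adequate for terms with distinct bound names.
    subst :: String -> Term -> Term -> Term
    subst x s t = case t of
      Var y | y == x     -> s
            | otherwise  -> Var y
      Lam y b | y == x    -> Lam y b
              | otherwise -> Lam y (subst x s b)
      App m n  -> App (subst x s m) (subst x s n)
      Pair m n -> Pair (subst x s m) (subst x s n)
      Fst m    -> Fst (subst x s m)
      Snd m    -> Snd (subst x s m)

    -- One directed step; Nothing means no rule applies.
    step :: Term -> Maybe Term
    step (App (Lam x b) n) = Just (subst x n b)  -- beta
    step (App m n)         = (`App` n) <$> step m
    step (Fst (Pair m _))  = Just m              -- left projection
    step (Fst m)           = Fst <$> step m
    step (Snd (Pair _ n))  = Just n              -- right projection
    step (Snd m)           = Snd <$> step m
    step _                 = Nothing

Each directed step here is one of the chapter's definitional equations read left to right; for example, step (App (Lam "x" (Var "x")) (Pair (Var "a") (Var "b"))) returns Just (Pair (Var "a") (Var "b")).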