Speaker: Richard Zhang
Topic: PCF
-
I am not clear about partial functions as approximations of a total function, or about the process of "improvement". What is the intuition behind it?
[SCW: Consider this "approximation" of the factorial function, where fact' stands for some everywhere-diverging function:
fact0 n = if n = 0 then 1 else (fact' n)
Pointwise, there is one input (n = 0) this function gets correct. Furthermore, on all of the other inputs, this function doesn't get anything wrong; it just fails to return anything. If we wanted to do better, we could improve this function by returning more correct answers, thus
fact1 n = if n = 0 then 1 else if n = 1 then 1 else (fact' n)
So fact1 is a better approximation of the factorial function than fact0. If we have the functional F(f) = \n. if n = 0 then 1 else n * f(n-1), then fact1 is an improvement of fact0 in the sense that fact1 = F(fact0).]
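As a concrete rendering of this chain of improvements, here is a sketch in Haskell (the names bottom, f, fact0, etc. are illustrative, not from the chapter):

bottom :: Integer -> Integer
bottom n = bottom n   -- the everywhere-diverging function: the worst approximation

f :: (Integer -> Integer) -> (Integer -> Integer)
f g n = if n == 0 then 1 else n * g (n - 1)

fact0, fact1, fact2 :: Integer -> Integer
fact0 = f bottom   -- correct on 0, diverges elsewhere
fact1 = f fact0    -- correct on 0 and 1
fact2 = f fact1    -- correct on 0, 1, and 2

fact :: Integer -> Integer
fact n = f fact n  -- the factorial itself: the least fixed point of f

Each step up the chain agrees with the previous approximation wherever it was defined, and is defined on strictly more inputs.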
-
I'd love to have a discussion about possible models for PCF (or any other language with general recursion, really). The only ones I'm aware of are the ones based on domain theory, which I find very "information"-oriented; would there be more structural ones? I know this is a broad discussion, but I'd love to see if others have knowledge/ideas about that. I think that a more structural explanation of recursion would definitely bring a lot of insight on the topic (for me, at least).
[SCW: I'm not sure how to answer this question. What sort of structure are you looking for? Perhaps you are thinking of models based on coinduction? i.e. partiality monad (http://www.cse.chalmers.se/~nad/publications/altenkirch-danielsson-types2016.pdf) Or on "bar" types in type theory? https://ecommons.cornell.edu/handle/1813/6778 ]
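For reference, a minimal sketch of the partiality (delay) monad mentioned in the first link, transcribed into Haskell (the names Delay, never, runFor are ours; in type theory Delay is taken coinductively, which is what makes never a legitimate inhabitant):

data Delay a = Now a | Later (Delay a)

never :: Delay a      -- the computation that never finishes
never = Later never

runFor :: Int -> Delay a -> Maybe a   -- run for at most n observable steps
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d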
-
How does one define the limit of a sequence of functions as Bob does in the chapter? Is this point-set topology all over again?
[SCW: The limit comes from a fixed point theorem (Kleene's fixed point theorem) that says that a continuous function f over a complete partial order has a least fixed point, namely the least upper bound of the chain f^i(bot) for i = 0, 1, 2, ....]
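Operationally, this least fixed point is what a general fixpoint combinator computes; a sketch in Haskell:

fix :: (a -> a) -> a
fix f = f (fix f)   -- denotationally: the least upper bound of bot, f(bot), f(f(bot)), ...

fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))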
-
What is the intuition behind the minimization operation? I am confused about the definition and about how to use it to prove Theorem 19.3.
[SCW: Theorem 19.3 shows the equivalence of PCF with the "partial recursive functions", i.e., the set of functions that can be defined using primitive recursion (System T) and a minimization operation.
For the (<=) direction, we have a partial recursive function and we need to show that we can express it in PCF. To do this, we need to show how to encode primitive recursion and minimization in PCF; see the sketch below.
For the (=>) direction, we have a PCF function and we need to show that it can be expressed using primitive recursion and minimization. To do this we define an evaluator for PCF.]
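A hedged sketch of the minimization operator in Haskell (the name minimize is ours): mu(p) is the least n on which p returns 0, and it diverges if no such n exists, which is exactly where partiality enters.

minimize :: (Integer -> Integer) -> Integer
minimize p = go 0
  where
    go n | p n == 0  = n
         | otherwise = go (n + 1)

-- example: minimize (\n -> if n * n >= 10 then 0 else 1) == 4

For the (=>) direction, the standard move (not spelled out in the chapter) is to write a primitive-recursive, step-indexed evaluator that runs a PCF term for at most k steps, and then use minimization to search for the least k at which evaluation succeeds.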
-
Could we go technical and give an actual proof sketch for 19.3? I didn't get enough time to think about it, especially about how to use minimization to define the evaluator for PCF. Not a fully detailed proof of course; just the main arguments would be great.
-
There is a discussion of eager and lazy dynamics, with the example fix x : nat is s(x) being a value under the lazy dynamics but not under the eager dynamics. Why on earth would someone say that s(e) is a value for any e, even if we did not have recursion? What is the interpretation of an expression being a value in a language with lazy dynamics? I thought values were not meant to step even once, let alone step infinitely many times when we have the fixpoint operator.
[SCW: Haskell defines values in this way. They are forms that do not step, even if we have s(looping_expression). They are useful in that you can test whether they are nonzero; you just can't take the predecessor.]
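A sketch of this in Haskell, which is lazy by default (the names Nat, omega, isZero are ours):

data Nat = Z | S Nat

omega :: Nat      -- fix x : nat is s(x): a value under the lazy dynamics
omega = S omega

isZero :: Nat -> Bool
isZero Z     = True
isZero (S _) = False

Here isZero omega returns False without ever touching the predecessor; any computation that recurses into the predecessor diverges.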
-
There is a further comment about why nat is a coinductive type with the lazy dynamics. Did we explain what it means for a solution to a type equation to be the least/greatest possible solution? From a set-theoretic point of view we have intersections and unions in some universe, but type theory is not set theory, is it? What makes one type lesser or greater than another?
-
Also, I would appreciate it if you could give more intuition about Section 19.5 during class.
-
In the partial language, we can write a program without proving that it terminates, while in the total language the proof of termination is encoded in the program. Does that mean that the program itself is a proof of termination?
[SCW: Sort of: the fact that we can define the program in the total language is the proof that the program terminates. But this termination proof depends on the proof that all programs in the total language terminate.]
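To make that concrete, here is a hedged sketch in Haskell of the System T recursor (recNat and factT are our names): in a genuinely total language, defining factorial through the recursor is what carries the termination argument, because the recursor only permits recursion on a structurally smaller argument. (Haskell itself is not total; the point is the shape of the definition.)

recNat :: a -> (Integer -> a -> a) -> Integer -> a
recNat z _ 0 = z
recNat z s n = s (n - 1) (recNat z s (n - 1))

factT :: Integer -> Integer
factT = recNat 1 (\p r -> (p + 1) * r)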
-
PCF is a programming language that allows partial functions, so it cannot be used as a programming logic; is that correct? Is that the reason why proof checkers like Coq don't allow partial functions? Can you also give an example of a useful, strictly partial function, one that diverges on some inputs and gives useful outputs on others?
[SCW: Yes, PCF isn't a logic in the sense of the Curry-Howard isomorphism. That is why Coq disallows partial functions. Partiality is not useful in itself: for any partial function I gave you that I claimed was useful, you could replace it with a total function, since you can't make use of a result that never arrives. The practical importance lies more in the definition of the function than in the actual identity of the function itself.]
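One illustration of "the importance is in the definition" (our example, not SCW's): the function below is trivial to define with general recursion, but whether it is total is the open Collatz problem, so no total language would accept this definition as written.

collatzSteps :: Integer -> Integer   -- steps from n down to 1; totality is open
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)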
-
I'm interested to know more about the Blum size theorem, if it's possible to fit it into the schedule at all.
[SCW: See below for more info.]
-
I hadn't seen Blum's Size Theorem before, so I fell into a (quite deep) rabbit hole chasing down its formal definition/proof. After a bit of reading (Cornell's advanced programming course http://www.cs.cornell.edu/courses/cs6110/2015sp/schedule.php being particularly helpful in providing explanations/references), I think I now have a basic understanding of the intuition behind the proof: use an indexing of the partial recursive functions and a "naive" algorithm to exhibit the desired witness function. However, I'm still missing the intuition as to how "baking in" the termination proof can cause such a big size discrepancy. Is there an example of a general recursion scheme (let alone a useful one) that would be elaborated into 2^(2^n) successor elimination forms when translated to primitive recursion?
[SCW: You know more than I do about Blum's size theorem at this point.]
-
Several times Bob Harper has mentioned "... fixed point of type operators up to isomorphism". I don't actually know what this means, specifically the "up to". How do we work up to isomorphism?
[SCW: save this for Tuesday.]
-
Bob Harper says, "... general recursion may, like other language features, be seen as a manifestation of type structure rather than as an ad hoc language feature." I guess I'm confused and intrigued by this statement. I always figured recursion was something more fundamental that we added to a language. I don't really have a specific question about this statement, but I would love to hear some expansion on it.
[SCW: ????]