It seems to be the right choice provided you're computing contexts "on the way up" starting at the leaves of the expression tree. But it's typical to compute contexts "on the way down", passing the same context but with new bindings when appropriate to each subexpression when it's checked. Tactic systems are also usually made to work this way, or so it seems, with the context already known for any given hole that is to be filled. The standard typing rules need to be adapted for this top-down context checking, but I had assumed there would be no problem making this adaptation, since it seems standard.
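To make the top-down discipline concrete, here is a minimal sketch for a simply typed lambda calculus in Python. The representation (tagged tuples) and the names `infer` and `ctx` are my own illustrative choices, not taken from any system discussed here; the point is only that each recursive call receives the ambient context and extends it at binders, rather than synthesizing contexts at the leaves.

```python
# Top-down context passing: the caller supplies the context, and each
# binder extends it before descending into the body.

def infer(ctx, term):
    """Infer the type of `term` under the context `ctx` (a dict)."""
    tag = term[0]
    if tag == "var":                      # ("var", name)
        return ctx[term[1]]
    if tag == "lam":                      # ("lam", name, dom_type, body)
        _, x, a, body = term
        b = infer({**ctx, x: a}, body)    # extend the context on the way down
        return ("arrow", a, b)
    if tag == "app":                      # ("app", fun, arg)
        _, f, arg = term
        fty = infer(ctx, f)
        if fty[0] == "arrow" and infer(ctx, arg) == fty[1]:
            return fty[2]
        raise TypeError("ill-typed application")
    raise ValueError(f"unknown term tag: {tag}")

identity = ("lam", "x", "base", ("var", "x"))
print(infer({}, identity))   # ('arrow', 'base', 'base')
```

Note that nothing here needs to compare or merge contexts coming up from subterms; the context for any subexpression (or hole) is fully determined before it is visited, which is the property tactic systems rely on.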
Does it somehow go wrong for ETT? Works fine for Computational TT. Or is there some other reason to go bottom-up? Please explain. Well, it seems to me that using LF types as object-theory judgments is exactly what I would mean by LF as a meta-theory.
Thanks for your replies! I was hoping that Andrej would chime in too. I'm still confused about how exactly one programs in Andromeda. Maybe I should just go look at some code. The discussion suggests that there is confusion about what is Twelf and what is a logical framework. In Twelf you can both build derivations in your object logic and also reason about the properties of that object logic.
We did a complete verification of the soundness of the SML type system in Twelf. This means encoding the statics and dynamics of SML as logics, and proving meta-theorems about them using the facilities in Twelf for reasoning about logics. Twelf makes it particularly easy to prove Pi2 properties of logics, which is quite enough for the purposes of that project. It is not at all correct to say that Isabelle is a logical framework and that Twelf is something else.
Instead of proving Pi2 sentences you can work with realizers of them. This would be functional programming over the object logic. That is what Delphin and Beluga are all about.
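To illustrate the realizer idea with a toy example (the judgment and rule names below are my own, not Delphin's or Beluga's actual syntax): take an object-logic addition judgment add(m, n, p) with the usual two rules, add(z, n, n) and add(s m, n, s p) from add(m, n, p). The Pi-2 sentence "for all m, n there exists p with add(m, n, p)" can be realized by a total function over the object logic that computes p together with a derivation tree.

```python
# Numerals and derivations are tagged tuples; realize_add is the
# functional realizer of the Pi-2 totality statement for `add`.

def z():  return ("z",)
def s(n): return ("s", n)

def realize_add(m, n):
    """Return (p, derivation) witnessing add(m, n, p)."""
    if m[0] == "z":
        return n, ("add_z", n)            # axiom: add(z, n, n)
    p, d = realize_add(m[1], n)           # recurse on the predecessor of m
    return s(p), ("add_s", d)             # rule: add(s m, n, s p)

p, deriv = realize_add(s(s(z())), s(z()))
print(p)   # ('s', ('s', ('s', ('z',))))
```

In the relational (Twelf) style one instead writes the add relation and proves its totality with mode and coverage checking; the function above is the same content packaged as a program.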
It's partly a matter of taste whether to prefer a relational formulation or a functional formulation of the metatheory. We've found Twelf quite adequate for many large-scale purposes; the verification mentioned took about 30K lines of code. I don't see how judgments-as-types could be anything but using LF as a metatheory to describe an object theory.
Judgments are not types inside the object theory, so if they are LF types then LF must be the metatheory. For me, a metalanguage must be used to work with the syntax of the object language. In Twelf, LF doesn't do this; the totality checker (the thing for proving metatheorems) essentially ends up doing this, in a logic-programmy sort of way.
The syntax of LF itself, extended with some signature, is what gets manipulated.
There are only two levels, LF is the lower level, and assuming a signature in LF is no more meta than assuming axioms in Coq. The original object language as opposed to the LF representation is not formally involved. LF totally does work with the syntax of the object language. We could do the same thing in Coq.
Or to put it differently, perhaps it can, if someone writes AML code which essentially does what Twelf does, but the correctness of such a prover would depend on the user not assuming certain things. I need to think about this. We really are going back to LCF-style here. There's a base theory which the user may extend with a signature (a bunch of constants) and can then write programs that compute judgments.
That's all there is, really. When I said that the system would be like LF, I meant to say that the base theory (its Pi's and equality types) would look a bit like LF's kinds and families.
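The LCF-style setup sketched above can be miniaturized as follows. Everything here (`Judgment`, the two rule functions) is an illustrative toy of my own, not AML or Andromeda's actual kernel; the point is only that judgments form an abstract type whose values can be produced solely by trusted rule functions, so any value of the type is valid by construction.

```python
# An LCF-style kernel in miniature: user programs compute Judgment
# values, but can only build them through the inference rules.

_SECRET = object()

class Judgment:
    """Opaque: construct only via the rule functions below."""
    def __init__(self, hyps, concl, _token=None):
        if _token is not _SECRET:
            raise PermissionError("use the inference rules")
        self.hyps, self.concl = frozenset(hyps), concl

def assume(p):
    """Rule:  p |- p"""
    return Judgment({p}, p, _token=_SECRET)

def implies_intro(p, j):
    """Rule:  from  H, p |- q  conclude  H |- p -> q"""
    return Judgment(j.hyps - {p}, ("->", p, j.concl), _token=_SECRET)

j = implies_intro("A", assume("A"))
print(j.concl)   # ('->', 'A', 'A')
```

A metatheorem-proving layer written on top of such a kernel is exactly the kind of program whose correctness depends on what the user has (or hasn't) assumed in the signature.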
What you are taking for granted, and not defending, is that the relationship of LF types to object language types is the same as, or at least analogous to, the relationship of metalanguage types to object language types. It simply doesn't seem that way to me. And apparently not to Bob either, from his above comment. With judgments-as-types, the whole business is syntactic. The syntax of the object language is represented using the syntax of LF. With metatheory, you have a traditional logic, with syntax and semantics, and you represent the syntax of an object language using metalanguage objects whose meaning is just syntax.
LF has nothing at all to say about meaning. For its use as a logical framework, LF terms may as well be meaningless. The whole damn language is just a fancy syntax notation.
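One way to see the "fancy syntax notation" point: in a HOAS-style encoding, object-language binders are represented by the framework's own binders, and the encoded terms are only ever manipulated as syntax. A rough Python analogue (all names here are mine, and Python functions stand in for LF's lambdas):

```python
# HOAS sketch: the object-language lambda's body is a meta-level
# function, so substitution is just meta-level function application.

def Lam(f):        # object-level lambda; `f` is a Python function
    return ("lam", f)

def App(m, n):
    return ("app", m, n)

def eval_(t):
    """Beta-reduce by calling the meta-level function at each redex."""
    if t[0] == "app":
        f, a = eval_(t[1]), eval_(t[2])
        if f[0] == "lam":
            return eval_(f[1](a))   # substitution = meta-level application
        return ("app", f, a)
    return t                        # lam and var forms are values here

identity = Lam(lambda x: x)
const = Lam(lambda x: Lam(lambda _: x))
# `const` applied to two arguments reduces to the first argument:
res = eval_(App(App(const, identity), Lam(lambda x: x)))
assert eval_(App(res, ("var", "y"))) == ("var", "y")
```

Nothing in this encoding assigns the object language a meaning; it only borrows the host language's binding machinery to represent syntax, which is the sense in which LF terms "may as well be meaningless".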
It turns out you can try to ascribe a meaning to judgments-as-types representations. But it's probably not a good idea in the case of LF.
Many adequate representations would not yield the intended semantics of the object logic. With extensional LF, OTOH, analytic-style representations seem to assign the right meaning to ordinary (not modal or substructural) dependent type systems with unique typing. This can be seen as a generalization of universe coding. Meanwhile, in Isabelle, a synthetic-style judgments-as-types representation into HOL can be seen as assigning a refinement-style semantics to pretty much any object language. Here's an idea I had at some point, if you're going that way.
Twelf-style metatheorem proving seems like it needs a dramatic overhaul to do more impressive metatheory, like normalization, logical relations, and semantics. But, step 1, you can assume another signature to be used as a richer metalanguage. Step 2, you represent the object language in the representation of the metalanguage, in some appropriate way. Step 3, you prove the metatheorem in the new metalanguage. But how do you know the object language representation in the new metalanguage agrees with the original representation you want to use?
Step 4: use a Twelf-style metatheorem to formally establish the adequacy of one representation w.r.t. the other. Step 2 becomes much easier if the logical framework has support for quoting. In regular LF it wouldn't be needed, but in extensional LF we need a way to suppress the equations, to get back to syntactic equality.
I don't think there is anything to "defend" about why the relationship of LF types to object language types is analogous to that in other metalanguages: it's literally the same. You have an LF type that represents each judgment, just as, when describing the syntax of type theory as an initial algebra for a many-sorted algebraic theory in ZF, you would have a ZF-set of derivation trees for each judgment. I think your point is not about the relationship between types; it's about the fact that when we describe the object language in LF using HOAS, the reason that description is correct is a meta-theorem involving the weakness of LF.
So I agree, there is something different about it, but I also think it's reasonable to consider it as a different sort of metatheory. I don't see it that way. A type of derivation trees is an inductive type. A judgment-as-type is an abstract type. Actually, I'm not sure a weak system like LF is a true requirement for judgments-as-types.
What goes wrong if you assumed the same signature in Coq? It seems nothing goes wrong, because of parametricity. It makes adequacy less direct because you need to invoke parametricity.
Clearly, these abstract types are useless for carrying out metatheory internally to Gallina. After all, judgments-as-types is not metatheory. I'm not sure what correctness property you're talking about. It doesn't make sense to prove adequacy internal to ZF unless you have multiple formal definitions of the language internal to ZF. I think we are just arguing about the meanings of words.