Types and Programming Languages, Chapter 11 Redux: Simple Extensions
To recap, then, this meeting was intended to:
- Cover the last two extensions from Chapter 11:
  - General recursion
  - Lists
- Mob an implementation of some of the extensions
We took a brief look at general recursion and decided it looked complicated, so started with lists.
Leo: Why do the evaluation rules `E-IsNilNil` and `E-IsNilCons` have different type variables in them (`S` and `T`)?
Mudge: because at the point of evaluation, the types don't really matter?
Tom: yep, it's related to type erasure, so our assumption at the moment is that you can basically delete all the types and proceed as if this were the untyped lambda calculus. Also it leads on to later in the book when we've got subtyping, and the types may really not be equal, e.g. `isnil[Numeric]` applied to `List Integer`.
Question: why do we have a type annotation on `nil`? Why on all the other things?
[I got a bit lost here]
Leo: `nil` is basically the only way of introducing a list, so it has to have an annotated type because at that point you've got no other information to go on.
Tom: Right, this relates to uniqueness of types - if we had effectively `type_of(Γ, nil, T1).` for all `T1`, then you could show `nil` has any type, which violates other assumptions we're relying on.
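To make that concrete, here's a rough, self-contained sketch of what the annotation buys us, in the Ruby style we mob in. None of these class names are the repo's real ones; they're stand-ins for illustration:

```ruby
# Stand-in AST nodes: nil[T] carries its element type as an annotation.
ListType = Struct.new(:element_type)
NilTerm  = Struct.new(:element_type)
ConsTerm = Struct.new(:head, :tail)

def type_of(term, context)
  case term
  when true, false
    :Bool
  when NilTerm
    # The annotation is the only information available, so it pins down the
    # type: nil[T] : List T and nothing else. Without it, nil could be given
    # any list type, breaking uniqueness of types.
    ListType.new(term.element_type)
  when ConsTerm
    head_type = type_of(term.head, context)
    tail_type = type_of(term.tail, context)
    raise TypeError, "tail must be a list of the head's type" unless tail_type == ListType.new(head_type)
    tail_type
  end
end

p type_of(NilTerm.new(:Bool), {})                     # List Bool
p type_of(ConsTerm.new(true, NilTerm.new(:Bool)), {}) # List Bool
```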
Exercise: Verify that progress and preservation theorems hold for simply-typed lambda-calc with booleans and lists.
Tom: what about if we ask for `head[Bool] nil[Bool]` - this is a well-typed expression per `T-Head`, but what does this mean for evaluation? Well-typed expressions should guarantee progress, but what does evaluating this produce?
We've only got one evaluation rule for head, `E-Head`, so we've got a term that doesn't evaluate, is well-typed, but isn't a value - doesn't this violate the progress property?
[we check the solutions]
SURPRISE! The progress theorem does NOT hold, it says. Well done Piercey Pierce, you got us BIG TIME.
Apparently in a proper language this would be handled with exceptions.
Chris Z: Why can't we return `nil`?
Tom: this is an unfortunate collision with other languages we know, e.g. Ruby - in this context `nil` means the empty list, and the function `head[Nat]` takes a `List Nat` and returns a `Nat`, not a list.
We decide to skip general recursion as no-one ever uses it, and head straight to ruby land and mob stuff a little bit.
Tom: What are we going to actually start with then? We're clearly not going to get through it all.
[via a series of mumblings we collectively conclude to start with pairs]
We start with checking `{ true | true }` and successfully get `Bool×Bool` by type-checking `term.left` and `term.right` and creating a new `Type::Product` with the results, as foretold by the elders.
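Something like the following sketch captures the idea, slotting into the big `case` statement inside `type_of` (this isn't the code we actually committed; the attribute names `term.left` and `term.right` are the ones mentioned above, and the rest, including the `context` argument standing in for Γ, is reconstructed):

```ruby
when Term::Pair
  # T-Pair: type-check each component, then build the product type from the results.
  left_type  = type_of(term.left, context)
  right_type = type_of(term.right, context)
  Type::Product.new(left_type, right_type)
```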
Then we try `{ true | 0 }` and get a WEIRD error...

...that turns out to be because we didn't actually implement type-checking of `Nat`s last time. This doesn't raise a `TypeError` as we don't have a catch-all statement; our case statement falls through and returns `nil`.
We still want a more interesting example than a pair of bools, so Leo does something TOTALLY RIDICULOUS with nested boolean pairs and ifs and gets `Bool×Bool×Bool` first time. Which is nice.
[we decide to neither implement `Nat`, nor raise exceptions on unhandled types, because we are mavericks]
Okay, we can type-check a pair, what about a projection of a pair?
Take the example: `{ true | (λx. true) }.1`
[Here there was a little discussion of why our tests are dealing with strings, and what our `type_of` method is actually handling and returning. Our implementation is always dealing with objects representing the terms and types, but our tests are using string representations of them thanks to James C's wonderful work implementing the parser for all these extensions]
Tuzz: Do we need to handle projecting fields > 2? Couldn't we handle this in the parser by rejecting higher numbers?
Tom: Unless you have different projection syntax for the different things we project from, the parser can't know. Even though the syntax for pair/tuple/etc literals is different, what if we're looking at a projection of a variable that might turn out to be a pair or a tuple or a record at runtime?
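A sketch of how the projection branch might look, again for the shared `case` statement. `Term::Projection` and its `field` attribute are guessed names for illustration, not necessarily what the parser produces:

```ruby
when Term::Projection
  # T-Proj1 / T-Proj2: whatever we project from must type-check as a product,
  # and the projection then has the corresponding component's type.
  product_type = type_of(term.term, context)
  raise TypeError, "can only project from a pair" unless product_type.is_a?(Type::Product)
  term.field == 1 ? product_type.left : product_type.right
```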
Okay, so pairs are done. Do we carry on with tuples etc. or something completely different?
All: Let's do sum types
Tom: Mathematically there's something interesting going on under the surface here that's not worth talking about - essentially pairs and sums are duals of each other [FIXME I didn't capture the rest of this, not because it's actually not worth talking about, but because I'm slow]
We type check the term `inl true as Bool + (Bool×Bool)`.

`Term::Inl` represents the components as follows:
- `true` = `term.term`
- `Bool + (Bool×Bool)` = `term.type`
(Chris Z points out this could be more explicitly named "type annotation" or similar)
So we need to check that `term.term.type == term.type.left`
- AND WE DO
- AND IT WORKS
Leo: What if we do `inl true as Bool`, eh? WHAT THEN?
Okay, so we need to raise a `TypeError` unless the ascription is a sum type.
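Roughly, the `inl` branch comes out as the sketch below, with `inr` as its mirror image using `term.type.right`. It assumes a `Type::Sum` analogous to `Type::Product`; `term.term` and `term.type` are the attribute names described above, and the rest is illustrative rather than our actual code:

```ruby
when Term::Inl
  # T-Inl: the ascription must be a sum type, and the wrapped term must have
  # the sum's left component type; the whole expression has the ascribed type.
  raise TypeError, "inl needs a sum type ascription" unless term.type.is_a?(Type::Sum)
  raise TypeError, "term doesn't match the left of the sum" unless type_of(term.term, context) == term.type.left
  term.type
```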
Some majestic copy-pasting to implement `inr` ensues.
Emboldened, we move on to `case` statements. We found this hard to think about, but Leo comes up with a good example to help us:
```
let istruthy = λx:Bool + Nat. case x of
    inl b => b
  | inr n => if iszero n then false else true
in istruthy (inl true as Bool + Nat)
... istruthy (inr (succ 0) as Bool + Nat)
```
So if we assume that `b` is `Bool`, can we prove that `b` is also `Bool`? Well, duh.
And if we assume that `n` is `Nat`, can we prove that `if iszero n then false else true` has type `Bool`? Less straightforward, but sure we can.
So we're doing recursive `type_of` calls on the case bodies, with added type assumptions depending on the branch we're checking.
Tom: Okay, so in English, what are we doing?
1. Is `t0` of sum type? If not, fail.
2. Check the type of the `inl` branch with `x1:T1` in `Γ`, where `x1` is in the term and `T1` is the left type of the sum type.
3. Check the type of the `inr` branch with `x2:T2` in `Γ`, where `x2` is in the term and `T2` is the right type of the sum type.
4. Check that 2 and 3 are the same; if not, fail.
5. Return that type!
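In code, those five steps come out as something like this sketch for the `type_of` case statement; the attribute names (`left_variable`, `left_body`, and so on) are invented for illustration, and `context` stands in for Γ:

```ruby
when Term::Case
  # 1. The scrutinee must have a sum type.
  sum_type = type_of(term.term, context)
  raise TypeError, "can only case on a sum type" unless sum_type.is_a?(Type::Sum)

  # 2-3. Type each branch body with its variable bound to the matching side of the sum.
  left_type  = type_of(term.left_body,  context.merge(term.left_variable  => sum_type.left))
  right_type = type_of(term.right_body, context.merge(term.right_variable => sum_type.right))

  # 4-5. The branches must agree, and that shared type is the type of the whole case.
  raise TypeError, "case branches must have the same type" unless left_type == right_type
  left_type
```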
[variable naming interlude]
Mudge: BEEEEEEEE
Tom: BEE PEE
Mudge: What
Tom: I thought you said BEE PEE
[mudge had not in fact said BEE PEE]
[end of interlude, organ music plays]
With all that lovely prep, implementation goes smoothly.
Mudge: When y'all were implementing your implementations, as one does, what did you do about `let`? Did you desugar it or handle it explicitly? Is there a difference when it comes to type checking the statement, or is it all the same?
Tom: For evaluation purposes, `let n = 5 in iszero n` is the same as `(λn. iszero n) 5`, so the first form is just syntactic sugar for the latter. But the difficulty is that the latter isn't syntactically valid in typed lambda calc land - where's the type annotation for `λn. iszero n`? It needs to be `λn:Nat. iszero n`, but that means you need to typecheck `5` before desugaring the let. So your type checker needs to be interleaved with your desugaring AST manipulation - both need to know about each other, but it does save you implementing brand-new type checking and evaluation rules.
Is that better or worse than a whole new typing rule? Who even knows.
Tuzz: This never gets stuck in a loop, right?
Tom: No, because you're always making the AST smaller. If the desugaring ever built a let, it might be a problem, but no, it's fine.
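For comparison, the "whole new typing rule" route (T-Let) is pretty small if you go that way instead. A sketch with invented attribute names, in the same `case`-statement style as above:

```ruby
when Term::Let
  # T-Let: type the bound term first, then type the body with that binding added to Γ.
  bound_type = type_of(term.bound_term, context)
  type_of(term.body, context.merge(term.variable => bound_type))
```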
Tom: I did stuff but it's not that interesting. It's all just programming innit.
Leo: Haskell! Case statements were interesting because they end up handling a bunch of different types later in the chapter.
[sorry, my dad called at this point and I missed some stuff]
Tom: It'd be really cool if your quickcheck tests could discover the lack of progress we talked about earlier with `head[T] nil[T]`.
Leo: Yes! I want to do that now.
Do we want to do an interstitial? We talked about this previously. Parsers were much discussed, but (after a tiny Ch 12 and an enormous Ch 13) we're not very far from Part III of the book.
[we resolve to sort this out pronto on slack]
Charlie volunteers to organise the next meeting! Thank you Charlie!
Bit of discussion about having the pre-written parser and AST - this was great for readability of tests and Getting Stuff Done, but without a little pause to at least behold the AST we're dealing with, some found it a little harder to follow the mobbing. Having the pre-written parser was great, but maybe we could spend a little bit of the time we saved on just looking at the structures we're dealing with, rather than going for FULL SPEED mobbing.
Mudge: Anything we want to change?
Tom: I'm not sure the mobbing is working brilliantly - there's only a couple of drivers, and only a couple of people giving direction to them.
Mudge: Could we be a bit more structured about it? Make the driver a more deliberately passive role, and make sure to go to the whiteboard more frequently to make sure we're all on the same page?
No firm conclusion was reached. Bear it in mind next time I guess.
[Exeunt omnes, with a possibly weather-induced failure to pub]
Thanks to Leo for providing beverages and driving, to Geckoboard for hosting, to Chris for a copious amount of snacks and to Simon for taking notes throughout.