Seventh Post
OK, so I have calmed down from last week's post.
I thought about it more and it wasn't as cool as I initially hoped, but I think it's still an idea worth pursuing. I did start pursuing it and read a lot about many different things. I have come to the conclusion that it's probably a form of dependent typing. I have not yet found the idea replicated anywhere, but I suspect it's some weird subset of a pre-existing theory. I think it is also probably related to recursive typing. One last thing: it doesn't appear to hold all that much extra power over normal typing algorithms, but I have had fun with it.
The overall summary (just in case last week's post was a bit too scatterbrained) is that I think we could enhance simply typed lambda calculus so that, instead of having variables in the type (as in a -> a, where a is the variable), we describe the set of acceptable inputs to a function directly. We can do this using a context-free grammar (CFG) over lambda calculus terms.
For example, we can describe a (Church-encoded) natural number with the following CFG:
Nat -> (\f . \x . NatRec )
NatRec -> x | (f NatRec)
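To make that grammar concrete, here is a small sketch of what a recognizer for it could look like. I'm assuming a simple tuple representation of terms here, and the names is_nat and is_nat_rec are just for illustration (my interpreter's actual AST is different):

```python
# Hypothetical sketch: recognizing Church numerals with the Nat grammar above.
# Terms are tuples ("lam", var, body) or ("app", fn, arg), or a variable name string.

def is_nat(term):
    # Nat -> (\f . \x . NatRec)
    if term[0] == "lam" and term[1] == "f":
        inner = term[2]
        if inner[0] == "lam" and inner[1] == "x":
            return is_nat_rec(inner[2])
    return False

def is_nat_rec(term):
    # NatRec -> x | (f NatRec)
    if term == "x":
        return True
    return term[0] == "app" and term[1] == "f" and is_nat_rec(term[2])

# Church numeral 2: \f . \x . (f (f x))
two = ("lam", "f", ("lam", "x", ("app", "f", ("app", "f", "x"))))
print(is_nat(two))  # True
```

The key point is that the "type check" is just parsing the term against the grammar, so the set of accepted values can be much finer-grained than a conventional type.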
Using this system we can meaningfully type check functions that act on Church-encoded values. This is particularly useful when the type of a Church-encoded value changes depending on its value: a fully described cons list (without a generic variable capturing part of the list) gets a bigger type with every item added, but now you can specify cons lists by writing a CFG. It also helps with a specific problem of recursive types: recursive types can be shared over very different values within a Church encoding, which means the type checker can allow some very odd and unintended things to happen.
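The cons list case can be handled with the same trick. Here is a hypothetical grammar and recognizer sketch (List, ListRec and Elem are names I'm making up for illustration; the element check is passed in as a predicate):

```python
# Hypothetical grammar for Church-encoded cons lists:
#   List    -> (\c . \n . ListRec)
#   ListRec -> n | ((c Elem) ListRec)
# where Elem is any term accepted by a caller-supplied element grammar.

def is_list(term, is_elem):
    # List -> (\c . \n . ListRec)
    if term[0] == "lam" and term[1] == "c":
        inner = term[2]
        if inner[0] == "lam" and inner[1] == "n":
            return is_list_rec(inner[2], is_elem)
    return False

def is_list_rec(term, is_elem):
    # ListRec -> n | ((c Elem) ListRec)
    if term == "n":
        return True
    if term[0] == "app":
        head, rest = term[1], term[2]
        if head[0] == "app" and head[1] == "c":
            return is_elem(head[2]) and is_list_rec(rest, is_elem)
    return False
```

A list of any length matches the one grammar, even though each fully-described list would need a different conventional type.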
Something that will be a challenge later on in writing up this type system is the formal mathematical notation to describe it. I am thinking of using sequent calculus, because that is what I have seen the most of and I think I'm comfortable enough with it to write the system up in that language. But it will still be a bit of a struggle.
As this seemed like a cool and good idea, I started to implement it and got as far as a lambda calculus interpreter with some simple usability benefits (constants and the sort) before I realized that I should ask Mr Griffin how appropriate this would be for the future assignment as well as yicte (Young ICT Explorers).
After talking to him, it seemed like it would be a very unwise choice to go for it in yicte, but something to think about for the future assignment. I would like to write up this typing idea as it's quite cool, but I am concerned about whether I can show off both projects in the final assignment. There are a few more problems with it than that, so I'll have to talk with Mr Griffin once we are back at school.
I have now started preparing for yicte, and in doing so decided that I should definitely get a real definition of terminating functions to implement in my language so that it at least works somewhat. I did a bit of looking into what I could do (I read the original paper on total functional programming languages), but all roads led back to Walther recursion, whose paper I had struggled to get hold of because of SpringerLink's lacking UX. My sister then showed me how to get around this using Google Scholar, and I now have my hands on the original paper! I then started reading it and promptly stopped: it is very densely written and I do not have enough time before yicte to understand and implement its ideas. So I decided to just implement classic primitive recursion. This was easier than I thought, but it has led me to conclude that my tuple AST representation is even uglier than I realized. It will need to be refactored if I develop this project any further than the current assignment.
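For reference, classic primitive recursion builds a function f from a base case and a step function, with f(0) = base and f(n+1) = step(n, f(n)). A minimal Python sketch of the idea (illustrative only, not my interpreter's actual representation):

```python
# Primitive recursion: f(0) = base, f(n + 1) = step(n, f(n)).
def prim_rec(base, step):
    def f(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return f

# Every function built this way terminates: the loop runs exactly n times.
add3 = prim_rec(3, lambda i, acc: acc + 1)        # add3(n) = 3 + n
fact = prim_rec(1, lambda i, acc: (i + 1) * acc)  # factorial
print(add3(4), fact(4))  # 7 24
```

This is much weaker than Walther recursion (famously, the Ackermann function is total but not primitive recursive), but it guarantees termination, which is all I need for yicte.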
Something interesting that happened while I was researching termination was that I came across the original paper that defines and describes total functional programming, and I finally understand codata (which it seems I should have known about before I started this project, lol, because it's kind of a response to the idea, but I digress). The original goal of total functional programming was not to eliminate non-terminating code from a programming language. It was to clean up the type system of normal functional programming to allow for easier proofs in maths. The type system was "dirty" because non-terminating or erroring code creates the bottom type/value. This value represents non-terminating / non-completing code, and it is a nightmare for mathematicians, as every proof about a program then needs to account for the possibility that the code doesn't finish. This is what the initial proposal was about, and it is why total functional programming is so popular with proof assistants (e.g. Coq).
This discovery was huge for me and cleared up a lot of things. It also re-affirmed my criticisms of this approach. My criticisms (it still allows non-terminating code) have not changed; I just understand the argument with more nuance now, and I understand why my approach has not (to my knowledge) been tried yet.
I am also now on the fence about implementing a type system in the total functional programming language, as I might just stop developing it here and move on to the dependently typed language in future.
edit: Ok so now I'm fairly certain that my type system is inductive types. Here is a link about this https://www.cs.ru.nl/~herman/talk_TLCA2015.pdf (I'm pretty sure?)