Abstraction isn’t a goal in and of itself, no. But rejecting abstraction abilities in a language, as Go does aggressively, is a major red flag.
Ada itself can’t prove properties. The tooling around that is based on Why3 and the array of provers it supports, some of which are fully automated but inherently limited, and some of which are proof assistants, like Coq. Coq itself is a purely functional programming language whose proof support is based on dependent types. Why3 also offers WhyML, a programming language you can write programs-and-proofs in directly. Beyond that, there are Frama-C for verifying C and Krakatoa for verifying Java, plus various tools connecting different means of verifying code, e.g. by extracting it from Coq proofs or by encoding verification conditions in source and extracting them for verification with Coq. And this is ignoring languages like Scala, F*, Idris, Agda, ATS… that are also dependently typed and support both proof (with some level of pain) and programming.
The halting problem is not an issue here. All you need is a type system that refuses to accept the program unless there is also a proof that the program is "correct" for some notion of correctness (e.g. that it terminates). It's not the job of the type system to automatically invent said proof (which would reduce to the halting problem). That's your job. All it has to do is verify the proof that you supply, which is relatively straightforward.
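As a small sketch of this in Coq: for structurally recursive definitions the termination evidence is the syntactic structure itself, which the checker verifies mechanically; anything fancier requires you to supply the argument yourself.

```coq
(* Termination is evident from structural recursion on n, so Coq's
   checker accepts this without any extra proof from the user. *)
Fixpoint double (n : nat) : nat :=
  match n with
  | O => O
  | S n' => S (S (double n'))
  end.
```

For non-structural recursion (e.g. via `Program Fixpoint`), Coq instead asks you to discharge the remaining obligations; it never has to invent the proof itself.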
Coq is probably the language most commonly used for "provably correct" programs (although that's not a high bar to clear). Also, interestingly enough, "provably correct" programs such as CompCert still have bugfix releases. Probably because bugs can still creep into the specification, or when the output of the program is targeting machines or systems with less rigorous behaviour.
The language is Coq.
The file contains:

- some high-level view of what a contract can do
- a very partial EVM interpreter that describes what instructions do
- an example specification
- an example program, and
- a proof that the program and the specification match
Exhaustively checking for correctness seems awfully close to proving correctness of your implementation. There are provably correct implementations of AVL trees (implementations with proofs that the implementations adhere to some specification). For example, AVL trees have been formalized in the Coq proof assistant: https://coq.inria.fr/library/Coq.FSets.FMapAVL.html
Besides CPDT, a huge change document-wise is Software Foundations, Benjamin Pierce's online book on proofs (in Coq) and programming language theory (in Coq). It's excellent, and it is aimed at beginners -- unlike CPDT, which is a great book, but rather for people who already know Coq or at least want to go deep into the more difficult stuff. You cannot advertise Coq to students without mentioning it.
Changes in Coq since 2008. I would say adding eta-equality on functions in the definitional equality was a big change. At a smaller level, bullets definitely had an influence on how people write the usual tactic script (or SSReflect if you use that -- and the Feit-Thompson theorem is an achievement you may want to talk about).
There are exciting things in the development version of Coq, or planned to land before the next release: universe polymorphism, parallel checking of completed proofs, and the new tactics engine (which is more of an internal change that will allow for new developments in the future).
The credits section of the Coq manual is a fascinating read that will let you know everything about new features in the core distribution from whichever version you're interested in.
`Set Printing Universes` is your friend here. The 8.5 beta (or the trunk version) of Coq will print the universe in the error message if it's needed to distinguish otherwise equal-looking `Type`s.
Alternatively, you can use the polymorphic universes support added in 8.5.
I assume that your `nat` type is defined something like

```coq
Definition nat := forall (X:Type), (X -> X) -> X -> X.
```

You might try

```coq
Polymorphic Definition nat := forall (X:Type), (X -> X) -> X -> X.
```
Then, your `exp` definition works for me. You might need to use `Polymorphic` with the `exp` definition to use it later :P

Or, do `Set Universe Polymorphism` at the top of your file, which will make all definitions polymorphic in the universe. You can use `Monomorphic Definition` (or `Monomorphic Lemma`, etc., for the other 'definition' keywords) to restore monomorphism for one definition.

More info in the Universe Polymorphism chapter of the 8.5 beta reference manual.
Quoting the manual on `decide equality`:
> This tactic solves a goal of the form forall x y:R, {x=y}+{~x=y}, where R is an inductive type such that its constructors do not take proofs or functions as arguments, nor objects in dependent types. It solves goals of the form {x=y}+{~x=y} as well.
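For instance, a minimal example of the tactic in action (the type and names here are illustrative):

```coq
Inductive color : Type := Red | Green | Blue.

(* decide equality dispatches the whole case analysis automatically. *)
Theorem color_eq_dec : forall x y : color, {x = y} + {x <> y}.
Proof. decide equality. Qed.
```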
You may want to have a look at Martin Escardo's ordinals in TT.
One possible (different) fix for your approach to work would be to use Impredicative Sets. Or give up Church encodings entirely because they're so darn inefficient anyways.
FYI, Coq's standard library provides the excluded middle (LEM) as an optional axiom (postulate in Agda-speak, i.e. your 2nd option) and derives a bunch of lemmas from it which you might take as inspiration. In the linked file, you can think of `Prop` and `Type` as Agda's `Set`.
Be aware that as you go down the path of classical logic, computation becomes less useful since every application of LEM blocks it. Therefore, it's good practice to keep developments as constructive as possible.
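As a sketch of what the stdlib gives you (the axiom plus one derived lemma, with the names from `Classical_Prop`):

```coq
Require Import Classical.

Check classic.  (* classic : forall P : Prop, P \/ ~ P *)
Check NNPP.     (* NNPP : forall p : Prop, ~ ~ p -> p *)
```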
Software Foundations is (despite the title) mostly a hands-on course in Coq, a mature and somewhat practical theorem prover. It contains a ton of exercises (that is, formal proofs for you to write), and doing them is an essential part of reading the book.
The theoretical portions are kept relatively terse, though, and sometimes a bit more depth wouldn't hurt. Nonetheless, in the later chapters, there's some insight to be had into Lambda Calculus and mathematical models of programming languages, both functional and imperative. Lots of Hoare logic as well, if you're into that. The dependent typing part is (unfortunately) kept mostly in the background, but you'll at least end up understanding the Curry-Howard Correspondence a bit better.
If it is your intention to actually prove software correctness then the choice of the tool will likely matter more than the choice of the underlying formalism (which in all cases will be some variant of type theory, unless you go in the direction of those first-order logic-based tools, of which I know very little).
For example, you could have a look at F* (https://www.fstar-lang.org) or Coq (https://coq.inria.fr), both of which are real working tools. Pick whichever you find more intuitive.
Constructive mathematics made a lot more sense to me once I started using Coq to prove theorems. You can use the law of the excluded middle, but you have to invoke it explicitly each time, making you aware of when you use it.
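A minimal sketch of what that looks like in practice: double-negation elimination only goes through by explicitly case-splitting on `classic`.

```coq
Require Import Classical.

(* The appeal to the classical axiom is visible in the proof script. *)
Theorem dne (P : Prop) : ~ ~ P -> P.
Proof.
  intro H.
  destruct (classic P) as [HP | HnP].
  - exact HP.
  - contradiction.
Qed.
```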
Coq indeed has multiple evaluation mechanisms available, including call-by-value and call-by-name, which have different performance characteristics but evaluate terms to identical results. The same should be possible for Agda. (However, Coq's formulation of corecursion requires partially lazy semantics, so call-by-value is not fully strict. I'm not sure about Agda's.)
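A quick sketch: the strategies can be requested explicitly, and they agree on the normal form.

```coq
(* Different reduction strategies, identical results. *)
Eval cbv  in (2 + 2).  (* = 4 : nat *)
Eval lazy in (2 + 2).  (* = 4 : nat *)
```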
`let`, if I understand the docs correctly, is entirely syntactic sugar, so there should be no sharing. Under a lazy evaluation strategy, auxiliary definitions in `where` clauses would probably be shared.
> What do I have to trust when I see a proof checked by Coq?
>
> You have to trust:
>
> * The theory behind Coq: the theory of Coq version 8.0 is generally admitted to be consistent wrt Zermelo-Fraenkel set theory + inaccessible cardinals. Proofs of consistency of subsystems of the theory of Coq can be found in the literature.
> * The Coq kernel implementation: you have to trust that the implementation of the Coq kernel mirrors the theory behind Coq. The kernel is intentionally small to limit the risk of conceptual or accidental implementation bugs.
> * The Objective Caml compiler: the Coq kernel is written using the Objective Caml language, but it uses only the most standard features (no objects, no labels, ...), so that it is highly improbable that an Objective Caml bug breaks the consistency of Coq without breaking all other kinds of features of Coq or of other software compiled with Objective Caml.
> * Your hardware: in theory, if your hardware does not work properly, it can accidentally be the case that False becomes provable. But it is more likely that the whole Coq system will become unusable. You can check your proof using different computers if you feel the need to.
> * Your axioms: your axioms must be consistent with the theory behind Coq.
Previous definitions within a file can be used in subsequent proofs. To use definitions and theorems from another file, compile that file with the `coqc` command and then import it into your current file. Most commonly one does this with `Require Import <Name>.`, but there are variations. See the documentation for more detail:

https://coq.inria.fr/distrib/current/refman/proof-engine/vernacular-commands.html#coq:cmd.require

For larger-scale developments, you'll likely want to research the `_CoqProject` file format and the `coq_makefile` command.
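A hypothetical two-file sketch of that compile-then-import workflow (file names and definitions are made up):

```coq
(* Util.v *)
Definition answer : nat := 42.

(* After running [coqc Util.v], a second file in the
   same directory can do: *)
Require Import Util.
Check answer.
```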
Triple secret: Do some more problems on top of that.
Less secret: Mathematical writing has its own style, and just like in art, you just copy the masters until you develop your own.
The first ingredient is that you have to understand the argument you wish to communicate. That can be hard enough by itself. You really have to understand the structure of the proof.
To practice this, it's good to know the various techniques of proof. Induction, proof by contradiction, by contrapositive, by an application of a lemma or theorem. Know the usual tricks. If I need to prove A or B, I can assume Not A and deduce B. Know how to work backwards: the phrase "it suffices to show" is pretty magical. Learn to identify when you perform a case analysis. And lastly, know when you can declare something obvious when you aren't sure if it's true. That last one is key.
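As a tiny Coq sketch of the "assume Not A and deduce B" trick (classical, via the stdlib's `classic`; names here are illustrative):

```coq
Require Import Classical.

(* To prove A \/ B, it suffices to derive B from ~A. *)
Theorem or_via_neg (A B : Prop) : (~ A -> B) -> A \/ B.
Proof.
  intro H.
  destruct (classic A) as [HA | HnA].
  - left; exact HA.
  - right; exact (H HnA).
Qed.
```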
The second ingredient is arguably harder, but no one puts as much attention to it. Once you have the structure of the proof laid out, you have to write it in a way which is suitable for teaching it to someone else. It's easy enough to throw your proof onto paper as if it were a Coq source file. But mathematical writing should be aimed at helping other humans understand. Not just to verify mere truthiness of a statement.
In intuitionistic type theory, you need to assume at a minimum the law of the excluded middle in order to prove the axiom of choice, because the axiom of choice implies the law of the excluded middle.
Coq includes the corresponding proofs, under suitable assumptions. The statements are in the ClassicalDescription library and the proofs are in the source code.
The other commenters have linked to some great resources, but personally what I've found to be the best way to learn type theory is to just download Coq (or something similar, like Agda) and play with it. Software Foundations is a good way to learn Coq in particular.
> What I'm getting at is how can we change the language to actually create rational discussion? Or is this an impossible task?
A lot of Chomsky's works are dedicated to studying exactly this topic.
I am a big fan of logical positivism. It was criticised (by Chomsky, no less) for not being compatible with how humans think, but today we have powerful enough helpers in the form of computers to overcome a lot of the problems it poses.
It requires learning and using some form of a formal language. The main shortcoming of formal languages is verbosity (due to being a subset of natural languages). People who learn formal languages (like programming languages) from a young age don't have much trouble using them, but (as with every other language) it requires practice. If your area of work is not related to heavy computer science theory, formal verification systems development, or writing software tests, you will lose the ability to 'speak' it. That makes it impractical for the vast majority of Earth's population.
Anyway, even if you don't want to speak it all the time, just learning a bit of how formal systems work is a great improvement. An undergrad that knows his way around Coq or Haskell is already miles ahead of someone who doesn't.
And there's the major problem - there is no need to change the language (a lot of decent ones exist), but people should be changed instead.
> How would you tackle this issue?
By writing `x => x` as the default case rather than `_ => x`. The behaviour is the same but the type is not.
As for the question about intrinsically typed vs. raw data + predicate, Coq people tend to favour the second option because the language constructs to manipulate intrinsically typed data are not very convenient. You can have a look in the manual for the extended match allowing you to write complex dependent pattern matchings. You may also want to use equations rather than writing these nasty dependent pattern-matches yourself.
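A minimal sketch of the "raw data + predicate" style (the record and field names here are illustrative):

```coq
Require Import List Sorted.
Import ListNotations.

(* A plain list, bundled with a proof that it is well-formed.
   Functions manipulate the raw list; the predicate is carried
   and re-established separately. *)
Record sorted_list := {
  elems    : list nat;
  elems_ok : Sorted le elems
}.
```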
In this regard, I actually learned the basics of proving things with what is now the volume one of the series of books software foundations.
It works in Coq, so there are a few things to say about it...
It works with a type theory and mostly with a constructive logic.
It's impossibly pedantic and sometimes frustrating, since you might know very well why something is true, but not how to let Coq know.
You get immediate feedback as to whether your proofs are correct, and what changes in your intermediate steps, and also, solving the problems is a fun thing to do.
You may need some experience with functional programming.
All in all I enjoyed working through it and after that I started doing reasonably well with Linear Algebra done Right and Understanding Analysis, at least having acquaintance with proofs and logic wasn't the issue there.
> At the same time, DAE boolean expressions are unmaintainable?
Can confirm, I'd rather have `Prop`.
It's pretty hard to come up with something original where you can 1) estimate how difficult and long it will be and 2) that is doable with just an undergraduate background.

Without knowing too much, my advice would be to ask a professor you are familiar with to give you an example of a result you can tackle, and then write a detailed account of what is going on.

The thing is that it's hard to know what you are capable of achieving, since nobody here knows how comfortable you are with the material you list (the titles of the classes don't say that much, really).

However, I realize that it's annoying for me to just tell you to ask a professor, so if you really want a suggestion: since you mentioned category theory and a possible relation to CS, you could try to write some interesting proof (maybe the adjoint functor theorem, or maybe Yoneda, or something like that) using Coq. It's a hot topic now and I think it would be easier to find a problem you can do some original work on.
The approach of "not referring to names in proofs" is an interesting option. To avoid having to write `match goal with ... end`, a lightweight syntax to select hypotheses by type rather than by name could be very helpful -- I have observed this feature being put to good use by Isabelle users.
I was sure that there was a bug report against Coq to request this feature, but I could not find it again. I created a new ticket to track it.
Maths, and proofs in particular, are mostly about abstract thinking, which is the essence of good programming. It's worth doing because, firstly, maths skills greatly open up the range of problems you can tackle, and secondly, the abstract thinking developed in maths allows you to tackle more complicated problems, i.e. it raises the limit of the complexity you can handle. Thus I would encourage you to stick with it. It's ok that it is hard. Dealing with difficult material is a skill of its own. Confusion is a natural first step! Asking questions is next.

Also, formulaic lecture-based teaching is the worst thing for maths. One suggestion is to look for alternate explanations on YouTube etc. The MIT OCW Linear Algebra course by Gilbert Strang is fantastic.

My second suggestion: I have found that students learn a great deal from playing with proofs in a formal theorem proving system like Coq or Agda. You can do some proofs about programs and also learn about programming languages here (includes a great tutorial on proving things): https://www.cis.upenn.edu/~bcpierce/sf/current/index.html Another Coq tutorial here: https://coq.inria.fr/tutorial-nahas Also see the wonderful theorem proving book Coq'Art. The Agda system has similar tutorials.

But the main idea is that if you don't give up, remain curious, keep asking questions, and judge yourself not on results but on effort, you will succeed.
One approach is to write the formal specification in Coq and then extract a program that implements the specification (currently Coq supports OCaml, Haskell and Scheme). For example, the CompCert verified compiler was written this way.
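A minimal extraction sketch (the definition is made up; the commands are standard in reasonably recent Coq versions):

```coq
Require Extraction.

Definition double (n : nat) : nat := 2 * n.

(* Print an OCaml implementation of double and its dependencies. *)
Extraction Language OCaml.
Recursive Extraction double.
```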
Isabelle isn't dependently typed. (Specifically, its type system is essentially ML + type classes.) Since you're posting here, this might matter. But if not, you might like its Isar proof language.
Note that Coq also has something similar, though it doesn't seem to be used very much.
Ah, I see where you are coming from.
A lot of mathematics is experimentally falsifiable. Just look at any right-angled triangle to verify the Pythagorean theorem. However, sometimes the physical world is simply too limited to express what mathematics can express.
But number theory is physically representable (see: a computer), and you can represent any mathematical statement (written in a well-defined formal system) in number theory using Gödel encoding, so you could prove mathematical statements physically. See Coq for an example of how this can be done.
If decidability is what you are getting at, then no, mathematics is not generally decidable. However, you can find counterexamples to a false mathematical conjecture and represent them physically, thus "falsifying" it.
You might find Coq interesting.
I think that though we use specific language to describe mathematics, it is more the thought process and the resulting (abstract) ideas themselves that I would call mathematics.
> My question to you is do you know that Lochness Monster is logically possible?
Presumably you would have to spell out in some kind of formal language what being a monster consists of exactly (like we have a formal definition for the circle) and the same for Lochness and then look for a contradiction in that. Mathematicians have tools to do that kind of stuff I think (e.g. something like https://coq.inria.fr).
That said I wouldn't be surprised if some of these would turn out to be undecidable. More problematic is maybe providing a formal definition in the first place, since most of the concepts we use are rather vague.
>Please use {} instead []
tells you all you need to know. Additionally, https://coq.inria.fr/refman/language/extensions/implicit-arguments.html
I don't know this particular definition, but I have a previous recent comment where I discuss the general concept of why definitions are the way they are.
To determine if a definition is syntactically correct (as opposed to useful or interesting), I'm good enough at Coq to type definitions into Coq and Coq can tell me whether the definition is well-formed or not. Coq also supports, via setoids, the common situation where you have to prove something extra in order to show that a definition involving choices is well-defined.
You can do that in Coq:

```coq
Implicit Type n : nat.
Definition f n := n.
(* f typechecks as nat -> nat *)
```
There are situations where this comes in quite handy.
To formally prove that the implementation of a given program does exactly what it claims to do, you can use Coq or another proof assistant. One example is sel4.systems, a bug-proof kernel verified with one of these. If you want to try using one, good luck; I hope you have experience with Haskell, and a lot (really, a lot) of patience, because it's a beast.
I think this points to the reason why that part of your proof was impossible: https://stackoverflow.com/questions/4519692/keeping-information-when-using-induction
Basically information was "lost" in the induction and so the proof became impossible. You'll want to introduce an equality with a fresh variable or try out the dependent induction tactic (which I believe essentially does the same thing but automatically).
edit: more details.
What blew my mind a few years back was when I looked at dependently typed programming languages / theorem provers. Coq, to be specific. Download it and have the patience to work through the initial examples, and it may be very rewarding. It made a wonderful knot in my brain to see different meta levels blended, and to grasp how proofs by induction and recursive algorithms are actually the same, among other things.
Yes, Coq is also constructive. However, the standard library provides a couple of common axioms, among them LEM, whose compatibility with Coq's logic has been proven meta-theoretically (with varying degrees of rigour).
Even if you're not interested in the library ecosystem, I would still consider generalised rewriting a killer feature if you expect to do significant amounts of equational reasoning. Regarding some of the desiderata you mention in another comment:
(I can't speak to the comparison with Isabelle/HOL, by the way, which may well be even better suited to your task.)
It's not done with testing, but machine-checked proofs using specification languages and stuff like wp-calculus/Hoare logic, etc. It is incredibly more time consuming than just testing, but possible.
Examples for such proof assistants:
The problem seems to come from coqdoc: `*)` is recognized as a token closing verbatim, and it gets messed up because we have some nesting of different verbatim modes here. Basically:

```
(** (...) >> (...) (* (...) *) (...) << (...) *)
```
Edit: filed a bug report
Edit2: using the patched version it should hopefully be possible to compile CPDT. \o/
Using pdflatex, it's possible to generate a pdf which is better suited for ebook readers by using:

```latex
\documentclass[a5paper]{book}
\usepackage[margin=0.1in]{geometry}
```

instead of Adam's `\documentclass[12pt]{report}`.
LebNotation? The comment says that this style is used in Mergesort and you can indeed see it in action there.
Edited with link.
The Kozen & Silva thing covers the pen-and-paper aspect. If afterwards you want to do bisimulation proofs in Coq, you need Giménez & Castéran's A Tutorial on [Co-]Inductive Types in Coq, along with Catalin Hritcu's problem sets. There is a barebones stream implementation in the Coq standard library.
To add files, put `Add LoadPath <file directory location>` at the top of the file you want to add your module to.
This command will add `<file directory location>` to the Coq load path so that it gets searched when Coq runs `Import` commands.
Next, put `Require Import <file name>` under the `Add LoadPath ...` command.
Note you can just put the full path and name of the file you want to add into a single `Require Import` statement.
Here is a partial guide to Coq modules: https://coq.inria.fr/tutorial/3-modules
I'll ignore the personal insult and focus on the factuals.
You're equating open bugs in a (major) implementation with the Haskell language being a failure. By this metric all major languages (and most of the minor ones) are also failures.
Here's another interesting one: bugs in the Coq proof assistant.
Is Coq a failure as well?
There's been quite a few winter / spring / summer schools around Coq in France (mostly in the South-East region, one in Paris).
One of them is actually about to start. They are usually announced a long while in advance on the relevant mailing lists (TYPES as shown above but also coq-club I'd expect).
> You can avoid this if, instead, you restrict the form of recursion, but you have to choose how to restrict it. There's no "best", "natural", or "free" way.
That's certainly a good point.
> Primitive recursion, for a tired example, is obviously too restricted. So what precise total language do you have in mind?
Offhand, I'd have to say Gallina.
Naive set theory as used by most informal mathematics can easily be encoded in Coq's CIC. A set is represented by its characteristic function, and using the axioms of classical logic (functional extensionality/LEM/double negation elimination/choice/...) one can recover all the usual rules. See Coq's standard library for an (annoyingly bare-bones) implementation.
The same approach can apparently also be used to encode proper ZFC, as demonstrated here.
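A bare-bones sketch of the characteristic-function encoding (Coq's own Ensembles library has essentially this shape; these definitions are illustrative):

```coq
(* A set over A is its characteristic function into Prop. *)
Definition set (A : Type) := A -> Prop.

Definition In     {A} (x : A) (s : set A) : Prop := s x.
Definition union  {A} (s t : set A) : set A := fun x => s x \/ t x.
Definition subset {A} (s t : set A) : Prop := forall x, In x s -> In x t.
```

With the classical axioms loaded, the usual extensional reasoning about such sets can then be recovered.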
Well, you could start here.
AFAIK, the state of the art is what can be found in COQ, which is the generation of computer programs from a formal mathematical specification of what the program is meant to do. You can also research Automated Theorem Proving with the Curry-Howard Correspondence in mind.
However, it should be noted that this topic is much more the subject of ongoing investigation than something with existing generic tools available. It's a tough problem to crack.
Math is creative, but what is the brain if not a squishy computer? While computers are nowhere near replacing mathematicians, they'll almost certainly get there eventually. As it is they're already capable of doing some serious heavy lifting with some guidance (e.g. tactic-based automation in the Coq proof assistant). They're not any good at informal proofs for now, but I suspect that too will change if natural language generation improves.
It's more like this.
```
x : Real
y : Real
```
If I were to just give you those two variables and assert their types to you you would have to play with a lot of fire to do math on them because they "might" be equal and unless you have a pre-set tolerance for how close is "close enough" you could hit infinite loops all over.
You might be fine of course, you just don't know. Anything which accidentally produces an infinite representation (in whatever representation you have) will cause danger.
To see this playing out in Coq take a look at the standard lib
https://coq.inria.fr/distrib/V8.3pl1/stdlib/Coq.Reals.Reals.html
In Rdefinitions you see that we... actually have no representation of the reals at all. They're just abstract things which have abstract operations like R0, R1, Rplus/Rmult. In Raxioms we start to see assertions about how these abstract operations behave: we assert that addition is commutative, for instance. But there's no proof—we couldn't prove it if we wanted to. It's axiomatic alone.
We can inject naturals into the reals with INR, but really all we're doing is noting that as soon as you have R0, R1 and Rplus you can embed O and S from nat.
And so that's all there is to it in Coq—a totally abstract definition useful only for properties, not computation.
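A quick way to see this from the library itself (a sketch; both names are in the Reals stdlib):

```coq
Require Import Reals.
Open Scope R_scope.

(* Commutativity is available as a theorem about the abstract R... *)
Check Rplus_comm.  (* forall r1 r2 : R, r1 + r2 = r2 + r1 *)
(* ...and naturals embed via INR, built from R0, R1 and Rplus. *)
Check INR.         (* nat -> R *)
```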
Try it - it's a fun way to code.
> or if there are alternatives.
Of course there are. You could try proving your functions correct instead of just showing they work for a few given inputs.
Yes, PL is where Haskell is relatively popular. Although if I had to pick one language that dominates PL research, I would say it is Coq; perhaps Haskell is the second or third most popular language in PL research.
To start you off, take a look at Coq and Isabelle. Many other mechanised proof assistants exist, but I'd say those two are the most well-developed and definitely the most used.
As for is it possible - it's more than possible. There are many papers being published where the results have been established using theorem proving software (as a matter of fact, the core part of my PhD thesis was formalized in Coq). There are even entire scientific conferences specialized in theorem provers and results obtained using them (e.g., CPP). Lastly, even tech companies like Google have started explicitly looking for people experienced with proof assistants.
Get some Coq going, solve through Logical Foundations. Worked for me. Or you can try Lean — I did not solve this specific tutorial but I know that Lean has a strong mathematical community.
It will be hard to do better than 4 given the four colour theorem [1]: this problem reduces equivalently to colouring a planar graph (that is, one whose edges do not cross). So in particular, this kind of thing:

```
A - B - C
 \  |  /
    D
```

You need at least 4 colours to colour that graph.

There's a slight debate around this theorem, for which we have no hand-written mathematical proof at present. We do have a computer proof, however, which relies on reducing the problem to a certain number of critical cases to check. AND the best proof is done with Coq [2], a proof assistant developed by INRIA :)
[1] https://fr.wikipedia.org/wiki/Th%C3%A9or%C3%A8me_des_quatre_couleurs
1. This is what I am talking about when I talk about extraction: https://coq.inria.fr/refman/addendum/extraction.html
What mechanism does that use? Genuine question!
The semi-automated build system for my most used language (that being Coq) auto-generates Makefiles with many an `rm` in the `clean` target.
It isn’t true that the language used to describe the contract is at least as complex as C itself. And this is the key point about formal methods generally. Granted, it’s probably preferable to write in something like the Low* dialect of F* than to annotate C, but it’s good to have the option of annotating C in the meantime, and the range of provers we can bring to bear on the proof process include the most powerful proof assistants there are, like Coq. This level of formalization should be legally mandated for avionics, automotive braking systems, etc.
> Theorem: let v and w be good vectors with the same contents. Then v and w have the same length. Proof: induction on v. If v is empty, then w must also be empty. If v = (Cons a v’), then we must have that a is in w. Then w has the same contents and length as (Cons a w’). By goodness, we have that w’ and v’ have the same contents (namely, the contents of v, excluding a). Then w’ and v’ have the same length by the inductive hypothesis.
I'm not sure that this proof goes through, since we don't have that `w = (Cons a w')`.
But I think I understand your general strategy, and actually it seems quite similar to the one used by the Coq MSets library, where a set is a record with the underlying data type (say, a list or AVL tree) together with a predicate `IsOk` which says that the data type doesn't contain duplicates, or, in the case of AVL trees, probably some more complicated predicate.
Following: http://coq-blog.clarus.me/tutorial-a-hello-world-in-coq.html
Have you correctly installed `coq-io-system`?

```shell
opam repo add coq-released https://coq.inria.fr/opam/released
opam install coq-io-system
```
I like this book for people coming from computer science. I learned to do proofs with it and I think it's good enough to continue learning, say, linear algebra and introductory analysis. It's also suited to self-learning, since Coq checks that your proofs are actually correct (iirc, I learned a good deal from being yelled at by Coq that proofs I thought were correct weren't), and it has lots of problems I found interesting when working through it.
You might consider installing Coq and its tooling, and working through Software Foundations, which covers a great deal of the math and logic pertinent to computer science.
It's not exactly a book on mathematical logic, but <em>Software Foundations</em> is a (free) classic 4 volume series on applying mathematical logic to programming language theory using Coq. The first volume is titled Logical Foundations and covers the basics of using mathematical logic to reason about programs.
All theorems and proofs are given as valid Coq code (in fact, the entire work, including exercises, was automatically checked for correctness by Coq before they put it online).
>debunking some of the math behind Bloom Filters
I followed that link as well. The author of that article says he has used Coq to prove that there is an error.
If I look at the source code of the proof, I see things like this. I have no knowledge of how Coq works and I can't read any of it. However, I know a proof requires that there are no errors in your assumptions AND no logic errors in the proof itself. And as an experienced software developer, I am nearly 100% sure that no person can write 1,200 lines of code (without any comments, and in what looks to me like a very abbreviated and compact form) without bugs.
How the heck can the author be sure that there are no bugs in his proof?
It sounds like you want PLT (Programming Language Theory) resources. The best of these are based on type theory rather than set theory, and the logic involved (via the Curry-Howard-Lambek correspondence) is constructivist rather than classical. But if you’re up for it, I suggest looking at Software Foundations, which uses the Coq proof assistant for the formalizations.
That particular thing seemed like a complicated logic issue, not really something any programming language except perhaps Coq and its ilk could address.
A rewrite always has the chance of introducing bugs that weren't there before, but it also has a much better chance of avoiding fundamental issues in the architecture that lead to security or performance bugs.
There's a pretty good thread on the subject here.
> How does “throwing exceptions” follow from “unconstructed IO or partiality“?
A total function doesn't throw exceptions, or loop infinitely, or exit the program. What follows from this is that you can do equational reasoning on expressions involving such functions.
To maybe make this (slightly) more concrete: the Coq proof assistant is a purely functional programming language without mutation, I/O, or exceptions. It's a bit of a long story, but the tl;dr I'll try to offer is that it's desirable to be able to compose your whole program from smaller expressions that compose according to algebraic laws, and those algebraic laws don't hold if your expressions throw exceptions.
This isn't as bad as it may sound. Any expression that might throw an exception can be rewritten to one that doesn't, but instead returns a value of some sum type, generally of the form errorType | successType. errorType is also generally some type that obeys laws defined by the ApplicativeError or MonadError typeclasses, so you can deal with errors in nice algebraic ways, and so on.
The ultimate point is to be able to reason algebraically about your code. Admittedly, a vanishingly small number of programmers in the world care about this. But for those of us who do, exceptions are impermissible.
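As a small sketch of the sum-type idea in Coq (the names result and safe_div are made up for illustration here, not from any library; the stdlib's sum or option would serve just as well):

```coq
Require Import String Arith.
Open Scope string_scope.

(* A hand-rolled sum type playing the role of errorType | successType. *)
Inductive result (E A : Type) : Type :=
| Err : E -> result E A
| Ok  : A -> result E A.

Arguments Err {E A} _.
Arguments Ok  {E A} _.

(* Instead of throwing on m = 0, we return the error as an ordinary value. *)
Definition safe_div (n m : nat) : result string nat :=
  match m with
  | 0 => Err "division by zero"
  | _ => Ok (n / m)
  end.
```

Because safe_div is total, equational reasoning about expressions built from it remains valid in every context.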
Those are mostly reasonable, and I've most certainly seen people make similar changes to avoid using hypothesis names, but unfortunately I don't have too many tips myself. I'll comment a tiny bit, but otherwise 100% of the things I said before still apply.
Your unfld ltac does the same thing as (or rather, less than) try unfold es, so I would absolutely not create it.
Your map-length ltac could be replaced with rewrite hints: http://adam.chlipala.net/cpdt/html/LogicProg.html (search for "Rewrite Hints" on the page). You'll want to learn about hint dbs, if you haven't already. In fact, there is already such a db, list, in https://coq.inria.fr/distrib/current/stdlib/Coq.Lists.List.html. autorewrite with list in * should be enough here.
You can use a proof assistant such as Coq (or, as in the example below, a model finder like Alloy) for that purpose. Example:
assert Syllogism {
all Socrates: univ, Man, Mortal: set univ |
-- every man is mortal
Man in Mortal
-- Socrates is a man
and (Socrates in Man)
-- implies Socrates is mortal
implies Socrates in Mortal
}
check Syllogism
Most real-world examples, however, will send you crying for your mother because of their complexity and incomprehensible formalisms ...
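For comparison, here is one way the same syllogism might be stated and proved in Coq — a minimal sketch, with variable names of my own choosing:

```coq
Section Syllogism.
  Variable U : Type.                    (* a universe of individuals *)
  Variables Man Mortal : U -> Prop.     (* predicates over that universe *)
  Variable socrates : U.

  Theorem syllogism :
    (forall x, Man x -> Mortal x) ->    (* every man is mortal *)
    Man socrates ->                     (* Socrates is a man *)
    Mortal socrates.                    (* hence, Socrates is mortal *)
  Proof.
    intros every_man_mortal socrates_man.
    apply every_man_mortal.
    exact socrates_man.
  Qed.
End Syllogism.
```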
If you can use ssreflect's rewrite, you can chain them as explained there: https://coq.inria.fr/refman/proof-engine/ssreflect-proof-language.html#chained-rewrite-steps
Your rewrites could become rewrite !op_assoc -op_inv_l -!op_id_l in H (roughly; I'm too lazy to test).
I don't see the match. That is the point of Arguments firstn : simpl nomatch: to make sure that firstn is not unfolded unless it can be reduced. It is documented here
The stronger lemma indeed looks more useful.
Where are you getting list_max from, or how have you defined it? It's not in the standard library, so I guess you've defined it yourself; but the proof of this will depend on how you've defined it.
Generally, for questions like this, people can help you much more easily if you give a “minimal working example” — i.e. a stripped-down version of your file, including the necessary import statements and definitions, so that we can copy-paste that into a file and try it out ourselves, rather than having to guess all your definitions!
That said: assuming your list_max is defined by recursion on the list (or equivalently by fold-ing over the list with max), I can suggest two strategies:
a direct proof by induction on the list (which should work as both
a version formalising the "human proof" you suggest. For this, you'll need two lemmas corresponding to the facts you use in that proof: one giving the relationship between leb and forallb, and another giving the relationship between forallb and existsb.
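To make the assumption concrete, here is a minimal sketch of the direct induction strategy, assuming a fold-based list_max (the definition and the lemma name are hypothetical, not the asker's actual code):

```coq
Require Import List Arith.
Import ListNotations.

(* Hypothetical definition: list_max as a fold with max over the list. *)
Definition list_max (l : list nat) : nat :=
  fold_right max 0 l.

(* The stronger lemma: every element of l is bounded by list_max l. *)
Lemma list_max_ge : forall l x, In x l -> x <= list_max l.
Proof.
  induction l as [| a l IH]; simpl; intros x Hin.
  - contradiction.                       (* nothing is in the empty list *)
  - destruct Hin as [-> | Hin].
    + apply Nat.le_max_l.                (* x is the head *)
    + eapply Nat.le_trans.               (* x is somewhere in the tail *)
      * apply IH; exact Hin.
      * apply Nat.le_max_r.
Qed.
```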
Here's a website where you can find information about homotopy type theory: https://homotopytypetheory.org/ Here's an example of a proof assistant: https://coq.inria.fr/
But I don't have much of a clue what the best way to start is.
> However, as anarchists you resist control if it becomes a self-conscious entity why not just give it rights and respect?
Rights and respect are kind of not a concept in anarchism (:
There are tools for processing information (Coq, dedicated conlangs), but the problem is it's very hard to describe existing systems to a passable degree.
I'm confused. Coq's "built in" logic is constructive, not classical. You can implement classical logic in Coq (as is done in the standard library).
In general, quotient types are a bit more problematic than many other kinds of type constructions. There are two possibilities in Coq:
You could use setoids. A setoid is a type together with an equivalence relation, i.e., we form the quotients freely by simply carrying around the equivalence relation by which the type ought to be quotiened. There is a lot of machinery in Coq to support setoids.
The new and fancy way to do quotients is to use homotopy type theory. Quotients can be constructed as higher-inductive types: to quotient the type A by the equivalence relation R, we introduce a new type A/R which has a constructor q : A → A/R, and for all x, y : A and all r : R x y we add a path p x y r : q x = q y (in homotopy type theory, u = v is thought of as the type of paths from u to v). This construction should remind you not of quotients, but of homotopy quotients: we are not actually identifying q x and q y, we're connecting them with a path.
The HoTT approach can be seen in the HoTT library, while the setoids are part of the Coq standard library.
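As a small illustration of the setoid approach (the names Zpair and zeq are illustrative, not from the standard library): integers can be represented as pairs of naturals together with an equivalence relation that we carry around as an Equivalence instance, instead of actually quotienting the type:

```coq
Require Import Setoid Lia.

(* (a, b) represents the integer a - b; two pairs are "equal" exactly
   when a + d = c + b.  We never quotient — we carry the relation. *)
Definition Zpair := (nat * nat)%type.

Definition zeq (x y : Zpair) : Prop :=
  fst x + snd y = fst y + snd x.

(* The machinery Coq provides for setoids hangs off instances like this. *)
Instance zeq_equiv : Equivalence zeq.
Proof.
  split.
  - intros x. unfold zeq. reflexivity.
  - intros x y H. unfold zeq in *. lia.
  - intros x y z Hxy Hyz. unfold zeq in *. lia.
Qed.
```

With such an instance in place, rewriting with zeq via the generalized rewrite tactic becomes available.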
> So I'm curious whether you have an example you think that a human could hand optimize that a computer could not.
I don't think any such cases exist in principle, because a sufficiently smart compiler (there it is again!) could be as smart as a human, or even be a human. In practice, such cases are common whenever an optimisation can be done based on domain knowledge that is not embedded in the code written in the language, but transmitted out-of-band. For example, this C function for copying an array:
// dest and src may not overlap.
void copy(int *dest, int *src, int n) {
  for (int i = 0; i < n; i++) {
    dest[i] = src[i];
  }
}
The comment contains very useful information that a human could use to vectorise or parallelise the loop. It is hidden from the compiler, although the compiler might in this case generate two versions of the loop and check for aliasing of dest and src before picking the proper one (but this is still not as good as what the human did).
For purposes of automatic optimisation, it is important that our languages embed as much useful information as possible directly (which is often done by limiting the human programmer in ways the compiler can exploit — e.g. C's restrict qualifier regarding aliasing).
> Do you have any links I could take a look at where the descriptions of those important properties of 3rd party code is codified?
I'm not actually that experienced a Coq programmer, but we can look at for example the standard Bintree module. The documentation here seems to embed (part of?) the implementation, but you have lemmas that express properties without reference to internal details. My impression from the Coq I've done myself is that writing new lemmas often requires direct reference to implementation details, but I don't have enough experience with large-scale Coq to say whether that also applies for nontrivial programs.
If you like this game you might like interactive theorem provers like Coq and Isabelle.
Software Foundations is the go-to starting point for learning Coq nowadays, in particular the first book, Logical Foundations. I'm not sure how interesting you'll find the focus on programming languages, but the first few chapters (up to IndProp) should be enough to start playing independently with abstract algebra and number theory.
The 3-5 = 0 viewpoint is not that stupid; this is how nat subtraction in Coq works, and it's more useful than it might appear to an integer-trained eye. But this is no excuse to mislead students with bullshit.
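Concretely, Coq's subtraction on nat truncates at zero:

```coq
Require Import Arith.

(* Subtraction on nat is truncated: there are no negative results. *)
Example three_minus_five : 3 - 5 = 0.
Proof. reflexivity. Qed.

(* Which means subtraction doesn't always undo addition... *)
Example not_inverse : (3 - 5) + 5 = 5.
Proof. reflexivity. Qed.

(* ...though n <= m -> m - n + n = m still holds (Nat.sub_add). *)
```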
If we agree on the axioms and rules of inference etc., then probably I can prove lots of things in a way that you would accept. If we don't agree on the axioms and rules of inference, you will very likely not accept my argument as a proof. What counts as a proof depends on a system of logic etc.
For example, perhaps you would like: https://coq.inria.fr/
From what I know, Singer never even posed a rights-based argument for animal protection. Saying he started animal rights philosophy is confusing.
> It's not based on emotion, but logic
So where's the Coq file that implements that "purely logical" argument?</sarcasm> It's either preposterous to reduce ethical discourse to logic, or it just shows that one doesn't understand the domain of logic.
> it's almost universally accepted within academic philosophy
Calling bullshit here. Is there any data showing it's even more than a niche topic?
This looks really nice, congratulations! Could you also comment to point towards your work on my Bug 4451 -- Feature request: light syntax to select hypotheses by type rather than by name?
> Idris is supposed to be easier for writing programs than proofs, whereas for Agda it's the other way around
Agda is first and foremost a programming language so I wouldn't say it's supposed to be easier to write proofs rather than programs. Coq does present itself as a formal proof management system on the other hand.
> What are Coq tactics written in again?
A language called Ltac, which is integrated into Coq itself. You can find the definition here.
> I've tried to learn Coq a few times but I'm not a huge fan of coqide or proof general.
That's fair enough. Is it proof-general proper that you don't like, or the emacs environment?
This sort of reminds me of checking code in a proof assistant-style programming language like Coq. Proofs are done statically and you can do a good number of them just by unfolding/folding and simplifying to get down to reflexivity (an equality where both sides are the same, like they are here after the optimizer).
In a strange way, that's kind of what the optimizer is doing here (when interpreted as a test).
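For a tiny flavour of that in Coq (a sketch of my own, not from any particular development): after simpl, both sides of the equality become literally identical, and reflexivity closes the goal — much like the optimizer folding the code down to a trivially equal form:

```coq
Require Import List Arith.
Import ListNotations.

(* simpl computes the map and the additions away; what's left is
   [1; 2; 3] = [1; 2; 3], which reflexivity accepts. *)
Example fold_away : map (fun n => n + 0) [1; 2; 3] = [1; 2; 3].
Proof. simpl. reflexivity. Qed.
```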
> If I recall correctly, because you need it to write a total language implementation.
That may yet be (assumed to be) true, but IIRC, this seriously calls it into question. In any case, I don't know that there's an actual theoretical limitation. I do know a lot of people wave their magic "halting problem/Gödel's incompleteness theorems" wands without actually demonstrating a connection.
> Other than that, I'm concerned that you may be giving people the wrong impression about how a practical total language would work. I don't see that such a language would forbid partial functions, but rather require them to have special types like a -> Partial b. I.e., a practical total language is likely to be one where partiality is an opt-in feature rather than a pervasive default.
I did refer to the partiality monad elsewhere in the thread, but on the whole I think you're right. Thanks for cleaning up after me. :-)
Update: It may also be worth pointing out that, even in a total language like Coq's Gallina, you can <u>model</u> general recursion; you just can't execute it within Coq, which just means your Gallina program is no longer a proof of the proposition represented by its type according to Curry-Howard. Whether or not this matters to you is, of course, context-dependent.
> If you really want proven correctness types won't help you'd have to use something like Coq
Seems excessive that you're getting downvoted, but as an FYI: (from https://coq.inria.fr/a-short-introduction-to-coq)
> Properties, programs and proofs are formalized in the same language called the Calculus of Inductive Constructions (CIC). Then, all logical judgments in Coq are typing judgments: the very heart of Coq is in fact a type-checking algorithm.
Types in general don't prove things, but good luck doing it without types.
There are systems like Coq (https://coq.inria.fr/) that allow you to express human readable specifications and write a machine checkable proof that the code satisfies the specification. The code cannot get out of sync with the spec.
However, even if a spec is not written in a system like Coq, and hence could get out of sync with the code, I still believe there is value. In my opinion, it is almost always worthwhile to write an abstraction of what your code does, even without tests or any other guarantee that the code matches the spec. Of course, you have to be disciplined enough to update this spec when you change the code.
> If you assume that, then you have to assume every piece of open source and proprietary software ever is also backdoored.
If there is enough incentive for somebody and it is possible then it might happen, yes.
I'm asking in this post whether it is possible, and if not, why not. Given that I'm investing in the system, I believe it is a legitimate question to ask.
> Define.
By formally verified I mean a specification and a proof in a proof assistant like Isabelle or Coq.
The main use of dependent types is in theorem proving via the Curry-Howard Isomorphism. C++ is quite ill-suited for that. In contrast, a language like Coq allows you to develop fully certified programs, such as the CompCert C verified compiler.
This lecture is what I came across most recently that reminded me of it; also, earlier today, this Coq tutorial mentioned "intuitionistic logic" as a prerequisite.