Worth noting that GHC Haskell has:
Obligatory example:
12:32:31 a@link ~ ~/ghc-head/bin/ghci
GHCi, version 7.3.20110921: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :set -XConstraintKinds
Prelude> type Stringy a = (Read a, Show a)
Prelude> :i Stringy
type Stringy a = (Read a, Show a)  -- Defined at <interactive>:1:6
Prelude> :k Stringy
Stringy :: * -> Constraint
Prelude> :{
Prelude| data Bit = T
Prelude|          | F
Prelude|   deriving (Show, Eq)
Prelude| :}
Prelude> :t T
T :: Bit
Prelude> :k Bit
Bit :: *
Prelude> :i Bit
data Bit = T | F  -- Defined at <interactive>:1:6
instance Eq Bit  -- Defined at <interactive>:3:26
instance Show Bit  -- Defined at <interactive>:3:20
Prelude> :show bindings
type Stringy a = (Read a, Show a)
data Bit = T | F
instance Eq Bit
instance Show Bit
Prelude>
Here's my crappy implementation in Haskell:
user@server ~ $ ghci
GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> [ if x `mod` 15 == 0 then "FizzBuzz" else if x `mod` 3 == 0 then "Fizz" else if x `mod` 5 == 0 then "Buzz" else show x | x <- [1..1100] ]
You're not compiling with optimisation. Also you're not using unboxed vectors. To fix the former you only need to add -O to the ghc invocation (ghc might not be truly magical, but it's more magical than you think!). The latter requires changing the Data.Vector import to Data.Vector.Unboxed and adding a couple of type annotations (I had to specify the type of numvec, and the type of round used in main): http://hpaste.org/53543
These changes brought it down to 0.057s on the test image on my system :)
(Edit: To clarify what's going on, as I understand it, code using Data.Vector is hugely dependent on optimisation for performance: without it, every time you use a vector function you're actually constructing a whole new vector, which means a hell of a lot of copying data around. I think it's actually not so much GHC that's incredibly magical here as the inlining and fusion rules in the Data.Vector library. They basically do the equivalent of replacing map f . map g (where in order to compute the result you first need to compute an intermediate vector) with map (f . g), except in a far more general way.
In this case it seems that ghc -O also picks up on the (fft_CT xse) subexpression in fft_CT being reused, and other similar cases, so that it can avoid recomputing those values. Surprisingly you can't always depend on that happening, though. So if you want to make sure a computation's result is shared, bind it explicitly.
As for switching to unboxed vectors, that simply means that you're using arrays which actually directly contain their values, rather than pointers to heap objects containing the values. Unboxed vectors are faster than boxed vectors but can't be defined in terms of themselves (for instance, let v = cons 0 (generate 10 (\i -> 1 + (v ! i))) in v == fromList [0,1,2,3,4,5,6,7,8,9,10] can be evaluated with boxed vectors but not unboxed vectors).)
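To make the self-reference concrete, here is a minimal sketch (it assumes the vector package) of a boxed vector defined in terms of itself, exactly the thing unboxed vectors cannot do:

```haskell
import qualified Data.Vector as V

-- Boxed vectors store pointers to (possibly unevaluated) heap objects,
-- so a vector may lazily refer to its own earlier elements:
selfRef :: V.Vector Int
selfRef = V.cons 0 (V.generate 10 (\i -> 1 + selfRef V.! i))
-- V.toList selfRef == [0,1,2,3,4,5,6,7,8,9,10]
```

With Data.Vector.Unboxed the same definition would diverge, because every element has to be fully computed before it can be stored.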
As some commenters have said, the example here is a little silly. I think this page on the Haskell wiki, linked from the post above, gives a more compelling example, where `FormData Unvalidated` represents unvalidated data that comes from a user, and `FormData Validated` represents data that we know to be valid.
> This technique is perfect for e.g. escaping user input to a web application. We can ensure with zero overhead that the data is escaped once and only once everywhere that it needs to be, or else we get a compile-time error.
(If you don't understand the syntax, just say so and herds of type-system fanatics from /r/haskell will magically appear in this thread, to help you out.)
It's a lot like what's described in Joel Spolsky's "Making Wrong Code Look Wrong":
> All strings that come from the user must be stored in variables (or database columns) with a name starting with the prefix "us" (for Unsafe String). All strings that have been HTML encoded or which came from a known-safe location must be stored in variables with a name starting with the prefix "s" (for Safe string). [...] Every line of code can be inspected by itself, and if every line of code is correct, the entire body of code is correct.
The big difference is that Spolsky's plan establishes a convention and relies on programmers to follow it consistently to prevent bugs, whereas phantom types express that convention in a way the compiler can understand, and then the compiler can enforce it for you!
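A minimal sketch of how that looks in Haskell; the names (Tagged, UserInput, Escaped) and the toy escape function are made up for illustration:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- The 'tag' parameter is phantom: it appears only in the type, not the value.
newtype Tagged tag = Tagged String

data UserInput  -- Spolsky's "us" strings
data Escaped    -- Spolsky's "s" strings

fromUser :: String -> Tagged UserInput
fromUser = Tagged

-- The only way to obtain a Tagged Escaped is via escape, so forgetting to
-- escape (or escaping twice) becomes a compile-time type error.
escape :: Tagged UserInput -> Tagged Escaped
escape (Tagged s) = Tagged (concatMap esc s)
  where esc '<' = "&lt;"
        esc '>' = "&gt;"
        esc '&' = "&amp;"
        esc c   = [c]

render :: Tagged Escaped -> String
render (Tagged s) = s

-- render (fromUser "<b>")          -- rejected by the type checker
-- render (escape (fromUser "<b>")) -- "&lt;b&gt;"
```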
I bought myself a copy and have been skimming it. One thing I noticed is that this must be one of the last books to recommend the Haskell Platform, despite explaining the use of Stack in subsequent chapters:
> At this point you are probably feeling the need to try Haskell on your own computer. The first step for this is, of course, to have a working Haskell installation on your system. Haskell developers worried in the past about how to get people ready fast and easily.
>
> So, they created the Haskell Platform, a distribution containing the GHC compiler, the Cabal build and library system, and a comprehensive set of libraries. To get the Haskell Platform, go to http://www.haskell.org/platform/. Then, follow the steps corresponding to the operating system you will be using.
The book does a good job of being impartial about "Cabal and Stack", and in the corresponding subsection it very briefly describes the differences:
> A fair question to ask is what the differences between them are. In general, Stack is focused on having reproducible builds, whereas Cabal encompasses many more usage scenarios.
and then goes on to briefly describe Hackage and Stackage, concluding by punting on a verdict as to which tool you should use:
> If you are in doubt of which tool to use, don’t worry and start with any. As I discussed above, both share the same package description format, so changing from one to the other is fairly easy.
Later chapters walk you through performing the same tasks with either tool and even mention Cabal's modern `new-*` commands.
It happens more often than you'd think, which is why it's so handy that you can use Hoogle to search by type signature.
I don't know many, if any, serious Haskell programmers who would say or think or even sort of think that "monads are the fundamental method of abstraction." Most would probably reply that the sentence as formulated isn't even wrong so much as incoherent.
You also don't understand monads in Haskell very well. An instance of Monad in Haskell isn't necessarily a container type. Take, for example, the continuation monad:

newtype Cont r a = Cont {runCont :: (a -> r) -> r}

If you have a value in the `Cont r` monad, then it takes a value of type `a -> r` and returns a value of type `r`. You could argue that in some sense the continuation "contains" the `a`, but that would conceal more than it explains.
In fact, while both the monad-as-container analogy and the monad-as-computation analogy have strong limits, the latter is in some ways closer to the truth (see, for example: http://www.haskell.org/haskellwiki/Monads_as_computation)
Bartosz' C++ code is doing exactly what he claims. He's using monadic constructs to thread around state explicitly, in a way directly inspired by Haskell. And in so doing, he's produced a neat, understandable, way of writing a fairly complex template metaprogram.
And you're wrong: the return and bind functions (along with fmap) are precisely what a monad is, provided they obey the proper laws. In fact, the etymology of the term monad (or at least one etymology) isn't via philosophy or whatever, but from "monoidal triad", which is to say a triple or triad of functions (classically unit, join, and map, but swap join for bind and it still works) that obey a few laws.
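A quick sketch of why the two triples describe the same structure: given fmap, join and bind are interdefinable.

```haskell
import Control.Monad (join)

-- bind, defined from the classical (unit, join, map) triple:
bind' :: Monad m => m a -> (a -> m b) -> m b
bind' ma f = join (fmap f ma)

-- join, defined from the (return, bind) presentation:
join' :: Monad m => m (m a) -> m a
join' mma = mma >>= id
```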
I did this
$ git clone https://github.com/mightybyte/monad-challenges.git
$ cd monad-challenges
$ stack init
$ stack install
$ stack ghci
Configuring GHCi with the following packages: monad-challenges
GHCi, version 7.10.3: http://www.haskell.org/ghc/  :? for help
[1 of 1] Compiling MCPrelude
Ok, modules loaded: MCPrelude.
*MCPrelude MCPrelude>
Compile your program with the ghc option `-fwarn-incomplete-uni-patterns` and you'll get the warning "Pattern match(es) are non-exhaustive" on your definitions.
The docs say that incomplete pattern warnings are not "enabled by default because it can be a bit noisy, and it doesn't always indicate a bug in the program."
As already discussed, it's not actually a type error, and making it an error would disallow many valid pattern matches, for example:
if length xs == 1 then let [x] = xs in x else 0
The specifics of this example aside, it's impossible in general for the compiler to prove that the pattern match in such code is safe.
There are quite a few functions in the Haskell standard library that can trigger the equivalent of a pattern-match error at runtime. One example is the list function `head`, which you could imagine being implemented like this:
head (x:_) = x
This gives an error on an empty list. It's an example of a partial function. The Safe library defines a number of safe alternatives to partial functions in the standard library.
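For comparison, a total counterpart; this is essentially the headMay function from the Safe package:

```haskell
-- A total version of head: the empty-list case is pushed into the type,
-- so every caller is forced to handle it.
headMay :: [a] -> Maybe a
headMay []      = Nothing
headMay (x : _) = Just x
```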
The thing is that while inspiring, Bret's visualizations seem to be special purpose; every time you want something like this, you have to write a new visualization. Example: Did he cheat slightly on the platformer game? I could see the path of the main character, but I didn't see the path of the clouds in the background. (The turtle didn't move because he jumped on it.)
So, what I think is more important than any particular visualization are powerful tools that make it easy to write one yourself. If these things were easy to create, people would have done so already. That's why I'm working on functional reactive programming and my reactive-banana library.
My latest suggestion for a Summer of Code project aims to pick the low-hanging fruit from the other side. The idea is to put visuals and interactivity on top of the GHCi interpreter. It won't be the same as Bret's seamless demos, but it's completely generic and one step closer to the goal.
> goto
Available as a library. The true and proper way to scare people is to use full-blown continuations without actually needing to, though.
Well, if you have read 'Real World Haskell' (http://book.realworldhaskell.org/), then I would recommend watching the Casters' category theory YouTube videos so that you know the theory. If you know category theory, then read the papers on the Haskell wiki (http://www.haskell.org/haskellwiki/Research_papers). Do all of that and you will have a really solid background in the maths that surrounds Haskell.
Edit: To get all of the casters videos from youtube open a terminal:
$ cd /location/to/put/the/videos
$ wget 'massaioli.homelinux.com/~robert/casters.youtube'
$ parallel youtube-dl -qt {} < casters.youtube
N.B. The last step will take a long time. It's a 2.0GB download.
But then you will have all of the videos, and the casters.youtube file lists them in the order that they are supposed to be watched in. You will need the youtube-dl program and GNU parallel (http://www.gnu.org/software/parallel/) to make it work: on Ubuntu, 'sudo apt-get install youtube-dl'; parallel has no .deb, so you will have to install it manually, but it is easy, just a typical './configure && make && sudo make install'.
Edit2: Because I am feeling really nice, once you have downloaded all of the videos you can watch them using mplayer:
$ wget 'massaioli.homelinux.com/~robert/casters.list'
$ mplayer -fs -playlist casters.list
And then just sit back and enjoy.
P.S. If you don't trust me and think that I am trying to get you to download something that you shouldn't then just look inside the files; they are plain text. One is full of youtube URLs and the other is full of file-paths to the (now downloaded) videos. It's safe and I'm not trying to get you to run them anyway.
This works better on newer versions of GHC:
GHCi, version 7.8.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
>>> :t length . show
length . show :: Show a => a -> Int
>>> let foo = length . show
>>> :t foo
foo :: Show a => a -> Int
Why?
What you are running into is something called the MonomorphismRestriction. It exists because without it you can lose sharing whenever there is polymorphism in the result. It was an early compromise in the design of Haskell.
let foo = <insert some huge computation that takes an hour and spits out a number>

when used in two different contexts at two different types would compute the answer separately. Why? Because 'sharing' foo would otherwise actually mean sharing something of the form

foo :: Num a => a

That is, it's sharing the 'function from a dictionary to a' rather than the actual values.
On newer GHCs (7.8+) we turn on {-# LANGUAGE NoMonomorphismRestriction #-} at the REPL.
Xmonad is a fantastic Tiling Window Manager.
I spent the better part of a week trying all kind of tiling window managers and Xmonad just seemed to be the best to me. Awesome is, well, another awesome window manager. Try 'em all!
I have very little experience with Openbox, but it seems very nice. Daph gave some good info there.
> The operators |> ?>> |>> are good for interactive development in the GHCi shell, where you can add the next operation at the end of the line.
If you want to have those operators in your GHCi, simply put them into your `.ghci` file:
$ cat ~/.ghci
let (|>) x f = f x
let (|>>) x f = map f x
let (?>>) x f = filter f x
$ ghci
GHCi, version 7.8.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> [1..10] ?>> even |>> (+1)
[3,5,7,9,11]
>With it I can do most of what Haskell can do, semantically - but I do not have to worry that much about the type system, nor dribbling the lack of IO with monads.
The more you use Haskell, the more confident you'll become and the less you'll worry about the type system. Eventually you'll look to cause type errors on purpose in order to help you better understand your system. With a little help from tools and new features such as TypeHoles, you'll start to see the type system for the powerful tool that it is.
>If the code compiles, it works (almost all the time)! Contrast this with Ruby, for example, where the test code can be 2-3 times the size of the app, and even then, one is not quite sure if the runtime code is correct.
http://www.drdobbs.com/architecture-and-design/in-praise-of-haskell/240163246
>A lot of people experience a curious phenomenon when they start using Haskell: Once your code compiles it usually works. This does not seem to be the case for imperative languages
Not even a Haskell fanatic, but when they've explicitly put together a list of all commercial projects written in Haskell it's a little disappointing that you would write something so inflammatory.
Especially when a good portion of those are banks and HFT which are looking for high-reliability software.
What happens if you want to calculate `s 3`? You need to calculate `s 0`, `s 1`, and `s 2`. What do you need for `s 2`? You need to calculate `s 0` and `s 1` again. You end up with a ridiculously high number of evaluations.
What if you instead memoize the evaluated numbers?
s r = sList !! r

sList = map sWorker [0..]
  where sWorker 0 = 0.2  -- replace with your value
        sWorker r = (1 - p) * sum [g i * s (r - i) | i <- [1..r]]  -- call to s

This will be much faster, since every value needs to be evaluated only once and can then be retrieved from the list.
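The snippet above depends on p and g from the parent comment; here is the same memoize-through-a-list trick in a self-contained form, with Fibonacci as a stand-in:

```haskell
-- fib looks its answer up in fibList; fibList is built from fib.
-- Laziness ties the knot, so each value is computed at most once.
fib :: Int -> Integer
fib n = fibList !! n

fibList :: [Integer]
fibList = map fibWorker [0..]
  where fibWorker 0 = 0
        fibWorker 1 = 1
        fibWorker r = fib (r - 1) + fib (r - 2)
```

Without the list, fib 30 would redo the same subproblems exponentially many times, just like the s example.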
Can't speak for recursion as a whole, but a subset of it, tail calls, can be optimized away:
http://www.haskell.org/haskellwiki/Tail_recursion
edit:
Not all languages/platforms support TCO. Something to make note of. The HotSpot JVM does not. The Avian JVM does. Haskell definitely does.
Although it's not necessarily the best way to do it, you can also define an infinite list of prime numbers:
primes = 2 : 3 : ([5,7..] `minus` unionAll [[p*p, p*p+2*p..] | p <- tail primes])

-- minus and unionAll come from Data.List.Ordered in the data-ordlist package
EDIT: And don't forget that you can simulate an infinite list in most strict languages.
I don’t know why, but I had to double check it:
C:\…\…>ghci
GHCi, version 8.4.3: http://www.haskell.org/ghc/  :? for help
Prelude> let what _ _ _ = 21
Prelude> let to _ _ _ _ _ = 21
Prelude> let is:answer:the:life:and:everything:universe:_ = repeat ()
Prelude> let (™) = (+)
Prelude>
Prelude> what is the answer™ to life the universe and everything
42
Prelude>
The main things that `ghc` does to make the Prelude faster are rewrite rules. The classic example is the `fold/build` rule, which can sometimes fuse away intermediate lists.
You can find lots of examples of this if you go through the source code for GHC.Base and GHC.List and search for `RULES`. To learn more about how rewrite rules work, you can read this.
Complex like a game, a database, an IDE, a server, a compiler or a window manager?
You need to install Hoogle and then you can add
:def doc \x -> return $ ":!hoogle --info \"" ++ x ++ "\""
to your ~/.ghci file and use it like this:
~ % ghci
GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ghc-paths-0.1.0.9 ... linking ... done.
Prelude> :doc Maybe
data Maybe a

The Maybe type encapsulates an optional value. A value of type Maybe a either contains a value of type a (represented as Just a), or it is empty (represented as Nothing). Using Maybe is a good way to deal with errors or exceptional cases without resorting to drastic measures such as error.

The Maybe type is also a monad. It is a simple kind of error monad, where all errors are represented by Nothing. A richer error monad can be built using the Data.Either.Either type.

From package base
data Maybe a
I proposed this half a year ago, but few people seemed to care. I let it die silently, and every now and then I wondered whether I should have been more vocal about it. Anyway, here's the link:
http://www.haskell.org/pipermail/libraries/2013-December/021833.html
Nah, just post your code and complain how slow Haskell is somewhere dons can hear you :)
Seriously, the one sort of thing that seems to be easier to get wrong in Haskell is allocating too much memory by letting unevaluated thunks pile up. To get a handle on that, GHC has the best tools for heap profiling I've seen. Any binary can give you a basic breakdown of memory by constructor: http://www.haskell.org/ghc/docs/latest/html/users_guide/runtime-control.html#rts-profiling (if hp2ps graphs look familiar, that's because Julian Seward worked on GHC before writing valgrind). Compile for profiling and you can get a lot more detailed information: http://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#rts-options-heap-prof
For the rest, it doesn't seem too different from other languages. Think about locality, think about the number of indirections, glance at the compiler output for the bits you really care about, etc. GHC isn't quite as good at numeric loops as GCC, but if you use the same data structures it should be easy to get at least half as fast as C (I assume your idea of "as fast as C" doesn't involve tons of automatic vectorization and loop reblocking and stuff).
The hard part is that you can write working code without being forced to think about that stuff, so you have to remember you're still allowed to. On the other hand, it might take a bit more digging into the documentation to find where things like the memory layout of objects are described, again because it's not important aside from speed.
If you're looking for a general solution, I'm afraid it's impossible; however, in this case it has to do with the parametric type being passed as an argument. Since `Eq` makes no guarantees as to order (hence, not `Ord`), we have to look towards the implementation of the function handling the infinite structure.
My Haskell-fu isn't the strongest, but as I understand it, type declarations give no indication of strictness, though arguments can be made strict with the `BangPatterns` extension. I would suggest looking here for more info about strictness, as my knowledge of strictness in Haskell is at best incomplete and at worst flat-out wrong.
Purely functional languages. OCaml with ref is not pure in the strict sense, just as Lisp/Scheme/Racket are not pure. In contrast, Haskell is pure, which makes direct representations difficult but, thanks to laziness, possible; e.g., Tying the Knot and Representing Graph Data Structures in Haskell.
> Am I the only one confused/frustrated about this?
Nope. Problems with cabal install plague pretty much everyone, especially beginner/intermediate haskellers.
> Why is Haskell Platform so far behind?
Well, in fairness, GHC 6.12 is only 2 years old (12 June 2010 GHC 6.12.3 released), although most people will tell you that pre-GHC7 is ancient history. The Haskell ecosystem is moving ridiculously fast right now. The HP release with GHC 7.4.* was scheduled for a May release (if I'm not mistaken); I'm not sure where we're at with that but I'd expect an official HP with the latest GHC pretty soon.
The best advice I can give you is to frequent #haskell irc.
Alas, I couldn't find any official announcement yet...
However, the "notable changes since 0.9" wiki page might be useful to see what's new.
Some people were wondering about the status of Hackage 2.0 which (among other things) was supposed to include dependency metrics to help decide which packages are used how much. This is what a quick search revealed:
In April 2010 Matt Gruen asked for Feedback on a possible GSoC project Hackage 2.0. The corresponding Trac ticket has been closed as fixed three months ago (despite mentioning "Mentor: not-accepted") with a comment that, according to Matt, "the new Hackage server is pretty much code-complete, it just needs some effort to replace the existing Hackage instance".
Is more recent information on the status of Hackage 2.0 available?
My favorite example of how Haskell error messages are less friendly than those of other languages is this: suppose you meant to type `1 + 1`, but you missed the `+` key and accidentally typed `1 1` instead. What error message do you get?
Here's Python:
% python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 1
  File "<stdin>", line 1
    1 1
      ^
SyntaxError: invalid syntax
Now here's Haskell:
% ghci
GHCi, version 7.8.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> 1 1
<interactive>:2:1:
    Could not deduce (Num (a0 -> t))
      arising from the ambiguity check for ‘it’
    from the context (Num (a -> t), Num a)
      bound by the inferred type for ‘it’: (Num (a -> t), Num a) => t
      at <interactive>:2:1-4
    The type variable ‘a0’ is ambiguous
    When checking that ‘it’ has the inferred type
      ‘forall a t. (Num (a -> t), Num a) => t’
    Probable cause: the inferred type is ambiguous
Python tells you: "You typed something that makes no sense." Haskell instead tells you (simplifying a bit): "You're trying to call the number `1` as a function, passing it the number `1` as its argument, but you have not provided any definitions that allow me to treat numbers as functions that take numbers as arguments." Except that it doesn't put it as succinctly as I just did...
There's a common misconception that you can't have mutable data structures or variables in Haskell - you certainly can. http://www.haskell.org/haskellwiki/Mutable_variable describes both simulating mutability and using actual mutability inside the IO monad.
In short, dependently-typed languages allow terms to appear in types. Expanding on this idea, think of regular functions such as increment (written in an ML-like style):
fun (x:int): int = x + 1
As a mapping from terms to terms. In contrast, you can think of a polymorphic function, e.g., the polymorphic identity:
fun (x:'a): 'a = x
As a mapping from types to terms. That is, to use this function, you must first instantiate the type variable `'a` to an actual type, e.g., `int`. This produces a standard function which itself is a term.
Another possibility is a function mapping a type to a type. Haskell exposes type-level functions via its type families mechanism.
The final possibility is a function mapping a term to a type. These are called dependent function types because the function type depends on some term.
The canonical example of a dependently-typed function is a list parameterized by its length. For example:
["a", "b", "c", "d", "e"] :: list 5
The list containing five elements would have type `list 5` (for simplicity's sake, we'll make these lists range only over Strings). This is useful because now we can write types such as:
cons :: (n:int) -> (s:String) -> (l:list n) -> list n+1
That is, `cons` is a function that takes a list of size `n` and something to add onto the list, and produces a new list of length `n+1`. The benefit here is that type-checking ensures that the produced list has the appropriate length. Another example is appending two lists:
append :: (n1:int) -> (n2:int) -> (l1:list n1) -> (l2:list n2) -> list n1+n2
Where the type of append expresses precisely that the resulting list's length is the sum of the lengths of the two input lists.
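GHC is not dependently typed, but with GADTs and type-level naturals you can get surprisingly close to the list 5 example above; a sketch:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

-- A list indexed by its length, like "list n" above.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition, for the append example.
type family Plus (m :: Nat) (n :: Nat) :: Nat where
  Plus 'Z     n = n
  Plus ('S m) n = 'S (Plus m n)

-- The type guarantees the result is exactly one element longer.
vcons :: a -> Vec n a -> Vec ('S n) a
vcons = Cons

-- The type guarantees the result length is the sum of the input lengths.
vappend :: Vec m a -> Vec n a -> Vec (Plus m n) a
vappend Nil         ys = ys
vappend (Cons x xs) ys = Cons x (vappend xs ys)

vtoList :: Vec n a -> [a]
vtoList Nil         = []
vtoList (Cons x xs) = x : vtoList xs
```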
haskell.org's Hoogle searches a smaller set of packages by default. You can add additional packages to search using +packagename:
http://www.haskell.org/hoogle/?hoogle=parseJSON+%2Baeson
I don't know if there's a flag to make it search all Hackage packages, like the FPComplete one does. I think that's the sensible default, actually.
That sort of rigidity has more to do with having static types and a rigid language than with OOP per se. For example, I doubt it's as easy to search for API functions in Java as it is in Haskell.
This is really cool. I'm not surprised that phantom types are expressible in Scala, but I didn't realize that it was almost exactly the same in Java. What's also cool is that the phantom type can be bounded. Contrast with Haskell (or rather, with its primary implementation, GHC), which did not gain the ability to restrict the kind of type parameters in a comparable way until recently.
{-# LANGUAGE DataKinds, KindSignatures #-}

import Unsafe.Coerce

data FlightStatus = Flying | Landed

data Plane (status :: FlightStatus) = Plane { blah blah ... }

newPlane :: Plane Landed
newPlane = Plane { blah blah ... }

land :: Plane Flying -> Plane Landed
land = unsafeCoerce

takeOff :: Plane Landed -> Plane Flying
takeOff = unsafeCoerce
GHC 7.4.1, officially released in February of this year, was the first version to include the `DataKinds` extension.
http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html#id3015054
Meanwhile, Java has apparently had full support of bounded phantom types since Java 1.5 (released 2004). Quite amusing.
It's true that linked lists are hugely important in Haskell-land, but they're supposed to replace loops (according to the "code is data, data is code" principle). Hence, whereas loops are the bread and butter of the C family, lists are the bread and butter of Haskell.
Linked lists are not an all-purpose data structure for storing every kind of data you need to work with. Notably, list fusion doesn't help you a whole lot when you stash a bunch of data in a list and keep it around for random access. This guy is representing a 2D image as a linked list of linked lists, which is absurd in any language. I'm surprised it's not slower. Maybe he just didn't care to learn a serious vector library like Repa for code that's essentially a proof of concept.
Wasn't this posted earlier today?

Anyway, why don't you link to the release notes ( http://www.haskell.org/ghc/docs/7.2.1/html/users_guide/release-7-2-1.html ), which contain useful info, instead of that web2.0-social mumbo-jumbo?
I think with this reddit, haskell-cafe, haskell-beginners, #haskell on IRC and the Haskell tag on stack overflow, we're pretty well covered.
Ah, a variant on the classic Wat talk. Nice one. Maybe he should've tried Haskell:
GHCi, version 7.10.1: http://www.haskell.org/ghc/  :? for help
Prelude> [1, 2, 3] + 2

<interactive>:2:1:
    Non type-variable argument in the constraint: Num [t]
    (Use FlexibleContexts to permit this)
    When checking that ‘it’ has the inferred type
      it :: forall t. (Num t, Num [t]) => [t]
> because other than a few abstract frameworks, there are no Haskell programs.
False. Just one example: Torchlight 2 inventory editor written using a GUI library I maintain, all written in Haskell.
> So shut up or put up.
I am not fond of uncivilized discussion.
I notice that for a lot of people, when you search for “Cloud Haskell”, you get Jeff's prototype implementation, rather than the newer distributed-process which Duncan is talking about. See the Cloud Haskell wiki page for more details.
I came here to do exactly that.
The article mentions three styles:
What I would like to point out is that in Haskell, you can choose between all three of them, as you see fit. The article argues that the 3rd choice is best, and I tend to agree, but Haskell offers you free choice between all of these options. None of them is baked into the language; instead, you can implement each of them as a library.
This implementation will take the form of a monad. Basically, "monad" is just a fancy word for "you can redefine the semicolon `;`". You can redefine your control flow in any way you like. For instance, you can allow coroutines, or you can disallow them and choose blocking instead, and so on.
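A tiny example of "redefining the semicolon": in the Maybe monad, each step of a do-block either produces a value or aborts the rest of the block.

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

good :: Maybe Int
good = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv 8 4    -- Just 2
  pure (a + b)        -- Just 7

bad :: Maybe Int
bad = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing: the "semicolon" stops the computation here
  pure (a + b)        -- never reached; bad == Nothing
```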
If you want to redefine control flow, check out my operational library, it greatly simplifies the process. Have a look at the examples to see what is possible.
TL;DR: In Haskell, you can implement everything as a library (a monad, in this case); you don't need to rely on support from the language.
It's not detailed, but it's correct.
In my experience, when someone says they have trouble with arrays, what they really mean is that they have trouble managing mutability with monads in general, and arrays are the first case where they can't get by with persistent data structures. (For example, they aren't comfortable with MVars or STRefs either.) To solve that problem requires study and practice with monadic types to get the hang of it.
If you want to be quick and dirty, you can just work in IO and use IOArrays (boxed or unboxed) just like arrays in impure programming languages.
If you are comfortable with Data.Array in the small but find that you can't shoehorn your arrays into the rest of your program, this page will help you pinpoint why that is, and guide you to an appropriate alternative array library: http://nix-tips.blogspot.com/2011/03/how-to-choose-haskell-array-library.html
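For the quick-and-dirty IO route, a small sketch using IOUArray from the array package that ships with GHC:

```haskell
import Data.Array.IO

-- A mutable, unboxed array of Ints, used much like an array in an
-- impure language: allocate, write in a loop, read back.
sumSquares :: Int -> IO Int
sumSquares n = do
  arr <- newArray (1, n) 0 :: IO (IOUArray Int Int)
  mapM_ (\i -> writeArray arr i (i * i)) [1 .. n]
  xs <- getElems arr
  pure (sum xs)
```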
I'm not a Haskell programmer today, but almost every time I see an article or tutorial about this language, I feel there is too much introduction and insistence on how hard and different the language is, and not enough concrete things.
Normal coders don't learn a language just because it's awesome and different, but because it gets things done in an efficient and durable way. So I found it easier to read the samples on Rosetta Code or the IRC-bot sample than to read pages after pages of list manipulations without the first trace of a main.
withFile, withForeignPtr, etc. I think every language with closures quickly adopts that idiom, it's a powerful remedy for chronic headaches.
Note that the idiom alone doesn't guarantee safety, though: You can still return the handle from your action and mess things up. That can be fixed by quantifying the handle properly.
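A sketch of the idiom (the withLogFile name and file path are made up for illustration): bracket guarantees the release action runs on any exit path, which is essentially how withFile itself is built.

    import Control.Exception (bracket)
    import System.IO

    -- with-style resource management: acquire, use, release even on exceptions.
    -- (withLogFile is a hypothetical wrapper, not a standard function.)
    withLogFile :: FilePath -> (Handle -> IO a) -> IO a
    withLogFile path = bracket (openFile path AppendMode) hClose

    main :: IO ()
    main = withLogFile "/tmp/demo.log" $ \h ->
      hPutStrLn h "hello"   -- the handle is closed afterwards

Note that, as the comment above says, nothing here stops the action from returning the Handle itself; plugging that hole needs a rank-2 quantified type, as with runST's `s` parameter.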
Release notes for this series: http://www.haskell.org/ghc/docs/7.0.2/html/users_guide/release-7-0-1.html
Will be part of Haskell Platform 2011.1 due in a few days (so you can wait for those installers if you wish)
I can imagine that URef beats STRef, but what about URef versus Int when you can write your loop counter mutation as a recursion? I'm inclined to think the GHC inliner will work better with an Int, by simply removing all the box . unbox occurrences, to finally generate a perfectly unboxed Int which fits in a register.
On the other hand, the URef will always have an indirection, and I don't think the optimization pass can remove it, because that would change the semantics of URef, which can be modified by another thread.
A quick and totally irrelevant micro benchmark:
    uglyLoop :: Int -> Int
    uglyLoop n = go n 0
      where
        go 0 !c = c
        go n !c = go (n-1) (c+1)

    uglyLoop' :: Int -> Int
    uglyLoop' n' = runST $ do
        n <- newRef n' :: ST s (URef s Int)
        c <- newRef 0  :: ST s (URef s Int)
        whileM_ ((0 /=) <$> readRef n) $ do
            modifyRef n (subtract 1)
            modifyRef c (+1)
        readRef c
Gives:
    λ skolem ~ → ghci -fobject-code -O2
    GHCi, version 8.0.2: http://www.haskell.org/ghc/ :? for help
    Prelude> :l BenchGHC.hs
    Ok, modules loaded: Main (BenchGHC.o).
    Prelude Main> :set +s
    Prelude Main> uglyLoop' 100000000
    100000000
    (0.24 secs, 103,632 bytes)
    Prelude Main> uglyLoop 100000000
    100000000
    (0.06 secs, 98,800 bytes)
I don't know how relevant this is, because some other optimisation that favours the Int may be involved (and I may have missed something else).
>> They are all 100% convinced that Haskell is the only language worth writing code in.
> I'd say it's the most practical of the languages with powerful type systems, but if you need to do very low-level work, or interact with ecosystems that aren't easy to reach via a C API, then Haskell isn't the best tool
It seems that Haskellers, given a task that isn't well suited to Haskell, do it in Haskell anyways.
For example: need to do embedded programming? I'll just write a DSL in Haskell that generates well-behaved C code, and then program in Haskell using your DSL.
Need to write Javascript? That's fine, we'll compile Haskell to JS, or use Haskell's macro system to make writing JS less terrible.
Changes to the Haskell standard pretty much don't happen. It's a lot of work, with little perceived gain.
Changes to the Haskell language happen because people add features to GHC. Most of the interesting changes have papers that accompany them.
The Haskell community is great. But, we are probably not where we should be in terms of involvement with GHC. SPJ is currently trying to transition GHC from a vehicle for research to a community supported project. Which is great! But, I think it does show that Python was always a community supported project, and GHC has traditionally been an open-source research project.
That said -- there are community driven changes. We do often see library submission proposals:
http://www.haskell.org/haskellwiki/Library_submissions
And things like Functor, Applicative, Monad changes in GHC 7.10 have come out of the community,
http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal
So, in summary, yes, you are seeing a hole in the Haskell ecosystem. We really need to create more transparency, policy, and tools for contributing. I'm not sure if anyone has taken leadership on that though. IMO, it needs to be someone's full-time paid position.
Nice review Bryan!
BTW if anyone wants his book signed, come to ZuriHac 2014 this summer where Simon Marlow and also Edward Kmett will be giving talks.
The post is from April 2010 (2007 is his Haskell starting date). GHC 7.0.1 was released later that year, but you're right, had he used the functionality from then-HEAD he would probably have mentioned it.
This comes up periodically. See here for some old discussion: http://www.haskell.org/haskellwiki/Top_level_mutable_state
A more recent thread (from 2008) had what I think is a very cogent reply from dcoutts on this topic: http://www.haskell.org/pipermail/haskell-cafe/2008-August/046437.html
Ah, it's our good old monomorphism restriction in action. And defaulting, though I'm not sure why the default gets to be () (see the report for more info). In any case it's not particularly related to uncurry.
You can turn it off with:
> :set -XNoMonomorphismRestriction
I'm pretty sure there's also a command line option.
If you're not aware of the monomorphism restriction, check out the Haskell Wiki and Stack Overflow; there are multiple questions there about it.
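A minimal sketch of the restriction biting (a toy module of mine): without a type signature, f = show falls under the restriction and gets a single monomorphic, defaulted type, so using it at two different types fails; disabling the restriction keeps it polymorphic.

    {-# LANGUAGE NoMonomorphismRestriction #-}

    -- With the pragma, f keeps its inferred polymorphic type
    -- f :: Show a => a -> String; without it, this module won't compile.
    f = show

    main :: IO ()
    main = putStrLn (f (1 :: Int) ++ " " ++ f True)   -- prints "1 True"
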
You can manage to disable your debug output through sufficient laziness, and in a monadic context through optimisation, but that's not really a problem.
e.g.
foo = do trace "bar" (return ())
will be optimised out. All you have to care about is that the result of trace is actually needed, somewhere, and it will print the message.
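A small sketch of that demand-driven behaviour (toy values of mine): trace fires only when its result is actually forced.

    import Debug.Trace (trace)

    -- Only the first component is ever demanded, so only its message prints.
    demo :: Int
    demo = fst (trace "forced!" 1, trace "never forced" 2)

    main :: IO ()
    main = print demo   -- prints "forced!" (on stderr), then 1
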
It's usually described as "A refreshing desert in an oasis of referential transparency".
Have you accepted referential transparency into your heart? Ha ha
Seriously though, I'm not saying that it's impossible to explain monads. I'm just saying that it's quite hard, and I'm not a good enough teacher to pull it off. The best way I know to learn the concept is to actually write code that uses the concept.
I'm not even trying to make an argument - I'm just sharing my experience in learning functional programming. Because the web is full of bad monad tutorials, it helps that the Haskell monad page points to a bunch of pretty good ones written by people smarter than me. My main point is that simply reading these tutorials is usually not enough. Just like reading about calculus doesn't mean you know how to solve a differential equation - you have to work through the math and actually do it.
Question: have you worked with monadic code and decided you don't like that style of programming? Or have you just decided that it's impractical because it has such a steep learning curve? Either one of those is a perfectly reasonable opinion, although not one I share.
Did I mention my reactive-banana library and corresponding GUI examples already?
> being able to mark a specific variable as 'dynamic'. Like in C#. That variable can still be used as if it was statically typed with any other method and api.
Haskell has this and probably had it before C#. Research on integrating dynamic typing into a statically typed language dates to at least 1991, so it's an old idea.
If you want to reify types, you want to use some kind of metaprogramming language. Types are compile-time information and hence syntax. That's where macros shine (the most powerful of which you can get in Scheme, but even Haskell has Template Haskell).
You can use lucid instead, because it's a transformer already:
    $ stack build lucid
    $ stack exec ghci
    GHCi, version 8.2.2: http://www.haskell.org/ghc/ :? for help
    > :set -XOverloadedStrings -XExtendedDefaultRules
    > import Lucid
    > import Control.Monad.Trans.Reader
    > import Control.Monad.Trans
    > runReaderT (renderTextT (do p_ (do title <- lift ask; strong_ (toHtml title)))) "Hello, World!"
    "<p><strong>Hello, World!</strong></p>"
(Blaze's MarkupM a is not a transformer and therefore cannot transform another monad. Lucid's HtmlT m a can.)
I can't speak for the author of aeson, but here are my thoughts:
Exception e => Either e a says that "whatever exception type you choose, if there is a failure I will produce one of those". That's pretty hard to implement.
IsString e => Either e a is reasonable to implement, but the cost is type errors when the function is used in a polymorphic context, as GHC can't figure out the type:
    GHCi, version 8.0.2: http://www.haskell.org/ghc/ :? for help
    > let foo :: IsString s => Either s Int; foo = Right 5
    > either (const 0) id foo

    <interactive>:4:21: error:
        • Ambiguous type variable ‘b0’ arising from a use of ‘foo’
          prevents the constraint ‘(IsString b0)’ from being solved.
          Probable fix: use a type annotation to specify what ‘b0’ should be.
          These potential instances exist:
            instance a ~ Char => IsString [a] -- Defined in ‘Data.String’
            ...plus one instance involving out-of-scope types
            (use -fprint-potential-instances to see them all)
        • In the third argument of ‘either’, namely ‘foo’
          In the expression: either (const 0) id foo
          In an equation for ‘it’: it = either (const 0) id foo
There is always a conceptual cost to overly general code, and sometimes a technical one too.
We were able to get the armv7-linux-androideabi target for GHC working. But while building the aarch64-linux-android target, we got some compiler panics:
    ghc-stage1: panic! (the 'impossible' happened)
      (GHC version 8.3.20170530 for aarch64-unknown-linux-android):
            LlvmCodeGen.Ppr: Cross compiling without valid target info.

    Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug
tc-DTargetArch=\"aarch64\" -optc-DTargetOS=\"linux_android\" -optc-DTargetVendor=\"unknown\" -optc-DGhcUnregisterised=\"NO\" -optc-DGhcEnableTablesNextToCode=\"YES\" -static -eventlog -O0 -H64m -Wall -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen -Irts/dist/build/./autogen -O2 -Wnoncanonical-monad-instances -c rts/RtsUtils.c -o rts/dist/build/RtsUtils.l_o rts/ghc.mk:251: recipe for target 'rts/dist/build/StgStartup.o' failed
Any idea what's going on here?
I think there are two things here.
The first is: is the language capable of production work? The answer at this point is clearly yes (even if it is only used by a reasonably small number of shops, it is used, and at reasonable scale to validate that it's not falling down). There is a little disconnect between building a toy app and building something for real (for example, the type system cannot and is not intended to catch all bugs, so you should still be testing).
The second is does it have tooling / conventions that get developed over the course of doing production work. And I think in this case the answer is... not yet, really. As a simple example, the jury is still somewhat out as to how you actually run an application in production. Some people are betting on Nix, others on Docker, and other people are just building on production instances (which is a bad idea, due to the high ram usage of linking), or on machines running the same system as production (whether virtual or not-in-use production instances), and some people have figured out how to get it running on Heroku (but that solution seems to me worse than the problem!)... But all of this is more complicated because most web development has been in dynamic languages, where this is a non-issue (and Go side steps this by having completely static linking).
As for your last concern, there are definitely people you can throw money at. Here's a starting point: http://www.haskell.org/haskellwiki/Consultants
This is not about how the operator itself associates, but rather how GHC implements the deriving Generic mechanism. If I were to write the data declaration

    data T = A | B | C deriving Generic

GHC would generate a Generic representation corresponding to that data type which is roughly of the form a :+: b :+: c. At present, it appears to generate a :+: (b :+: c) (which you can verify by writing the above declaration in a file with the appropriate pragmas and then compiling it with -ddump-deriv), but the manual, as linked above, indicates that a user should not rely on a particular nesting being generated.
Among other things, this means that GHC could hypothetically change the way it implements deriving Generic, and consequently the serialization strategy presented here would produce a different encoding of the same value when run with a different version of the compiler.
The question being asked is not, "Is the nesting of these expressions reliably the same?", because the manual clearly indicates that you can't rely on a particular nesting. The question is, rather, "In light of the fact that the compiler can choose an arbitrary nesting, should the serialization example be revised to produce a consistent serialization regardless of how the expression is nested?"
Just +pipes in the search in hoogle would have yielded what you wanted. People run into this time and time again, I wonder if there isn't some better way to advertise how hoogle works.
I've come to prefer full-blown parsing libraries to regular expressions whenever I can get away with it.
For what it's worth, the example in the article (/^[A-Z]{5}-\d$/) would be written

    partnumber = Word(alphas, max=5) + '-' + Word(nums, max=1)

in Pyparsing. The nice thing is I can look at the above and it reads like plain English, unlike all but the simplest regular expressions.
cabal files were originally haskell files...
But we moved away from that, since it's much harder to statically analyze Turing-complete dependency and library specifications. Also, you'd have to execute arbitrary machine code to check the package when it's uploaded to Hackage...
This way lies madness.
For old time's sake: http://www.haskell.org/cabal/proposal/x138.html#AEN226
> Setup.lhs is an executable Haskell program which conforms to a particular specification
It's totally worth it. Functional programming in general will change the way you think about programming, and Haskell in particular will change it even more. Higher order functions are super useful in the real world (for example, callbacks and mapreduce are both instances of higher order functions). Static typing is super useful because you never have bugs introduced from implicit coercion. Haskell's lazy evaluation lets you work with infinite lists very easily, which in turn lets you deal with lots of problems in simpler ways than you might previously have tried.
Python is easy to code, but also easy to make mistakes in (not to mention slow and memory inefficient). Haskell is harder to code, but if your code compiles, it's almost certainly bug-free. It runs about as fast as C++, and can be just as memory efficient (admittedly, it's possible to mistakenly use a lot of memory due to the lazy evaluation if you structure your code poorly). Haskell is used in a nontrivial number of companies, particularly in the financial sector.
John Hughes was that "clever person" not mentioned in the "Improving Performance" chapter - hence DLists are often called Hughes Lists. I've seen comments (possibly from Lennart Augustsson here on Reddit) that Hughes Lists were common folklore before Hughes's "A Novel Representation of Lists" paper.
Incidentally there was a long thread on Haskell Cafe in 2010 regarding whether DLists take their antecedence from Prolog's difference lists or whether Prolog's difference list is a different thing (and Haskell's DList is mis-named). The thread starts here and continues into November: http://www.haskell.org/pipermail/haskell-cafe/2010-October/085757.html
I hope the supported Haskell subset gets larger soon; Fay looks very promising but it feels unsatisfying to me coding in Haskell without support for typeclasses (mostly for being able to use the standard typeclasses mentioned in the Typeclassopedia)
also very interesting is the take on this problem from the Haskell language community, presented e.g. by Simon Peyton Jones at the YOW! conference: The Future is Parallel, and the Future of Parallel is Declarative
I found The Typeclassopedia in TMR to be a great read when going over Applicative (and many other things); it leads you to cool observations like this. I think understanding Applicative itself was one of the coolest aha moments in Haskell for me.
Just so it's clear: Type level programming in Haskell is, with some existing extensions, Turing complete*. C++'s templated metaprogramming is also Turing complete. So you could have done the equivalent of C++ template metaprogramming in Haskell for a while now.
However, type-level numbers in Haskell were very annoying: they could only be represented in Peano form: Successor (Successor Zero) is 2, and so on. Don't even ask about floating point numbers! It was quite annoying. Now, with this feature, Haskell programmers can use regular-ol' numbers just like C++ can.
Peano numbers in Haskell: http://www.haskell.org/haskellwiki/Peano_numbers
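For comparison, the old Peano-style encoding looked roughly like this (a sketch of mine; the class and names are illustrative, not from any particular library):

    {-# LANGUAGE EmptyDataDecls, ScopedTypeVariables #-}

    data Proxy a = Proxy
    data Zero
    data Succ n

    -- Recover a runtime Int from a type-level Peano numeral.
    class Nat n where
      toInt :: Proxy n -> Int

    instance Nat Zero where
      toInt _ = 0

    instance Nat n => Nat (Succ n) where
      toInt _ = 1 + toInt (Proxy :: Proxy n)

    main :: IO ()
    main = print (toInt (Proxy :: Proxy (Succ (Succ Zero))))   -- prints 2

With TypeLits, the last line becomes something like natVal (Proxy :: Proxy 2), with no Succ towers in sight.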
I didn't know what else to link to, but I noticed this morning that it seems the merge went in, from looking at my cvs-ghc email.
Obligatory example:
    (vh-head)~HSENV > ghci -XDataKinds
    GHCi, version 7.5.20120408: http://www.haskell.org/ghc/ :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Prelude> import GHC.TypeLits
    Prelude GHC.TypeLits> let x = sing :: Sing 4096
    Prelude GHC.TypeLits> fromSing x
    4096
    Prelude GHC.TypeLits> let y = sing :: Sing "OMG"
    Prelude GHC.TypeLits> fromSing y
    "OMG"
    Prelude GHC.TypeLits>
I imagine the work is still rough right now. In particular, I'd say it would make more sense to have type literals behind their own flag rather than piggybacking on DataKinds, but that's just this man's opinion.
EDIT: I borked the title. I meant for it to say "GHC HEAD now has type level natural number/string literals."
EDIT 2, ELECTRIC BOOGALOO: Here is another mindless example I whipped up right fast:
    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE PolyKinds #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE ScopedTypeVariables #-}

    module Main (main) where

    import GHC.TypeLits

    data Proxy p = Proxy

    f :: SingI n => Proxy n -> SingRep n
    f (_ :: Proxy n) = fromSing (sing :: Sing n)

    main :: IO ()
    main = mapM_ putStrLn [show $ f n, show $ f s]
      where
        n = Proxy :: Proxy 4096
        s = Proxy :: Proxy "lol"
For anything but throwaway code, this is a bad idea: you'll be catching and hiding unintended error conditions (for example, keyboard interrupts), and you may introduce strictness where it's not intended.
The right way to do this depends on the function. For head, there's listToMaybe.
For array bounds checking, or more generally any evaluation depending on a specific condition, you can use something like the following:
    given :: a -> Bool -> Maybe a
    x `given` cond = if cond then Just x else Nothing

    lookupArray i arr = (arr ! i) `given` inRange (bounds arr) i
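Putting the pieces together, a runnable sketch (note that given must be backtick-quoted to be used infix, and that laziness means the out-of-bounds indexing thunk is never forced in the Nothing case):

    import Data.Array
    import Data.Maybe (listToMaybe)

    given :: a -> Bool -> Maybe a
    x `given` cond = if cond then Just x else Nothing

    lookupArray :: Ix i => i -> Array i e -> Maybe e
    lookupArray i arr = (arr ! i) `given` inRange (bounds arr) i

    main :: IO ()
    main = do
      let a = listArray (0, 2) "abc"
      print (listToMaybe "abc")   -- Just 'a', the safe head
      print (lookupArray 1 a)     -- Just 'b'
      print (lookupArray 5 a)     -- Nothing, no exception raised
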
Actions of the form
f >>= return . g >>= h
are equivalent to
f >>= h . g
due to the first monad law. So you can get away with the cleaner-looking:
main = getContents >>= putStr . processInput
A simplified example, which shows the input-output mechanism, and then a wee String -> String function sneaked into the middle there.
    Prelude> getLine >>= putStrLn
    foo
    foo
    Prelude> getLine >>= putStrLn . reverse
    foo
    oof
Ultimately you can replace the whole line with interact processInput, though.
Haskell solution:

    let set = "abcd"
    [(a,b,c,d) | a <- set, b <- set, c <- set, d <- set]
or
liftM4 (,,,) set set set set
See haskell list comprehension for the link between comprehensions and the specific list monad.
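The same enumeration can also be phrased with replicateM in the list monad, which yields lists instead of tuples (a small sketch of mine):

    import Control.Monad (liftM4, replicateM)

    set :: String
    set = "abcd"

    tuples :: [(Char, Char, Char, Char)]
    tuples = liftM4 (,,,) set set set set   -- all 4^4 combinations as tuples

    lists :: [String]
    lists = replicateM 4 set                -- the same combinations as strings

    main :: IO ()
    main = print (length tuples, length lists, head lists)   -- (256,256,"aaaa")
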
Gems is a package manager for Ruby. The Haskell equivalent would be Cabal. Note that Cabal is not specific to Yesod (just like RubyGems is not specific to Ruby on Rails).
Various ByteString types are best for memory usage if you have a lot of strings (you can store UTF-8 encoded data in these as well). The plain old String is unfortunately an alias for [Char], meaning a linked list of Chars, which is one of the stupidest ideas in the core Haskell library imho.
http://www.haskell.org/ghc/docs/7.2.1/html/libraries/bytestring-0.9.2.0/Data-ByteString-Char8.html
The first thing I think of when hearing the words "composability" and "Haskell" together has nothing to do with OOP: Lazy evaluation.
One example from these slides is:

    minimum = head . sort

In this example, let's assume sort is implemented like here. Until the end, only the "lesser" part gets evaluated, until we reach the first element. The complexity will be n + (1/2)n + (1/4)n + ... = 2n = O(n), instead of the O(n*log n) you'd pay for sorting the whole list and then taking only the first element. So, in Haskell, composing two functions can reduce the amount of computation you do, to get only what you need.
There are probably some better examples of that but I am, er... too lazy to find one :P.
> For a class, I once had to code in a new language which had only one compiler implementation, and that had this flaw. The most common way the compiler would "report" a bug in the program was to just drop into an infinite loop. When you finally gave up on it, of course you received no error message, let alone a line number.
Yeah, but was this because of a buggy, non-terminating implementation of a language with decidable compilation, or because the language was such that it is mathematically impossible to write a compiler that will terminate for all possible programs?
There are languages where the second holds. The two most mainstream examples I can recall:
I'm not sure I understand the example perfectly (I don't recognize the language. Is it Scala? Javascript?), but I think a language with a type system like Haskell will compile this. The type of args depends on the return value of f, which has some type f::A->B. args then has the polymorphic type B. The type of memoize is then
memoize :: (a->b) -> (a->b)
And the type of memoized is of course a->b
memoized :: a -> b
I know you can also get something like that to compile in C++, but it's a total pain in the ass. You'll never get it to compile in Java et al though.
Now, this gets more complicated when f takes multiple arguments, but it's still doable.
EDIT: More details.
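For what it's worth, here's a sketch of such a memoize in Haskell (names and design are mine); since caching is effectful, the memoized function lives in IO:

    import Data.IORef
    import qualified Data.Map as Map

    -- Memoize any function whose argument has an Ord instance.
    memoize :: Ord a => (a -> b) -> IO (a -> IO b)
    memoize f = do
      cache <- newIORef Map.empty
      return $ \x -> do
        m <- readIORef cache
        case Map.lookup x m of
          Just y  -> return y                     -- cache hit
          Nothing -> do
            let y = f x                           -- compute once
            modifyIORef cache (Map.insert x y)
            return y

    main :: IO ()
    main = do
      double <- memoize (* 2)
      r1 <- double (21 :: Int)
      r2 <- double 21        -- served from the cache this time
      print (r1, r2)         -- (42,42)

The type is exactly the memoize :: (a -> b) -> (a -> b) from above, modulo the IO wrapping that the mutable cache requires.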
>Well, obviously that was a bit of hyperbole, but I think it is fair to demand more than a few programs nobody has ever heard of before you start taking the language seriously.
The way I look at it is that there's enough non-trivial programs written in Haskell to demonstrate that the language is mature enough to write serious software in. Beyond that, whether to take the language seriously or not should really be based on whether the language provides features you find useful.
>And the original point that I was trying to reinforce was that people who like Haskell should be out there making those programs, rather than just endlessly talking about the language.
I don't see how these things are mutually exclusive, people are out there making programs in Haskell, but obviously there aren't as many people using Haskell as Java. Does this mean Java is a superior language?
>Also, last I heard Haskell is only theoretically good for concurrency, and in practice a lot of the magic that would make it good is just not there yet.
One advantage Haskell has is that your programs will at least be correct in a concurrent environment. In mainstream languages it's non trivial to even get to that point.
Your other wish is already fulfilled:
"The language extension -XNumDecimals allows you to also use the floating literal syntax for instances of Integral, and have values like (1.2e6 :: Num a => a)"
http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#num-decimals
I cannot help but feel that monads are made more complex than they actually are. I think the best way to think about them is to recognize that various kinds of wrappers can be very useful compared to working just with raw values, but that they can be bothersome to work with. Wrappers with a minimal, yet still small and easy to use interface, would therefore be nice to have.
A monad can be seen as a kind of "wrapper interface": for a given wrapper to be a monad, it must implement some specific functions for working with that wrapper, and those functions must obey some simple properties (properties that, amongst other things, mean that rewrites such as "m >>= return equals m" can be safely made, which is occasionally useful. I think a good example is do-notation, which is basically syntactic sugar for monads: do-notation relies on the monad properties to ensure that its semantics are consistent, meaning it can be safely used in place of the monad functions directly). It turns out that the monad "wrapper interface" is rather general, useful and relatively easy to work with, which is one reason it is popular. In Haskell, another "wrapper interface" is Applicative, which places fewer requirements on potential wrappers than Monad does, making it more general, but also giving less functionality to its users than Monad.
Mutation, certainly, causes great problems for FP.
Many important optimizations in Haskell are invalid if arbitrary mutation or IO can occur. For example, there's an optimization feature in GHC called rewrite rules, where library authors provide annotations like
{-# RULES "map/map" forall f g xs. map f (map g xs) = map (f.g) xs #-}
That rule removes the creation of an intermediate list, and is also completely unsafe if f and g are doing anything that isn't pure. The current state of the art for doing this sort of thing is generalized stream fusion which takes code that looks like it's working on lists, rephrases it in terms of streams, eliminates as many intermediate streams as possible, and ultimately compile high level code like
    dotproduct :: Vector Double -> Vector Double -> Double
    dotproduct v w = msum (mZipWith (*) v w)
into something that's faster than hand-optimized C compiled with GCC.
If you really want to marry FP with OO, you'd end up with something like OCaml or Haskell with an object system bolted on, and I'm not certain the second provides any real utility.
edit: expanded on a few things
You could have invented monads is a much-liked tutorial in the Haskell community.
Coming from a more theoretical point of view, the monad section of the Typeclassopedia is pretty good. It avoids talking about the "magic" of monads, and just lays them out as what they are.
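To make that concrete, the simplest nontrivial example of what monads "are" is Maybe: chaining failure-prone steps with do-notation (toy functions of mine):

    -- Division that fails cleanly instead of crashing on zero.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = do
      a <- safeDiv 10 2   -- Just 5
      b <- safeDiv a 0    -- Nothing: the remaining steps are skipped
      return (b + 1)

    main :: IO ()
    main = print calc     -- prints Nothing
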
Of the reasons given, stack thrashing was the least convincing. If I recall correctly, GHC implements a bit of hysteresis - I'm not taking the time now to find a proper reference, but this message from when stack chunks were being designed describes copying the top 1K or so of old stack into a new chunk. Then that sort of thrashing can only occur if the function actually allocates and frees more than the amount covered by the hysteresis. Perhaps rust programs frequently stack-allocate large data structures? As for the rest, I'm pretty sure rust programs call into C functions more often than most Haskell programs.
While not a physics engine, does a graphics engine count? (The stunts example uses bullet as a physics engine).
Ultimately it's up to the users. However, generally if something violates the laws it tends to not behave the way you expect. A great example is ListT, which does unexpected things because it violates the monad laws. Obviously it's out there and people use it, but they often tear their hair out when it breaks.
The reason category theory works so well is because it is like a universal language for every field, not just math and programming. This means that if you can formulate your thoughts in whatever concept you are interested in terms of categories, then you can then leverage your intuition to reason about anything that you can naturally transform it into. If you violate the laws, though, then your intuition doesn't match reality and things start behaving unexpectedly. This is why pipes are easier to use and more intuitive than other iteratee implementations, because they are structured along category theory idioms and they strictly obey the laws.
A great example of this is how I knew his implementation violated the category laws before I had even proven the counterexample. I've built up a completely non-mathematical intuition for how pipes work, but because there is a categorical translation of my ideas then I can reason about any pipe implementation mathematically by transforming my intuition directly into the mathematical equivalent. This natural transformation of my intuition into the mathematical description is why I know that any implementation that allows bidirectional information transfer will violate the identity laws and I can work backwards from the mathematical statement to generate a constructive proof of the violation (i.e. the counter-example). However, doing so requires defining a natural transformation from his implementation to my implementation and that is what takes up the bulk of the time for my disproofs.
Part of the problem with monads is that, as an early addition to the Haskell ecosystem, they got far more attention than they deserve. It's almost easier to wrap your mind around them if you bear in mind Eric Raymond's description of them as "a hack to force sequencing in an otherwise lazy environment." Take them off their CT pedestal, and they become much easier to play around with.
You may want to give Brent Yorgey's Typeclassopedia a thorough read. The scarier typeclasses in Haskell form a rough sort of continuum from Functor to Applicative to Monad to Arrow. It is easier to understand monads after you've grappled with functors and applicatives.
Another resource that really helped me was Simon Peyton Jones's paper "Tackling the Awkward Squad" (pdf), which allowed me to understand why monads were brought to Haskell in the first place.
Other than that, the only way to learn IO and monads is to play around with them a lot. Get used to the idea that the IO monad is Haskell's roach motel -- computations go in, but they never come out. This is on purpose, and it contributes to cleaner code. But it feels like a hassle until you're used to dealing with it.
Keep at it, and good luck!
EDIT: fixed link to SPJ paper...
Why get close when you can make it exact? ;-)
    crabgrass:~% ghci
    GHCi, version 7.3.20110927: http://www.haskell.org/ghc/ :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Loading package ffi-1.0 ... linking ... done.
    Prelude> :m + Data.Char
    Prelude Data.Char> let uppercase = isUpper; (?) = ($)
    Prelude Data.Char> uppercase? 'a'
    False
    Prelude Data.Char> uppercase? 'A'
    True
    Prelude Data.Char>
MPTCs not needed in the end. Turns out the story was that GHC's already quite-good rewrite rules for realToFrac weren't firing because of the (in my opinion somewhat strange) additional definition

    cFloatConv = realToFrac

and the subsequent use of cFloatConv everywhere instead of realToFrac.
Bottom is a catch-all term for all values that aren't actually real values, the most important being non-termination: a non-terminating function can be of any type, so any Haskell type includes bottom among its possible values. As writing
    head (x:_) = x
    head []    = head []
would be rather pointless, exceptions are yet another way to generate bottom values:
error :: String -> a
see, they are of truly any type.
(Exceptions are catchable, though, and compiled ghc code would throw one for that second head case... but when thinking in that way we're quite far indeed from pure, denotational thought and deeply in the dark and stormy topic of operational semantics and getting-stuff-done)
Conversely, "non-bottom", in the Haskell sense, means "Terminating and not an exception".
Many topics normally considered basic are left uncovered, such as tail recursion
jerf: Note that traditional tail recursion is usually a bad thing in Haskell due to laziness. You are more interested in definitions that are productively corecursive or guardedly recursive.
It doesn't matter that I could build the whole structure in one go with a series of nested calls that would be turned into a loop in a strict environment; instead I care how efficiently I can deconstruct the resulting structure or construct the next layer, as I'll (usually) be building it on demand as I inspect it.
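A tiny sketch of the difference: a guardedly (co)recursive definition puts the recursive call under a constructor, so a consumer can take exactly as much of the structure as it needs, on demand.

    -- Productive corecursion: each step yields a constructor immediately,
    -- so consumers like take can stop after inspecting a finite prefix.
    nats :: [Integer]
    nats = go 0
      where go n = n : go (n + 1)   -- recursive call guarded by (:)

    main :: IO ()
    main = print (take 5 nats)      -- prints [0,1,2,3,4]

A tail-recursive accumulator version of the same list would have to finish (it never would) before yielding anything, which is exactly the point being made above.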