I would say a function which is more than 5 lines long in Haskell (not including where blocks, or large matches with many alternatives, e.g. a syntax tree) is a code smell, and it's not something I usually see. More than four parameters is more common in Haskell, but if a function gets too big you're probably operating on the wrong level of abstraction. Haskell functions often have more parameters simply because arguments can't be implicitly passed in via mutable state (something Clean Code stupidly advocates).
As usual, Ubuntu users can use my GHC ppa to conveniently install binary packages specifically built for the currently non-EOL'ed Ubuntu releases
Installing is simply a matter of
```
sudo apt-add-repository ppa:hvr/ghc
sudo apt update
sudo apt install ghc-8.2.1-prof ghc-8.2.1-htmldocs cabal-install-2.0
```

and adding `/opt/ghc/bin` to your `$PATH`, and you should be ready to go. Please refer to the PPA description for additional information on how to manage multiple GHC versions installed side-by-side.
The design looks fantastic. One thought, though -
When I saw the "Try it" bit, my first intuition was to try to enter the "primes" example from the top right corner. Since it's been a while since I've played with ghci, and I'd forgotten its... peculiarities, it was a bit off-putting to find:
let
That's gotta be positively frustrating for a beginner. I feel like a free editing space with a "Run" button, like the one on coffeescript.org is a lot more user-friendly - to the extent that I actually regularly use that one to quickly test things out.
Obligatory example:
```
12:32:31 a@link ~ ~/ghc-head/bin/ghci
GHCi, version 7.3.20110921: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :set -XConstraintKinds
Prelude> type Stringy a = (Read a, Show a)
Prelude> :i Stringy
type Stringy a = (Read a, Show a) -- Defined at <interactive>:1:6
Prelude> :k Stringy
Stringy :: * -> Constraint
Prelude> :{
Prelude| data Bit = T
Prelude|          | F
Prelude|          deriving (Show, Eq)
Prelude| :}
Prelude> :t T
T :: Bit
Prelude> :k Bit
Bit :: *
Prelude> :i Bit
data Bit = T | F -- Defined at <interactive>:1:6
instance Eq Bit -- Defined at <interactive>:3:26
instance Show Bit -- Defined at <interactive>:3:20
Prelude> :show bindings
type Stringy a = (Read a, Show a)
data Bit = T | F
instance Eq Bit
instance Show Bit
Prelude>
```
I just want to say that I think I stated my point most clearly in this particular email, somewhat deeper in the thread. The whole thread is about precision and pedagogy, and I feel like the title I gave that thread in the context of /r/haskell misrepresents what I was trying to get at. It's about using the word "monad" in a precise and helpful way.
P.S. /u/alexander_b, I tend to think of the mailing list as a somewhat private place to test out and discuss ideas. I think it's great to share this general idea on /r/haskell where it is quite relevant, but if my name is going on it like that, I'd prefer to present my ideas on my own terms.
Clean Code is a bunch of folklore, blatantly obvious stuff, and flat out wrong advice from someone who really doesn't know what he's talking about.
Haskell's notation is similar to mathematics. I personally find it much easier to understand than longer variable names, because the structure of the code is more visible.
I'm more concerned with how data flows through the program, I don't care what the data is called, just how it's processed.
You're not compiling with optimisation. Also you're not using unboxed vectors. To fix the former you only need to add -O to the ghc invocation (ghc might not be truly magical, but it's more magical than you think!). The latter requires changing the Data.Vector import to Data.Vector.Unboxed and adding a couple of type annotations (I had to specify the type of numvec, and the type of round used in main): http://hpaste.org/53543
These changes brought it down to 0.057s on the test image on my system :)
(e: To clarify what's going on, as I understand it code using Data.Vector is hugely dependent on optimisation for performance: Without it, every time you use a vector function you're actually constructing a whole new vector. Which means a hell of a lot of copying data around. I think it's actually not so much ghc that's incredibly magical here but the inlining and fusion rules in the Data.Vector library. They basically do the equivalent of replacing map f . map g (where in order to compute the result you first need to compute an intermediate vector) with map (f . g), except in a far more general way.
In this case it seems that ghc -O also picks up on the (fft_CT xse) subexpression in fft_CT being reused, and other similar cases, so that it can avoid recomputing those values. Surprisingly you can't always depend on that happening, though. So if you want to make sure a computation's result is shared, bind it explicitly.
As for switching to unboxed vectors, that simply means that you're using arrays which actually directly contain their values, rather than pointers to heap objects containing the values. Unboxed vectors are faster than boxed vectors but can't be defined in terms of themselves (for instance, let v = cons 0 (generate 10 (\i -> 1 + (v ! i))) in v == fromList [0,1,2,3,4,5,6,7,8,9,10] can be evaluated with boxed vectors but not unboxed vectors).)
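To see the self-reference point in a dependency-free way, here is a minimal sketch using `Data.Array` from the GHC-bundled `array` package (swapped in for `Data.Vector` so it runs without extra packages); the knot-tying it shows is exactly what unboxed storage rules out:

```haskell
import Data.Array

-- A boxed array can be defined in terms of itself: each element is a
-- lazy thunk, so by the time 'selfRef ! i' is forced, the earlier
-- elements already exist. An unboxed array (UArray) must store plain
-- values directly, so the same definition would diverge.
selfRef :: Array Int Int
selfRef = listArray (0, 10) (0 : [1 + selfRef ! i | i <- [0 .. 9]])

main :: IO ()
main = print (elems selfRef)  -- [0,1,2,3,4,5,6,7,8,9,10]
```

The same trade-off applies to boxed versus unboxed vectors: laziness buys you self-reference, unboxing buys you speed.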
Don't use pacman to install stack (or any haskell libraries). It pulls in ghc and all the dependent libraries - and will update them frequently. https://www.archlinux.org/packages/community/x86_64/stack/ is terrifying. It's on release 101!
Just install using the instructions on haskellstack.org. I also use stack to install haskell executables.
As for the resource, u/lexi-lambda wrote https://lexi-lambda.github.io/blog/2018/02/10/an-opinionated-guide-to-haskell-in-2018/ which gives a lot of detail (but you did ask for comprehensive).
Just remove the explicit type signature from your `getName` function and you get the following general signature inferred by the compiler:

```
λ> :t view [l|name|]
view [l|name|] :: FieldOwner "name" a s => s -> a
```

which will work on any record that has a field "name". So you can use that signature instead.
Generally it is the area of Extensible Records. I expect that some work will be done in that direction.
I think R might be a better fit for what you want to do. As a language it's weird (based partially on Scheme, but with objects and a lot of annoying special cases thrown in over the years), but it has MUCH better support for exploring data, visualizing it, and doing statistics than other languages. You might also find the RStudio IDE helpful.
Read The Pragmatic Programmer. It has a bunch of good ideas, many of which are now integrated into common agile frameworks. I enjoyed it, in any case.
I have Code Complete 2 still sitting on my shelf. I should get around to it.
It's not Haskell, but I think you might find Elm pretty interesting. It has a similar flavor to Haskell but is geared towards the type of stuff that Processing is for, if I understand correctly. Give it a try! There are some nice demos on the website, IIRC.
I bought myself a copy and have been skimming it. One thing I noticed is that this must be one of the last books to recommend the Haskell Platform, despite explaining the use of Stack in subsequent chapters:
> At this point you are probably feeling the need to try Haskell on your own computer. The first step for this is, of course, to have a working Haskell installation on your system. Haskell developers worried in the past about how to get people ready fast and easily.
>
> So, they created the Haskell Platform, a distribution containing the GHC compiler, the Cabal build and library system, and a comprehensive set of libraries. To get the Haskell Platform, go to http://www.haskell.org/platform/. Then, follow the steps corresponding to the operating system you will be using.
The book does a good job of being impartial about "Cabal and Stack" and in the respective subsection very briefly describes the differences as
> A fair question to ask is what the differences between them are. In general, Stack is focused on having reproducible builds, whereas Cabal encompasses many more usage scenarios.
and then goes on to briefly describe Hackage and Stackage and concludes by punting on a verdict which tool you should use
> If you are in doubt of which tool to use, don’t worry and start with any. As I discussed above, both share the same package description format, so changing from one to the other is fairly easy.
Later chapters walk you through performing the same tasks with either tool and even mention Cabal's modern `new-*` commands.
Check out how Elm is doing it:
The type annotation is saying: [[ some type ]]
But I am inferring that the definition has the type: [[ some other type ]]
Incredibly clear and informative.
It happens more often than you'd think, which is why it's so handy that you can use Hoogle to search by type signature.
I can highly recommend Okasaki's book on data structures: https://www.amazon.com/Purely-Functional-Data-Structures-Okasaki/dp/0521663504, if you are looking for inspiration or techniques.
I would suggest discarding the "Clean Code" book entirely, since it is an inconsistent mess of "let's do OOP for the sake of OOP" with an emphasis on having mutable state and other stupid ideas.

Although those specific things you mentioned are probably not a bad idea. One-letter identifiers are usually used when more letters wouldn't gain us any real descriptiveness: `f` vs `func`, `x` vs `firstElementOfList`, and so on.
I did this:

```
$ git clone https://github.com/mightybyte/monad-challenges.git
$ cd monad-challenges
$ stack init
$ stack install
$ stack ghci
Configuring GHCi with the following packages: monad-challenges
GHCi, version 7.10.3: http://www.haskell.org/ghc/  :? for help
[1 of 1] Compiling MCPrelude
Ok, modules loaded: MCPrelude.
*MCPrelude MCPrelude>
```
Ur, which is reportedly very fast, is a language targeted at full-stack web work. It doesn't provide "full" dependent types, but it provides a few important cases of "types with dependency" (my terminology, not theirs) especially geared for the needs of integrated well-typed code all the way through the stack from DB to web.
http://www.impredicative.com/ur/
I don't think it has a lot of widespread usage, but I do know a significant portion of https://bazqux.com/ is written in it, and bazqux is a very fine google-reader like rss reader which works very efficiently.
(If I recall, the frontend of bazqux is all ur, and the backend process that does feed fetching is straight Haskell).
As /u/andrewthad said, there are methods to make Cabal have deterministic building and dependencies, and of course Stack does this by default. But in addition, Nix is designed for this. It's definitely much harder to learn how to use, but the determinism and capability is much better than any of the other options. It's designed to be a language agnostic build tool, which ends up making it a suitable package manager. This is where the complexity comes from; it's not just meant to be Haskell. I'd probably use stack for small things because it solves the problem quite simply. But for anything nontrivial, I'd immediately choose Nix because of how many guarantees and abilities it gives you.
The thing is that while inspiring, Bret's visualizations seem to be special purpose; every time you want something like this, you have to write a new visualization. Example: Did he cheat slightly on the platformer game? I could see the path of the main character, but I didn't see the path of the clouds in the background. (The turtle didn't move because he jumped on it.)
So, what I think is more important than any particular visualization are powerful tools that make it easy to write one yourself. If these things were easy to create, people would have done so already. That's why I'm working on functional reactive programming and my reactive-banana library.
My latest suggestion for a Summer of Code project aims to pick the low-hanging fruit from the other side. The idea is to put visuals and interactivity on top of the GHCi interpreter. It won't be the same as Bret's seamless demos, but it's completely generic and one step closer to the goal.
I feel the same way. "Read the types" people say, but that's often coming from people who already grok them. Coming from the other side is much harder. Hackage has a clunky documentation interface, IMHO. I wish it used something a little more common (like Markdown) and looked prettier. Do we have much of a presence on https://readthedocs.org/? Examples are key.
This works better on newer versions of GHC:
```
GHCi, version 7.8.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
>>> :t length . show
length . show :: Show a => a -> Int
>>> let foo = length . show
>>> :t foo
foo :: Show a => a -> Int
```
Why?
What you are running into is something called the MonomorphismRestriction. It exists because without it you can lose sharing whenever there is polymorphism in the result. It was an early compromise in the design of Haskell.
```
let foo = <insert some huge computation that takes an hour and spits out a number>
```

when used in two different contexts at two different types would compute the answer separately. Why? Because 'sharing' `foo` would otherwise actually be sharing something of the form

```
foo :: Num a => a
```

that is, sharing the 'function from a dictionary to `a`' rather than the actual values.
On newer GHCs (7.8+) we turn on `{-# LANGUAGE NoMonomorphismRestriction #-}` at the REPL.
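A minimal sketch of what the pragma buys you (the binding here is my own example): without `NoMonomorphismRestriction`, the pattern binding below would be pinned to a single type, and with only a `Show` constraint the module would not even compile.

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}

-- With the restriction on, this zero-argument binding would be
-- monomorphised at its first use site; with it off, GHC infers the
-- general type  f :: Show a => a -> Int, usable at several types.
f = length . show

main :: IO ()
main = print (f True + f (42 :: Int))  -- length "True" + length "42" = 6
```

The price, as described above, is that `f` is now a function from a `Show` dictionary, so its result is recomputed at each type rather than shared.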
AFAIR not that good. There is some tooling that may work (look for the Keera Studios blog posts), but basically it's compiling the thing as native code and then using the JNI with a small Java wrapper to bootstrap your app.

For simple things I would go down the GHCJS route (basically build a "static" webpage which gets displayed when the app is run) and use that webpage with something like http://jster.net/blog/tools-to-package-your-html5-app-for-mobile-devices or http://phonegap.com/ to display it on any device (there is something similar for iOS), so you get that for free.

You should not do fancy stuff with JavaScript (like games), but for a simple app that queries some data or just displays/calculates things this is fine.
Dependencies are flat, and semantic versioning is strictly and automatically enforced by the package manager -- as in the package manager actually type checks your package and diffs the public APIs to assert with high confidence that breaking changes actually do require a major version increment.
You can read some about it in the elm 0.14 release and this discussion.
Overall the site is great. But again I'm concerned about the downloads section. What are these packages? What am I downloading? They don't look like Platform builds.
We should be shipping the Platform from haskell.org. Then we can be sure of a standard environment for new users. If for some reason you think the package is inappropriate, fix it (e.g. upgrade).
This is the same point I raised the first time the site was demoed, and there was agreement to fix it.
--
I'm not the only one: https://news.ycombinator.com/item?id=9052616 -- we can't reinvent a common platform here. The download section needs to point at or reuse the HP content.
I'm working on a Yesod application and starting to implement some tests with HSpec's discovery testing.
If you do a `stack new webtests yesod-postgres`, you'll get a project template that has some testing facilities built in. The `withApp` function lets you do normal website stuff and run database queries, but since it loads the application configuration for each test, it adds about 0.2 seconds per test case. This is a drag. Is there an accepted way to have the database/application initialization done once, share that among the various things that need the information, and retain HSpec's test discovery?
EDIT: this SO post is making me think it isn't possible without some hackery :(
> The operators |> ?>> |>> are good for interactively development in the ghci shell where you can add the next operation at the end of the line.
If you want to have those operators in your GHCi, simply put them into your `.ghci` file:

```
$ cat ~/.ghci
let (|>) x f = f x
let (|>>) x f = map f x
let (?>>) x f = filter f x
$ ghci
GHCi, version 7.8.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> [1..10] ?>> even |>> (+1)
[3,5,7,9,11]
```
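For use outside GHCi, the same operators can live in an ordinary module. This is a sketch with the definitions copied from the `.ghci` snippet; it relies on the default `infixl 9` fixity, so a chain parses left to right:

```haskell
-- Pipeline operators: value on the left, function on the right.
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Map a function over a list, pipeline-style.
(|>>) :: [a] -> (a -> b) -> [b]
x |>> f = map f x

-- Filter a list with a predicate, pipeline-style.
(?>>) :: [a] -> (a -> Bool) -> [a]
x ?>> f = filter f x

main :: IO ()
main = print ([1 .. 10] ?>> even |>> (+ 1))  -- [3,5,7,9,11]
```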
> To be honest, I'm not entirely sure why this hasn't been done before.
Because it's a lot harder than we think.
Disclaimer: I'm not a data scientist, but I work with a lot of them. I have therefore been in a position to see the R vs. Python wars from the outside, so to speak. And I can tell you that even with all the underlying advantages going for it, including its massive community, Python is only now getting to the point where it can seriously compete with R in this area.
The Python infrastructure for data scientists is now massive, yet still not as unified as that of R. That said, tools like Anaconda are now making it possible even for less technically inclined scientists to install and maintain their own Python data analysis stack, including:
In short, it's getting to the point where it's becoming conceivable to use Python as a viable replacement for R (or Mathematica) for data analysis.
I'd love to see Haskell getting to that point, but it'll be a long road. For one thing, we don't have a community the size of Python's, especially not in data science.
PS: Anyone who is … aware enough of PLT to be reading /r/haskell and yet who still uses R should read the following paper:
http://r.cs.purdue.edu/pub/ecoop12.pdf
Once you read and understand that, you ought never to want to touch R again. If the authors are right (and I see no reason to doubt them), programming in R should be considered positively hazardous. And we probably ought to re-evaluate the level of trust we put in any data produced by R.
>With it I can do most of what Haskell can do, semantically - but I do not have to worry that much about the type system, nor dribbling the lack of IO with monads.
The more you use Haskell, the more confident you'll become and the less you'll worry about the type system. Eventually you'll look to cause type errors on purpose in order to help you better understand your system. With a little help from tools and new features such as TypeHoles, you'll start to see the type system for the powerful tool that it is.
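As a small illustration of the typed-holes workflow mentioned above (the function and names here are made up for the example):

```haskell
-- While developing, you can leave the body as a typed hole:
--   safeDiv x y = _
-- and GHC replies with the hole's type and the relevant bindings in
-- scope, which guides you to the implementation. The finished version:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = print (safeDiv 10 2, safeDiv 1 0)  -- (Just 5,Nothing)
```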
I am writing a book introducing people to Agda programming in the "Learn you a" style. I have an illustrator working on illustrations for it as well. While it's still very early in development, my github page will have updates soon.
Michael Snoyman mentioned this on the mailing list (https://groups.google.com/forum/#!topic/haskell-stack/8HJ6DHAinU0), and it did the trick for me:

```
cabal configure --package-db=clear --package-db=global --package-db=$(stack path --snapshot-pkg-db) --package-db=$(stack path --local-pkg-db)
```
>If the code compiles, it works (almost all the time)! Contrast this with Ruby, for example, where the test code can be 2-3 times the size of the app, and even then, one is not quite sure if the runtime code is correct.
http://www.drdobbs.com/architecture-and-design/in-praise-of-haskell/240163246
>A lot of people experience a curious phenomenon when they start using Haskell: Once your code compiles it usually works. This does not seem to be the case for imperative languages
What happens if you want to calculate `s 3`? You need to calculate `s 0`, `s 1`, and `s 2`. What do you need for `s 2`? You need to calculate `s 0` and `s 1` again. You end up with a ridiculously high number of evaluations.
What if you instead memoize the evaluated numbers?
```
s r = sList !! r

sList = map sWorker [0..]
  where
    sWorker 0 = 0.2  -- replace with your value
    sWorker r = (1 - p) * sum [g i * (s (r - i)) | i <- [1..r]]  -- call to s
```

This will be much faster, since every value needs to be evaluated only once and can then be retrieved from the list.
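Here is the same idea as a self-contained, runnable sketch; `p` and `g` are placeholder values I made up so the program compiles, substitute your own:

```haskell
-- Placeholder parameters (assumptions for this sketch).
p :: Double
p = 0.5

g :: Int -> Double
g i = 0.5 ^ i

-- Look up the memoised value.
s :: Int -> Double
s r = sList !! r

-- Top-level binding: computed once, each element evaluated at most
-- once and then retrieved from the list.
sList :: [Double]
sList = map sWorker [0 ..]
  where
    sWorker 0 = 0.2
    sWorker r = (1 - p) * sum [g i * s (r - i) | i <- [1 .. r]]

main :: IO ()
main = print (map s [0 .. 3])
```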
Origin stories
"We all went into the woods with pens and pads of paper and ate hallucinogenic berries we found to come up with ideas. Someone suggested two equal signs followed by a greater than sign and we all burst into uncontrollable laughter."
I don’t know why, but I had to double check it:
```
C:\…\…>ghci
GHCi, version 8.4.3: http://www.haskell.org/ghc/  :? for help
Prelude> let what _ _ _ = 21
Prelude> let to _ _ _ _ _ = 21
Prelude> let is:answer:the:life:and:everything:universe:_ = repeat ()
Prelude> let (™) = (+)
Prelude>
Prelude> what is the answer™ to life the universe and everything
42
Prelude>
```
In my utopian view, packages are hosted on IPFS, a permanent peer-to-peer web, where a piece of data is only deleted if there are no 'seeders' anymore. Every piece of data can be versioned like in git, and is immutable. In other words, make it theoretically impossible to delete something once it's out. I think it would be perfect for package distribution: https://ipfs.io/

Edit: wow, apparently it exists already: https://github.com/whyrusleeping/gx
Lens is a great example of the kind of docs I love. However, standard Haskell libs seem to be somewhat lacking.
For example, take a look at Haskell's Data.List: https://hackage.haskell.org/package/base-4.8.1.0/docs/Data-List.html . For people familiar with FP, all of these definitions might be straightforward and understandable, but for people coming from an OO background, examples would explain a lot.
If you'd like to see a good example of this, I'd suggest you take a look at Elixir's list docs: http://elixir-lang.org/docs/stable/elixir/List.html .
This is exactly what makes Elixir so accessible IMO, and makes it harder for Haskell to get accepted by the masses. People read the explanation, don't understand the concepts mentioned, and need to start googling. A simple example sometimes makes way more sense.
You need to install Hoogle and then you can add

```
:def doc \x -> return $ ":!hoogle --info \"" ++ x ++ "\""
```

to your `~/.ghci` file and use it like this:

```
~ % ghci
GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ghc-paths-0.1.0.9 ... linking ... done.
Prelude> :doc Maybe
data Maybe a

The Maybe type encapsulates an optional value. A value of type Maybe a
either contains a value of type a (represented as Just a), or it is
empty (represented as Nothing). Using Maybe is a good way to deal with
errors or exceptional cases without resorting to drastic measures such
as error.

The Maybe type is also a monad. It is a simple kind of error monad,
where all errors are represented by Nothing. A richer error monad can
be built using the Data.Either.Either type.

From package base
data Maybe a
```
that's just nonsense. a successful timing attack gives you information disclosure about the keys. that's not as much as heartbleed, but it's very close: you'll have to record connections as well.
openssl has no stellar record and there is much to say about it, but it tries to prevent timing attacks.
a naively written haskell implementation will be wide open to easy side-channel attacks that cannot be fixed easily. there is `securemem`, but that will only eliminate the simple comparisons. i am very eager to get to know if (and what) `tls` does to eliminate side-channel attacks. i am pretty sure that the branching attacks (practical with an attacker on the same host machine in case of virtualization) can't be eliminated in haskell w/o ffi. i hope i am wrong though.

btw, that's one of the things why `cryptol` by galois is interesting. one spec will generate a haskell implementation as well as a c (and fpga) implementation. i am sure that can be done sensibly without introducing side-channels. i have absolutely no idea whether that is the case though.
edit: oh, i see. vincent answered on the hackernews discussion
It's a bit shocking to me to be honest. The Go code is all reasonable, there's no benchmark-hacking in it. Yet even then, the fastest Haskell implementation, Yesod/MySQL, would require four times the hardware to handle the same load. If Postgres were a hard requirement, Yesod would require seven times the hardware.
Let's say you have two 60GB Linode instances for your Postgres master and slave, and four 4GB servers for application-level stuff, written in Go. Your annual bill is $6,720. If you rewrite in Haskell/Servant, you'll need five times the number of application servers, and so your annual bill will go up to $10,560.
Relative to developer time, particularly for debugging, it's maybe not awful, but it still makes Haskell a hard sell.
I proposed this half a year ago, but few people seemed to care. I let it die silently, and every now and then I thought whether I should have been more vocal about it. Anyway, here's the link:
http://www.haskell.org/pipermail/libraries/2013-December/021833.html
I tried to decide some time ago. In the end my preference goes to Yesod. The advantages are tiny and mostly subjective; here are the reasons:

Hamlet is almost Haml, Cassius is almost Sass, Lucius is almost SCSS, and Julius is JavaScript with type safety added. And type safety is good.

Yesod is almost at its 1.0; as of now, yesod 0.9.2 is more a release candidate for the 1.0.
Contrary to the examples given in the documentation, most of the time you don't use quasiquotation.
Greg Weber (one of the main Yesod contributors) was the first to give a way to deploy Yesod (and more generally Haskell) to Heroku.
Widget is a very clever idea I never saw anywhere before.
From what I understand, you can use many parts of the Yesod web framework inside Snap, and conversely.

Also, from some benchmarks, it seems the standard way of deploying Yesod is a bit faster than the standard way of deploying Snap. But I also believe you can use each deployment method with both Snap and Yesod.
I haven't looked at Snap or Happstack for some time now. But to be short:
You should read the introduction of the yesod book.
Also, here is an example that proves Yesod rocks. Recently, somebody posted a troll article, "node.js is cancer", giving an example with a Fibonacci function. Somebody answered that Haskell might be the cure and used Snap to demonstrate the point (http://mathias-biilmann.net/posts/2011/10/is-haskell-the-cure). Here is the equivalent solution using Yesod:
http://gist.github.com/1261882
Its behaviour is exactly as expected. Only the first access is slow. Once the Fibonacci value is calculated, the answer is cached and served extremely fast.
As the code is minimal I used quasi quotes.
Also, w.r.t. `pipes`/`conduit`, I wrote up an SO answer a while back on that topic: What is pipes/conduit trying to solve?
If you're looking for a general solution, I'm afraid it's impossible; however, in this case it has to do with the parametric type being passed as an argument. Since `Eq` makes no guarantees as to order (hence, not `Ord`), we have to look towards the implementation of the function handling the infinite structure.

My Haskell-fu isn't the strongest, but as I understand it, type declarations give no indication of strictness, though arguments can be made strict with the `BangPatterns` pragma. I would suggest looking here for more info about strictness, as my knowledge of strictness in Haskell is at best incomplete and at worst flat out wrong.
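A minimal `BangPatterns` sketch (my own illustration, not from the thread): a strict accumulator in a hand-rolled left fold, so no chain of thunks builds up on large inputs.

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang on 'acc' forces it at every step, keeping the accumulator
-- evaluated instead of accumulating (+)-thunks a million deep.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```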
Nix has some really great features for building Docker images. There is a good description of how to build small Docker images for Haskell applications here. One thing not discussed there is the `fromImage` argument that you can pass to `buildImage` to allow you to make layered images. More details can be found in the docker tools section of the nixpkgs manual.
Building the image will only work with Nix on Linux (not macOS), but the docker images will work anywhere of course.
While NixOS doesn't implement all the stuff you've been talking about, I think NixOS is such a huge improvement over what you see elsewhere in OS design that anyone who wants to theorize about how to improve OSes better have used NixOS in anger. This is analogous to the fact that, although Haskell is an incomplete implementation of FP ideas compared to, say, Idris, it has such a mature and developed ecosystem that by working with it you'll get a better perspective of the challenges and tradeoffs in language design (in this analogy, MS Windows would be COBOL, macOS would be Pascal, Debian would be C, Gentoo would be C++, Arch would be Java, NixOS would be Haskell, and Genode/seL4 would be Idris).
It may not look like it, but NixOS is surprisingly production-ready (Awake Security ships a product running it, IOHK runs it on the Cardano core nodes, etc.), and the community around it is fantastic (especially `#nixos` on FreeNode).
BTW, if you want more exposure to NixOS's ideas (beyond the advertising copy on the website), you might be interested in the PhD thesis in which Eelco Dolstra originally described it.
One small side note that might be of interest for people reading this article. I used the silver searcher for a while, but I recently discovered another tool called ripgrep which is much better. It's faster, and it doesn't have the issues with gitignore behavior that have plagued the silver searcher.
> Am I the only one confused/frustraited about this?
Nope. Problems with cabal install plague pretty much everyone, especially beginner/intermediate haskellers.
> Why is Haskell Platform so far behind?
Well, in fairness, GHC 6.12 is only 2 years old (12 June 2010 GHC 6.12.3 released), although most people will tell you that pre-GHC7 is ancient history. The Haskell ecosystem is moving ridiculously fast right now. The HP release with GHC 7.4.* was scheduled for a May release (if I'm not mistaken); I'm not sure where we're at with that but I'd expect an official HP with the latest GHC pretty soon.
The best advice I can give you is to frequent #haskell irc.
Alas, I couldn't find any official announcement yet...
However, the notable changes since 0.9 wiki page might be useful to see what's new.
Some people were wondering about the status of Hackage 2.0 which (among other things) was supposed to include dependency metrics to help decide which packages are used how much. This is what a quick search revealed:
In April 2010 Matt Gruen asked for Feedback on a possible GSoC project Hackage 2.0. The corresponding Trac ticket has been closed as fixed three months ago (despite mentioning "Mentor: not-accepted") with a comment that, according to Matt, "the new Hackage server is pretty much code-complete, it just needs some effort to replace the existing Hackage instance".
Is more recent information about the status of Hackage 2.0 available?
This is kind of a cop-out, but practical: Until you figure out how to get the binary size down using GHC configuration, you can always use upx. It typically cuts down GHC-generated executable sizes by about 80%.
My favorite example of how Haskell error messages are less friendly than other languages' is this: suppose you meant to type `1 + 1`, but you missed the `+` key and accidentally typed `1 1` instead. What error message do you get?
Here's Python:
```
% python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 1
  File "<stdin>", line 1
    1 1
      ^
SyntaxError: invalid syntax
```
Now here's Haskell:
% ghci GHCi, version 7.8.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Prelude> 1 1
<interactive>:2:1:
    Could not deduce (Num (a0 -> t))
      arising from the ambiguity check for ‘it’
    from the context (Num (a -> t), Num a)
      bound by the inferred type for ‘it’:
                 (Num (a -> t), Num a) => t
      at <interactive>:2:1-4
    The type variable ‘a0’ is ambiguous
    When checking that ‘it’ has the inferred type
      ‘forall a t. (Num (a -> t), Num a) => t’
    Probable cause: the inferred type is ambiguous
Python tells you: "You typed something that makes no sense." Haskell instead tells you (simplifying a bit): "You're trying to call the number 1 as a function, passing it the number 1 as its argument, but you have not provided any definitions that allow me to treat numbers as functions that take numbers as arguments." Except that it doesn't tell it as succinctly as I just did...
There's a common misconception that you can't have mutable data structures or variables in Haskell - you certainly can. http://www.haskell.org/haskellwiki/Mutable_variable describes both simulating mutability and using actual mutation inside the IO monad.
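For instance, a minimal sketch of the IO flavour using the standard Data.IORef API (the counter is just a made-up example):

```haskell
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)  -- allocate a mutable cell
  modifyIORef counter (+ 1)       -- mutate it in place
  modifyIORef counter (+ 41)
  readIORef counter >>= print     -- prints 42
```

The ST monad gives you the same thing (STRef) without dragging in all of IO, if the mutation is purely local.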
Bazel only supports a few languages.
For the languages it does support, you get great granularity of build products, and Bazel is a great tool in this case. However, I would say Nixpkgs has broader support, with around 28 languages.
But nix isn't aware of the underlying language, so the granularity of build artifacts is up to you. An artifact generally encompasses a full vertical slice (compilation + linking, in C for example).
This is an example of the web site that is helpful both for newcomers and current users.
It has package search right there. I use it all the time. For Haskell, though, I always have to go to a dozen different web sites: Hoogle, Hayoo, FP Complete.
It has installation instructions. I use them from time to time when I need to set up a new machine. Because I forget things.
It has prominent news section right in the middle.
It has easy access to forums. For questions and general discussions on Haskell I again have to go to a dozen different places: reddit, Google Groups, Stack Overflow, etc.
It has the best and most useful wiki I have ever seen. Even users of other Linux distros routinely refer to it. For Haskell know-how I again have to scour the internet: LYAH, Real World Haskell, wikibooks, Stack Overflow, Google Groups, various blogs.
Always a good way to spend half an hour, listening to SPJ, and interesting to hear of his involvement in computing in schools.
When I was at a UK school (in the 80s) there were two parts of the school which contained computers (well three towards the end when the library got a PC to view CD-ROMs). The maths department had BBCs (later supplemented with ARM-based Archimedes), they were used to program BASIC as part of a subject called computing. There was also a room full of IBM PC compatibles which had replaced typewriters, and typing lessons, with word processing in a subject known as Business Studies. As typing had been earlier, it was seen as training for future clerical workers, not preparation for academic study; so it was another two decades before I finally taught myself to touch-type.
It sounds like "computing" died out and "business studies" morphed into "ICT". I'm certainly behind SPJ's efforts to reintroduce it. And the efforts of initiatives such as Raspberry Pi to produce the low-cost hardware for school students to hack on.
Other related terms: structural typing and structural subtyping. You can draw a contrast between nominal typing and structural typing:
When things are nominally typed, the name of the type is the thing that determine if two types unify. So, in haskell, let's define two types:
data Person = Person { name :: Text, age :: Int }
data Dog    = Dog    { name :: Text, age :: Int }
Ignore the whole problem with overloading labels for the moment. The types Person and Dog are not the same. You cannot pass one where the other is expected.
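To make that concrete, here's a self-contained sketch (using String rather than Text to keep it dependency-free, and DuplicateRecordFields to allow the clashing labels; greet is a made-up function):

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

data Person = Person { name :: String, age :: Int }
data Dog    = Dog    { name :: String, age :: Int }

-- Accepts only Person, even though Dog has the exact same fields.
greet :: Person -> String
greet _ = "hello, person"

main :: IO ()
main = do
  putStrLn (greet (Person "Ann" 30))
  -- putStrLn (greet (Dog "Rex" 3))  -- rejected by the type checker:
  -- nominally, Dog is not Person, identical structure or not
```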
Structural typing means that types are determined by their structure, not by a name. Naming structurally defined types isn't usually required, and is done as a type alias. As an example, records in Elm have language-level support for this kind of thing. Here is an example of how this can be approximated in haskell:
type Person = '[ "name" :~> Text, "age" :~> Int ]
type Dog    = '[ "name" :~> Text, "age" :~> Int ]
You would need to define a :~> combinator and use something like vinyl, but this will work in GHC Haskell today. The problem is that these attributes are a list of tuples, not a map. So if, in the example above, I had flipped the order of the two attributes in Dog, then Person and Dog wouldn't be the same any more. It's also hard to prove things with them (hard but not impossible).
Here's my understanding: json is the result of parsing; it's an Either. The _Right focuses on the correct case, which is a JSON document. Then we access properties of the JSON document, like json.data.children in JavaScript. Since children is presumably a list, values lets us apply a lens to every element. This is essentially like pluck from Underscore.js, except more general. Then, for every value, we access value.data.title as a string. The final _String handles the possibility that there is a different JSON type in title.
In pseudocode, we can think of the whole thing as parse(json).data.children.map(o -> o.data.title). Except with better error-handling.
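Concretely, that chain can be written with lens-aeson's key/values/_String optics; here's a sketch where the field names mimic reddit's JSON shape (titles is a made-up helper):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Lens ((^..))
import Data.Aeson.Lens (key, values, _String)
import Data.Text (Text)

-- Collect every post title; any shape mismatch just yields no results
-- instead of an exception.
titles :: Text -> [Text]
titles json =
  json ^.. key "data" . key "children" . values
         . key "data" . key "title" . _String

main :: IO ()
main = print (titles "{\"data\":{\"children\":[{\"data\":{\"title\":\"hi\"}}]}}")
```

Note that lens-aeson's AsValue instance for Text does the parsing implicitly, which is why no explicit _Right appears here.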
I hope that helped :P.
haskell.org's Hoogle searches a smaller set of packages by default. You can add additional packages to search using +packagename:
http://www.haskell.org/hoogle/?hoogle=parseJSON+%2Baeson
I don't know if there's a flag to make it search all Hackage packages, like the FPComplete one does. I think that's the sensible default, actually.
wasn't this posted earlier today?
anyway, why don't you link to release notes ( http://www.haskell.org/ghc/docs/7.2.1/html/users_guide/release-7-2-1.html ) which contain useful info instead of that web2.0-social mumbo-jumbo?
Use spacemacs with dante. As a pretty sophisticated vim user for almost 20 years, I would say that Spacemacs is pretty much unambiguously better than vim. Spacemacs has everything I want from vim with a bunch of nice improvements added on top (space as the leader key, better discoverability with mnemonics, etc), all while being built on the vastly better platform that is emacs. I still use vim for one-off file editing, but I simply can't recommend it any more for Haskell development.
The idris docs linked are themed via readthedocs (https://readthedocs.org/) -- I think we can probably just hook up a readthedocs instance for ghc now that we have this doc build method, and get that on top of whatever we now have, tied to different tags, etc.
I understand agda is moving to use that system as well.
More info: we are working on the dating platform https://feeld.co. This role is for someone with experience working on the backend and solid Haskell knowledge. Part of the backend is done in CoffeeScript, which is why it's listed as a requirement for the job; however, knowing CoffeeScript is not an absolute must.
Happy to explain to the best of my ability; it's two ways to present a family of signatures (not related to generativity, as far as I know).
In short, the terms in this context mean roughly the same thing as they mean in mathematics. (i.e. two different ways of defining families)
The first way to define a family of sets (or signatures, or types) indexed in I is as a function Set^I (this is parameterization). The second way to define a family is in terms of display maps (fibrations) from some total space (i.e. Σ[ X : Set ] X -> I); this is fibration. Here's a proof by Paolo Capriotti that the two are equivalent: http://www.paolocapriotti.com/pages/display/display.html
Anyway, for type classes the choice is clear: parameterization must be used, since you need the classes to be predicates C a such that the resolution algorithm can find a suitable implementation by inspecting a. On the other hand, with modules there has not been any pre-commitment to a particular resolution mechanism, so we are free to choose the approach that results in the least amount of churn when we move stuff around. And it turns out that fibration is better in this respect; it lets us build up composite signatures without privileging any one particular piece of the signature unduly over the others; then, mutual coherence of signature components may be ensured using sharing constraints (as opposed to ensuring it by construction, through ornate/brittle & carefully arranged nested signature parameterizations & instantiations).
Harper & Pierce give a good explanation of why fibration is more practical: https://books.google.com/books?id=A5ic1MPTvVsC&lpg=PA323&ots=PopFhJcC4s&dq=%22fibration%22%20%22parameterization%22&pg=PA323#v=onepage&q&f=false
This is a good point. I'd like to expand it a bit: any time we have multiple tools we can end up with confusion. How to install GHC itself can fall into that category, for example. We definitely need to be aware of that.
Now for Stackage vs Hackage. I'll take you literally at first, and answer that I think there's a clear description we can (and to some extent do) give about the two projects. I actually think my first blog post about Stackage explains these layers well: Stackage sits on top of Hackage, in the same way that Hackage sits on top of Github (and other source control mechanisms). We should make clear to users the different options, and provide a sensible default when we think users will not be able to make a good decision.
But I think you're really getting at a deeper issue: the wider ecosystems evolving around Hackage and Stackage. In this case, you're talking about stackage-update vs cabal update, but in general there will likely be other such choices between two sets of tools. I think this is somewhat inevitable, as we have different parties who believe the tooling should be developed differently.
In this case, I would have much preferred to just include the stackage-update code in cabal-install itself, as I don't see a reason for it not to be included. I was going to bring that up next week, but discussions with Duncan indicated that he'll be taking Hackage and cabal in a different direction, which I strongly disagree with. Therefore, I decided to make this Git-based approach available separately, and people can decide for themselves what they want to use.
And to be honest, I wrote this because I woke up at 3:30 this morning for no good reason and was bored ;)
Ah, a variant on the classic Wat talk. Nice one. Maybe he should've tried Haskell:
GHCi, version 7.10.1: http://www.haskell.org/ghc/  :? for help
Prelude> [1, 2, 3] + 2
<interactive>:2:1:
    Non type-variable argument in the constraint: Num [t]
    (Use FlexibleContexts to permit this)
    When checking that ‘it’ has the inferred type
      it :: forall t. (Num t, Num [t]) => [t]
A nice application that was created with a combination of UR/Web and Haskell is BazQux Reader, a replacement for Google Reader that I've been using for a while.
There was a nice article in /r/haskell a while back that's definitely worth reading.
For this I generally use nix. In particular, I would strongly recommend you look into the haskell.nix infrastructure, which makes cross-compilation a breeze (thanks to /u/angerman).
For static charts/plots/graphs the library Chart is really nice. Otherwise diagrams+an FRP library is probably your best bet.
Slightly outside of Haskell I have been playing with Elm recently and am incredibly impressed with its ease of creating interactive applications. Plus since it compiles to Javascript the ease of deployment is miles ahead of Haskell.
I notice that for a lot of people, when you search for “Cloud Haskell”, you get Jeff's prototype implementation, rather than the newer distributed-process which Duncan is talking about. See the Cloud Haskell wiki page for more details.
It's not detailed, but it's correct.
In my experience, when someone says they have trouble with arrays, what they really mean is that they have trouble managing mutability with monads in general, and arrays are the first case where they can't get by with persistent data structures. (For example, they aren't comfortable with MVars or STRefs either.) To solve that problem requires study and practice with monadic types to get the hang of it.
If you want to be quick and dirty, you can just work in IO and use IOArrays (boxed or unboxed) just like arrays in impure programming languages.
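For example, a quick-and-dirty mutable array in IO, using Data.Array.IO from the array package (a sketch, not a recommendation for production code):

```haskell
import Data.Array.IO

main :: IO ()
main = do
  -- An unboxed array of ten Ints, indexed 0..9, all initialized to 0.
  arr <- newArray (0, 9) 0 :: IO (IOUArray Int Int)
  -- Imperative-style fill: in-place writes, just like in an impure language.
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. 9]
  x <- readArray arr 3
  print x  -- 9
```

Swapping IOUArray for IOArray gives you the boxed variant, and the same API exists for ST via STArray/STUArray once you're comfortable with that monad.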
If you are comfortable with Data.Array in the small but find that you can't shoehorn your arrays into the rest of your program, this page will help you pinpoint why that is, and guide you to an appropriate alternative array library: http://nix-tips.blogspot.com/2011/03/how-to-choose-haskell-array-library.html
Haskell is a really complicated language that demands a lot. It may not be possible.
Making it more popular though:
As others have mentioned, the tooling is complicated. Haskell has the same problem TeX had. Stack and the Haskell Platform get part of the way there, but the installers need to configure editors and project tools to work out of the box, fully configured. In particular, include a fully configured Leksah, Geany, or Kate.
Finally and this will be controversial. Strip options. There is one easy web framework with a note in the documentation of where to find the full featured but hard one. The database is preconfigured out of the box (SQLite or something), a script for say MySQL and Mongo (single node on desktop) and then a link to how to do it for a real setup. Because the options are simple there can be a simple management tool to make minor changes to the environment.
Then include targeted tutorials for that environment.
Paul Hudak's environment for https://www.amazon.com/Haskell-School-Expression-Functional-Programming/dp/0521644089/ was perfect. It got a Haskell, an editor and enough of an environment to do graphics and sound programming.
Basically the Haskell Platform got too focused on Haskell libraries and not focused enough on ecosystems. Make a Haskell the way Microsoft, Adobe or Apple would make a Haskell.
Note to comment readers: the SO question is already closed, since it is off-topic for Stack Overflow. The question follows (originally by Joe on Stack Overflow, CC-BY-SA-3.0):
> I am implementing the Mandelbrot set in Haskell. I want to create a zoomable version of the Mandelbrot set, hence I want to create a GUI application where the Mandelbrot image takes the whole screen and the user can draw a rectangle on the screen by holding down the mouse and dragging; the area under the rectangle will then be zoomed in.
> I have managed to create the Mandelbrot computation functions in Haskell. However, I am having trouble finding a library that allows me to generate an image and implement the interactive zooming functionality that I want.
> Do you have any suggestions of what GUI/graphics library I should be using? Ideally the library should be well documented with samples, as I am an amateur in functional programming.
That being said, If you, /u/TheKing01, are interested in a zoomable GUI library and want to ask this community about it, why do you send them to an external resource if they have to answer here anyway? At least add a summary :/.
I find it hard to believe that even toy projects can be hosted much more cheaply than the smallest available VPS (e.g. US$5/month on Digital Ocean https://www.digitalocean.com/pricing/ ).
Anyway, the point is that creating a whole new compiler backend with likely dozens of person-years of work on top of one of the most unreliable and ill-specified languages out there just for toy projects doesn't make a whole lot of sense to me.
I can imagine that URef beats STRef, but what about URef versus Int when you can write your loop counter mutation as a recursion? I'm inclined to think the GHC inliner will work better with an Int, by simply removing all the box . unbox occurrences, to finally generate a perfectly unboxed Int which fits in a register.
On the other hand, the URef will always have an indirection, and I don't think the optimization pass can remove it, because that would change the semantics of URef, which can be modified by another thread.
A quick and totally irrelevant micro benchmark:
uglyLoop :: Int -> Int
uglyLoop n = go n 0
  where
    go !0 !c = c
    go !n !c = go (n - 1) (c + 1)

uglyLoop' :: Int -> Int
uglyLoop' n' = runST $ do
  n <- newRef n' :: ST s (URef s Int)
  c <- newRef 0  :: ST s (URef s Int)
  whileM_ ((0 /=) <$> readRef n) $ do
    modifyRef n (subtract 1)
    modifyRef c (+ 1)
  readRef c
Gives:
λ skolem ~ → ghci -fobject-code -O2
GHCi, version 8.0.2: http://www.haskell.org/ghc/  :? for help
Prelude> :l BenchGHC.hs
Ok, modules loaded: Main (BenchGHC.o).
Prelude Main> :set +s
Prelude Main> uglyLoop' 100000000
100000000
(0.24 secs, 103,632 bytes)
Prelude Main> uglyLoop 100000000
100000000
(0.06 secs, 98,800 bytes)
I don't know how relevant it is, because some other optimisation which favours the Int may be involved (and I may have missed something else).
I suggest Visual Studio Code. Install the extension Haskero which almost gives you an IDE feeling.
As an added bonus with VSCode, you will have in-built terminal (of your choice): and depending on your OS - a shell of your choice - bash, powershell, etc. You can fire up a ghci REPL here. And get everything on Haskell in one VSCode window.
At least, this is my setup and I enjoy it this way.
Like all cabal woes, the answers are probably "cabal is not a package manager" and "try nix", the purely functional package manager. I haven't tried nix yet (sandboxes have reduced the pain enough that I don't feel the need to look for a better solution), but I can see in its documentation that nix supports uninstalling packages.
Termonad relies on VTE for the underlying terminal emulator. It would be nice to have an actual terminal emulator written in Haskell as well. If someone wanted to write that, I'd love to use it in Termonad.
There's Elm which isn't really Haskell, but it's still kind-of Haskell.
I recently got a job doing front and back-end development using AngularJS/Bootstrap on the front and Scala(Play + Slick) on the back-end, at a friend's small consultant company. We have decided that once we have the time and resources, we are going to try Elm for the front-end(and then probably Haskell for the back-end).
Changes to the Haskell standard pretty much don't happen. It's a lot of work, with little perceived gain.
Changes to the Haskell language happen because people add features to GHC. Most of the interesting changes have papers that accompany them.
The Haskell community is great. But, we are probably not where we should be in terms of involvement with GHC. SPJ is currently trying to transition GHC from a vehicle for research to a community supported project. Which is great! But, I think it does show that Python was always a community supported project, and GHC has traditionally been an open-source research project.
That said -- there are community driven changes. We do often see library submission proposals:
http://www.haskell.org/haskellwiki/Library_submissions
And things like Functor, Applicative, Monad changes in GHC 7.10 have come out of the community,
http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal
So, in summary, yes, you are seeing a hole in the Haskell ecosystem. We really need to create more transparency, policy, and tools for contributing. I'm not sure if anyone has taken leadership on that though. IMO, it needs to be someone's full-time paid position.
Nice review Bryan!
BTW if anyone wants his book signed, come to ZuriHac 2014 this summer where Simon Marlow and also Edward Kmett will be giving talks.
The post is from April 2010 (2007 is his Haskell starting date). GHC 7.0.1 was released later that year, but you're right, had he used the functionality from then-HEAD he would probably have mentioned it.
This comes up periodically. See here for some old discussion: http://www.haskell.org/haskellwiki/Top_level_mutable_state
A more recent thread (from 2008) had what I think is a very cogent reply from dcoutts on this topic: http://www.haskell.org/pipermail/haskell-cafe/2008-August/046437.html
Ah, it's our good old monomorphism restriction in action. And defaulting, though I'm not sure why the default gets to be () (see the report for more info). In any case it's not particularly related to uncurry.
You can turn it off with:
> :set -XNoMonomorphismRestriction
I'm pretty sure there's also a command line option.
If you're not aware of the monomorphism restriction, check out the Haskell Wiki and Stackoverflow, there are multiple questions there related to that.
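A small illustration of what turning the restriction off buys you (the binding f below is made up):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}

-- Without the pragma, this simple pattern binding is not generalized,
-- so GHC would try to pick one monomorphic (and here ambiguous) type
-- for f instead of the polymorphic Show a => a -> String.
f = show

main :: IO ()
main = do
  putStrLn (f (42 :: Int))  -- f used at Int
  putStrLn (f True)         -- and again at Bool
```

In GHCi the symptoms differ slightly because of extended defaulting, which is why the :set incantation above is the usual quick fix there.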
Did I mention my reactive-banana library and corresponding GUI examples already?
I've been a massive user of IRC since the mid 90s... have written lots of bots, scripts etc plus set up plenty of stuff to deal with being able to disconnect your client without missing out on anything (currently use https://quassel-irc.org/ with the daemon on a VPS). I was even l33t enough to "read bitchx.doc" back in the day...
But even I think IRC is a pain in the ass these days. Really the only thing it has going for it is the "openness" of the protocol itself, plus options for clients (although they still pretty much all suck compared to Slack).
But yeah, I prefer Slack/Discord for like 20 other reasons, all of which I care more about than the openness thing.
It's bad enough for casual conversation, but even worse for stuff where you're showing source code examples + screenshots etc. And also stuff where you want to be notified when offline, or read historical comments. Chat systems aren't great for historical searching to begin with, but IRC is by far the worst, unless somebody happens to have posted some logs you want on the web somewhere (and you can find them), which is pretty rare anyway.
It's also super annoying not being able to edit/delete comments.
I know some will hate me saying it... but I really wish a lot more of the older channels related to open source software / programming etc would migrate away from IRC to these newer chat systems.
If people don't like having a company like Slack/Discord in control... fair enough... but there's open source self-hosted alternatives too if that's the main concern.
I'm still on about 50 IRC channels across a bunch of networks, but I wish I didn't have to be. Although I rarely even open my client these days anyway, at least compared to how much I use the other newer chat systems + forums. Half the time when I open IRC I'm not even in half the channels anymore anyway due to nickserv/registration/ping timeout crap that I just can't be bothered dealing with anymore.
You have to remember that "Clean Code" and the rules you provided are opinions of the author. They are not some universally applicable truths, just some guys' opinions, which are roughly as good as anyone else's.
Don't blindly follow such rules. Having hard limits (5 lines per function, 4 parameters per function) is just an artificial constraint which does not necessarily make your code any better. It's easy to follow such rules in e.g. Ruby, you just add a class to hold some of the parameters but then you introduce more complexity to do what is essentially a partial function application.
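To illustrate that last point, the Ruby parameter-holding class is roughly just partial application in Haskell (everything below is a made-up example):

```haskell
-- A function with "too many" parameters by Clean Code standards.
notify :: String -> Int -> String -> String -> String
notify host port sender body =
  sender ++ " via " ++ host ++ ":" ++ show port ++ ": " ++ body

main :: IO ()
main = do
  -- Fix the configuration once; no wrapper class needed.
  let notifyLocal = notify "localhost" 25
  putStrLn (notifyLocal "alice" "hi")
  putStrLn (notifyLocal "bob" "bye")
```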
E.g. my guidelines for length of function and number of parameters are very different, depending on the programming language. When writing C, I would consider it fine for a function to be a screenful of code, or roughly one page if printed (I never print code, though). When writing Haskell it would be shorter.
When trying to write lots of small functions, you easily end up writing more code than necessary and adding complexity that is not required.
Don't be afraid to ignore silly rules like the ones you show. Over time you'll gain an intuition that will work better than blindly following hard rules.
At the very least because that would be inconsistent: data types are always lifted.
We're currently waiting for binary builds to finish; https://launchpad.net/~hvr/+archive/ubuntu/ghc already has final builds of GHC 8.0.2 specifically configured for Ubuntu 12.04/14.04/16.04/16.10 so Travis jobs can already start using/migrating to 8.0.2; I still need to build packages for Debian 8 & Debian 9. There's also other people contributing binary packages/builds still being in progress.
Well, how about this for a big old facepalm moment. Once my error was pointed out, I came up with a pretty sane fix very quickly. Unfortunately the performance cost of this fix was unacceptable to me, about 10% slower than the original version that backtracked by default. I saved that commit for posterity, since it's important to save badges of one's stupidity.
And now we return you to your previous happy semantics :-(
You can use lucid instead, because it's a transformer already:
$ stack build lucid
$ stack exec ghci
GHCi, version 8.2.2: http://www.haskell.org/ghc/  :? for help
> :set -XOverloadedStrings -XExtendedDefaultRules
> import Lucid
> import Control.Monad.Trans.Reader
> import Control.Monad.Trans
> runReaderT (renderTextT (do p_ (do title <- lift ask; strong_ (toHtml title)))) "Hello, World!"
"<p><strong>Hello, World!</strong></p>"
>
(Blaze's MarkupM a is not a transformer and therefore cannot transform another monad. Lucid's HtmlT m a can.)
I can't speak for the author of aeson, but here are my thoughts:
Exception e => Either e a says "whatever exception type you choose, if there is a failure I will produce one of those". That's pretty hard to implement.
IsString e => Either e a is reasonable to implement, but the cost is type errors when the function is used in a polymorphic context, as GHC can't figure out the type:
GHCi, version 8.0.2: http://www.haskell.org/ghc/  :? for help
> let foo :: IsString s => Either s Int; foo = Right 5
> either (const 0) id foo
<interactive>:4:21: error:
    • Ambiguous type variable ‘b0’ arising from a use of ‘foo’
      prevents the constraint ‘(IsString b0)’ from being solved.
      Probable fix: use a type annotation to specify what ‘b0’ should be.
      These potential instances exist:
        instance a ~ Char => IsString [a] -- Defined in ‘Data.String’
        ...plus one instance involving out-of-scope types
        (use -fprint-potential-instances to see them all)
    • In the third argument of ‘either’, namely ‘foo’
      In the expression: either (const 0) id foo
      In an equation for ‘it’: it = either (const 0) id foo
There is always a conceptual cost to overly general code, and sometimes a technical one too.
We were able to get the armv7-linux-androideabi target for GHC working. But while building the aarch64-linux-android target, we got some compiler panics.
ghc-stage1: panic! (the 'impossible' happened)
  (GHC version 8.3.20170530 for aarch64-unknown-linux-android):
        LlvmCodeGen.Ppr: Cross compiling without valid target info.

Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug
tc-DTargetArch=\"aarch64\" -optc-DTargetOS=\"linux_android\" -optc-DTargetVendor=\"unknown\" -optc-DGhcUnregisterised=\"NO\" -optc-DGhcEnableTablesNextToCode=\"YES\" -static -eventlog -O0 -H64m -Wall -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen -Irts/dist/build/./autogen -O2 -Wnoncanonical-monad-instances -c rts/RtsUtils.c -o rts/dist/build/RtsUtils.l_o rts/ghc.mk:251: recipe for target 'rts/dist/build/StgStartup.o' failed
Any idea what's going on here?
I think there are two things here.
The first is, is the language capable of production work. And the answer at this point is clearly yes (even if it is only used by a reasonably small number of shops, it is used, and at reasonable scale to validate that it's not falling down). There is a little disconnect between building a toy app and building something for real (for example, the type system cannot and is not intended to catch all bugs, so you should still be testing).
The second is does it have tooling / conventions that get developed over the course of doing production work. And I think in this case the answer is... not yet, really. As a simple example, the jury is still somewhat out as to how you actually run an application in production. Some people are betting on Nix, others on Docker, and other people are just building on production instances (which is a bad idea, due to the high ram usage of linking), or on machines running the same system as production (whether virtual or not-in-use production instances), and some people have figured out how to get it running on Heroku (but that solution seems to me worse than the problem!)... But all of this is more complicated because most web development has been in dynamic languages, where this is a non-issue (and Go side steps this by having completely static linking).
As for your last concern, there are definitely people you can throw money at. Here's a starting point: http://www.haskell.org/haskellwiki/Consultants
This is not about how the operator itself associates, but rather how GHC implements the deriving Generic mechanism. If I were to write the data declaration
data T = A | B | C deriving Generic
GHC would generate a Generic representation corresponding to that data type which is roughly of the form a :+: b :+: c. At present, it appears to generate a :+: (b :+: c) (which you can verify by writing the above declaration in a file with the appropriate pragmas and then compiling it with -ddump-deriv), but the manual, as linked above, indicates that a user should not rely on a particular nesting being generated.
Among other things, this means that GHC could hypothetically change the way it implements deriving Generic, and consequently the serialization strategy presented here would produce a different encoding of the same value when run with a different version of the compiler.
The question being asked is not, "Is the nesting of these expressions reliably the same?", because the manual clearly indicates that you can't rely on a particular nesting. The question is, rather, "In light of the fact that the compiler can choose an arbitrary nesting, should the serialization example be revised to produce a consistent serialization regardless of how the expression is nested?"
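If you want to see which nesting your compiler actually picked, you don't even need -ddump-deriv; inspecting from on a value works too. A sketch:

```haskell
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics

data T = A | B | C deriving (Show, Generic)

main :: IO ()
main =
  -- The pattern of L1/R1 wrappers in the printed representation reveals
  -- how the :+: tree is nested; per the manual, no particular shape is
  -- guaranteed to stay stable across GHC versions.
  print (from B)
```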
After a few minutes of research on darcs here are some reasons for not switching:
~~None of these are big problems for small personal projects, so I might use it for that. But those projects are usually so small (< 400 loc) that it doesn't need version control (just making a copy of the file when making major changes is enough for me).~~
~~I am all for encouraging functional programs and improving the haskell ecosystem, but it is pretty essential for me to be able to collaborate. If I can find an alternative to github that I can use with darcs I would start using it immediately.~~
Another advantage to me would be that it is easy for beginners so I can collaborate on projects with non programmers or beginning programmers.
EDIT: I have since I wrote this comment tried darcs and it looks very good. I will seriously consider using darcs for all my future projects.
Did you mean darcs? If so, indeed, being a distributed version control system, it can be part of the daily workflow of a non-Haskeller; though I wonder if it is actually used to a large extent by non-Haskellers to manage their source code.