I'm sad to hear that. Here are my suggestions:
> Scala's future is not guaranteed, assuming so is a little, well, presumptuous.
It's really not. Entire business units of Fortune 15 companies like Verizon have millions of lines of Scala that aren't going away. On the contrary, we have representation on the Scala Center Advisory Board; employ about half the committers to libraries like scalaz, Cats, http4s, etc.; and open-source a bit of our own work.
It really isn't true that, e.g. Kotlin or Ceylon are challenging Scala. Kotlin gets some attention from Android developers, but has no iOS story, and has no visible server-side penetration. It's also a pretty anemic language. Ceylon is more interesting technically, but if anything has even less traction than Kotlin.
Scala is obviously not as popular as Java, but that's fine. Anyone who wants to make their living in Scala has been able to for at least a decade now, and will be able to continue to indefinitely.
Emacs + Ensime is magic. At Twitter, we wrote our own .ensime generator. (.ensime is the file format that tells ENSIME where to find your sources, library jars, and tests.)
In Scala it is preferred to use a type class approach. In the standard library there is Numeric: http://www.scala-lang.org/api/2.11.6/index.html#scala.math.Numeric
However, if you want to do this properly -- that is, to model numbers in the way that mathematicians think about them -- I suggest looking at Spire: https://github.com/non/spire
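As a minimal sketch of the type-class style with `Numeric` (the function name here is illustrative, not from any library):

```scala
// A generic sum over any numeric type, using the Numeric type class:
def sumAll[T](xs: Seq[T])(implicit num: Numeric[T]): T =
  xs.foldLeft(num.zero)(num.plus)

val intSum    = sumAll(Seq(1, 2, 3))    // works for Int
val doubleSum = sumAll(Seq(1.5, 2.5))   // and for Double, with no extra code
```

The same `sumAll` works for any type with a `Numeric` instance in scope, which is the point of the type-class approach.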
As the comparison page already mentions:
> Taking this into account, if you are happy with Scala, you probably do not need Kotlin
Scala can do everything Kotlin can, and more. Not to mention that dotc + scala.meta moves things another step forward in terms of tooling support. And considering that Scala has libraries Kotlin does not, there isn't really anything interesting coming from there (at least not for a Scala developer).
I found Principles of Reactive Programming very helpful when I was learning. Week 5 is a really good introduction to the actor model in general. And Roland Kuhn is not only the tech lead of Akka but also a great instructor.
Not a question (but not really worthy of its own topic): the new Scaladoc is up on the nightlies, if anyone is interested. I filed a few SI issues, but then noticed there is this thread that seems to be accumulating ideas. Anyway, it's worth a glance.
AnyRefMap is just as general as java.util.HashMap. You can't put primitives into a java.util.HashMap.
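A quick sketch of what that looks like (keys must be reference types; values are unconstrained):

```scala
import scala.collection.mutable.AnyRefMap

// AnyRefMap requires reference-typed keys (e.g. String), mirroring the fact
// that java.util.HashMap only ever stores boxed objects anyway.
val m = AnyRefMap("a" -> 1, "b" -> 2)
m("c") = 3   // values are specialized, so Int values avoid boxing
```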
This email thread appears to be an early discussion of the problem and the new functionality being introduced to address it.
Martin, honest question, not rhetorical. I'm truly interested in your answer. Why then does Scaladoc advocate the opposite?
> The most idiomatic way to use an scala.Option instance is to treat it as a collection or monad and use map,flatMap, filter, or foreach
> A less-idiomatic way to use scala.Option values is via pattern matching
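The two styles the Scaladoc contrasts look roughly like this (a small sketch):

```scala
val maybeName: Option[String] = Some("scala")

// Collection/monad style (what the Scaladoc calls idiomatic):
val upper1 = maybeName.map(_.toUpperCase).getOrElse("unknown")

// Pattern-matching style (what it calls less idiomatic):
val upper2 = maybeName match {
  case Some(n) => n.toUpperCase
  case None    => "unknown"
}
```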
Alternatively, you could make a self-recursive infinite list: (copied from here)

    scala> val fibs: Stream[Int] = 1 #:: 2 #:: fibs.zip(fibs.tail).map(n => n._1 + n._2)
    fibs: Stream[Int] = Stream(1, ?)
You can then use it like a function:
    scala> fibs(0)
    res0: Int = 1

    scala> fibs(1)
    res1: Int = 2

    scala> fibs(2)
    res2: Int = 3

    scala> fibs(3)
    res3: Int = 5

    scala> fibs(10)
    res4: Int = 144

    scala> fibs(20)
    res5: Int = 17711

    scala> fibs(40)
    res6: Int = 267914296
This example probably isn't very helpful, but interesting nonetheless.
The Scala API can answer a lot of questions. But it won't help with your example, since split is defined on java.lang.String. If you look there, you will find that split takes a regular expression. Googling regular expressions would answer what "\r?\n" is doing: it matches an optional carriage return followed by a required line feed, i.e. the line endings used on Windows and Unix. Sorry if you already figured out as much.
Following all the implicit conversions, like from java.lang.String to scala.collection.immutable.WrappedString and back, can be tricky, but if you look at scala.Predef you will find the most-used ones.
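For instance, the regex in question splits on both kinds of line endings:

```scala
// "\r?\n" matches an optional carriage return followed by a line feed,
// so it splits on both Windows (CRLF) and Unix (LF) line endings:
val text  = "one\r\ntwo\nthree"
val lines = text.split("\r?\n")
```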
I've put this together to be a useful index of the different Scala channels. Many of them right now are on Gitter, and I know a lot are hidden on Discord (looking at you, ZIO).
If you know of a channel I should add, let me know. If you have a discord or slack server where Scala discussions are happening, it's very easy to create a matrix bridge for them so that matrix users can join the conversation and I can add the channel to this space as well.
sbt is running your tests in multiple threads, you're sharing the connection (`db`) across threads, and SQLite's JDBC driver sucks at this. Some solutions:

- Run the tests sequentially: `parallelExecution in Test := false`.
- Make `db` a `var` and set it in `before`, i.e. use a connection per test.

Hope this helps!
Update: Since you presumably won't use SQLite in production but will use threads, including opening new database connections in a multi-threaded environment, I recommend the connection-per-test option, and, of course, deeper testing with your production DB (MariaDB, PostgreSQL, ...) and its JDBC driver.
Don't know why you posted this in /r/scala.
But have a look at http://opengameart.org/, /r/gameassets and /r/gamedev
And here I was hoping you would show us how you used the actor model in your game.

Good luck!
This may not be the best place to ask, but does anyone know (or can anyone point me to some resource regarding) what the plan is for Dotty over the next few years?
Is it:
a) Just a research/academic language with no other future?
b) An incubator language whose features would slowly land in Scala? If yes, approximate timeline? I see the timeline ends on Jan 2016: http://www.scala-lang.org/news/2.12-roadmap
c) No slow crossover into Scala 2.x; instead it would simply be called Scala 3?
Anyway, super excited about Dotty!
I started learning Scala using the official book (Programming in Scala) with zero Java background. It was tough at the beginning, especially since I had never heard of functional programming before, but the book is well written and eases nicely into its topics. Just download the book, fire up the REPL, and start playing.
Not knowing Java, and not being familiar with its idiosyncrasies may even be beneficial. It may save you from having to "unlearn" things.
If I'm going to be completely honest, I have a difficult time imagining why you would want Scala for .NET, when .NET is the native platform of F#. As much as I like using Scala professionally given the reflexive and pervasive choice of the JVM for server-side work, I prefer OCaml as a language, and F# is OCaml's .NET cousin, although, like C#'s evolution away from Java, F# has changed quite a bit from its OCaml roots—not necessarily in bad ways.
Look in the issue tracker.
Certainly there have been some showstoppers along the way (extraneous sub-selects being probably the most notable; that took well over two years from the time of reporting to the solution shipping in 3.1), but in general the issues I've noticed cropping up with Slick seem to be the result of Slick pushing the envelope in terms of capabilities and scope. This, for example, wouldn't happen in an FRM with basic session/transaction support (i.e., `DBIO` brings innovation and complexity to the table).
Everything unproven is a work in progress, and until a library is fully vetted through myriad usage scenarios, it's an unproven work in progress ;-) In other words, there's room for other query DSLs in the Scala ecosystem to prove their viability while the Slick 3.x series bakes in.
Needless to say, if `Option` were a value class, one could forget about this structural typing thing altogether.

Does anyone know the reason why this isn't the case already? Is it really because someone finds `Some(null)` useful? https://groups.google.com/forum/#!topic/scala-language/Mz_VoJdJf1w
https://www.coursera.org/course/progfun
The official Coursera course is "over," but you can still register, watch the lectures, and view the assignments. I'm not sure what happens if you try to submit them for a grade; I don't know whether the automatic test tool will still grade them.
I'm pretty happy with Emacs + ENSIME these days. I'm a dirty Vim user so I use Vimpulse with Emacs to get modal editing.
IntelliJ IDEA with the Scala plugin is pretty good and, most importantly, very frequently improving. You can follow one of the developers of this plugin on Twitter here: https://twitter.com/pavelfatin
The main reason I don't use IntelliJ these days is that IntelliJ itself (even without Scala) has some performance problems on my laptop that I can't figure out. Also, IdeaVIM has some pretty annoying behaviors.
It turned out that while scala.meta was great for tool developers (i.e., IDEs, auto-formatter, etc.) and for syntactic macro annotations, it was not really the right approach for def macros. Apparently, this was mainly due to the lack of good support for semantic information (tree types, symbols, etc.).
One new direction has been to try and abstract the reflection APIs of both Dotty and Scalac, in order to provide a cleaned-up and portable version of the old scala.reflect API, supporting many previous use cases of macros.
See this post for a deeper explanation of these two points.
However, a cleaned-up scala-reflect-like interface also incurs problems that are still unresolved (e.g., mixing typed and untyped trees without getting unstable behavior from the compiler).
Moreover, there is also a push to get away from the unrestricted power of macros offered by such interfaces, and to provide instead more type-safe, higher-level (though more restricted) tools. Martin's proposal goes in that direction.
TL;DR: the way forward is still very uncertain, and people don't even quite agree on what use cases macros should eventually support.
In my opinion, we should go with the cleaned-up scala.reflect (i.e., the scala.macros approach), and build type-safe, high-level tools on top of these lower-level building blocks, which is exactly what I have been doing with Squid, a type-safe metaprogramming framework (currently based on scala.reflect, but hopefully ported to scala.macros in the future).
tl;dr If you want to measure how much time is spent compiling each source file, trying to write a compiler plugin will not get you to the answer.
TBH that is all I read in that article. There's absolutely no indication of "how to write a Scala compiler plugin". For that, the infamous node/140 remains the best to-the-point source of documentation.
It seems that, coming from Python, your perspective is that library writers choose what truth is. In Scala it's the other way around: the user chooses what truth is, by calling a boolean function. This can be done with a function call (`f(x)`), a postfix method (`x.f`), or a comparison operator like `>` (or implicitly with an implicit conversion, but that's unlikely, since the appropriate truthiness can change from line to line).

So with a Scala Seq, you might want truth to be `seq.isEmpty`, or you might want truth to be `seq.nonEmpty`. Scala's standard library does not choose whether `seq` means either of those. You choose. On one line you might want `x == 0`; on another you might want `x != 0`. Scala does not give an Int any inherent, non-contextual truth.
In Scala, you choose the implicits by importing or defining them. The built-in exceptions are string concatenation (+) and Predef which is imported by default:
> The Predef object provides definitions that are accessible in all Scala compilation units without explicit qualification.
>
> Implicit Conversions
>
> A number of commonly applied implicit conversions are also defined here, and in the parent type scala.LowPriorityImplicits. Implicit conversions are provided for the "widening" of numeric values, for instance, converting a Short value to a Long value as required, and to add additional higher-order functions to Array values. These are described in more detail in the documentation of scala.Array.
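To make the "user chooses truth" point concrete (a small sketch):

```scala
val xs = Seq(1, 2, 3)
// No implicit truthiness: you say exactly which condition you mean.
val label = if (xs.nonEmpty) "has elements" else "empty"

val x = 0
// An Int is never "truthy" or "falsy" by itself; you pick the comparison.
val isZero = x == 0
```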
I know Scala better than F#, but according to this PDF, sequence expressions are syntactic sugar that gets compiled down to calls to the function `Seq.collect`. That is the same as for-comprehensions in Scala, which are syntactic sugar that gets compiled down to calls to the methods `map` and `flatMap` (`collect` and `flatMap` are the same in terms of functionality).
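In Scala, the desugaring described above looks like this (a small sketch):

```scala
// A for-comprehension ...
val sums = for {
  x <- Seq(1, 2)
  y <- Seq(10, 20)
} yield x + y

// ... is sugar for nested flatMap/map calls:
val sums2 = Seq(1, 2).flatMap(x => Seq(10, 20).map(y => x + y))
```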
As you note, sequence expressions in F# are lazily evaluated and avoid intermediate allocations. As for lazily evaluated, allocation-avoiding collections in Scala's standard library, there are both the Streams API ~~and the views part of the library (which you get by invoking `view` on a strict collection)~~, though neither of these has good performance by default compared to the strict collections (optimization options exist, however).
EDIT: Views are deprecated as /u/alex_ndc has pointed out.
It's been two years since I've programmed Scala. I never had too many problems with the language in my own use, but then I never ventured as far into the type system as Paul did. He, of course, has been on the receiving end as the main maintainer of the compiler and library, so I can certainly understand his agitation, working tirelessly to keep pace with the bugs.

It never occurred to me at the time to question the Scala collections during the 2.8 redesign. Looking at the inheritance hierarchy, I have to agree with him that it got out of hand. I think it would be worth shedding some generality in exchange for simpler interfaces and a simpler inheritance model.

One thing he didn't mention, which I and countless others have reported bugs on, is Enumeration. Not only does it have usage quirks, it has been plagued with bugs its entire existence. Despite its flawed design there is an unwillingness to deprecate it. I have a hunch this is where Paul's biggest quarrel lies: the unwillingness to alter bad designs in the library while being very accepting of new designs in the language.

Scala is the work of academia. A lot of features are built by academics doing research projects. This causes problems, though, when they move on and no one is left to maintain their work, like actors, specialization, macros, etc. There is a lot of risk involved in including their work in the language. Not many are qualified to maintain it. None are redesigning and evolving it. I don't know how this compares to other languages and their standard libraries in this regard.

Last note: it's hard to judge the type system when you're unqualified. Regardless, to me it seems too ad hoc. I have no idea what a better system might look like, but the future is certainly not the current state of Scala's full type system. I reckon it's an open research problem, but I hope significant simplifications are possible so ordinary users can make better use of it.
It seems Java (and C# too) is gradually turning into a bad version of Scala. I've totally lost interest in the language, which IMO will never become useful, but any improvements they make to the JVM in the Valhalla project, like value types, specialization, etc., will of course be good for other, better languages running on the JVM.
Cay Horstmann (the author of Scala for the Impatient) also wrote a number of Java for the Impatient books.
Joshua Bloch's Effective Java is considered a must-read by many people.
I personally like Peter Sestoft's Java Precisely, it's short and clear.
The requirement for blank lines is gone, as of sbt 0.13.7.
One powerful feature of sbt is the ability to discover or inspect values + dependencies for any task or setting.
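For example, from the sbt shell, `inspect` shows where a task's value comes from and what it depends on, `inspect tree` prints the whole dependency tree, and `show` evaluates a setting or task and prints its value:

```
> inspect compile
> inspect tree compile
> show scalaVersion
```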
I picked up a bit of Scala on my own, then I took the Coursera course Functional Programming Principles in Scala. The course nearly killed me; I found it quite hard. But it changed my way of thinking about code. I have rewritten some of my old Java code in Scala, and when I look at this code now, it seems much clearer than the Java equivalent. I have also learnt a bit about Play. I plan now on learning Slick.
Thank you, that was very informative. What about `map`/`getOrElse` versus `fold`? Is one of them more conventional, or is it purely a matter of preference?
Btw, I found this thread on Stack Overflow where Martin Odersky does state a preference for pattern matching. Of course, that doesn't mean it is the Scala way of doing it; still, I thought it would be interesting to mention.
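For concreteness, the two spellings in question (a small sketch; they produce the same result):

```scala
val opt: Option[Int] = Some(3)

// map followed by getOrElse:
val viaMap  = opt.map(_ * 2).getOrElse(0)

// fold, with the default first and the mapping function second:
val viaFold = opt.fold(0)(_ * 2)
```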
It seems you are just getting started with sbt. I suggest reading this slide deck, Sbt baby steps; I think it's really great for anyone beginning with sbt.

BTW, I notice a lot of people still write "SBT" for this build tool. I learned only recently that it's just called sbt:

> the name sbt doesn’t stand for anything, it’s just “sbt”, and it should be written that way.

There is a more detailed explanation on the sbt web site.
There's a lot of free learning material online, for example the Coursera course https://www.coursera.org/course/progfun and many, many books. If you're evil, you can BitTorrent them; if you're good, you can buy them; and if you don't want to be evil or good but just lazy, there is a whole book here: http://www.artima.com/pins1ed/
You won't verify if you've mastered the basics by reading any single program, but rather by writing them. Experience is the best teacher and all that.
Try Googling some Scala tutorials and load them into your favourite IDE. Break 'em and make 'em your own. There are a number of them on the scala-lang.org site.
EDIT: Came across these while looking for a tutorial on working with sockets, they may prove helpful.
Of the many interesting things in this roadmap, one that stands out is that starting with 2.12 (so Q1 next year), Scala will require Java 8. Per this link, v. 2.11 will be the last to support Java < 8.0
That is actually a really good question. String, from the Java side, is not a functor and is not defined as a collection with a type parameter. Scala reuses Java's String, and uses various implicits and the like to make String more flexible and convenient to use. This does not change the fact that String has no type parameter. Thus, while String supports `map` in practice, it is not a functor.
However, at this point things get funky. String (as far as I can tell) is typically implicitly converted to a WrappedString or a StringOps when using `map` or similar operations on String. Those types both inherit from StringLike. StringLike's method signature for `map` looks like this:

    def map[B](f: Char => B): String[B]
This is wrong, since String does not have any type parameters. The full signature looks like this:

    def map[B, That](f: Char => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

which seems more plausible. I don't know how or where the "shorthand" signature was created or written; I cannot find it in the sources for any of the traits.
While this means that String is clearly not a functor, it does still provide `map` and other useful utilities on top of Java's String class, which is useful in practice. However, it is definitely not a clean solution. I think a cleaner solution would be to create a new String type not based on Java's String, possibly similar to how Haskell implements strings, namely by defining the type simply as a list of characters. However, I imagine that solution would have issues operating on the JVM and interfacing with Java libraries and code.
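You can see that result-type selection at work in the REPL (a sketch; behavior as in Scala 2.12/2.13):

```scala
// With Char => Char the result stays a String...
val up = "abc".map(_.toUpper)

// ...but with Char => Int the result becomes an IndexedSeq[Int],
// since an Int sequence cannot be rebuilt as a String.
val codes = "abc".map(_.toInt)
```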
> e.g. describing migration to Dotty as an impossible task - they are developing a tool for migrating which should cover majority of changes
However, an automated tool that covers a "majority of changes" isn't the same thing as an automated tool that can convert a "majority of projects".
Also, automated conversion tools can help library authors support both the old and the new language. It happened with python, with the jump from python 2 to python 3. Library authors could develop in python 2 and use 2to3 to generate the python 3 version for the people who made the jump. As of 2 years ago, python 2 still had a very significant market share.
This sort of thing has gone wrong before, with e.g. python 3 or perl 6. Why do you think it's going to go better this time? What specific mistakes did python make that Scala is avoiding?
Yeah, those "B"-type devs bring their bad habits from Java and refuse to learn FP techniques that would simplify the development.
Somebody should probably write an "Effective Java"-type book for Scala (tips & tricks, composability in FP, etc.), so it could be used as required reading for newcomers.
Sorry, you are right. I just don't like it when people ingrain this stuff. I have experienced a multitude of people claiming "FP is bullshit for academics, OOP is real world" before chugging along in their reflection-heavy, dependency-injected, XML-generated codebase, where it takes them ages to solve the simplest of problems (and usually, after "sprints" of investigation, they come up with a solution similar to what FP offers by default: separate data from behaviour, defer execution from the model/computation description, etc.).

How about we stop pretending FP is hard, and take a real, honest look at whether OOP is not, by any means, harder and more complicated? Nah, I'm sure we're all just not following Clean Coder properly. /sarcasm
If you are just starting, then I suggest you follow this course: https://www.coursera.org/course/progfun It's a very good introduction to the language by its author.

There is no need to wait for a new session; you can select one of the previous ones and work with the material there.
The "calling zero-parameter methods without parens" feature is one I've never understood the necessity of, or even why it's a good idea. However, if you Google it, there's a Stack Overflow question whose top answer argues that it's required because:
> Introducing additional rules that conflate the two forms would have undermined currying as a consistent language feature
Allowing comments, and their presence or lack thereof, to influence semantics just seems insane, though. I'm assuming this is some accidental side effect of the whitespace/newline rules rather than intentional.
This is the most correct answer. I believe it's a reference to an algebra over a field.
Here's a stack overflow post that might help too. https://stackoverflow.com/q/16015020
So the issue here is that there's no such thing as "plain" Scala; I should put this on a flying banner message or something.

To answer the specific question, `<+=` adds a value to a `Setting` that holds a sequence, where the value depends on the value of some other `Setting`. For example:
    libraryDependencies <+= sbtVersion("com.github.siasia" %% "xsbt-web-plugin" % _)
`sbtVersion` is itself a `Setting` in sbt, so this says, in effect, "add a dependency on the artifact from group com.github.siasia named xsbt-web-plugin, of whatever version sbt says it is."
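For reference, on later sbt versions (0.13.x onward) the `<+=` operator was retired; the same dependency is expressed by reading the other setting through `.value` (a sketch of the newer syntax):

```scala
libraryDependencies += "com.github.siasia" %% "xsbt-web-plugin" % sbtVersion.value
```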
I released an app with Scaloid:
https://play.google.com/store/apps/details?id=com.soundcorset.client.android
It works fine on more than 100,000 devices, though I am not sure whether that counts as medium or larger.
Look into Deis; it's like running your own Heroku. You can use Heroku's Play buildpack to get things running easily, or roll your own Docker setup.
For many trivial examples, see the underscore.js library API: http://underscorejs.org/ All these functions work on any JavaScript objects, including those that are unaware of the underscore library. If you do `_.where(myFooables, {foo: 'bar'})`, you don't need to prove to the compiler that `myFooable` has a (possibly optional) property `foo` which can be a string. Depending on the type system, you would need some boilerplate to express this, if you're even able to.

Meanwhile, as a developer, I know that `myFooable` has the `foo` property. It's obvious to me, yet unfortunately not guaranteed by anything other than my own discipline or an excessive number of tests.
The `where` function example is one of the easier ones; others like `omit` could be even harder to type in a static type system.
As another example, cycle.js, just because that's what I've been suffering with recently (making an interface for it in Scala.js). If you look at the code samples in https://cycle.js.org/getting-started.html you'll see the app uses the `DOM` key on seemingly unrelated objects: drivers, sinks, sources. As a developer, I know that if I provide a DOMDriver on the `DOM` key, a DOMSource will be present on sources at the `DOM` key, and I need to write a DOMSink value to the `DOM` key in sinks if I want to send that value back to the DOMDriver. This kind of weird distributed typing constraint is natural for JavaScript developers, but is hard to encode in Scala's type system.
From what I've heard, the typical timeline for these major new features tends to be a few major releases down the line. They've already mapped out the major features of the next few major releases: http://www.scala-lang.org/news/roadmap-next
EDIT:
If you're specifically referring to TASTY, they've got a paper and a proof-of-concept implementation out already, but it seems there's still a long way to go to get it into the mainline Scala compiler.
Just like any language, do some projects, practice and be inquisitive.
http://www.scala-lang.org/old/node/8610
This is old, but it still might be helpful in pointing out what you don't know, so you can fill in the gaps.
Autocomplete is a particular kind of prefix matching that's different from what's done when searching for term matches. Elastic has both.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters-completion.html
Worth noting that the 'Functional Programming Principles in Scala' class in the OP's post is taught by Martin Odersky, designer of the Scala language. It's a great class so far! (My background, too, is somewhat orthogonal to computer science.)
I would recommend picking up Structure and Interpretation of Computer Programs if you take the course (and even if you don't). Several lectures pull directly from the exercises in that book, and I find the additional context offered by Abelson & Sussman to be helpful.
BTW: Some discussion from 2013 around `Option`, value classes, and `Some(null)`: https://groups.google.com/forum/#!topic/scala-language/Mz_VoJdJf1w
I find this 'Ceylonian' bit interesting to think about (Scala 3):

    type Opt[A] = A | Null { def getOrElse(a: A): A = ... }
This problem is harder than it looks: you want an algorithm that can diff two trees. The classic algorithm is Zhang-Shasha, and the best known algorithms still take roughly O( n^3 ) time. For a small JSON document like this it would be pretty fast, but for a really large one it becomes impractical. See https://reactjs.org/docs/reconciliation.html for more info about this.
> This results in code that is quite difficult to debug, test, navigate, refactor, and reason about. Our previous Akka-application had all of these issues, along with bugs, race-conditions, etc. We just started another new application, and I'm seeing the exact same patterns & it's making me want to scream.
It's not just you. Every Akka-based prototype we've built at my work has ended up the same way. (All of these were written in Scala, so it's not Java's fault if that's what you're using.) Even the tiny Akka-based assignment in the almost-finished current iteration of the Reactive Coursera course led to code like you described.
Akka actors give you a nice way to turn asynchronous calls into an ordered application of operations. This is a nice way to wrap and access mutable state, and a big step up from low-level locks and synchronization. The remoting and supervision features are also really, really cool.
But the downsides are huge, at least for all the projects I've tried Akka on. You give up type safety in lots of ways that matter; actors end up behaving like Smalltalk objects. become() is clever, but in addition to being mutable, it ends up spreading related logic around in ways that make it hard to reason about your program.
After writing Scala in a functional style for years, using Akka feels like a big step back: it's mutable all over the place, more or less untyped, and I end up spending much more time thinking about and writing how to do things, instead of what to do. This is in contrast to the benefits I've seen from using functional APIs in Scala generally: with those, I spend much more time expressing what I'm doing than how.
Futures and other concurrency abstractions are generally preferable to actors, but actors can be useful when distributed computing becomes relevant, thanks to features such as messaging and location transparency. The Coursera course Principles of Reactive Programming discusses futures, actors, and other abstractions, and the cases in which each is useful.
In regards to actors being untyped, this is indeed an issue. That said, there has been some work toward getting typed actors in Akka, though it is still experimental.
Principles of Reactive Programming is the second course in the series, and is starting up again April 13 or 14. The course is led by Odersky himself and is very good. I'll be taking it as a refresher.
If you're set on using SQLite, a common pattern is to use a different database file for each test, using something like https://stackoverflow.com/questions/32160549/using-junit-rule-with-scalatest-e-g-temporaryfolder.
It's probably easier and cleaner to use H2 like everyone else is suggesting though.
Meanwhile on Stackoverflow...
> In Scala:
>
> scala> val xs = List(List(1, 3, 5), List(3, 4, 30))
> xs: List[List[Int]] = List(List(1, 3, 5), List(3, 4, 30))
>
> scala> xs flatMap {x => x + 1}
> <console>:9: error: type mismatch;
> found : Int(1)
> required: String
> xs flatMap {x => x + 1}
>
> Why?
HackerRank has a section specifically for pure FP tasks.
Here is my favorite:
https://www.hackerrank.com/challenges/simplify-the-algebraic-expressions/problem
Also you can google for “99 problems”
I think you'll only get a certain distance while using Play - most functional-oriented people tend to move on to something else. (I've heard good things about https://www.slideshare.net/GaryCoady/http4s-doobie-and-circe-the-functional-web-stack but can't look at it at the moment).
> Even if we are using map, foldLeft and constant values with as less side effect as possible, we are not using functionnal design pattern, currification, etc.
This is the wrong way to look at it IMO. The whole point of the functional patterns is to let you write your functions in simple input-output form; if you can do that, you're doing functional programming. Complex patterns are a liability rather than an asset. A better starting point is: what's a function that you can't figure out how to write in a functional way, that seems to inherently need a side effect? Then you can look for a functional technique to solve that problem, and you'll be using that technique for a practical reason rather than for its own sake.
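As a tiny sketch of that "simple input-output form" (illustrative names, not from any library):

```scala
// Side-effecting version: the logic hides behind mutable state.
var total = 0
def addInPlace(x: Int): Unit = { total += x }
List(1, 2, 3).foreach(addInPlace)

// Input-output version: the same logic as a plain function of its inputs.
def add(acc: Int, x: Int): Int = acc + x
val result = List(1, 2, 3).foldLeft(0)(add)
```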
git actually has a similar hashing 'problem'
http://lwn.net/Articles/307281/
> I've been informed by the git Gods that the chances of a SHA1 collision is the same as the Earth being sucked up into the black hole created by the CERN accelerator. If this is indeed true, then there's no need for that extra memcmp.
I'm confident that the hashing will be sufficiently robust :) (as long as they don't use MD5)
One of the problems with 'build-from-source' is that the linking algorithm isn't entirely stable between releases. Small changes with the way implicits work may break backwards compatibility.
It might not be exactly what you're looking for, but I have been using prismic.io for a project at work. It's from Zengularity, the same guys who created Play.
It's a CMS backend, which means they expose a REST api which you can call from your application.
Stack Overflow is a good resource. Make sure you click on the "Linked" questions; your question is there among others.
Very nice! Just a couple of very minor thoughts:
Anyway, nice job!
FRP is just one model. Elm is a good example of how to make web applications with simple pure functional code that doesn't mention FRP concepts like signals, streams or behaviors.
To some degree, you can also make games with Elm. However, since Elm produces HTML, there's no model for collision detection and response, or anything like that, so you'd have to write a lot of code to handle that. One could imagine an "Elm for games" that produced a 3D scene graph instead, where you could listen for collisions in the same manner you can listen for clicks in Elm.
For Scala, see the side bar for a list of libraries. I maintain a React-based library, React4s, that comes with support for writing purely functional webapps.
Hah. I used to complete Project Euler problems in Racket while waiting for compiles (which took 1-2 hours after we finally stabilized the build infrastructure) to finish. That and go have conversations with coworkers; pretty sure my talking to coding ratio was at least 3:1.
The project I worked on was quite large, with at least 100 developers having touched the code, many of whom worked only in that code base. It was also quite well designed, in both the "business domain modelling" sense and the "we have a small, core technical philosophy to use when solving problems" sense.
This project was adjacent to an A+ (APL derivative) project that was being deprecated. As A+ is interpreted and has quite the "code while the application runs" feature set, I was quite jealous of the A+ developer productivity.
What you're asking/doing here doesn't make much sense. You already have the x(i) value, why do you need to store it again? Unless you're concerned about the case where the array doesn't hold the value, in which case your call to head will throw an exception. There are safer lookups you could look into (given an a: Array[Int] and an i: Int).
More generally, look at the documentation for Iterable - these are the data structure functions you will come to know (and maybe love) if you write Scala regularly.
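As one option for the safe-lookup point above (not the only one), the standard collections offer lift and headOption, which return an Option instead of throwing:

```scala
// Safe lookups, given an a: Array[Int] and an index.
val a = Array(10, 20, 30)

a.lift(1)                    // Some(20): lift turns indexing into a total function
a.lift(99)                   // None, instead of an ArrayIndexOutOfBoundsException
a.headOption                 // Some(10): the safe counterpart to head
Array.empty[Int].headOption  // None, instead of an exception
```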
http://www.scala-lang.org/node/250
The name comes from two sources. First, "scala" is the Italian word for stairway, which is appropriate since Scala helps you ascend to a better programming language. The Scala logo is an abstraction of a stairway. Also, Scala stands for scalable language, because Scala's concepts scale well to large programs.
> ListBuffer from the api only seem to have overridden operators
That's due to the extensive trait hierarchy Scala has for its collections. Among the concrete collections there are very few methods that aren't overrides of some trait method.
> how can I be sure this isn't recreating the list every addition, deletion?
You could look at the source (follow the source link here), or you can trust that modifying the collection in place is the main purpose of the mutable collections and rely on the fact that they do just that.
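A tiny sketch of that in-place behaviour, with made-up sample values:

```scala
import scala.collection.mutable.ListBuffer

// ListBuffer mutates in place: += appends and -= removes
// without rebuilding the whole list on each operation.
val buf = ListBuffer(1, 2, 3)
buf += 4      // appends in place
buf -= 2      // removes the first occurrence in place
buf.toList    // List(1, 3, 4)
```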
> not sure why you think linked list is slow to iterate ? all you're doing is looking up a reference to the next node
It's precisely because it requires following a reference to the next node that iteration is relatively slow compared to other data structures, particularly Vector and Array/ArrayBuffer, but also anything else using chunks of contiguous memory. On current hardware those perform much better because of how they interact with the processor's cache and prefetcher. There are other factors that go into it too, like allocation patterns, but this kind of performance talk is a deep rabbit hole filled with caveats and exceptions.
> I'm not sure how an immutable list could do this any faster or if it isn't in effect doing the same ??
An immutable linked list definitely can't do that faster - it is still a linked list. When I said you want something other than a linked list, I meant an entirely different data structure like the ones just mentioned; I wasn't talking about mutable vs immutable at that point.
b.type is a singleton type. This isInstanceOf is translated to a eq b, although I couldn't find the spec saying it has to be this way.
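A small sketch of that translation, using a made-up Ref class (the eq behaviour is what scalac produces, as described above):

```scala
// A type test against a singleton type b.type is a reference comparison.
class Ref
val b = new Ref
val a: Ref = b      // same instance as b
val c = new Ref     // a different instance

a.isInstanceOf[b.type]  // true: compiled to a eq b
c.isInstanceOf[b.type]  // false: c ne b
```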
After reading, I still have some questions:
> Thanks to an improvement in type-checking, the parameter type in a lambda expression can be omitted even when the invoked method is overloaded.
But the given example also works in Scala 2.11.8:
    trait MyFun { def apply(x: Int): String }

    object T {
      def m(f: Int => String) = 0
      def m(f: MyFun) = 1
    }

    T.m(x => x.toString)
> With Java 8 allowing concrete methods in interfaces, Scala 2.12 is able to compile a trait to a single interface classfile. Before, a trait was represented as an interface and a class that held the method implementations (T$class.class).
> Note that the compiler still has quite a bit of magic to perform behind the scenes, so that care must be taken if a trait is meant to be implemented in Java. Briefly, if a trait does any of the following its subclasses require synthetic code: defining fields ( val or var, but a constant is ok – final val without result type) ...
Does anyone have any ideas? Thanks.
It seems like half the angst here is around Unit return type from procedures. Note that this feature is on the roadmap to be removed. See "Scala: Don Giovanni" point #4. http://www.scala-lang.org/news/roadmap-next/
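For reference, a minimal sketch of the two forms (procedure syntax is shown only in a comment, since it is the form slated for removal):

```scala
// Procedure syntax leaves the ": Unit =" implicit, which is the
// source of much of the angst:
//   def greet() { println("hi") }

// The explicit form that the roadmap keeps:
def greet(): Unit = println("hi")
```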
I think your approach should be to transform the parent node rather than the child node. That is, when traversing the parent check for the existence of the child and add the new element in the parent.
Also, you can consider using scala.xml.transform. You need to express the transformation as a RewriteRule and then let the RuleTransformer do the traversal.
Take a look at the Odersky classification, "Scala levels: beginner to expert, application programmer to library designer"
http://www.scala-lang.org/old/node/8610
I would say it really works that way. You start with A1, A2 and go down to the bottom at your normal speed. Each new step is taken when you want to achieve something and the current constraints don't give you that, or when a library forces you to (not so often), or when you just explore something out of curiosity (and to more easily handle the first two cases).
I don't know of specifics, but the Scala roadmap's Don Giovanni release includes some general features that can reduce accidental complexity, like the unification of syntax for existential and partial type application, or intersection and union types.
>"Tuples can be decomposed recursively, overcoming current limits to tuple size, and leading to simpler, streamlined native support for abstractions like HLists or HMaps which are currently implemented in some form or other in various libraries."
This seems like a promise to improve dependent type features directly in some way, how exactly that will be remains yet to be seen.
It may be worth pinging /u/Odersky or Miles Sabin for something more specific.
I don't know much about Spark and companies, but I believe that the Any you are trying to convert is not a Double but a list type that is used to convert between Java and Scala collections. Relevant documentation can be found here: http://www.scala-lang.org/api/2.11.4/index.html#scala.collection.convert.Wrappers$

Since the JListWrapper implements the Buffer trait and that, in turn, implements the Seq one, you can probably narrow the type of your field by saying asInstanceOf[Seq[Double]], assuming that the actual wrapped type is Seq[Double]. Otherwise you could try something like asInstanceOf[Seq[Any]].head.asInstanceOf[Double], but my eyes are hurting just looking at it and you can probably do better with some Spark-specific knowledge!
The book Effective Java?
I can't recommend the first two enough. Martin Odersky designed the language, so his book is a must. And Scala for the impatient walks you through use cases and covers most of what you can do with the language relatively concisely.
Edit: If you're trying to learn Scala, I recommend learning it by learning functional programming. There's a great Coursera course by Martin Odersky
You've linked to the documentation of the abstract class Source, which is not directly usable; instead you want its companion object:
    zpowers@zpowA:~⟫ echo "hello" > myfile.txt
    zpowers@zpowA:~⟫ scala
    Welcome to Scala version 2.11.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_60).
    Type in expressions to have them evaluated.
    Type :help for more information.

    scala> import scala.io._
    import scala.io._

    scala> import java.io._
    import java.io._

    scala> val mySource = Source.fromFile(new File("myfile.txt"))
    mySource: scala.io.BufferedSource = non-empty iterator

    scala> mySource.getLines foreach println
    hello

    scala>
I contribute to a small IDE for a language implemented in Scala. The GUI is mostly scala-swing, but some parts had to be rewritten directly in terms of Java APIs, due to some incompatibilities between generic Java 7 and Scala's type system. Other than that, the experience has mostly been good – the Swing code is comparatively terse and easier than usual to work with. As long as you are fine with UI development in Java, you will be happy with the Scala counterpart. However, I have never tried JavaFX/ScalaFX so I don't know if that is any good.
Well, you have to look at the class which defines those methods. None of those above are part of Scala's syntax.
You know http://www.scala-lang.org/archives/downloads/distrib/files/nightly/docs/library/index.html#index.index-_ right?
Strong stdlib?! You need to import "strconv" and then convert a string to an integer like this: strconv.Atoi("100"). And because of the lack of generics, the language and its stdlib feel barbaric, not strong...
> and has at least settled on a concurrency primitive
This may be appealing for beginners, but different problems may require different tools.
> The very fast compile times (Go was deliberately designed to compile as fast as possible) means that it also has very fast iteration speed for tests and deploys, which matters in the environment where it has its niche in.
Nim also has really fast compilation times while still having generics and other good stuff. But the true iteration speed with Nim and golang is still slower compared to Scala due to their inexpressiveness. At least for me and for those I know...
> I compile a test (with only a single line change) in Scala and that takes roughly 8-10 seconds to execute
Too many implicits (when using scalaz etc.), macros, and files with multiple classes and traits can slow down your compilation speed drastically. If you avoid those, it can be 1-3 seconds.
I don't have a lot of Scala experience, but my first "real-world" project was a rewrite of an existing Node API with Finch. There was some Java interop involved because I was using a couple Java SDKs including Firebase. I have some Java experience, but not knowing any Java wouldn't have been much of an issue.
Yeah, ScalaDoc is awful. Under the search box there is a list of letters; click the P and scroll for it.
If you're on a Mac you should buy a copy of Dash which lets you do a proper search across Scala, Java and other docsets.
Same here. I really wish someone would put together a list of key features and bug fixes for 2.9.1, e.g. something like the PostgreSQL release notes.
Effective Java is great, but it takes a little experience working with the language to understand its rules. I would try starting with building a Java 8 app using Dropwizard or another similar library.
> I'm interested in topics like: maintain ability, ~bugs per deployment, on boarding of new members, ability to move written software over different teams, development speed, performance of the artifact and of course fun. Do I miss something? Feel free to add :-)
I've used Scala for all sorts of things over the last 6-7 years, from distributed-systems to fast-iteration web stuff to CLI tools to highly-parallel job-orchestration engines.
My experience has been very positive. Due to Scala's conciseness and decent IDE support, I write code fast up-front; it's easily comparable with a dynamic language in this regard. More importantly, it's also faster to develop in Scala down the line compared to dynamic languages, or even Java, because Scala's type system and standard library facilitate catching errors early.
What drew me to Scala initially after many years of working in Java was that Scala basically made a ton of the bullet points from Josh Bloch's Effective Java the defaults, or otherwise baked into the language. Scala made all the things I was already doing much easier, which felt great.
I've seen Scala used on teams ranging from 2 to 8 devs, with all senior devs to a mix of junior and senior ones. It generally went well, provided 2 things:
For the Task implementation, fusion happens naturally due to its lazy nature, and I don't think Haskell does anything special either, though I might be wrong.

Also, you can't implement fusion at the library level without making Future lazy, because it would mean that at the library level you'd have to build some sort of representation in memory, some AST that you can later optimise. But that's not the case, as Future is eager, meaning that when executing a map operation, you're already expecting the computation to be fired somewhere. Change this and you change Future completely.
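The eagerness point can be seen in a few lines (sample values are made up):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Future is eager: the body is scheduled the moment the Future is
// constructed, before any map is attached, so there is no AST left
// for a library to fuse; each map just schedules another step.
val f = Future { 1 }
val g = f.map(_ + 1).map(_ * 2)  // two separately scheduled transformations
Await.result(g, Duration.Inf)    // 4
```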
Which means that the only choice is to do it at the compiler level, as a macro possibly. But then that's only a trick that will optimise only a few narrow cases. For example you certainly can't optimise recursive calls.
I've heard about Dotty's compiler attempting to rewrite expressions into more efficient versions when possible, but IMO I don't like the idea. This is usability 101, something I've read about in "The Design of Everyday Things": for objects to be easy to use, users have to form a good mental model of how they work (when buttons are pressed). With fusion that mental model of how something works goes out the window, meaning the principle of least surprise is ruined.
It's difficult to talk about subversion here. The author is just trying to cope with erasure by using the type system to safely store the missing information.
You can have a look at Joshua Bloch's Heterogeneous Container discussion in Effective Java.
I built a REST API to front Jenkins, in Finatra 2.0.0.M2. I have to say that the support from Twitter has been really good. I had difficulty doing a thing, and they implemented it for me. It hasn't been released yet, but I expect that to happen soon.
I liked that Finatra has a baked-in admin API, using TwitterServer. I don't like that it uses many APIs that are different from the standard Scala libraries, but this project started back when Scala had some pretty big missing functionality (Futures).
Overall, Finatra seals the deal for me. I've done stuff with Spray, but I'm waiting a bit longer for akka-http to mature, and I really liked the out-of-the-box management stuff that Finatra offered.
Edit: misread your first question. Some webservers serve static files too and some just respond to requests you explicitly handle. If they do, you don't need another solution for images etc.
For your second question you should run the web server as a service. Most current linux distros use SystemD, which is pretty easy to use. Here is an example service file and some environment variables.
Here is a DigitalOcean systemd guide. I just scrolled through it but DO has great guides so it's probably a great starting point.
There is some very interesting related content in 'One Monad to Rule Them All' https://www.slideshare.net/jdegoes/one-monad-to-rule-them-all - can imagine that appearing in future parts
Anorm is being phased out from Play (created by authors of the same), not exactly a vote of confidence in its favor.
Squeryl does support scala 2.11, and has excellent relations support. Sure, it has more reflection based magic than say, Slick, but overall is a very solid query DSL in scala land.
Slick absolutely has support for relations, might not be as closely aligned with sql semantics as Squeryl, M$ Linq offerings, etc., but does the job, and has been improving with every release (async support on the way, for example).
If you're looking for a more straightforward type safe query dsl without having the oh-no-scala-I-don't-get-it-everything-sucks reaction, maybe look at JOOQ.
Note: If you are using Firefox with restrictive cookie settings, the site fails to load because of a SecurityError related to the Web Storage API. As outlined in this super-user question/answer, the trick is to add http://scalakata.com/ to the allowed websites (whitelist) in the cookie exception settings. @MasGui - perhaps the best approach would be to set a test cookie, because that will trigger the cookie acceptance dialog in Firefox.
What you're looking for are partial functions; they're the explicitly non-exhaustive counterpart to exhaustive pattern matching, so you can ignore the cases you don't have any use for. They look like:
    xs.collect { case Some(i) => doSomething(i) }

(collect applies the partial function and skips elements it isn't defined for, which is exactly the "ignore the cases you don't need" behaviour.)
You can also - though I don't recommend it, I think it's a bit too non-obvious - call .flatten on a collection of Options; so a List[Option[Int]] gets flattened to a List[Int] while discarding all of the empty Options.
I don't think it's bad to be explicit about discarding Nones in a match statement, though.
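A quick sketch of both options (partial function via collect, and flatten), with made-up sample values:

```scala
val xs: List[Option[Int]] = List(Some(1), None, Some(3))

// collect takes a partial function and silently skips non-matching cases
val doubled = xs.collect { case Some(i) => i * 2 }  // List(2, 6)

// flatten discards the empty Options wholesale
val flat = xs.flatten  // List(1, 3)
```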
No it doesn't. GHC has ignored the spec that says
> "A type may not be declared as an instance of a particular class more than once in the program."
for at least the last 8 years: https://ghc.haskell.org/trac/ghc/ticket/2356
Straightforward example code https://stackoverflow.com/questions/12735274/breaking-data-set-integrity-without-generalizednewtypederiving/12744568#12744568
You can't just make a blocking API non-blocking. In most cases, you should be able to find a non-blocking replacement, though. For file I/O, HTTP, DNS, MongoDB, Cassandra, and even PostgreSQL, non-blocking APIs are available afaik. According to https://aws.amazon.com/articles/5496117154196801, the AWS Java SDK provides at least some async APIs as well.
If you're forced to use a blocking API, wrapping it in a Future has the advantage that you can run the blocking calls on a separate ExecutionContext, so you can have a dedicated thread pool for blocking I/O operations. This is the approach Slick takes for example, to provide an async interface on top of blocking JDBC.
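A minimal sketch of the dedicated-pool approach; readFileBlocking stands in for any blocking API, and the pool size (16) is an arbitrary example, not a recommendation:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A separate pool for blocking calls keeps them from starving the
// default ExecutionContext used for CPU-bound work.
val blockingEc =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))

def readFileBlocking(path: String): String =
  scala.io.Source.fromFile(path).mkString  // blocking I/O

// Callers get an async interface; the blocking work runs on blockingEc.
def readFileAsync(path: String): Future[String] =
  Future(readFileBlocking(path))(blockingEc)
```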
Here's something interesting to mull over: their initial decision was to use Oracle. Then they migrated to MongoDB. Then they migrated to PG.
- Time on Oracle: 11 years (1999ish to March 2011)
- Time on MongoDB: 6.5 years (2011-2018)
- Time on PG: ???
I'd posit that their problem isn't the decision. It's something deeper, like not-invented-here syndrome.
Functionalworks are definitely worth talking to! They're that rare thing - A recruitment firm who are actually useful and know their stuff. I'd also recommend getting along to the London Scala User Group, and signing up for the london-scala-jobs mailing list.
Nice. From glancing over it, you might have some trouble with accumulating mapValues calls in each iteration (*). I think you end up with a linear slow-down there.
You can do SSR using either Node or the JVM. In the Node scenario, just do standard React SSR. In the JVM (GraalJS) scenario, things will be much more difficult: you will need to change your frontend code to work with a JVM JavaScript engine.
If your site can't be found using Google search, first check whether Google is indexing it: look at your sitemap.xml and Google Search Console. If your site is present in Google search but has a low position, try to analyze that problem: test your site with Google's PageSpeed test and check how Google sees your pages. You need SSR only to speed up the first render and to serve ready HTML to the Google crawler. Google can execute JS nowadays, so maybe the problem is not in SSR.