Indeed. When I looked into these glasses some time ago I thought "wtf", but it turns out that reducing the blue light you get from the screen over long periods is easier on the eyes. Of course, you can do this for free by just using the colour correction sliders in virtually all video cards/chips. Turn down blue to zero when working, turn it back to normal for doing other things (movies/etc). If you can use CMYK adjustment, that's even better. You can also use Powerstrip to quickly switch between low-fatigue and normal colour by assigning it to a key combo. The glasses are a rip-off, but they're not pseudo-science.
Edit: found a really easy solution for this. Get NegativeScreen. Run it and in the tray icon, click "Edit Configuration" and replace the (negativescreen.conf) file with this:
Toggle=win+W
SmoothTransitions=true
SmoothToggles=true
InitialColorEffect="Yellow"
Yellow=win+alt+Y
{ 0.33, 0.33, 0.0, 0.0, 0.0 }
{ 0.33, 0.33, 0.0, 0.0, 0.0 }
{ 0.33, 0.33, 0.0, 0.0, 0.0 }
{ 0.0, 0.0, 0.0, 1.0, 0.0 }
{ 0.0, 0.0, 0.0, 0.0, 1.0 }
The matrix there will convert all colours into a shade of yellow. Restart NegativeScreen and press Win+W to toggle yellow/normal colour. It defaults to yellow.
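For the curious, the per-pixel arithmetic behind that matrix can be sketched in a few lines of Python (assuming the row-vector-times-matrix convention the Windows magnifier API uses; the helper name is mine):

```python
# Sketch of how a 5x5 colour matrix is applied: each pixel is treated
# as the row vector (r, g, b, a, 1) and multiplied by the matrix.
# Channel values are in the 0.0-1.0 range.

MATRIX = [
    [0.33, 0.33, 0.0, 0.0, 0.0],
    [0.33, 0.33, 0.0, 0.0, 0.0],
    [0.33, 0.33, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]

def apply_matrix(r, g, b, a=1.0):
    pixel = (r, g, b, a, 1.0)
    # out[j] = sum over i of pixel[i] * MATRIX[i][j]
    out = [sum(pixel[i] * MATRIX[i][j] for i in range(5)) for j in range(5)]
    return tuple(out[:4])  # new (r, g, b, a)

# Pure white becomes a shade of yellow: red and green equal, blue zero.
print(apply_matrix(1.0, 1.0, 1.0))
```

Every input colour ends up with red == green and blue == 0, i.e. a brightness level on the black-to-yellow axis, which is exactly what you see on screen.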
This article is pretty uninformed. Doom 3 was the first Id game to use C++, and John Carmack was a legend well before. I'm guessing he has never thought of himself as a C++ guy --
> I sort of meandered into C++ with Doom 3...
and
> The major evolution that is still going on for me is towards a more functional programming style, which involves unlearning a lot of old habits, and backing away from some OOP directions - John Carmack 2013.
The quote appears in a comment John left on this Kotaku article by Shawn McGrath. The Kotaku article, by the way, could be viewed as a much better alternative to the linked article.
> These ABC pointers are passed in constructors, rather than trivially created because "unit tests"
Watch Miško Hevery's "Google Clean Code Talks" to get a better idea of what they're trying to do here. It's dependency injection and allows for mocking behavior out for testing. Works great in Java where you have introspection.
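A minimal sketch of the idea in Python (the class names here are invented for illustration):

```python
# Dependency injection: pass collaborators in via the constructor
# instead of constructing them internally, so tests can substitute fakes.

class SmtpMailer:
    def send(self, to, body):
        raise NotImplementedError("would talk to a real SMTP server")

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class SignupService:
    def __init__(self, mailer):  # dependency injected, not created here
        self.mailer = mailer
    def register(self, email):
        self.mailer.send(email, "Welcome!")

# Production wiring: SignupService(SmtpMailer())
# In a test, inject the fake and assert on what was "sent":
service = SignupService(FakeMailer())
service.register("a@example.com")
print(service.mailer.sent)  # [('a@example.com', 'Welcome!')]
```

If `SignupService` had done `self.mailer = SmtpMailer()` internally, the test would need a real SMTP server; that's the whole point of passing the ABC pointer in the constructor.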
I feel like I usually go into these articles with a chip on shoulder after having read so many of them linked here with either poor or narrowly applicable advice. This article read like a snippet digest of Code Complete and Clean Code; I think it is useful, but hopefully is common sense to most of us that have been at it a while already.
> I've never heard anyone voice a negative opinion on the book [Clean Code]
OK, here you go: I strongly disliked that book, and felt that some parts of it would be damaging for a relatively inexperienced programmer to read.
The basic problem I have with it is that it presents itself as a guide to how to develop software well, written by experts, perhaps with a similar target audience to classics like Code Complete. However, unlike the latter, it often seems to rely on little more than each author's personal preferences, with far too much hand-waving and (sometimes by their own admission) little real evidence to back up their assertions and advice.
A lot of professional programmers would disagree with some of that advice, sometimes with good reason, and sometimes with a large amount of real evidence on their side. But the book is, like much of Martin's work, very black-and-white in its presentation, and very heavily oriented to the style of development he and his colleagues prefer.
Just to be clear, I'm not saying it's all wrong or bad advice. The trouble is, with so many parts lacking supporting evidence or much in the way of balanced discussion about pros and cons and alternatives, it's basically impossible for anyone inexperienced enough to benefit from this kind of book to know which parts they can trust.
> especially as an author of a famous, widely read and often wrong or at least controversial book.
Someone said you are talking about Clean Code - I've never heard anyone voice a negative opinion on the book (which would be a requirement for it to be controversial) - I even tried googling and had trouble finding anything. Can you elaborate on this?
Lambdas are the oldest of toys. Alonzo Church discovered them in the 30s and developed The Lambda Calculus. In the 50s, McCarthy et al. designed languages around it. The power of lambdas is in their ability to capture their environment, e.g. variables in scope at the time of definition. This permits some incredibly mind-bending language features which blur the line between code and the data it operates on.

In the early 70s, Ritchie came along and developed C, an Algol-family language without lambdas. Its ability to pass around function pointers can be seen as one of the several language features necessary to support lambdas. Missing are "functions as first-class values" and environment capture (closures). Without these features, things like higher-order functions, currying, etc. become either prohibitively difficult or impossible to implement in a language. What you are noticing is that traditionally imperative languages are incorporating, where possible, ideas stemming from The Lambda Calculus, and with that a new set of programmers are discovering their value as tools of abstraction. This leads to a lot of rehashing of thought, where we find people wittingly or (more frequently) unwittingly exploring 60-year-old theory as an exercise in a language recently imbued with a subset of functional idioms.
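The environment capture and currying mentioned above are easy to see in a few lines of Python:

```python
# Environment capture (a closure), currying, and higher-order functions.

def make_adder(n):
    # 'n' is captured from the enclosing scope at definition time
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(3))  # 8

# A curried two-argument function: apply the arguments one at a time
def curried_mul(a):
    return lambda b: a * b

double = curried_mul(2)
print(double(21))  # 42

# Functions as arguments and results (higher-order functions)
print(list(map(add5, [1, 2, 3])))  # [6, 7, 8]
```

None of this is expressible with bare C function pointers, because a function pointer carries no captured environment: there is nowhere for `n` to live.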
If you want more info, I suggest reading Structure and Interpretation of Computer Programs by Sussman and Abelson. It's a really great read.
>1) Programming Perls
Oh come on man. You're not even trying.
>I don't thing, I need to convince any Java developer to read this book but for my C++, Python and Ruby programmers, you can learn a lot about API design, design patterns and writing clean and robust code from this book.
Also this? Brutal.
Don't forget The Pragmatic Programmer
I can highly recommend the Artificial Intelligence for Games book mentioned in this article too, has many useful techniques written in an easy to understand manner.
This implementation looks like it is broken. It creates the dict `d`, but it never puts anything into it, so the check `n in d` will always return false. Thus that case will never come into play.
The calculated results in the recursive case should be inserted into the dictionary.
The correct code would be:
```python
d = {}

def fib(n):
    if n == 1 or n == 2:
        return 1
    elif n in d:
        return d[n]
    else:
        f = fib(n - 1) + fib(n - 2)
        d[n] = f
        return f

print(fib(5))
```
Yes. All the time.
Also, there's another search engine: SymbolHound. It's not perfect and it doesn't replace Google, but unlike Google (and many other search engines), it doesn't strip symbols or certain words from the search query. It's extremely useful when you need to use a piece of code as a search query.
> Also for a specified compiler, target and switches you will get the same result every time.
That's the point though, it's not portable
>Also, bit bashing an int into a negative is something you really should know and it isn't undefined at all.
This isn't true; it's most certainly undefined, the same as signed overflow. And a compiler can and will optimise away signed integer overflow
https://stackoverflow.com/questions/4009885/arithmetic-bit-shift-on-a-signed-integer
Specifically, for the arithmetic left shift I was talking about:
"The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1× 2E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and E1× 2E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined."
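The unsigned rule quoted there is easy to model in Python (assuming a 32-bit unsigned int for illustration):

```python
# The quoted rule for *unsigned* left shift: the result is E1 * 2**E2
# reduced modulo one more than the maximum value representable in the
# type. Modelled here for a 32-bit unsigned int.

BITS = 32
MOD = 1 << BITS  # "one more than the maximum value representable"

def unsigned_shl(e1, e2):
    return (e1 * 2**e2) % MOD

print(unsigned_shl(1, 31))          # 2147483648
print(unsigned_shl(0x80000000, 1))  # 0  (the high bit wraps away)

# For a signed type there is no such wrapping rule: if E1 * 2**E2 does
# not fit, the standard says the behaviour is undefined, which is what
# licenses the compiler optimisations mentioned above.
```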
This person obviously lacks the creativity and insight required to make coding valuable and worthwhile. They also talk about it as if they don't know anything about it.
"Product Managers should be able to just make the app do what it’s supposed to do, without knowing how to code at all. The only thing a company should be creating are the things that make their product unique. Everything else has already been built in other apps and should be reused."
The idealism is cute, but nothing more. I always strive to do things that nobody has done before, that there aren't libraries for doing, or existing frameworks for. If you can't figure out how to do something that nobody has done before, you shouldn't be coding.
If you want to drag-and-drop an app then go make stuff using https://scratch.mit.edu/ and leave the real work up to the rest of us who know what coding is actually about.
Clean Code seems to be a mixture of the obvious (assuming you have experience) and a style guide presented as the "one true way".
The reason that I have learned to hate it is because of its followers ruthlessly imposing it onto me. When I try to argue with them, it's not that I have a different opinion, it's that I'm actually "doing things wrong".
~~The tone of the book does not help with this at all - no indication is given that this is just the author's opinion.~~
Just try to work with a large codebase with absolutely no comments at all, and thousands of 3-line methods. Then argue for days over variable names and spend weeks rewriting things for very debatable gains in code readability. You will understand then.
From Code Complete:
Minimize the number of returns in each routine. It's harder to understand a routine if, reading it at the bottom, you're unaware of the possibility that it returned somewhere above.
Use a return when it enhances readability. In certain routines, once you know the answer, you want to return it to the calling routine immediately. If the routine is defined in such a way that it doesn't require any cleanup, not returning immediately means that you have to write more code.
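A toy Python illustration of the trade-off both passages describe:

```python
# Single-return vs. early-return, side by side (toy example).

def find_index_single_return(items, target):
    result = -1
    for i, item in enumerate(items):
        if item == target:
            result = i
            break
    return result  # one exit, but you carry 'result' the whole way

def find_index_early_return(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i  # once you know the answer, hand it back
    return -1

print(find_index_early_return(["a", "b", "c"], "b"))  # 1
```

With no cleanup to run, the early return is shorter and there is no mutable `result` to track; in a routine that acquires resources, the single-exit style earns its keep.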
This is fantastic, how have I never heard of this before?
EDIT: It does say in the Notes section:
> Users often try to use the assume-unchanged and skip-worktree bits to tell Git to ignore changes to files that are tracked. This does not work as expected, since Git may still check working tree files against the index when performing certain operations. In general, Git does not provide a way to ignore changes to tracked files, so alternate solutions are recommended.
> For example, if the file you want to change is some sort of config file, the repository can include a sample config file that can then be copied into the ignored name and modified. The repository can even include a script to treat the sample file as a template, modifying and copying it automatically.
The various ByteString types are best for memory usage if you have a lot of strings (you can store UTF-8 encoded data in these too). The plain old String is unfortunately an alias for [Char], i.e. a linked list of Chars - which is one of the stupidest ideas in the core Haskell library imho.
http://www.haskell.org/ghc/docs/7.2.1/html/libraries/bytestring-0.9.2.0/Data-ByteString-Char8.html
>As I use regular expressions to define where the content is, no DOM parsing is performed, so malformed HTML pages are none issue.
Isn't this a cardinal sin?
They didn't:
> Within the explorer phase, you typically go through two steps. First, you ascend to the Peak of Inflated Expectations. You think you can do everything in Go, but without really understanding or grokking the Go ethos yet. […]
> Following the Peak of Inflated Expectations is the Trough of Disillusionment. You miss feature X from language Y. You haven’t fully bought into idiomatic Go. You are still trying to write in the style of other programming languages and are getting frustrated. […]
You see that you got stuck somewhere in phase 2.
I've seen a similar version of this list about half a dozen times. It's a decent list, but the problem with books being tried and true is that they are a bit antiquated.
Code Complete was one of my favorite programming books when I started learning, but it's missing many concepts it would have covered had it been written today.
As /u/uh_no_ has expressed, it's really hard to stay relevant to a large section of developers at the same time across languages and domain areas.
Great article, but as usual from the Nvidia kool-aid drinking camp, it neglects to mention the prohibitive latency of moving data to the GPU and back. GPGPU is great for processing large batches of data, but it's practically useless for more common interactive programs. Intel's Sandy Bridge has a fast ring bus with a very low-latency connection between the GPU and the CPU cores. But they still ship terrible GPUs, and anyway it's much harder to code in OpenCL [Intel GPU] than CUDA [Nvidia GPU].
Also, the available hardware varies a lot more for GPUs, so the program depends even more on the customer's hardware platform. This significantly reduces the cases where GPGPU software development applies.
The Silver Searcher is even faster than ack. I've been loving it so far.
Edit: according to the author, the only thing that is faster is something like ctags, which produces an actual index
Apple has also open sourced ARC and an ARC migration tool to the clang part of the llvm compiler project.
As a result, you now have more options in getting bugs addressed, getting questions answered, and seeing the source code itself.
There does seem to be a lot of refactoring going on with Objective-C memory management in clang (and I believe ARC will be refactored to better fit the clang pipeline). So, it's very likely the version in llvm 3.0 today is not the same as what you get with Xcode today. But I'm guessing these will converge at or before WWDC (and llvm 3.1).
This is an article by a third-party blog author that does not seem to be affiliated with Microsoft. You might want to read about the 1.0 announcement from the source instead.
Steve McConnell calls it "intellectual dishonesty" or "intellectual sloppiness."
> A related kind of intellectual sloppiness occurs when you don't quite understand your program and "just compile it to see if it works." One example is running the program to see whether you should use < or <=. In that situation, it doesn't really matter whether the program works because you don't understand it well enough to know why it works. Remember that testing can show only the presence of errors, not their absence. If you don't understand the program, you can't test it thoroughly. Feeling tempted to compile a program to "see what happens" is a warning sign. It might mean that you need to back up to design or that you began coding before you were sure you knew what you were doing. Make sure you have a strong intellectual grip on the program before you relinquish it to the compiler.
Code Complete 2nd Edition, p. 827
Sorry, no: he's using a class component, so he used that. And componentWillMount is the one that's unsafe to use.
I started doing this a few years ago with Joplin sync'd to my Nextcloud. I got tired of trying to copy notes from computer to computer. "Where did I write that snippet to do X again?"
This is also why the C++ community is busy trying to design a better module system. See for example the proposal A Module System for C++ by Gabriel Dos Reis, Mark Hall and Gor Nishanov, 2014:
> The lack of direct language support for componentization of C++ libraries and programs, combined with increasing use of templates, has led to serious impediments to compile-time scalability, and programmer productivity. It is the source of lackluster build performance and poor integration with cloud and distributed build systems. Furthermore, the heavy-reliance on header file inclusion (i.e. copy-and-paste from compilers' perspective) and macros stifle flowering of C++ developer tools in increasingly semantics-aware development environments.
>
> Responding to mounting requests from application programmers, library developers, tool builders alike, this report proposes a module system for C++ with a handful of clearly articulated goals. The proposal is informed by the current state of the art regarding module systems in contemporary programming languages, past suggestions [4, 6] and experiments such as Clang's [2, 5], and practical constraints specific to C++ ecosystem, deployment, and use. The design is minimalist; yet, it aims for a handful of fundamental goals:
>
> 1. componentization;
> 2. isolation from macros;
> 3. scalable build;
> 4. support for modern semantics-aware developer tools.
>
> Furthermore, the proposal reduces opportunities for violations of the One Definition Rule (ODR), and increases practical type-safe linking. An implementation of these suggestions is currently underway.
The Clang-specific feature mentioned in this introduction is documented there.
You should probably check with ideone.com to make sure this is permitted in their ToS or that they're OK with it. They appear to be ad supported, and this bot circumvents the ads.
It would be really cool if the bot was able to sandbox everything itself, but yeah - that's playing with fire. :-)
I would not recommend w3schools. It's a very controversial site and not very fun. Of course this is mostly opinion but one that is shared by many professional web developers. I'd suggest you do a little research on the matter.
If I had to make a recommendation, I have heard good things about codecademy. Look here:
http://www.codecademy.com/tracks/web
Also, don't worry about feeling like an idiot. You are going to make really stupid stuff for a while, just enjoy it. And don't worry too much about whether you are using the right resources or not. Just learn enough to experiment then experiment until you need to learn more.
Clickbaity title, so here is an r/savedyouaclick:
Java : The Complete Java Reference
Effective Java 2nd and 3rd Edition
Java Generics and Collections
Core Java Volume 1 and Volume 2
Java concurrency in practice
I can attest to Code Complete. The Pragmatic Programmer is also a good book on the practicalities of writing quality code.
Edit: For a specific example, they provide advice like comments should not describe what the code is doing. The code already does that, and is guaranteed to be accurate unlike a comment that may not have been kept up to date. Comments should instead describe why the code does what it does. Accounting for a weird edge case? Add a comment so the next person to look at the code (who may be you!) doesn't think you just screwed up and "fix" it.
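A quick Python sketch of the distinction (the backoff scenario is invented for illustration):

```python
# "Why" comments, not "what" comments.

def retry_delay(attempt):
    # Bad comment: restates the code.
    #   multiply 2 to the power of attempt by 0.1, cap at 30
    #
    # Good comment: explains reasoning the code can't show.
    #   Exponential backoff, capped at 30s: the upstream API rate-limits
    #   aggressively, and a fixed delay caused us to hammer it in bursts.
    return min(0.1 * 2 ** attempt, 30.0)

print(retry_delay(3))  # 0.8
```

The "what" comment goes stale the moment the formula changes; the "why" comment is what stops the next person from "fixing" the cap back out.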
I totally agree. Even if you want to argue the underlying point about conditionals, this code excerpt seems a particularly poor motivating example. I suspect, mainly based on evidence from Code Complete, that the number of bugs related to confusion about pointer manipulation in C, especially indirect pointers, is an order of magnitude higher than the number of bugs where someone forgot to consider either an explicit or implicit path.
As mentioned in the article, the book Structure and Interpretation of Computer Programs (SICP) covers topics like this. It's why I believe every programmer should read that book. It's even freely available from MIT Press, so definitely check it out!
I'm going to assume you're a fairly young programmer just getting started, so a couple things not even related to the code:
Ok, on to code stuff.
`o.println(build(l, s, aid));` -- compare to `System.out.println(buildMessage(level, message, artifactId))`. The second version makes it clear where we're printing, what the build() method does, and the purpose of the parameters. Check out the book "Clean Code" if you want a good manual on writing nice code.
According to who?
Goodreads has lists of books, which is more within the scope of the site, e.g.
Maxima is somewhat similar to Mathematica, but old-school and free. It's available for Windows, OS X, and Linux/BSD/Unix.
Writing Python (a fine general-purpose programming language, though its lambdas are limited to single expressions (which you should not care about at all for this application)) requires giving discrete steps to the computer. Ditto for Fortran (as another commenter suggested; Fortran is the ideal for programming massively parallel computers like the Earth Simulator, where the compiler has to be able to pick up implicit parallelism in the code).
Maxima and Mathematica take a formula and operate directly on it. Maxima has a solve() function, and a sum() function, for example. Functions operate on other functions (in CS terms, functions are "higher order"). You can pass various functions to the float() function to get a decimal result instead of a fractional/rational result, for example.
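If you want a feel for the exact-vs-decimal distinction without installing a CAS, Python's standard library has a rough analogue:

```python
from fractions import Fraction

# A CAS like Maxima keeps results as exact rationals until you ask
# float() for a decimal. Python's Fraction behaves the same way.

total = sum(Fraction(1, n) for n in range(1, 5))  # 1 + 1/2 + 1/3 + 1/4
print(total)         # 25/12  (exact, like Maxima's default output)
print(float(total))  # 2.0833...  (the decimal approximation)
```

This is only the arithmetic side, of course; the symbolic solving and summation are where Maxima really earns its keep.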
This should get you started: http://maxima.sourceforge.net/docs/tutorial/en/gaertner-tutorial-revision/Pages/sum01.htm
Of course, you still have to be able to read all of that stuff in order to translate it. "Computer algebra systems" like Maxima and Mathematica will solve things for you, but if you want to continue on in math (which I have not done), it may not be desirable to use Maxima etc. as a crutch for knowing how to solve stuff.
Good luck!
I see you've edited the wikipedia article titled XOR cipher to include a link to your program. Might that be a conflict of interest?
Edit: Here's a link to the source code, as it seems to not be available via your sourceforge link. As /u/Bottled_Void points out, this will have problems with files starting with the ~ character.
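For anyone wondering what an XOR cipher actually is, here's a minimal Python sketch (not the linked program's code, just the general technique):

```python
from itertools import cycle

# XOR each byte of the message with a repeating key. Applying the same
# operation twice restores the input, which is also why a short
# repeating key offers essentially no security.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

msg = b"attack at dawn"
ct = xor_bytes(msg, b"key")
print(xor_bytes(ct, b"key"))  # b'attack at dawn'
```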
There's an interesting template trick to create the optimal (or near-optimal, not sure) run-time power code for any integral power known at compile-time. I wrote it awhile ago, and you can see it here: http://codepad.org/8S7mQqKa
I've tested it in GCC and VS, and both will optimize to a clean disassembly of fmul over and over again (for Power<997>, it does 15 fmuls).
You could also expand it to do negative powers, or add a type parameter so it's not just for doubles.
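For reference, the run-time algorithm such a template expands into is plain exponentiation by squaring, easy to sketch in Python:

```python
# Exponentiation by squaring. For n = 997 (binary 1111100101) this does
# 9 squarings plus 6 extra multiplies = 15 multiplications, which lines
# up with the 15 fmuls observed for Power<997>.

def power(x: float, n: int) -> float:
    result = 1.0
    base = x
    while n > 0:
        if n & 1:           # multiply in this bit's contribution
            result *= base
        base *= base        # square for the next bit
        n >>= 1
    return result

print(power(2.0, 10))  # 1024.0
```

The template version just unrolls this loop at compile time, since the exponent is a known constant.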
Cool project!
I didn't go into too much detail when reading the source. I did see, however, that some functions in the stats.py file are pretty big. You should consider refactoring those bad boys and reducing the responsibility held by those functions. If you haven't already, take a look at these books:
Keep up the good work!
This "tutorial" has some of the same problems that the vast majority of other C/++ "tutorials" littered around the net have: very terse, and lacking in important, finer details.
For example, when mentioning floating-point literals, the difference between `3.14` and `3.14f`, where the former is a `double` and the latter is a `float`, was never mentioned. Hell, the only hint of it was in the list of examples, with no explanation of why it was there.
When they brought up constants, there wasn't even a brief summary of what the preprocessor is, or what it does, and how using `#define` is very different from using `const`, both in terms of how the compiler "sees" it and where the "constant" can even be used. No mention of why you would need a `const`, what constant expressions are and where they are expected, top-level/low-level `const`, etc. For such a deep topic, it seemed to get the least detail.
EDIT: This page has a pretty good list of books, including free ones. More specifically, here is a very good list of C++ books, categorized by skill level and scope, and that IIRC is kept relatively up to date.
I firmly believe that a language as complex as C/++ cannot be learned from online "tutorials"... Grab an acclaimed book like C++ Primer or Bjarne's book on C++. Hell, grab more than one. You'll learn way more, and much more correctly than from fragmented, terse (and sometimes blatantly wrong) online "tutorials". Just my $0.02
N.B. YMMV, not all online resources are garbage, especially when they're references or tips, and I can't say that good tutorials are non-existent (I just believe they are exceedingly rare :p).
Introduction to Algorithms is not really an introduction. At least the 10-year-old version I have is more of a reference book for when I need a specific algorithm, and even then Wikipedia is usually better. It does not cover algorithm analysis in as much depth as it could.
That said, having browsed more than half the books on that list, it most certainly is the best of them.
Every project should have a "one true style guide". With so many different preferences out there, it's hard to guess how the people in charge of a certain project want you to write your code.
The JavaScript language will never have a PEP 8.
I would love to see Bob Martin's books here.
Clean code would fit well under practices and Clean Coder would be great in the career or soft skills section.
Some material on design patterns would also be relevant across the board.
"When The Pragmatic Programmer was written, the most important software in the Enterprise still ran on the desktop. It might do a couple socket connections, maybe consume some data through a Web form but that was it. Software was a thing that lived on someone’s desktop or laptop machine. If it worked, it worked for one user."
Really, no one wrote server-based software 18 years ago? I'd suggest databases alone were more important to the enterprise than desktop software even back then. Many apps I personally developed had a client-server model - maybe not web-based, but not a world removed architecturally.
> we don't know what he has been doing for those 30 years
Uhh, you do know Martin's pretty well-known, right? Not to you, I guess, but many, many software people know who he is.
He co-authored the original Agile Manifesto, which has had a huge influence on software management practices. He wrote Clean Code, which is literally sitting on my desk a foot away from me. He has kept a long, extensive blog for years, as well as a consultancy.
I don't even agree with him more than half the time, but trust me, he's not some rando on the internet. He has deep roots in large-scale object-oriented programming. His point about coding for 30 years is to highlight the fact that he's advocating for something different. It's like he's saying, "why am I about to argue in favor of a paradigm (FP), when I've spent 30 years using a different paradigm (OO)?"
Keep in mind that if you're invoking Robert Martin (as the author did by quoting Clean Code) that he assumes you're doing TDD and everything he says is coloured by that.
With TDD you'll have a thorough automated test suite which acts as a kind of documentation of the business requirements and thinking that were behind the code.
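A toy Python sketch of what that looks like in practice (the discount rules here are invented for illustration):

```python
# "Tests as documentation": each test name states a business rule, so a
# reader can learn the requirements from the suite without any prose.

def discount(order_total, is_member):
    if is_member and order_total >= 100:
        return 0.10
    if order_total >= 100:
        return 0.05
    return 0.0

def test_members_get_ten_percent_off_orders_of_100_or_more():
    assert discount(100, is_member=True) == 0.10

def test_non_members_get_five_percent_off_orders_of_100_or_more():
    assert discount(100, is_member=False) == 0.05

def test_small_orders_get_no_discount():
    assert discount(99, is_member=True) == 0.0

for t in (test_members_get_ten_percent_off_orders_of_100_or_more,
          test_non_members_get_five_percent_off_orders_of_100_or_more,
          test_small_orders_get_no_discount):
    t()
print("all requirements hold")
```

Unlike a comment, this "documentation" fails loudly the moment the code stops honoring a requirement.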
The Design of Everyday Things is great. Surprised there was no Code Complete. Other great books worth looking into are Release It! (pragmatic design patterns), Continuous Delivery (growing more and more common with cloud computing), and Thinking, Fast and Slow (like Everyday Things, this is more about understanding your users). And find a book on systems theory that matches your level of understanding; this underpins both your software system and the larger organization/society that uses it. (Try and find something that includes Cynefin.)
Not exactly sure what the salary of this equals but: Udacity guarantees you a job offer as an employee or a contractor within 6 months from receiving your Nanodegree Plus credential (the Job Placement Period). Further, in your new job Udacity guarantees that your gross income from such job will be in excess of your cost of tuition (pre-tax) within a 3 month period following job placement.
If you think this is neat, check out spatch (https://github.com/facebook/pfff/wiki/Spatch). It's a tool for applying "syntactic patch" files to do similar transformations. For example, (part of) the net.Dial change would look like this in spatch:
```
- tls.Dial(a, "", b, c)
+ tls.Dial(a, b, c)
```
I've had a couple of people recently ask me about the exchange software at Smarkets, so I wrote about one of the systems I still remember. I've poked the Smarkets team about writing more technical articles. I also just noticed that their web framework is now on github: https://github.com/skarab/smak
Given that my phone is magnitudes more powerful than those embedded systems, sure, it makes a big difference. Those network monitoring boxes were expensive - today you can buy a whole board with a GB plus of memory and several GB of flash and multiple cores for less than the cost of just the CPU (some 486 clone) on that device... If you've got a GB of memory and several GB of flash, most people won't bother to optimize at all unless they have very specific hard requirements when it comes to startup times etc.
E.g. there's no inherent reason why your phone should take 20-30 seconds to boot, other than that it has sufficient resources that nobody bothers to optimize things, and people boot their phones rarely enough that there's just insufficient demand to make any money off of optimizing the boot process much and so it gets little attention.
The irony is that this has actually made us go backwards in many ways.
I play with AROS on occasion - a reimplementation of sorts of AmigaOS. It can run native, but it can also run hosted under Linux (and others). I can get AROS to boot hosted on Linux on my laptop, and start a text editor, faster than I can start Emacs on the same laptop. That is, it brings up a full OS and graphical UI and executes a startup script to start the text editor faster than Emacs starts....
There's nothing special done to achieve that, other than a system design that was developed to work on a machine with 512KB RAM and a 7.16MHz 68000 CPU...
Good article. Another developer and I have been working on a generalized way to do tagged unions and exhaustive pattern matching in C# without the need for the sealed class stuff, or even a class hierarchy at all if you like. Check it out - https://github.com/Jagged/OneOf | https://www.nuget.org/packages/DiscU/0.0.29
Best way to find online programming courses and tutorials
You go to Google, search for a language (for example AngularJS), and get hundreds of suggestions, but you don't know which one to choose from those hundreds of good-looking tutorials. First-page results on Google don't mean they are the best tutorials, as Google ranks results by SEO and NOT by the content quality of the tutorial. Hence you end up wasting a hell of a lot of time, money, energy and peace experimenting with multiple tutorials to find a good one. The worst thing is that your passionate start peters out into a search for the best course, whereas you should be starting with the best course in under a minute. Hackr.io is the solution.
What Hackr.io does: it recommends the best online programming tutorials for any programming language. All the tutorials are submitted & voted on by the programming community. It's like StackOverflow for online programming tutorials. The community upvotes the tutorials they like, so over time the best tutorials rise to the top of the page.
Give it a hit - https://hackr.io/
read this article from JavaWorld - http://www.javaworld.com/article/2076864/java-concurrency/building-an-internet-chat-system.html
Also for learning Java - https://hackr.io/tutorials/learn-java
I have to disagree with the notion that there's something inherently wrong with nesting callbacks. Yes, it's wrong in the node.js world, because callbacks managing callbacks become impossible to manage very quickly. But in the Ruby world, 3 nested callbacks are most likely just 3 lines of synchronous code. It's neat, manageable, debuggable, etc. That's what I miss about synchronous programming.
I've been using promises which lets me do
```javascript
asyncFunction1()
    .then(function(result) {
        return asyncFunction2();
    }, function(err) {
        // do something about err
    })
    .then(function(result) {
        // do more...
    }, function(err) {
        // do something about err
    });
```
Which looks way better than nested callbacks.
And in ES7, it's looking even more promising. It looks almost like synchronous code.
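For comparison, Python's async/await (a close cousin of the ES7 feature) shows the same flattening; the function names here are made-up stand-ins:

```python
import asyncio

# Async steps read top-to-bottom like synchronous code, with an
# ordinary try/except instead of an error callback per step.

async def async_function1():
    await asyncio.sleep(0)  # stand-in for real async work
    return "first"

async def async_function2():
    await asyncio.sleep(0)
    return "second"

async def main():
    try:
        a = await async_function1()
        b = await async_function2()
        return [a, b]
    except Exception:
        # one error path instead of a handler per callback
        raise

print(asyncio.run(main()))  # ['first', 'second']
```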
Scratch (http://scratch.mit.edu) is a great starting language for kids. There's a reimplementation of Scratch aimed at high schoolers and adults (http://snap.berkeley.edu/), but I haven't looked into it.
From the about page:
>SpaceVim is a community-driven vim distribution that supports vim and Neovim. SpaceVim manages collections of plugins in layers. Layers make it easy for you, the user, to enable a new language or feature by grouping all the related plugins together. It got inspired by spacemacs.
Basically, it's a configuration for vim which is curated by a community. You can add or remove packages to suit your preferences and to add functionality for new languages, features, etc. I haven't used it yet, but I do love Spacemacs, which is the same idea for Emacs: http://spacemacs.org/
The type system is deliberately unsound, so even in fully typed code, there are holes. For example, take a look at [this](https://www.typescriptlang.org/play/index.html#src=function%20invokeCallback(callback%3A%20(o%3A%20Object\)%20%3D%3E%20void\)%20%7B%0D%0A%20%20%20%20callback(%22not%20a%20number%22\)%3B%0D%0A%7D%0D%0A%0D%0Afunction%20expectNumber(n%3A%20number\)%2...
    function invokeCallback(callback: (o: Object) => void) {
        callback("not a number");
    }

    function expectNumber(n: number) {
        console.log(n * 10);
    }

    invokeCallback(expectNumber);
Here, `invokeCallback()` expects to be given a function that accepts any object. It calls it and passes it a string, which is fine, since strings are objects. The `expectNumber()` function takes a single parameter, which is required to be a number. We then call `invokeCallback` and pass it `expectNumber`. At runtime, the parameter `n` in `expectNumber`, whose type is `number`, will hold a string.

That `*` then fails (which in JS means it evaluates to NaN), which is exactly the kind of thing you expect a type system to catch for you.
In this case, the hole is that TypeScript treats functions as bivariant in their parameter types instead of contravariant. The latter is sound, the former is not.
Note that this code is fully typed, takes in no user input, and doesn't do anything squirrely, and yet has no static errors or warnings. It is the type system itself which simply (and deliberately) has holes in it.
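For what it's worth, TypeScript later added a `--strictFunctionTypes` flag (2.6+) that checks standalone function types contravariantly and rejects this call at compile time. Here's a sketch of the runtime consequence; the double cast simulates the default bivariant behaviour so the snippet compiles under any settings:

```typescript
// Same shape as the example above. With --strictFunctionTypes (TS 2.6+),
// passing expectNumber directly is a compile error; the cast below
// simulates the older bivariant acceptance so the runtime failure is visible.
function invokeCallback(callback: (o: Object) => void): void {
  callback("not a number");
}

let result = 0;
function expectNumber(n: number): void {
  result = n * 10; // "not a number" * 10 evaluates to NaN
}

invokeCallback(expectNumber as unknown as (o: Object) => void);
console.log(Number.isNaN(result)); // true
```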
Hi! Thanks for your interest and your question! :)
If you follow the link, you can see that the first paragraph of the readme includes a line linking to the official site and documentation:
See Leo, the Literate Editor with Outline, at leoeditor.com or on github, and VS Code at code.visualstudio.com.
Also, this is not a 'language-specific' tool, so I didn't think of posting in 'programmingLanguages', but you might be right that it could reach people who have an interest in something like Leo. So thanks again!
Related: 10 Minute Mail is great to use as a throw away email without the hassle of actually creating an account. You can cut down on spam email massively if you make good use of it, though obviously it won't stop spam that you're already getting.
http://10minutemail.com/10MinuteMail/index.html
Edit: Just realised how much of a shill I sound like. Whatever, it's an awesome service.
Clean Code does something like that: it takes a piece of Java code and refactors it step by step, explaining each refactoring and how it makes the code more "readable".
> Working Effectively with Legacy Code by Michael Feathers
This is considered the benchmark, today.
I work more with end-to-end integration tests than with unit tests. Some suggestions:
Great suggestions! I also highly recommend Code Complete: A practical handbook of Software Construction (Second Edition) by Steve McConnell
Duplicated Code - I use the following rule: 1, 2, 3, refactoring
Commented Code - The hell is when you have various implementations of the same function commented out. :/
These names were given by Uncle Bob in Clean Code
A short explanation can be:
Divergent Change
A class being changed for many different reasons. The class has a lot of different responsibilities.
Shotgun Surgery
A lot of classes being changed when one concern changes. One responsibility is shared by a lot of classes.
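A hypothetical sketch of both smells (the `Invoice`/`Receipt`/`Quote` classes are made up for illustration, not from Clean Code itself):

```typescript
// Divergent Change: one class changed for many different reasons.
// This class must change when persistence, formatting, OR tax rules change.
class Invoice {
  saveToDatabase(): void { /* persistence concern */ }
  renderHtml(): string { return "<p>invoice</p>"; } // formatting concern
  computeTax(total: number): number { return total * 0.25; } // tax concern
}

// Shotgun Surgery: one concern smeared across many classes.
// Changing the tax rate means editing every one of these.
class OrderSummary { tax(total: number): number { return total * 0.25; } }
class Receipt { tax(total: number): number { return total * 0.25; } }
class Quote { tax(total: number): number { return total * 0.25; } }
```

The usual fixes point in opposite directions: split the first class apart by responsibility, and pull the duplicated tax logic of the second group together into one place.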
Fair enough, and to be honest...I learned how to code on my TI-83 back in study hall nearly 2 decades ago. Today, I'm a pretty solid engineer and have been professionally programming for almost 10 years.
It's not a bad idea to learn the basics of a language before getting full-on into the aspects of programming...and to your credit, learning something in the C-family will give you the most versatile knowledge when it comes time to get into the deeper stuff.
A lot of people who are just starting to learn how to program decide to set out to learn x languages in y amount of time or something like that, and after about the second language, they realize that once they know C-syntax and how to program, picking up new languages is trivial. A talented programmer who knows C or C++ could easily pick up the basics of Java or C# in a weekend...and after spending a month or so writing it, they'd likely have all of the major libraries and functions committed to their memory.
When you have some of that basic code-writing down, I'd check out a couple of books:
First off, the Programmer's Bible: The C Programming Language by Kernighan and Ritchie. It's an older book, but its relevance has never diminished.
Next up, I'd say read The Pragmatic Programmer by Hunt and Thomas. Great book, I recommend it to anyone...but it does count on you having the basics down.
Code Complete by McConnell is a lot like the Pragmatic Programmer, but it's a lot more wordy, for lack of a better term. It's still a good one to have on your shelf. I found that one is not as easy to read cover-to-cover, but the information in it is valuable no less.
Hope that helps, and good luck!
I'm not sure if this is what you're asking, but I just barely started doing the Project Euler problems. I've been out of school and coding for over 5 years. It's so nice to sit down and just focus on solving a problem without worrying about commenting my code, or integrating it into some big project, or putting in a bunch of error handling to deal with ignorant users. It's just me and the machine trying to solve a problem in a quick and efficient way.
On the other side of the spectrum, I've also just started reading Head First Design Patterns. I've only read about the first pattern, but as luck would have it I had just spent a couple of weeks working on a problem that had the exact problem the Strategy pattern describes. My solution was not nearly as clean, and I felt that at the time. Learning this elegant solution at that exact time really excited me and I'm looking forward to the rest of the book.
When you start learning imperative programming, most languages can teach you the basics. What's a variable, what's a loop, what's a function, different types, etc.
The benefit with JS is that you can just start coding right away; there's no platform installation step. Open browser, open console, write JS.
Want to preserve your changes? Write an html linking to a js file (or even use http://codepen.io/ or http://jsfiddle.net/) and now you can wow your friends with your creations.
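For example, something like this covers a variable, a loop, and a function, with output appearing instantly (the type annotations are TypeScript flavouring; drop them to paste it into a plain JS console):

```typescript
// A first program: a function, an array, and a loop,
// all runnable directly in the browser console.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

const names = ["Ada", "Grace"];
for (const name of names) {
  console.log(greet(name)); // prints "Hello, Ada!" then "Hello, Grace!"
}
```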
I see immediate gratification as an important step to keeping the person engaged in the activity. I think that's the most important aspect: make that human keep up with the learning.
Now, if you have a tenacious individual that WANTS to learn programming and has no problem reading books, sitting down and practicing, pretty much anything can do the trick. That person WILL learn to program because he/she wants to do it and has experience learning by themselves.
Well, it sounds like you're in desperate need of help, so I'll show you where I learned basic programming myself.
Specifically the Python tutorial.
Man, it's good.
But yeah, if you're just starting out, that's a good resource for beginners. They also have Ruby and JavaScript and more, so I'm sure you'll find some use for that website.
RVO may eliminate one copy, and op+ "like ints" requires a new object. Copy elision and moves also help on both the parameter and the return value. When all copies can be eliminated, which copy is unnecessary?
RVO has been a known technique, in the context of this language, since the USSR broke up, circa 1991. You can't both worry about unnecessary copies and ignore copy optimizations that are old enough to legally drink.
I did a bit of this visualization a few years ago, you can play with the slider on the bottom:
The magic number is around 1597463007. The graph is in log-log mode, and the linearity of the plot, with a near-constant slope of -1/2, suggests that no matter what the magic constant is, this function approximates C * rsqrt(x) for some C. Shifting the magic number only translates this line, which suggests that for any C*rsqrt(x) (within the range of the graph), you can find a magic constant that natively approximates it. For example, if you want a native approximation of 2*rsqrt(x), you can slide the slider until the line passes the y-intercept at 2. Unfortunately, you can't actually see this number, but if you go through the rest of this article, you'll find that the magic here would be float_to_int(2) + float_to_int(1)/2, or 0x40000000 + 0x3f800000/2 = 0x5fc00000 (e.g. https://repl.it/repls/EminentWorthlessLoaderprogram). The approximation there differs from the Quake one because it isn't centered, so the approximation is only good around perfect powers of two. But by multiplying the magic constants by 0x5f3759df/0x5f400000, these approximations should improve.
For any constant c, c * rsqrt(x) can be approximated with the magic number float_to_int(c) + 0x1fc00000
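For reference, the trick this all builds on can be sketched with typed arrays to reinterpret the float bits (this is the classic Quake-style constant 0x5f3759df plus one Newton step, not my own derivation):

```typescript
// Quake-style fast inverse square root, using a shared buffer to
// reinterpret the same 4 bytes as float and as uint32.
function fastRsqrt(x: number): number {
  const buf = new ArrayBuffer(4);
  const f = new Float32Array(buf);
  const i = new Uint32Array(buf);
  f[0] = x;
  i[0] = 0x5f3759df - (i[0] >>> 1); // bit-level initial approximation
  let y = f[0];
  y = y * (1.5 - 0.5 * x * y * y);  // one Newton-Raphson refinement step
  return y;
}
```

Sliding the magic constant, as described above, shifts this line up or down on the log-log plot; 0x5f3759df is the centered choice that minimizes the error across each power-of-two interval.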
Depending on your school, the POSIX compliant nature of Mac OS might make a MBP worth it. It gets you the perks of running something like linux, but you can also watch Netflix or run Photoshop without having to hack together an elaborate workaround. So even if you're not in love with Apple (and these days, who is?) it's still mighty convenient to be using a Mac.
This might vary from school to school; "software engineering" might involve more MS technology than "computer science" does at my school. Here, the department infrastructure is Linux from top to bottom, so Macs fit right in. If your school isn't like this, everyone else is spot on in suggesting a Lenovo. Macs are solid, but my experience is Lenovo laptops actually stand up better to abuse in practice. Drop a MBP, and the screen will crack. A Thinkpad, probably not.
Personally, I have a 13" MBP as well as a little EeePC running Ubuntu with xmonad.
I second /u/josuf107's solution, but I wanted to provide one similar to the original. It's not as smooth but I'm a slow typist who makes a lot of mistakes and there's a limit to how many times I bother doing it before getting fed up: Asciicast. On the other hand I do start off with an empty file.
I am the author of the Zeus editor and it has ctags integration (but Zeus only runs on Windows).
To ctags your source code in Zeus you would create a workspace and then drag and drop the 100,000 files onto the workspace using Windows Explorer.
But with that many files I suspect it would take quite some time for this tagging process to complete. For example, even if 100 files were tagged every second, the process would take some 1,000 seconds to complete.
I've never tried doing this with such a large number of files, but I would hope given enough time it would complete the tagging process.
If the process did complete you could then search the tags produced for any keyword or partial keywords.
Exactly. It is great to be able to parallelize easily (not unique to perl6, either), but using the results of parallel computation efficiently (hint -> it does not include waiting for all the computation to be complete and then continuing processing in a single thread), creating code paths with real-world complexity, ensuring reliability, being able to do this with multiple machines in a cluster (not just locally), ...
A rather more interesting approach is Elixir's new GenStage with the Flow API. http://elixir-lang.org/blog/2016/07/14/announcing-genstage/
> I haven't been able to embrace emacs. I'm sure you've heard that before :)
Yea, for sure. It's not for everyone. There are frameworks such as doom-emacs that make it more approachable and has reasonable defaults.
So glad to hear that you are going to try it out -- please make sure you let us know how you like it! The visualization of it all is a great characteristic of Ungit. If you have any suggestions, or changes you think can be made, be sure to contact: https://github.com/FredrikNoren/ungit. It's open-source, so if you want to jump in and contribute, that would be amazing.
This is a bit of a weak spot for Make. Things are easier if you just stick the object files where the sources are (but this is not very nice).
However, there's a trick to get the outputs into a different folder without having to write all the rules from scratch: run make in your output folder, and use vpath directives to tell make where the sources are.
NOTE: vpath doesn't work well for searching object files. It's best used to search for source files.
E.g. you want to have your sources under `project/src` and put the build files in `project/build`. Add something like this to your `Makefile`:
    SRC_PATH ?= $(dir $(abspath $(firstword $(MAKEFILE_LIST))))
    vpath %.c $(SRC_PATH)src
This sets SRC_PATH to be the directory where your Makefile is and then tells make to search for .c files in the src subdirectory.
And then in the `project/` directory run `make -C build -f ../Makefile`.
I would of course need to test it out to have any proper critique, and I'm not criticizing your product specifically. The template stuff really looks like a great thing (even though I've just seen the very surface of it), and I totally trust your experience in how to handle stuff in a big project environment.
It was mostly a random thought I blurted out because I've started doing node based tools for different things myself, and I always get a feeling that I'm just making something that's more cumbersome and less descriptive than... plain text basically. Might be that I'm doing it wrong though. :D
If I were to brainstorm on alternative ways, I would like to experiment with some simple DSL that can be transformed with templates (good idea), coupled with a heavily specific editor that adds the good things from visual programming like preview of intermediate values and specialized input widgets (color pickers etc), while still retaining the power of text editing (compact, fast). Not sure exactly how it would turn out, but it feels like Light Table is aiming to be something like that. Not sure, haven't actually tried it.
Just thinking out loud. Or well, in text. :)
I ported a toy version of Asteroids to my FRP (functional reactive programming) library, in case you're interested. It's written in Haskell, of course.
You ought to be familiar with the project if for no other reason than the benchmarking they did. You may be familiar with John Myles White, a Facebook data scientist who is one of the main contributors to and proselytizers of the language, and who wrote Machine Learning for Hackers.
-- Also a Data Scientist
I would recommend https://www.codecademy.com. They teach the basics of a ton of different languages, so I would pick one and just focus on learning the concepts of the language more so than the language syntax. They have everything you need to start, right in your web browser. I personally would recommend java or another object oriented language for starting.
https://scratch.mit.edu is also a great place to start learning the concepts, if you want to learn logic statements and programming basics in a more visual way.
Good luck!
> found his expectation of stable APIs over a 10 year timescale unreasonable
Source compatibility over extended time periods is extremely desirable. Go's compatibility promise has already met such expectations for five years and shows no sign of not continuing to do so for quite some time. In other words, such an expectation is entirely reasonable.
I once met a blind guy who makes a living as a programmer, business coach, analyst, and a few other things I don't understand. He's also an avid mountaineer, climbing the highest summits of each continent (including Antarctica; 3 more to go).
That's the guy:
It might sound like an ad or something, but I was really impressed by the guy's approach to life, optimism and confidence, all despite the hardship that being blind must be.
I think this article is just the tip of the iceberg. A great discussion of exceptions is in Chapter 9 of Effective Java by Joshua Bloch (yes, that's Java, but most of the ideas apply in any OOP language). I highly recommend that book.
Boolean parameters usually indicate a violation of the Single Responsibility Principle. In this case there is a parameter that toggles whether the cache should be checked. That can be split into another method, e.g. GetFromCache. This makes both methods much easier to test, since there are fewer code paths per method.

The other parameter automatically creates a new object. For that one I would just remove the parameter altogether and either return null or throw an exception if the record doesn't exist. The code calling the method would then call CreateRecord when needed. Again, GetRecord becomes a lot simpler and easier to test, and the calling code becomes easier to read and understand.
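A minimal sketch of that split, with made-up names (`RecordStore`, `getFromCache`, `createRecord`) and an in-memory Map standing in for the real cache/database:

```typescript
interface StoredRecord { id: string; data: string; }

class RecordStore {
  private db = new Map<string, StoredRecord>();
  private cache = new Map<string, StoredRecord>();

  // Plain lookup: returns null instead of silently creating a record.
  getRecord(id: string): StoredRecord | null {
    return this.db.get(id) ?? null;
  }

  // The cache-checking behaviour, split out of getRecord entirely.
  getFromCache(id: string): StoredRecord | null {
    return this.cache.get(id) ?? this.getRecord(id);
  }

  // Creation is now an explicit decision at the call site.
  createRecord(id: string, data: string): StoredRecord {
    const rec = { id, data };
    this.db.set(id, rec);
    return rec;
  }
}
```

Each method now has one code path, so each needs only one or two tests instead of one per flag combination.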
Methods with boolean parameters will cause some build tools to mark the code as failed, in my case PHP MD will class any new code like this as an error and fail the pull request.
I'd suggest doing a quick search for something like "methods with boolean arguments" and have a quick read of a few posts and also reading the book "Clean Code" (for both this and many other ways to improve the code you write).
I'd throw Pragmatic Programmer on the list as well. It distills some of the best advice in Clean Code and its ilk, and sums it up in digestible chunks. Plus it goes beyond writing code per se: professional development tips, advice on how to get started on a difficult project, etc. It's a good appetizer for some of the heavier, more code-heavy tomes.
You're right, and on second thought I agree with your argument; what I dislike is how the author is using the technique, not the technique itself. It's just that it's not as common from my perspective and I just grouped both of them together.
This last one is not as forced as the previous one I've read, but it's still a pretty annoying read IMO. I really like most of what he writes and talks about, and Clean Code is one of my favorite books, but I'm starting to think he's going off the deep end.
It's also Item 16 in Effective Java
The Sedgewick & Wayne book is good. Introduction to Algorithms is on my to-read list, as is Knuth's The Art of Computer Programming (which didn't make the list). As others have stated, reading 10 of these seems like overkill.
Well... it doesn't have to be more code. You can use monads to hide the error checking behind a clean and fluent abstraction. And as for readability: if you write your own APIs monadically, the naming can be very expressive, for that "Clean Code" feel.
Sorry, I don't have details with me to cite anything directly, but I can tell you some good places to look.
I think I first came across these particular arguments via Steve McConnell and/or Robert Glass, so if you have any of their books I'd suggest looking there to start with. Code Complete definitely includes substantial discussion of routine length and cites original sources.
For more recent papers, which are sometimes better for taking contemporary languages and programming styles into account, I usually find Google Scholar is the quickest way to find recent developments these days, though unfortunately a lot of potentially relevant papers from the past few years are locked up behind paywalls courtesy of the usual suspects. If you work in academia or have access to a library with subscriptions to the major journals, it's worth looking, though, as quite a few somewhat-related papers do get published.
I don't think I've seen a list of must-read books that consist of language-specific books (except the odd Effective C++ or Effective Java, as seen here) in a general programming subreddit. Maybe they get appropriately downvoted before they hit the front page.
Having said that, I do agree that for the most part language-agnostic books are better to recommend to a wide audience. It's just that there are already dozens of blog posts recommending these same books. If I Google for suggestions, I'm already going to find these. This post doesn't add anything but traffic to the OP's blog.
I guess this is a matter of taste, but Clean Code, arguably the standard on, well, clean code, recommends extracting methods. As often it depends, but I agree with you that it makes navigating harder, especially without proper tool support.
> Do you have any comment on what I could do if I re-learned Prolog compared to what I can do in eg Clojure + core.logic?
Clojure's core.logic is rather similar to Prolog, which is by no means a coincidence. There's also a LISP-based logic programming language in Structure and Interpretation of Computer Programs, which is also very similar to Prolog.
Some Prolog implementations have very sophisticated compilers and parallel/distributed processing facilities, so using a sophisticated (perhaps commercial/proprietary) Prolog implementation instead of the LISP-based logic programming environments, you could get higher performance.
Prolog is also homoiconic (like lisps are), so it's trivial to write a Prolog program that interprets Prolog code (even more so than lisp, a Prolog meta-interpreter is a lot shorter). This makes it interesting because you can easily modify the semantics of Prolog, e.g. to utilize fuzzy logic or do constraint programming but still use the quite nice syntax of Prolog.
I'm also enjoying "Test Driven Development: By Example" by Kent Beck (despite my aversion to Java), and I really liked "Clean Code" by Robert C. "Uncle Bob" Martin. Such great ideas for keeping things clean and working.
Yes. Stroustrup used it in "The C++ Programming Language". I've tried it - it's not as bad as you'd think at first. In fact I like it more than monospace; it can be a lot more readable, and certainly more beautiful. However, it is hard to find a font where the symbols have enough space around them, consistent heights, etc.
It would be nice if someone made a "proportional programmers font".
Oh, and it also screws up the one true indenting method - tabs for indentation and spaces for alignment - because the spaces don't match the width of the characters above. But since you lose alignment of similar lines anyway, I guess it's not worth trying to fix. And the loss of perfect alignment is not more significant than the improved readability, in my opinion.