If you have a pressing need to get a handle on DP this probably won't help. That being said, I have an MS in applied math and think the way DP is presented to anyone coming at it from a CS perspective creates a lot of the confusion.
DP is NOT a programming paradigm. It is an optimization technique originally developed to reduce the search space of high dimensional problems.
I would recommend reading the first few chapters of this book (I have no relationship to or financial stake in either the book or you clicking the link): http://athenasc.com/dpbook.html
It does a good job of explaining how to think about DP without being prescriptive which I think is the flaw with the CLRS presentation.
I'd also recommend this book (again, I have no interest in this): Dynamic Programming (Dover Books on Computer Science) https://www.amazon.com/dp/0486428095/ref=cm_sw_r_cp_apa_i_UT.7CbAEDXGAM
The book gets pretty dense pretty fast, but if you push through the first few chapters it will give you additional insight into the motivation behind why this technique was developed.
I second Introduction to Algorithms (CLRS). Algorithm Design is also a good book but doesn't cover as much.
It depends on what is valuable to you.
If your main goal is to pass a programming interview, there are better books for that (by better I mean easier to read, understand, and solve problems from); I can recommend "Introduction to Algorithms" by Cormen.
If you plan to get a PhD in computer science, you should probably take a look at least at the first volume.
There is of course The Art of Computer Programming. Beware, though: that way be dragons.
My algorithms class (I go to Carnegie Mellon) recommends the following two textbooks:
Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein (CLRS)
Algorithms, by Dasgupta, Papadimitriou, and Vazirani (DPV)
CLRS is generally considered to be the best for algorithms, from what I've been told, and very dense, whereas DPV is still good but less in depth. DPV is also available as a download from Berkeley, which is pretty nice. I haven't really looked at CLRS, but I found DPV pretty valuable for some topics like dynamic programming.
Another thing that you could try is searching for algorithms courses at different schools. Many (including CMU) offer lecture notes online.
I suggest you not waste too much time reading 15 different books on algorithms or spreading yourself across 15 different resources (YT videos, online courses, forums, tutorials, etc.). Stick to 1 or 2 good books (try Introduction to Algorithms; if you're completely new to algorithms and have no idea what they are or what their role is in computer science and science in general, I recommend a book by the same author that makes a good preparation for the previous one: it's called Algorithms Unlocked) and start applying that knowledge on the run (solving problems). Remember: don't waste time on hundreds of resources; they may be great and offer some really high-quality information about a topic, but you just don't have time to go through all of them. Good luck!
For videos, I want to add coursera.
I think this course with Sedgewick is the best: https://www.coursera.org/course/algs4partI
There is also this one with Roughgarden that might match your topics better: https://www.coursera.org/course/algo
Why not both?
This looks like Counting Sort, which only works on (small) integers.
It runs in O(N) time because it counts each element once, but it needs O(k) space, where k is the range of values. You could use a hash map (dictionary) keyed by value to shrink the space to the number of distinct values; with average O(1) access that keeps the O(N) time, whereas a tree-based map with O(log N) access would put you back at O(N log N).
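To make the time/space trade-off concrete, here's a minimal counting sort sketch, assuming small non-negative integers as the comment notes:

```python
def counting_sort(a):
    # one pass to count; space grows with max(a), not len(a)
    if not a:
        return []
    counts = [0] * (max(a) + 1)
    for x in a:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)  # emit each value as many times as it occurred
    return out
```

The `counts` array is exactly the O(k) space cost being discussed: sorting `[3, 1, 2, 1]` only allocates 4 counters, but sorting `[1, 10**9]` would allocate a billion.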
As it happens, a Coursera algorithms class starts today! https://www.coursera.org/course/algo2 I took the previous course in the series, and the instructor (Tim Roughgarden) is great.
Edit: so far this course has been pretty good too. The first week covers greedy algorithms, and in particular scheduling and Prim's algorithm.
I disagree; this book is pretty bad for self-teaching. The best I've found so far is Skiena's The Algorithm Design Manual, though I used CLRS in school without trouble.
Here's why: CLRS goes deep, while Skiena goes for intuition and defers details to CLRS, and I think intuition is more important to grasp before the nitty-gritty. I got my intuition from my professor.
The Algorithm Design Manual by Steven Skiena. It's a good book if you want to supplement the rigorousness of CLRS with something that shows you real-world applications of algorithms. Bear in mind, though, that he discourages genetic algorithms and prefers simulated annealing. Actually, come to think of it, I should make a new topic about this.
> I'd suggest a introductory class or textbook on compilers.
I can highly recommend Coursera's compilers course, which is starting up again on March 17. It's tough, but very rewarding.
Are you looking for a rectangle/shape packing algorithm, like that? (link on the wikipedia article on packing problems)
This book covers both operating systems concepts and how they are implemented (in C) with a specific example of "Minix", which is a teaching operating system that inspired the creation of Linux. It's a great reference: https://www.amazon.com/Operating-Systems-Design-Implementation-3rd/dp/0131429388
I can't pick between those two, though the first one is rather in-depth but can be too verbose to start with. Your mileage may vary.
One that I like very much is The Algorithm Design Manual by Skiena. Easy to read and get started.
I used a combination of studying text books (Sedgewick & Wayne's Algorithms, Skiena's The Algorithm Design Manual, and CLRS) and practicing implementing algorithms using puzzle and algorithm competition sites like Project Euler, Top Coder, SPOJ, and UVa Online Judge. After a little bit of practice you start to recognize when you've solved similar problems, so you quickly know a starting point.
I think a good book that covers the basic mathematics needed to understand and use algorithms is Concrete Mathematics. Some of the problem sets are rather challenging, but you don't need any special math knowledge to get started. It covers both continuous math topics and dis*crete* ones. It's co-written by the algorithms god himself, Donald Knuth.
If you struggle with some algebra concepts, I would also recommend Khan Academy. I’ve used it pretty extensively to brush up on topics I haven’t seen in a while.
https://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025
Take a look at The Algorithm Design Manual by Skiena ( http://www.amazon.com/Algorithm-Design-Manual-Steve-Skiena/dp/0387948600). While it doesn't contain /all/ possible algorithms (since that would be impossible), it does have a great catalog of problems and their associated algorithms. Besides, Skiena is a great writer and this is a great, accessible book in general. The war stories alone makes it a worthwhile read in my opinion.
Edit: http://www.amazon.com/gp/aw/d/1848000693/ref=dp_ob_neva_mobile is the newest edition
I've always been intrigued by how people come up with great algorithms, so I was really hoping someone would answer this Q.
Anyway, I did some googling to find where you got it from and found this:
https://stackoverflow.com/questions/398299/looping-in-a-spiral/398302#398302
There are two different SO threads; this one is the original, so I'm not sure if you've seen it, but scroll down: there's a guy who breaks it down into pseudocode and describes it in great detail.
From a quick glance, it looks like -1 or +1 dictates the direction of the spiral.
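A minimal sketch of that general spiral-walk pattern, where the leg length grows by one every second turn and the direction components flip sign as they rotate (this is the common idea, not the linked answer's exact code):

```python
def spiral(n):
    # yields n (x, y) positions of an outward square spiral, starting at (0, 0)
    x = y = 0
    dx, dy = 1, 0
    leg = 1        # current leg length
    step = 0       # steps taken on this leg
    legs_done = 0  # legs completed at the current length
    count = 0
    while count < n:
        yield x, y
        count += 1
        x, y = x + dx, y + dy
        step += 1
        if step == leg:
            step = 0
            dx, dy = -dy, dx  # rotate 90 degrees; this is where the sign flips
            legs_done += 1
            if legs_done == 2:  # leg length increases every two turns
                legs_done = 0
                leg += 1
```

The first five positions come out as (0,0), (1,0), (1,1), (0,1), (-1,1).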
There are a few steps: figure out the relative alignment of the images; upscale and move them to the appropriate location in the space of the final image; merge the overlapping pieces into a single final image.
Step one is complicated, but can be skipped if you know exactly how you moved the camera. Step two is fairly simple. Step three is also complicated.
There are a lot of good books out there. I would strongly recommend Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein (CLRS).
However, I would advise you to also try to solve some problems by yourself, instead of just reading. For example, you can pick up a simple language (e.g., Python) and try to solve a simple problem without looking at the solution first. This, in my experience, makes all the difference in the world.
I would probably start with some graph problems first. You can also check out http://projecteuler.net/.
If you're not concerned with weighting the different categories, one very simple way to do this is by calculating the Euclidean Distance for each name, and returning the 5 "closest" ones.
If you want to get crazy, you could do something like Google's Estimated True Value.
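A minimal sketch of the Euclidean-distance idea, assuming each name maps to a vector of numeric category scores (the `profiles` data here is made up for illustration; `math.dist` needs Python 3.8+):

```python
import math

# hypothetical profiles: each name maps to scores in a few categories
profiles = {
    "alice": (3, 7, 2), "bob": (4, 6, 1), "carol": (9, 1, 5),
    "dave": (3, 8, 2), "erin": (8, 2, 6), "frank": (1, 1, 1),
}

def closest(target, k=5):
    # rank every other name by straight-line distance to the target's vector
    tx = profiles[target]
    others = (n for n in profiles if n != target)
    return sorted(others, key=lambda n: math.dist(profiles[n], tx))[:k]
```

Weighting the categories would just mean scaling each coordinate before measuring distance.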
I would recommend doing the Stanford and Princeton classes on Coursera (both come in 2 parts). They both have great lecturers and also include quizzes and homework assignments, which IMO are important for understanding the material. MIT's classes are probably good too, but you'll miss out on the practice materials.
You may want to look at recommender systems. Your problem can be solved in multiple ways. NLP (natural language processing) is not suitable for this job.
Much better tools would include regression, cosine similarity, or collaborative filtering, or you can train a neural network. At the end you'd like to cluster your data and serve your recommendations.
There are tools to help you take some shortcuts. Try Amazon Machine Learning: feed it your model, features, etc., and let it cluster the data for you.
It's a broad subject with many tools available. Good luck!
A very simplified view is that you encode the strategy elements needed to solve a problem as a set of "genes". Then you package a set of genes into a program, and give that program the problem to solve. Do that with a bunch of different permutations and combinations, and give them all the same problem; some of them will achieve more effective solutions than others. Call this run of attempts one generation. For the next generation, you 'breed' programs by combining mixtures of their genes into the next generation of programs. The Darwinian bit is that the programs that were more successful in the current generation contribute more of their genes to the next one.
Here's a demonstration where the problem is to design a car that can traverse a hilly track, and where the genes are features like wheel size and body shape that combine into a program which is a specific car design.
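The generations-and-breeding loop described above can be sketched in a toy example that evolves random strings toward a target word; the target, gene pool, selection fraction, and 5% mutation rate are arbitrary illustrative choices, not from the linked demo:

```python
import random

TARGET = "algorithms"
GENES = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # one point per character that matches the target "problem"
    return sum(a == b for a, b in zip(s, TARGET))

def breed(p1, p2):
    # each gene comes from one parent, with a small chance of mutation
    return "".join(
        random.choice(GENES) if random.random() < 0.05
        else random.choice((a, b))
        for a, b in zip(p1, p2))

def evolve(pop_size=100, generations=500):
    random.seed(0)  # deterministic for the example
    pop = ["".join(random.choice(GENES) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # the Darwinian bit: only the fittest fifth breeds the next generation
        parents = pop[:pop_size // 5]
        pop = [breed(random.choice(parents), random.choice(parents))
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```

A random 10-letter string scores around 0; after a few hundred generations the population is at or near a perfect score.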
Truth be told, the only reason I'm doing Scheme right now is because I read Paul Graham's article on the Blub paradox and decided I wanted to learn a Lisp dialect to see if he was right - a friend of mine recommended The Structure and Interpretation of Computer Programs to me as reading (and I've been following along with the MIT Open Courseware videos too), although I don't think I'll ever actually use this in real life as a programmer. However, it has definitely challenged my thinking a lot coming from a Python / C++ / C# background, and I (so far) recommend it to everyone!
I will also suggest Introduction to Algorithms. It goes over a lot of the fundamental data structures and algorithms.
If you want pathfinding, look up the basic search algorithms: Breadth-First Search, Depth-First Search, Uniform Cost Search, and A* (A-star) Search. If you're really brave, you can look them up in Artificial Intelligence: A Modern Approach. Most of it will be very difficult to understand, especially in high school, but chapters 3.3-3.5 of that book might be what you are looking for. If you can understand it, you'll find it very interesting.
I recommend "Artificial Intelligence: A Modern Approach (latest edition)" if you want to learn A.I. You can probably find a couple of second edition versions at your local thrift store (I always find them at Goodwill).
I've been studying The Algorithm Design Manual by S. Skiena.
It covers most fundamental algorithms and data structures, along with a great set of exercises. Half of the book is 'The Hitchhiker's Guide to Algorithms' and will help you identify your problem formally and find formal ways to solve it.
The book covers most of the same topics as Introduction to Algorithms (Cormen et al.) but in a less(ish) formal style; Skiena's humour is a little strange at times but good fun :)
I'm not going to provide links, but here are some good self-study books (in order of preferred to least preferred based on my own experience):
I don't care much for Introduction to Algorithms by CLRS, but others swear by it. I think the above books introduce the material better. A free online book is Algorithms by Dasgupta, but it assumes some maturity in the field.
The canonical, most widely used book on algorithms, both in teaching and by practitioners, is without a doubt Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein. It's a brilliant book that most likely contains everything you'll ever need. There is some math knowledge required, but I think you'll get by fine with some basic calculus and discrete math.
Are you asking for help on predicting the market, or on how to fit a curve with a genetic algorithm?
If the first, don't bother. If the second, it's a fun choice for exploring this kind of machine learning.
The polynomial-fitting approach suggested by /u/sparks88 is a good one: it's easy to understand and implement, and would probably do a pretty decent job of matching the curve. If I recall correctly, the Stanford Machine Learning course on Coursera explains early on one way to go about that kind of curve-fitting.
One hard part might be the fitness function. Quantifying the fit so you can grade the candidates could be fairly involved. Someone who can math can probably name a couple of algorithms for it, but I'd probably do something dumb like draw small graphs of the functions, area-fill above the curve, then XOR the graphs and count the pixels that are left; the smallest value is the best.
The problem is that while you can fit a curve, that isn't going to give you a way to predict the future price. All you would have done is create a way to draw a smooth squiggly line that looks much like a rough squiggly line. To make it predictive, you would need to hide the last few days of the stock data from the training set while it is fitting the curve, and then in the fitness function unhide the data and see how well the evolved polynomial fits the new data. Poorly, most likely, unless you do something more clever than a basic breeding strategy.
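The hide-the-last-few-days idea above can be sketched in a few lines; the toy price series and the sum-of-squared-errors scoring are illustrative assumptions, and far simpler than the pixel-XOR scheme:

```python
def fitness(coeffs, points):
    # negated sum of squared errors of the polynomial on the given points,
    # so a higher score means a better candidate
    def poly(x):
        # coeffs are low-order first: coeffs[i] multiplies x**i
        return sum(c * x ** i for i, c in enumerate(coeffs))
    return -sum((poly(x) - y) ** 2 for x, y in points)

# toy "stock" series: (day, price); a straight line, purely illustrative
prices = [(d, 100 + 0.5 * d) for d in range(30)]
train, holdout = prices[:-5], prices[-5:]
# evolve candidate coefficient lists against `train`,
# then grade the survivors on the hidden `holdout` days
```

A candidate that truly captured the trend scores 0 on the holdout; anything that merely memorized the training squiggle scores worse.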
Anyway, even if it doesn't predict stock prices well, it'll be interesting to write.
Isn't this just a flood-fill algorithm? Here's a deeplink.
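For anyone unfamiliar, a minimal 4-directional flood fill might look like this (iterative, to avoid recursion-depth issues on big regions):

```python
def flood_fill(grid, r, c, new):
    # repaint the connected region of grid[r][c]'s color with `new`, in place
    old = grid[r][c]
    if old == new:
        return grid
    stack = [(r, c)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == old:
            grid[r][c] = new
            # spread to the four orthogonal neighbors
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```

Swapping the stack for a queue turns it into BFS order; the filled region is the same either way.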
It seems like the problem isn't exactly defined, nor the implementation, supposedly in an attempt to stay general. The abstract doesn't state a clear result, nor did I detect one in the paper itself.
This is similar to a reply I got on StackOverflow. My follow-up questions are:
Z_m = {0, 1, ..., m-1}
As a function of m and n, how many duplicates should you expect? Is there a relationship between the number of expected duplicates and the average-case time complexity of the modified binary search algorithm? If so, when, if ever, is it expected to outperform the linear approach in the average case?

You could still use floats but just zero out the last ~~16~~ 3-4 bits. If you cut off part of the mantissa, equals should work and so should the hash function. Float has a function to get the bits as an integer and convert them back to a float. ~~I'm on my phone, otherwise I could post some example code.~~ edit: It just adds a fuzziness to the float; depending on your input data it could yield some errors. So it is probably not the cleanest solution out there, but within certain domains and applications it should do the work.
private static float normalize(float f) {
    int i = Float.floatToIntBits(f);
    i &= 0xFFFFFFF8;  // zero the low 3 bits of the mantissa
    return Float.intBitsToFloat(i);
}
which is also mentioned on StackOverflow
You run it on your system with the sample test cases (and preferably a few more of your own) and see if it gives a runtime error.
Check this page for possible runtime errors that might occur and what part of your code they can arise from - https://www.hackerearth.com/practice/basic-programming/input-output/basics-of-input-output/tutorial/
Can you link to some of these solutions? I can't see any reason for binary search.
Edit: here's one: https://leetcode.com/articles/longest-common-prefix/#
I can't explain it; it's a very foolish way to get LCP. Maybe if you're only allowed to compare substrings instead of individual characters, but even that's a reach.
Sounds like a job for RECURSION! edit: To see it in action, [click here](http://coffeescript.org/#try:%0Aoptions%20%3D%20%5B'b'%2C'c'%2C'a'%2C'e'%2C'd'%2C'f'%2C'g'%2C'h'%2C'a'%2C'e'%5D%0Alength%20%3D%20options.length%0A%0Acurrent%20%3D%20%5B%5D%0Abest%20%3D%20%5B%5D%0AbestScore%20%3D%20-Infinity%0AcurrentIndex%20%3D%20...) and then click "run" in the upper right.
Sorry if coffeescript isn't really your thing. If you want, I could write it in a different language or pseudocode for you.
edit: So I just realized that since my example evaluation function gives a higher score depending on how correlated the character code is to the index, I have created perhaps the most inefficient sorting algorithm I've ever seen. O(n!).
edit 2: If you really wanted to work with as many elements as possible, implementing a minimax on the recursion tree would probably give you a few extra elements.
Several people have recommended this course Algorithms - Princeton. Others recommended this book on Amazon. Hope this helps a bit. :)
Intro: Grokking Algorithms by Bhargava... it is such a nice read and so well presented. It's basically the closest thing there is to a Head First DS&A; sadly it doesn't cover enough by itself.
Then: Coursera Algorithms 1 & 2 by Sedgewick...very nice for the most part although I ignored some sections
Then: The Algorithm Design Manual by Skiena... I'm almost halfway through and it's an OK read except the last couple chapters; apparently the second half is the reason to get it though, so we'll see. It'd be better if he didn't use some ugly af language (C++? Python? idk) and stuck to the standard, Java
No interest in Dasgupta or CLRS. I don't want to see the word lemma in my text book or hundreds of sum equations.
I did an ISO C++11 port of k-d trees that is a template class...
template<typename T> struct kdnode {...}; template<typename T> class kdtree {...};
And verified it against MATLAB's KDTreeSearcher using their example (which is the Fisher Iris data set).
I went straight to the source using Jon Bentley's paper first publishing k-d trees.
If you want to learn C++, buy the paper from ACM and get Stroustrup's latest "The C++ Programming Language" 4ed (covers ISO C++11). You'll learn a lot and get the IKEA effect from building your own algo library!!!
EDIT: I also did an ISO C++11 port of adjacency-list graphs G(V,E) that implements all the CLRS 3ed basic graph algorithms (DFS, BFS, topological sort, Dijkstra's algorithm), plus a template (binary) object-heap class for the vertex minimum priority queue, etc...
You could do something similar and then augment the k-d node/k-d tree data structures to hold the vertices in the nodes. That way, a k-d tree kNN operation can tell you which vertices to remove from your current graph.
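Not the C++ port itself, but a compact Python sketch of the k-d tree build and kNN query being described; the point format and function names are assumptions for illustration:

```python
import heapq

def build(points, depth=0):
    # node = (point, left_subtree, right_subtree); split axis cycles with depth
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def knn(tree, query, n):
    # heap holds (-squared_distance, point): a max-heap of the n best so far
    heap = []

    def visit(node, depth):
        if node is None:
            return
        point, left, right = node
        d = sum((a - b) ** 2 for a, b in zip(point, query))
        if len(heap) < n:
            heapq.heappush(heap, (-d, point))
        elif d < -heap[0][0]:
            heapq.heapreplace(heap, (-d, point))
        axis = depth % len(query)
        diff = query[axis] - point[axis]
        near, far = (left, right) if diff <= 0 else (right, left)
        visit(near, depth + 1)
        # only descend the far side if the splitting plane could hide a closer point
        if len(heap) < n or diff * diff < -heap[0][0]:
            visit(far, depth + 1)

    visit(tree, 0)
    return [p for _, p in sorted((-nd, p) for nd, p in heap)]
```

Storing graph vertices on the nodes, as suggested above, would just mean carrying extra payload in each node tuple.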
HTH
The down votes are likely because we don't get many people with a casual interest around here. Most people here have a background in Math or CS and the few questions we see are along those lines. Don't let them get you down.
I would recommend The Algorithm Design Manual. It's CS oriented but should still make for an interesting read. Half the book is just a list of common problem types and possible approaches. I've always found it fun to browse.
I would also recommend getting a deck of cards. YMMV, but I would say that it's unsatisfying to simply learn OF an algorithm and far more interesting to be able to actually use it to some degree. Luckily that doesn't mean you need to know how to program. The best way to learn a sorting algorithm is to lay out a shuffled set of 5 to 10 cards and sort them by hand. Watching some animations, like this one, is interesting too.
If you want "mathy" algorithms, I would suggest something from scientific computing. For example an algorithm that solves systems of equations or evaluates integrals. This page could be a starting point. If you are into probability and statistics, exploring Monte Carlo methods could be interesting. The theory behind these methods can be quite tricky, though. But if you find good sources it might be manageable, and it should be interesting enough to just write about the stuff you understand.
If you want a more "traditional" CS algorithm, you can browse through Introduction to Algorithms (aka CLRS). Plenty of inspiration there.
You could also explore the fundamentals through automata theory. You could for example explore deterministic and nondeterministic finite automata. That would not be too difficult and you would learn a bit of the weird math computer folks deal with. Maybe write about the pumping lemma for regular languages. If you pick up an introductory-level book on automata theory, this stuff should be covered within the first few chapters -- if you keep reading you will learn about Turing machines and the answer to Hilbert's Entscheidungsproblem: "Is there an algorithm that can solve all mathematical problems?" [1] (The answer is "Nope...")
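A deterministic finite automaton is small enough to simulate in a few lines, which can make for a nice hands-on section; here's a sketch with an illustrative DFA that accepts strings over {a, b} containing an even number of a's:

```python
def run_dfa(transitions, start, accepting, s):
    # transitions maps (state, symbol) -> next state
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# DFA accepting strings over {a, b} with an even number of a's:
# reading an 'a' toggles the state, reading a 'b' leaves it alone
T = {("even", "a"): "odd", ("odd", "a"): "even",
     ("even", "b"): "even", ("odd", "b"): "odd"}
```

The empty string is accepted too, since zero is even and the start state is accepting.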
I think different people have different ways to retain knowledge. Whatever motivates you.
I like to view learning as a game with multiple levels. I'd start with something easier, more practical, this will keep you motivated with faster feedback and will raise the right questions for the next level.
Start with a book of problems that isn't very formal yet; I'm thinking of the likes of Cracking the Coding Interview.
Then move to some of the suggestions around The Algorithm Design Manual and Introduction to Algorithms.
At this point you may want to specialise your knowledge in a particular area of algorithms or math.
The good thing about this approach is that you see results earlier and can also start applying your knowledge quickly (even if very limited).
Along the way, as a second vertical of learning that you should pursue, you should take some online courses on algorithms or more specialised applications (biology has a lot going on now).
Have fun
Edit: try HackerRank too!
I second The Algorithm Design Manual, very good book.
I think that being proficient with algorithms consists of three skills:
Understanding specific algorithms and how they work (theory)
Being able to utilize/tweak a specific algorithm to solve a problem
Being able to decompose problems and choose relevant algorithms
And I think that The Algorithm Design Manual is excellent for this; Cormen is more comprehensive but I find it a bit too dry for my taste.
I loved "Algorithms in C++" by Sedgewick.
Standard recommendations will also be: "Introduction to Algorithms" by Cormen et al, and "Algorithms" by Dasgupta et al.
If you want to get introduced to the world of programming contests, I hear great things about Programming Challenges: The Programming Contest Training Manual.
The relevant part of the paper: "If the permutation is given by an array and its entries can be used to record information as the algorithm proceeds (perhaps destroying the permutation in the process), data rearrangement can be done efficiently, using O(n) time and O(log n) additional bits of storage [13, ex 5.2-10, pp. 80, 595]."
The referenced work is The Art of Computer Programming vol 3, asking basically the same thing you do:
>In order to avoid using N extra "tag" bits, yet keep the running time essentially proportional to N, we may use the following algorithm based on the cycle structure of the permutation:
>P1. [Loop on i.] Do step P2 for 1 <= i <= N; then terminate the algorithm.
>P2. [Is p(i) = i?] Do steps P3 through P5, if p(i) != i.
>P3. [Begin cycle] Set t <-- Ri, j<-- i.
>P4. [Fix Rj] Set k <-- p(j), Rj <-- Rk, p(j) <-- j, j <-- k. If p(j) != i, repeat this step.
>P5. [End cycle.] Set Rj <-- t, p(j) <-- j.
>This algorithm changes p(i), since the sorting application lets us assume that p(i) is stored in memory.
Basically, instead of using NULL as a sentinel value, it just changes the permutation for that particular array slot to the identity permutation, but otherwise the running time of this algorithm and space usage should be the same as the one I gave above...
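Knuth's cycle-following steps quoted above translate fairly directly to code; here's a 0-indexed sketch (the TAOCP version is 1-indexed):

```python
def apply_permutation(R, p):
    # rearranges R in place so that the new R[i] equals the old R[p[i]],
    # following the cycle structure of p as in the quoted algorithm;
    # p is destroyed in the process: it ends up as the identity
    for i in range(len(R)):
        if p[i] != i:
            t, j = R[i], i        # P3: begin cycle
            while True:
                k = p[j]          # P4: fix R[j], mark slot j as done
                R[j] = R[k]
                p[j] = j
                j = k
                if p[j] == i:
                    break
            R[j] = t              # P5: end cycle
            p[j] = j
    return R
```

Setting `p[j] = j` as each slot is fixed is exactly the trick discussed above: the identity value doubles as the "already done" sentinel, so no extra tag bits are needed.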
The more interesting question is what the running time might be like for large arrays (such that they can't fit in caches), so even though the theoretical running time might be linear on a flat-addressing model, the constant value would be HUGE.
If you want to learn by doing, you could take a look at the pacman projects from Berkeley. They've recently added an autograder that tells you what tests you failed or passed. The project is in python 2.5+, so it shouldn't be that hard to get started. This project is, however, only for DFS, BFS, uniform cost search and A* search. It doesn't have any exercises for Dijkstra, etc.
The book that closely follows this material is the one by Russell and Norvig: Artificial Intelligence: A Modern Approach. They might mention shortest-path algorithms in there, but I don't have it with me right now.
CLRS is indeed not the best introductory book, because it goes extremely deep and works more like a reference manual. I would add The Algorithm Design Manual by Steven Skiena to your list, by the way; in my opinion it's an excellent book for beginners on this topic.
It would also be interesting to know whether OP has something specific in mind he wants to do/learn (i.e., what class of algorithms).
Have you checked out the coursera.org courses for algorithms? https://www.coursera.org/courses?query=algorithms&languages=en Check out the "required background" section to see what to learn. This is what I'm currently doing.
While you are waiting, you can read 'Combinatorial Optimization'. https://www.amazon.com/Combinatorial-Optimization-3-B-C/dp/3540443894
In its breadth and ambition, it's pretty much the equal of TAOCP. It's all about algorithms with polynomial runtime.
You got me here, I struggle with the former, I keep repeating the lecture till I fully understand what's going on, then I write my notes along with the code on paper and check what might blow up. This helped me recall the pseudocode.
I am frustrated with the latter. Take this problem, for instance: I don't know how to plug the BFS algorithm in, or how to add checkpoints for certain conditions. I think what I need is a solved example or two on how to tweak an algorithm with some additions to solve a problem.
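For what it's worth, one way to think about "tweaking" BFS is that the skeleton stays fixed and the problem-specific conditions get swapped in at two marked spots; a sketch on a toy grid (the grid-of-strings format and wall symbol are my assumptions):

```python
from collections import deque

def bfs(grid, start, goal):
    # standard BFS skeleton; the problem-specific "checkpoints" are marked
    q = deque([(start, 0)])
    seen = {start}
    while q:
        (r, c), dist = q.popleft()
        if (r, c) == goal:                    # checkpoint 1: success condition
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#"   # checkpoint 2: move legality
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), dist + 1))
    return -1  # goal unreachable
```

Most variations change only those two spots, or enlarge the state: e.g. tracking keys collected means queueing `((r, c), keys, dist)` and putting `((r, c), keys)` in `seen`.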
Sorry for my bad English :D
Suggest you start by checking out
https://en.m.wikipedia.org/wiki/Diff
From a quick search, git appears to use one of four methods: as described in
https://git-scm.com/docs/git-diff
--diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows:
default, myers The basic greedy diff algorithm. Currently, this is the default.
minimal Spend extra time to make sure the smallest possible diff is produced.
patience Use "patience diff" algorithm when generating patches.
histogram This algorithm extends the patience algorithm to "support low-occurrence common elements".
If the grid is 7x7 and the navigation string is always 48, we can use meet in the middle.
Let V(p) be the set cells visited by a path p.
Let S be the set of all paths from (0,0) to (6,0) that follow the navigation constraints.
|V(x)| is 49 for every x in S (a path of 48 moves visits 49 cells).
For every cell (x,y) of the grid, let's store a set of paths Axy that start in (0,0), end in (x,y) and are of length 24.
For every a in Axy, |V(a)| = 25 and (x,y) is in V(a).
For every cell (x,y) of the grid, let's store a set of paths Bxy that start in (x,y), end in (6,0) and are of length 24.
For every b in Bxy, |V(b)| = 25 and (x,y) is in V(b).
Now we build the solution set S merging paths in Axy and Bxy for every (x,y).
For every (x,y), a path a in Axy can be merged with a path b in Bxy if the size of the union of V(a) and V(b) is 49.
This means that the paths a and b intersect in only one point. Because both paths pass through (x,y), that point must be (x,y).
If we store sets as bitmasks, this means that b must be the bitwise negation of a modulo 2^49 (after removing the (x,y) bit).
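That bitwise-negation condition fits in a couple of lines (49-bit masks, one bit per cell; `xy_bit` is the bit for the shared cell (x,y)):

```python
FULL = (1 << 49) - 1  # one bit per cell of the 7x7 grid

def partner_mask(a, xy_bit):
    # the only visited-set a second half-path can have to merge with a:
    # the complement of a within the 49 bits, with the shared cell added back
    return (~a & FULL) | xy_bit
```

So the second meet-in-the-middle run needs only one hash-map lookup of `partner_mask(a, xy_bit)` per completed 24-step prefix.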
We can use maps or hashmaps to store the bitmasks found in the first run of the meet in the middle.
In the second run we just do a table lookup any time we complete 24 steps.
Code: https://hastebin.com/xoparutezo.cpp
This idea can be improved with some pruning; my solution uses ~2 GB, but at least it's feasible.
I haven't taken that course on coursera but if you have any interest in bioinformatics, the bioinformatics algorithms course on coursera is excellent https://www.coursera.org/course/bioinformatics
It is made by the same people that created Rosalind (a "Project Euler" for bioinformatics), and it isn't even really about biology; it's about the really interesting and complex ways we can analyze the strings of information in our DNA algorithmically, and it includes tons of classic computer science algorithms (graph algorithms, dynamic programming, string matching, etc.). It also builds off of simple examples until you get better and better and make more complicated applications.
I've been learning a lot from Tim Roughgarden's coursera course on algorithms, and he actually explains this topic very clearly, I think. I can recommend the entire course, if you get a chance, but if you watch the first two weeks' worth of lectures (up through the master method stuff), it might help you get a feel for analysis of algorithms.
As for whether you should step away from this, I'd like to think that if you can figure this out, then you are building mathematical skills.
Have you taken this course and if so, how was it? It looks like it's going to be heavily focused on analytic combinatorics! and it's taught by Sedgewick! But the intro lectures are read from powerpoints which is a big turnoff for me, so I can't decide if I should take the time or not.
I would simply go through all the numbers and keep a set for each place/digit. In each set, simply remember all the digits which occurred in that place.
At the end you check whether there is only one digit in the set. If so, that place is fixed; otherwise you have the different digits that occurred in that place.
In Python (generating a simple regex): https://repl.it/repls/CourageousWhitesmokeSpof
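A sketch of that per-position-set idea, generating a simple character-class regex along the lines the linked repl presumably does (the exact output format here is my assumption; it expects equal-length number strings):

```python
def position_regex(numbers):
    # one set of observed digits per position, via transposing the strings
    sets = [set(col) for col in zip(*numbers)]
    # fixed positions become a literal; varying ones become a character class
    return "".join(
        s.pop() if len(s) == 1 else "[%s]" % "".join(sorted(s))
        for s in sets)
```

For `["123", "124", "153"]` the position sets are {1}, {2,5}, {3,4}, giving the pattern `1[25][34]`.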
Looks like a good start! The code is clean and easy to read.
If you're going to use it in practice you'll probably want to use different data structures. In particular:
GetNodeEdges
is looping through all edges in the graph and then returning only the ones you are looking for. For most graphs that's a lot of extra work. For example, in the game Dragon Age Origins, the Korcari Wilds map has probably 400,000 edges, but you're only looking at 4 at a time. The other 99.999% of them are wasted. Instead, you'll want a different data structure that lets you look up the needed edges and not loop through the rest. I think the easiest thing is a "map" data structure that stores the edges for each node.

getClosestNonVisitedNode
is looping through potentially all nodes in the graph for costTable (which is expensive), and then for each one it's looping through potentially all the nodes again for visited. That Dragon Age Origins map has 137k nodes. The if node == visitedNode
is potentially running 137k x 137k = 18 billion times! Then you need to sort them all and return just one of them. That's 18 billion comparisons to get just one node. Instead, you can use a "map" or "set" data structure for the visited set. This lets you quickly find if something is visited without going through all the nodes. You can use a "priority queue" data structure for the cost table. A priority queue can quickly find the best node without looking through all of them (it has to look at 17 nodes instead of 137k to find the best one). It's called heap in Go.

u/meneldal2, this paper on the game War seems to answer your question about computability for that game. RedBlack is quite similar in some regards, so the same could easily be true for this game!
https://www.semanticscholar.org/paper/Multinational-War-is-Hard-Weed/ec5960b31a567a5e866f1bfcd5b7157cb6a9b189
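Going back to the data-structure suggestions in the code review above (an adjacency map plus a priority queue), a sketch of the resulting Dijkstra loop in Python, with a made-up toy graph:

```python
import heapq

def dijkstra(adjacency, start):
    """adjacency: dict mapping node -> list of (neighbor, cost) pairs,
    so looking up a node's edges is a direct lookup, not a scan of all
    edges in the graph."""
    dist = {start: 0}
    visited = set()                       # O(1) membership test
    heap = [(0, start)]                   # priority queue of (cost, node)
    while heap:
        cost, node = heapq.heappop(heap)  # best node without scanning all
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in adjacency.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```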
What you are describing is the quite well-known "Levin's universal search algorithm". That's the same Levin from the Cook-Levin theorem. See for example: http://www.scholarpedia.org/article/Universal_search
>"...Radix sort is the answer. The idea of Radix Sort is to do digit by digit sort starting from least significant digit to most significant digit. Radix sort uses counting sort as a subroutine to sort."
Also this explains what I'm trying to do:
>"....What is the running time of Radix Sort? Let there be d digits in input integers. Radix Sort takes O(d*(n+b)) time where b is the base for representing numbers, for example, for the decimal system, b is 10. What is the value of d? If k is the maximum possible value, then d would be O(logb(k)). So overall time complexity is O((n+b) * logb(k)). Which looks more than the time complexity of comparison-based sorting algorithms for a large k. Let us first limit k. Let k <= nc where c is a constant. In that case, the complexity becomes O(nLogb(n)). But it still doesn’t beat comparison-based sorting algorithms. What if we make the value of b larger?. What should be the value of b to make the time complexity linear? If we set b as n, we get the time complexity as O(n). In other words, we can sort an array of integers with a range from 1 to nc if the numbers are represented in base n (or every digit takes log2(n) bits)."
Source is same as above.
Edit:
>"Radix sort uses counting sort as a subroutine to sort an array of numbers." Source2
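The quoted description (LSD radix sort, using counting sort as the per-digit subroutine) as a runnable sketch:

```python
def counting_sort_by_digit(arr, exp, base=10):
    """Stable counting sort of arr, keyed on the digit at position exp."""
    count = [0] * base
    for x in arr:
        count[(x // exp) % base] += 1
    for i in range(1, base):             # prefix sums: each bucket's end
        count[i] += count[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):              # reversed keeps the sort stable
        d = (x // exp) % base
        count[d] -= 1
        out[count[d]] = x
    return out

def radix_sort(arr, base=10):
    """Sort non-negative ints digit by digit, least significant first."""
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:
        arr = counting_sort_by_digit(arr, exp, base)
        exp *= base
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Setting `base=len(arr)` is what the quote means by "represented in base n", which is where the O(n) bound for k <= n^c comes from.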
The reverse-engineer in me says there are less complicated ways (assuming the system runs on Windows).
It makes sense being a graph problem, that's how this problem was presented to me in a recruiting portal. E.g. https://i.imgur.com/Ar3YCRK.png
Problem description is almost identical to OPs.
I found this leetcode problem that's somewhat related: https://leetcode.com/problems/pancake-sorting/
But I haven't dug into the problem enough to figure out the optimal path.
For my problem the list of items is bounded to 8: size N is between 1 and 8.
https://leetcode.com/problems/word-break/
This leetcode question is more of a decision problem (i.e. determining whether you can break a string into valid words), but you should be able to adapt the DP algorithm to solve your problem.
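A sketch of the decision-problem DP from that question; adapting it to a reconstruction problem would mean storing the split points instead of just a boolean:

```python
def word_break(s, words):
    """dp[i] is True when the prefix s[:i] can be segmented into
    dictionary words."""
    word_set = set(words)
    dp = [False] * (len(s) + 1)
    dp[0] = True                         # empty prefix is always breakable
    for i in range(1, len(s) + 1):
        for j in range(i):
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break
    return dp[len(s)]

print(word_break("leetcode", ["leet", "code"]))                       # True
print(word_break("catsandog", ["cats", "dog", "sand", "and", "cat"])) # False
```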
Don't sweat it dude! And nah, your algorithm is technically equivalent asymptotically.
This reminded me of a similar problem I solved on LeetCode once. https://leetcode.com/problems/number-of-ships-in-a-rectangle/
In this case we can call an API that tells us if a quadrant contains a ship ("enemy") or doesn't. This means that we don't need to recurse down quadrants that don't contain a ship. Suppose there is only one ship, then the recurrence becomes:
T(n) = T(n/4) + O(1), with a=1, b=4, d=0. Since a = b^d, the master theorem gives T(n) = Theta(log(n)). This is much more efficient than checking all squares, O(n), so there are situations where your line of thinking is optimal, just not the example that you gave.
Yes, I did it, but actually you need to use a sort of insertion sort, so it's not a good approach, O(n log n)...
It's a trick problem; I think the best solution looks like https://leetcode.com/problems/lru-cache/, which is O(1).
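For reference, the O(1) structure that LRU problem is after, sketched with Python's OrderedDict (which bundles the hash map and doubly linked list you would otherwise build by hand):

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache with O(1) get/put."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put(1, 1); cache.put(2, 2)
print(cache.get(1))  # 1
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1
```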
I agree with this point; you may not need to resort to anything overly sophisticated. OP might even benefit from using something like Elasticsearch to store the long/lat values (from Distance Matrix API) and use Elasticsearch's geo-distance query to retrieve all close-by points every time a volunteer drops off.
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-distance-query.html
This sounds like an interesting problem for a great cause; I'd be happy to help out in any way.
Please take a closer look at the code. As for your previous question, I am doing mod n on lines 14 and 30. Please refer to my mod function. Hint: cur=2, dest=1, dest-cur=-1, mod(dest-cur, n) = 7.
Here is the code in an online environment you can run: https://repl.it/repls/CrimsonDeeppinkGames
It's some random online IDE I found and it is very slow, so when you click run, give it some time; it will eventually print out the answer.
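I can't see into the repl, but a mod function matching that hint (wrapping negatives onto a ring of size n) would look like:

```python
def mod(a, n):
    """Mathematical modulo: always returns a value in [0, n),
    even when a is negative (unlike C-style remainder)."""
    return ((a % n) + n) % n

# Matches the hint above: cur=2, dest=1, so dest-cur=-1 wraps to 7
# on a ring of size n=8.
print(mod(-1, 8))  # 7
```

(Python's own `%` already behaves this way for positive n; the double-mod form is the portable version for languages where `%` can return negatives.)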
A use I have for BFS is a way to give a partial view of what is naturally a tree: a file tree. I use BFS in broot both for pruning the tree and for showing the nearest matches first, because that's what the user usually wants.
I was trying to make a web app so I can help my fellow students pick classes based on certain majors' course selection requirements. Here is a figma design I did, but I think I am going to pivot to something else. Maybe make the parameters more specific.
Well, maybe. VirtualDub tried but that apparently didn't work out so well (though it does work). You could describe something like this as being like a DFA also - certainly it can be in finitely many states and you can ascribe it some transition function from state to state that depends on input. But I wouldn't really call it a pure DFA, since it does things that aren't part of the usual description of a DFA: extracting a range of bits from the stream rather than reading "a symbol" and doing a bitscan and doing more than one table lookup.
You may be interested in learning about the SoX tool. It is a command-line tool (UNIX, Linux, macOS, Cygwin) that lets you manipulate sound data and apply filters and effects: http://sox.sourceforge.net/
It is open source, so you can look at the source code to see how things work! https://sourceforge.net/p/sox/code/ci/master/tree/
I hope so. ABM explores swarm intelligence and ant colony models are quite often used for research. For instance
http://ccl.northwestern.edu/netlogo/models/AntsSimple
If you do a Google search you'll find many more examples. Have a look at a couple of YouTube videos, they are quite fun.
Hi, it's me again. I'm trying to apply your concept to this task: http://www.spoj.com/problems/MATT/ . In my opinion a node in the graph is oasis + number of hours without water + number of hours without food. I can construct the graph, build the adjacency matrix, apply the above algorithm and voila, but I can't get this right for the second test case, so if you or anyone else could take a look: maybe I didn't grasp your idea correctly? The code is really simple and straightforward: http://pastebin.com/AVVgSe2v Thanks in advance!
I can't believe you did it in 0.09s with an n^2 approach. I guess I need much larger inputs to force C programmers into linear time. Then I'll probably have to ask people to implement a random generator, though...
EDIT: Try this one as well: http://www.spoj.com/status/VFRIENDS/
For part 2:
Go to https://www.desmos.com/calculator (an online graphing calculator). Type in three equations: y = x + 10, y = ax, and y = bx (with sliders for a and b).
Zoom out a few steps (zooming out so that the y axis is from about -40 to 40 is probably good).
Play with the a and b sliders until you find some values where the (x + 10) line is completely above the ax line and completely below the bx line when you only look to the right of some x value (that is your n_0). (Since they are all straight lines, either x + 10 will be parallel to bx, or they will intersect at some point, with x + 10 above bx on one side of the intersection and below it on the other. You want x + 10 to be below bx to the right of the intersection point.)
Let's say k=2; then you're just looking for the median, which can be done in O(n). From here I get the feeling that a divide-and-conquer method would work, maybe something along the lines of modifying median-of-medians:
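For intuition, a randomized quickselect sketch (expected O(n); the deterministic O(n) guarantee needs the full median-of-medians pivot rule):

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element (0-indexed) of arr,
    in expected linear time."""
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

arr = [7, 1, 5, 3, 9, 2, 8]
print(quickselect(arr, len(arr) // 2))  # 5 (the median)
```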
These are just using the standard summation formula to sum to index n, then subtract the nth term.
https://brilliant.org/wiki/sum-of-n-n2-or-n3/
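For reference, the standard closed forms in question:

```latex
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}, \qquad
\sum_{i=1}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^2
```

Summing to index n and then subtracting the n-th term gives the sum up to n-1, as described above.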
> I'm in the process of reviewing some summation formula too but I think its overkill?
If you want to be able to prove runtimes formally, yes, you should know of the summation formulae. You don't need to commit them to memory (though I feel the sum of an arithmetic series is helpful to memorise); just know how and when to invoke them.
Toy example, I'll let you extrapolate:
Define a card as a tuple: (id, due), where id is a UUID and due is the date/time when the card is due.
Define:
ease - 5 buttons: (Bad|Fail|Pass|Good|Bright)
getParams = id -> spaced repetition params
render = id -> HTML
answerCard = id -> ease -> (next due, next spaced repetition params)
getScheduler = deckId -> scheduler
getNext = scheduler -> (next scheduler, id)
When you start a session, inside scheduler, obtain all ids where due <= now().
Pick a data structure to store these based on your needs: a priority queue or a linked queue are fairly common. You'll never realistically have more than 10k elements at a time, so you don't need to optimise for asymptotics unless you're doing something crazy or want to. Even if you do have 10k elements, you could batch them into an appropriate size and perform a fetch from the database. Spaced repetition is human scale.
When you answer a card, you get a new due date. After you persist the data: if due <= now(), design a strategy to splice the card back into the queue; if due > now(), ignore it.
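The session flow above might be sketched like this; the class and method names here are illustrative, not Anki's actual API, and timestamps are plain numbers for brevity:

```python
import heapq

class Scheduler:
    """Minimal session scheduler: a min-heap of (due, id) pairs holds
    every card already due. Answering a card either splices it back in
    (still due) or drops it (due in the future)."""
    def __init__(self, cards, now):
        # cards: iterable of (id, due) tuples, as defined above
        self.queue = [(due, id_) for id_, due in cards if due <= now]
        heapq.heapify(self.queue)

    def get_next(self):
        """Pop the most overdue card's id, or None when the session is done."""
        if not self.queue:
            return None
        return heapq.heappop(self.queue)[1]

    def answer(self, id_, next_due, now):
        """Persisting the new due date is assumed done elsewhere; here we
        only splice the card back if it is still due."""
        if next_due <= now:
            heapq.heappush(self.queue, (next_due, id_))

s = Scheduler([("a", 50), ("b", 90), ("c", 200)], now=100)
print(s.get_next())  # 'a' (most overdue; 'c' is not due yet)
```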
Come explore and contribute:
https://github.com/ankidroid/Anki-Android (GNU General Public License v3.0) https://github.com/ankitects/anki (Affero General Public License, version 3)
What book are you using? There are probably examples in there on the pitfalls of using bad elevator queue structure. Try stepping through the algorithm, make sure you understand it, then see if you can code up a simple implementation.
https://stackoverflow.com/questions/12009556/datastructure-for-elevator-mechanism
Since you can already figure out the recursive solution, the next step is to do memoization. The chapter about DP in CLRS is very helpful for this.
After solving it with top down (memoization), try using the bottom-up method with array and tables for the problem.
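A toy illustration of the two styles on Fibonacci (not your problem, just the pattern: top-down memoization first, then the bottom-up table):

```python
from functools import lru_cache

# Top-down: plain recursion plus memoization
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: fill a table starting from the base cases
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_table(30))  # 832040 832040
```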
I would also recommend the problems mentioned in these:
This is a little harder:
Depends on the depth you want to go into. I would recommend a book (like Introduction to Algorithms by Thomas H. Cormen) if you want to go deep.
Or topcoder tutorials https://www.topcoder.com/community/competitive-programming/tutorials/ if you want to speed it up.
The pytorch forums are excellent for learning and implementing a variety of machine learning models. (https://pytorch.org/) This is an interesting study of the optimality of short-term plasticity under certain conditions https://arxiv.org/abs/2009.06808.
/u/tsahyt gave you a good answer -- the pattern you're seeing has nothing to do with the fact that the numbers are prime.
Instead, it is a pattern because:
Almost by definition, it'll form a pattern.
The thing to notice is the pattern inside those rectangles, and how there isn't really a pattern across them.
Here's a plunker where you can change the formula and domain of the thing. Change the plot call in the bottom of script.js.
In the current example, I have x*x*random.
> "in order traversal"
Pre order traversal.
On the sample tree:
Pre order traversal will produce: 1 2 3 4 5 6
In order traversal will produce: 3 2 4 1 5 6
Also, the problem statement asks to flatten a given binary tree, not to traverse it. And by flattened, here we mean a tree in which the left child of every node is null.
The target audience is those who apply for jobs at Facebook, Google, etc.: https://leetcode.com/problems/flatten-binary-tree-to-linked-list/
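A sketch of that flatten problem, assuming the usual TreeNode shape; on the sample tree above it yields the pre-order sequence 1 2 3 4 5 6:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def flatten(root):
    """Flatten the tree in place into a right-leaning 'linked list'
    in pre-order: every node's left child becomes None."""
    if root is None:
        return
    flatten(root.left)
    flatten(root.right)
    right = root.right           # save the already-flattened right subtree
    root.right = root.left       # move the flattened left subtree up
    root.left = None
    node = root
    while node.right:            # walk to the tail of the moved subtree
        node = node.right
    node.right = right           # reattach the old right subtree

root = TreeNode(1, TreeNode(2, TreeNode(3), TreeNode(4)),
                   TreeNode(5, None, TreeNode(6)))
flatten(root)
vals, node = [], root
while node:
    vals.append(node.val)
    node = node.right
print(vals)  # [1, 2, 3, 4, 5, 6]
```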
There are many other web pages that implement this algorithm, but none of them explained the logic well enough IMHO. So I decided to give explaining it a try myself. Also it helps me with learning too: https://www.slideshare.net/frankcalberg/feynman-technique
Geeks for geeks is pretty good imo. Maybe you can find a youtube video explaining it if you want a better breakdown? You can test out a solution on a challenge site like hackerrank
> The way I understand it, A* is a compromise between depth-first search and breadth-first search balanced by a heuristic function, which determines the order in which the algorithm expands on any given node.
Instead of that, you should think of it as Dijkstra's algorithm with a heuristic function to reduce the search space (i.e. determine the order of node expansion). And you should think of Dijkstra's as a generalisation of BFS to non-uniform edge weights.
> That doesn't seem to necessitate using floating point values.
A* is Dijkstra's with a different order of expansion, and Dijkstra's works with non-negative weighted graphs, graphs which could have non-integer weights, and thus the algorithm must accommodate that.
> The output of my heuristic function will not change along an ideal linear path from start to goal, but it will change by 1 or 2 for nodes that deviate from an ideal path.
Ok, so I see what you're thinking now with the ideal path, but we're not interested in using A* in a world without obstacles. A better counter example would be this: http://imgur.com/eIio6Ir
Using Chebyshev distance, f(0) would be 4, but f(tunnelEnd) = g + h = 6 + 2 = 8, and the next node to expand would be the lower neighbor of the start point, which would have f = g + h = 1 + 4 = 5; and 8 - 5 = 3, which is not in the set {0,1,2}, and clearly I can make that distance much bigger by expanding the size of this example.
Do you agree?
Here's another recommended exercise for learning A*, how it relates to DFS, BFS, and Dijkstra's, and the last problem shows, what I consider, an interesting and non-obvious application of A*. https://www.hackerrank.com/domains/ai/astar-search
Edit: Hm, I must have been tired last night when I posted this. Obviously the 5 would be expanded earlier than any of the 8s. Maybe I need to think about this more, but I can't now, off to work. You should try programming it.
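The A*-as-Dijkstra framing above can be sketched like this (toy graph; with a zero heuristic this is exactly Dijkstra's algorithm):

```python
import heapq

def a_star(adjacency, start, goal, h):
    """A* on a dict-of-lists graph: identical to Dijkstra's algorithm
    except nodes are popped in order of g + h instead of g alone."""
    g = {start: 0}
    heap = [(h(start), start)]            # priority queue keyed on f = g + h
    while heap:
        f, node = heapq.heappop(heap)
        if node == goal:
            return g[node]
        for neighbor, weight in adjacency.get(node, []):
            new_g = g[node] + weight
            if new_g < g.get(neighbor, float("inf")):
                g[neighbor] = new_g
                heapq.heappush(heap, (new_g + h(neighbor), neighbor))
    return None                           # goal unreachable

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 5)], "b": [("g", 1)]}
print(a_star(graph, "s", "g", lambda n: 0))  # 4  (path s -> a -> b -> g)
```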
Describing an algorithm like that is called pseudo-code, it is not really a language but it's still one variety of code and also needs indentation for easy reading.
Code doesn't have to be in a particular language.
For some reason when I saw your post that part describing the algorithm lacked proper typesetting and I apologize if it was just me hallucinating.
Now to your problem: there doesn't seem to be any tricky case, and it's not like anyone can just guess what went wrong in your situation like that. Can you link your program and the website where you found that problem for me and others to take a look at?
For sharing programs, gist should do fine.
First of all, there is <code>git bisect</code>, which does a binary search for the failing commit.
Second, doesn't the branch you're testing consist of a linear sequence of versions, each one having been created by a commit? I'm not sure I follow your description exactly, but you seem to be seeing it as a tree. (It might also be that my view of versioning systems isn't sophisticated enough.)
Not sure if you follow Java, but this is my implementation: https://codeshare.io/2BPq7L
As you can see, I'm using a ListNode class to store the word in question, and LinkedList of nextWords. I would imagine that these "END SYMBOLS" can actually be full-stops/periods themselves which I simply treat as words which come after certain words.
That is to say, a word like: "This is great." would give us training data as follows:
This -> "is"
is -> "great"
great -> "."
This way, we shouldn't even need a "START SYMBOL" right?
I didn't see anyone mention Introduction to Algorithms (MIT Press) yet. It's considered the standard must-have book for algorithms and is language agnostic. I use it alongside Grokking (for Python-specific, easy-to-follow examples). You probably want Introduction to Algorithms plus a language-specific book for whatever you are working with.
Introduction to Algorithms:
https://www.amazon.com/Introduction-Algorithms-3rd-MIT-Press/dp/0262033844/
Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. Best algorithms and data structures book I have read as a beginner. https://www.amazon.com/Introduction-Algorithms-3rd-MIT-Press/dp/0262033844
Do. the. math.
Seriously. If you understand the math, then you understand the algorithms. Don't shit on theory because theory is the only reason this stuff works anyway. Understanding individual algorithms is important, but it's way better to know how to treat problems in mathematical contexts: that's the gateway to creatively solving problems. If you start working at the math now, your foundation will be far stronger than others' as you continue in the field. I'd recommend Algorithm Design by Jon Kleinberg and Éva Tardos for learning about algorithms and Building Proofs: A Practical Guide by Suely Oliveira and Dave Stewart for learning how to prove things. Also, definitely take advantage of StackExchange.
Also, don't worry so much about your peers and how much "better" they are. Find what things you're interested in, dedicate yourself to those things, and go from there. Trying to game out how you stack up against your peers is a dead end and will only lead to impostor syndrome.
Honestly, the CLRS book has been the most helpful thing for me doing my Analysis of Algorithms class. For each example in the book, watch a video or three so you get the gist of what's going on, then step through the pseudocode example so you more intuitively understand how it works. I also tend to skip a lot of the proofs, but they are there if you need them.
Generally after CLRS you should be ready to read TAOCP. But if you still feel insufficient, Professor Knuth has another book that you can look into to enhance the math background: Concrete Mathematics: A Foundation for Computer Science.
the most popular algorithm book: https://www.amazon.co.uk/Introduction-Algorithms-Press-Thomas-Cormen/dp/0262533057.
But to answer the question about studying advance algorithms, it is better to know which programming language is preferred (i.e. Python, C++, etc.) and which area (i.e. Finance, computer science, data science, etc.)
I can recommend the "Introduction to Algorithms" book by Thomas Cormen; it starts with easy stuff and goes to more difficult concepts (you can still skip some chapters if you find them difficult). This is a must-read if you plan to interview at some high-profile companies like Google, FB or Amazon.
You can download online if you don't have resources to buy it. I've implemented some of the algs in Go here: https://github.com/peterprokop/Gormen
I personally like Skiena's book The Algorithm Design Manual https://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693
If you already studied algorithms but need a quick refresher, Algorithms in a Nutshell is good. https://www.amazon.com/Algorithms-Nutshell-OReilly-George-Heineman/dp/059651624X/ref=sr_1_3?s=books&ie=UTF8&qid=1496051402&sr=1-3&keywords=algorithms+in+a+nutshell
From Artificial Intelligence: A Modern Approach by Russell and Norvig, "A* is complete and optimal, provided that h(n) is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH)." Consistency is the property that f(n), the sum of the cost to reach the current node and the heuristic at that node, is non-decreasing. If the path were not optimal, there would be another node in the priority queue n' on the optimal path from the start node to n based on the graph separation property. Because f is non-decreasing along any path, n' would have lower f-cost than n and would have been selected first.