If you sign up for everything with the same password, all it takes is one of those hundred websites to get hacked and you're boned. If you signed up for MyPokemonFantasy.com in 2003 and it turned out they didn't have good security on your password, then when they get pwned you lose all security on every account you've ever made. Have I Been Pwned illustrates this concept. My long-time personal email address has been involved in 19 different password breaches, for example.
However, if you have used a password manager, then MyPokemonFantasy.com would have a different password from your bank and every other website. So if it gets pwned, only that one account is lost.
This all hinges on the password manager not getting pwned, of course. Which is why you should be very serious about choosing one that you believe won't ever have any serious security issues: using one means placing complete trust in it with all your accounts.
One upside though is if a popular password manager got hacked, word would get out quick. It would hit the news as soon as accounts that were protected by one of them were getting cracked open. So you'd have some warning and if you weren't one of the unlucky early people to get your stuff nicked, you could go and fix it.
Those are my favorites that I can remember and type easily from my tablet.
If it's static, why not just use GitHub Pages?
Otherwise, there's: https://lowendbox.com/
And since I like stability and accessibility, I've used DigitalOcean, which is very scalable and developer-friendly.
It's not that the kernel is faster, it's the process of transitioning from one mode to the other that takes a few cycles. If a user-mode process wants to do I/O, it has to trigger a special system call instruction which involves saving the process's state, doing a few security checks, copying data between user and kernel buffers, and so forth. Kernel code just has to make a function call.
That's why, for instance, Linux has the sendfile syscall. To do the same thing in userspace, you have to repeatedly read from one file descriptor into a buffer and write to another one, which means lots of transitions between user and kernel mode.
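For illustration, here's a minimal C sketch of the two approaches (assuming in_fd and out_fd are already-open file descriptors; note that before Linux 2.6.33, sendfile's out_fd had to be a socket):

```c
#include <sys/sendfile.h>
#include <unistd.h>

/* Userspace copy: every loop iteration crosses the user/kernel
 * boundary twice and moves the bytes through a userspace buffer. */
ssize_t copy_loop(int in_fd, int out_fd)
{
    char buf[65536];
    ssize_t n, total = 0;

    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        if (write(out_fd, buf, n) != n)
            return -1;   /* short write; a real program would retry */
        total += n;
    }
    return n < 0 ? -1 : total;
}

/* sendfile: one syscall, and the kernel moves the data internally
 * without ever copying it into a userspace buffer. */
ssize_t copy_sendfile(int in_fd, int out_fd, size_t count)
{
    return sendfile(out_fd, in_fd, NULL, count);
}
```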
Check his bookcase, and feel free to buy any of the following if they're not already there.
The Clean Coder - Uncle Bob
Design Patterns - the Gang of Four
The Mythical Man-Month - Fred Brooks (former IBM guy)
Clean Code - Uncle Bob
Saga, Vol 1 - Vaughan and Staples
Effective Java - Bloch (if his coursework is in Java)
No. There's much more to work than coding. The fact that you're asking this question suggests that you're in a managerial/lead position and that you have no idea what you're doing. This is deeply concerning. Start by reading Peopleware and The Manager's Path.
If I have a standup at 10am, bug triage at 11, lunch at noon, a team meeting at 1:30, and a feature crew sync at 3, I'm not getting any serious coding done today.
If there's an outage in production and I'm trying to deal with it (or trying to write up the postmortem from a recent outage), I'm not writing code today.
If I'm reviewing other people's code, I'm not writing code. If I'm waiting for other people to review my code, I'm not checking it in until it gets reviewed.
If I'm designing features, I'm not writing code - I'm either writing documents or working with people to get the information I need to write those documents.
If I'm working on a particularly tricky refactor, it may not get checked in for a few days until I've done a full test pass and confirmed that everything is OK.
There is no correlation between commit rate and work accomplished. I could write a script to automatically convert my code between tabs and spaces and check in 20 times a day. No actual work would be getting done, but you'd see a huge number of commits and affected lines.
The book that allowed me to do this is the legendary "Gang of Four" Design Patterns book. Code examples are in C++ and it was written a while ago, but is still recommended as a fantastic resource for learning how to design software well.
There are also the SOLID principles for object-oriented design.
If you are interested in the logic side of CS:
Propositions as Types by Philip Wadler is simply excellent
Mathematical Logic by Chiswell and Hodges
Introduction to Lambda Calculus by
CS245 "Logic and Computation" at the University of Waterloo (most of the lecture handouts/assignments are online). It's a course that second-years take in their first semester, so it's pretty accessible.
For the complexity theory/algorithmic parts:
> Apart from memory
This is the main reason. Sometimes memory (or cache) footprint is important.
The lack of a backwards pointer also makes it easier to implement things like lock-free concurrency or efficient sharing/reuse of immutable data.
That being said, singly linked lists are not good general-purpose data structures and many people will never have occasion to use them explicitly.
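To illustrate the sharing point, here's a tiny C sketch (names invented for illustration) of two immutable lists reusing a common tail, which is safe precisely because nodes are immutable and have no back-pointers:

```c
#include <stdlib.h>

struct node {
    int value;
    const struct node *next;
};

/* "cons": prepend one new cell to an existing list in O(1),
 * without touching (or copying) the existing nodes. */
static const struct node *cons(int value, const struct node *next)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->next = next;
    return n;
}

int main(void)
{
    const struct node *shared = cons(2, cons(3, NULL)); /* [2,3]   */
    const struct node *a = cons(1, shared);             /* [1,2,3] */
    const struct node *b = cons(9, shared);             /* [9,2,3] */
    /* a and b are distinct lists, but both reuse shared's nodes. */
    (void)a; (void)b;
    return 0;
}
```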
Going by a quick Google search, reviews don't seem that great.
A paid VPN is fairly cheap (I pay $32 a year for PIA), and to me it's well worth it. I get much better speeds than I would with a free VPN, and I get to deal with a company that I believe I can trust. It's nice that free VPNs are available, but they have to make money somehow, and they're most likely doing it with your data.
That said, if you do want a free VPN, I remember people saying nice things about Betternet a while back. I can't vouch for them though, so use at your own risk.
You could argue that pushdown automata are an extension of finite state automata, and Turing machines an extension of pushdown automata, by looking at their attributes. I think it's more important to understand that they are minimal tools capable of solving different classes of problems, from recognising regular languages right up to anything that is computable.
I can recommend the Coursera course on Automata if you are interested in looking in more detail.
Evaluating the left side of an assignment is not the same as executing the assignment itself. If there's any code that has to run in order to figure out where the right side's value gets stored, it runs before the code on the right side.
Depends on your goals, really. If you're trying to learn databases for programming, you'll want something like SQL rather than Access. MySQL Community Edition is free and is the kind of thing you'll use often as a programmer: https://www.mysql.com If you're looking for something like Access, there's LibreOffice Base: https://www.libreoffice.org/discover/base/. It's similar but not exactly the same. If you need experience specifically with Access, you'll need to get a copy of Access. If you're in college, they should have a license you can use through the school. If not, you can get a student discount on it.
I recommend going through The C Programming Language book. C-based syntax is quite common. Once you are somewhat comfortable with C, jump into whatever language your college curriculum will focus on early (is Java still popular in the first couple years?).
Spend as much of your free time programming as you can. You need to find out early if you really want to be a CS major and/or what type of programming interests you. Some people like desktop applications, others like low-level hardware interaction. Your CS classes won't do you much good for another major (except computer engineering), so it's best to find out early.
Read the FAQ for Algorithms, Part I:
> How does this course differ from Design and Analysis of Algorithms?
> The two courses are complementary. This one is essentially a programming course that concentrates on developing code; that one is essentially a math course that concentrates on understanding proofs. This course is about learning algorithms in the context of implementing and testing them in practical applications; that one is about learning algorithms in the context of developing mathematical models that help explain why they are efficient. In typical computer science curriculums, a course like this one is taken by first- and second-year students and a course like that one is taken by juniors and seniors.
That's pretty much the last thing the compiler does. The classic structure for a compiler has several phases. I say classic because modern compilers can diverge from that. Anders Hejlsberg, who heads the team at Microsoft in charge of C#, gave an interesting talk about modern compiler construction, and there are other approaches that use many, many very small phases that chain together.
EDIT: fix my poor typing skills.
https://en.wikipedia.org/wiki/Fluent_interface
pretty easy to implement where you want it.
for example, Console.WriteLine("blue".ToUpper().Reverse().ToArray());
or var result = new string("blue".ToUpper().Reverse().ToArray());
works fine in c#.
we need to do some type coercion to convert the array of characters back to a string, but yeah.
can also look at something like https://angular.io/guide/pipes where you end up with things like {{ birthday | date | uppercase }}
Knowing languages is nice (and necessary, really) for a software engineering position, but more important is your ability to solve problems. The best thing you can do/learn is "real world" work.
Either think of your own interesting project idea and implement it, or gain experience working with clients (e.g. http://freelancer.com or similar). Taking a project from a "non-computer-science" specification and turning it into reality is a truly valuable skill. Chances are that in a full-time position you'll be interacting with a lot of non-technical people trying to describe to you what they want, and you'll have to translate those descriptions into something real.
Similarly, if there is an area you're particularly interested in, start to learn the landscape. For instance, if you want to work with big data and analytics, start to learn how to use common libraries and tools such as Hadoop, Spark, Pig, etc. Of course, the technologies you should look into should be related to your field of interest.
Remember, rolling your own solution is a great device for learning, and fantastic if you need a specific optimization over the typical solution (assuming it's within your realm of capability to do so). However, in most positions you will be leveraging the work of others in the form of libraries. It's simply not practical to write a hash map from scratch every time you start a new project. Being able to read library docs and effectively use that code is incredibly applicable.
Good luck!
It may mean C, the programming language, or C, as in the set of complex numbers, which contains the real numbers (x + 0i). It really depends upon how it's written in the textbook.
Most likely, they mean the programming language. You can find C grammars all over the place. For example, here's a grammar in ANTLR: http://www.antlr.org/grammar/1153358328744/C.g
There are different ways one can write real numbers in C: 3, 3.14, 31.4E-1, ...
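For example, all three of those forms denote the same kind of floating value:

```c
#include <stdio.h>

int main(void)
{
    double a = 3;        /* integer literal, converted to double     */
    double b = 3.14;     /* decimal floating literal                 */
    double c = 31.4E-1;  /* scientific notation: 31.4 * 10^-1 = 3.14 */
    printf("%g %g %g\n", a, b, c);  /* prints: 3 3.14 3.14 */
    return 0;
}
```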
> Visual studio is literally the best ide available.
Actually, IMO, Visual Studio with ReSharper is where it's at. (ReSharper is made by the same people as IntelliJ.)
Get into creative coding and doing digital media installations. That melds art and CS pretty nicely, and gives you pretty marketable skills. I know some schools are starting to offer specializations in digital media.
Check out Processing for a place to start: https://processing.org/ ... Paper.js is another popular library: http://paperjs.org/
Do not delete the message that your husband received. Take a screenshot of it. Show it to a knowledgeable person to see what got downloaded when he tapped the link (post the screenshot here if it shows the entire link). Assume that your husband’s phone is compromised to some degree because he installed something. Find out what it is and figure out how to remove it.
It’s very easy to spoof the originator’s phone number with SMS. If you want to make sure your messages are coming from each other, consider using a dedicated app like Signal, or using iMessage on iPhone.
Not sure it's what you're looking for, but "Algorithm D", from The Art of Computer Programming (Knuth) book 2 section 4.3.1, is an efficient long division algorithm using numbers with large bases. For example, if your system supports up to 32-bit operations, it works 16 bits at a time, computing one base-65536 digit with each iteration.
Regardless of the base, Newton-Raphson division is often used for computing high-precision division.
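As a sketch of the idea (using plain doubles just to show the quadratic convergence; real high-precision implementations run the same iteration on multiword fixed-point numbers):

```c
#include <math.h>
#include <stdio.h>

/* Newton-Raphson reciprocal: iterate x = x*(2 - d*x), which converges
 * to 1/d, roughly doubling the number of correct digits per step. */
double reciprocal(double d)           /* assumes d > 0 and finite */
{
    int e;
    double m = frexp(d, &e);          /* d = m * 2^e, with 0.5 <= m < 1 */
    double x = 48.0/17.0 - 32.0/17.0 * m;  /* standard initial estimate */
    for (int i = 0; i < 5; i++)       /* 5 steps is plenty for a double */
        x = x * (2.0 - m * x);
    return ldexp(x, -e);              /* 1/d = (1/m) * 2^-e */
}

int main(void)
{
    printf("%.15f\n", 355.0 * reciprocal(113.0));  /* ~3.14159... */
    return 0;
}
```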
Truth be told, you need to refer to the language standard for answers to language-lawyer-y questions like these. My copy of C: A Reference Manual, 5/e says that if either operand in an expression is unsigned, the other operands are converted to unsigned as well.
You'll find the answer (and quotations from the C99 standard) here: https://stackoverflow.com/questions/50605/signed-to-unsigned-conversion-in-c-is-it-always-safe
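The classic consequence of that rule, in a few lines of C:

```c
#include <stdio.h>

int main(void)
{
    unsigned int u = 1;
    int s = -1;
    /* s is converted to unsigned: with 32-bit ints, -1 becomes
     * 4294967295, so the comparison is really 4294967295 > 1. */
    if (s > u)
        printf("-1 > 1u is true here!\n");
    return 0;
}
```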
It depends on what you mean by similar: if you mean similar as a grid of values, then ImageMagick can do that.
http://www.imagemagick.org/script/compare.php
If you expect images with small shifts or scales or similar content to be judged similar, this kind of approach will probably be disappointing. Other approaches include taking a perceptual hash of the image, sort of a small fingerprint that will be similar for similar images; a quick Google shows an implementation on this site (I haven't tried it):
Otherwise there is image registration: trying to match up the rotation/scale/other configuration of one image with another, which could lead to a similarity measure. Medical image analysis is an active area for this sort of thing.
And I guess there is also automatic image annotation, where you attempt to label images at a human-language level; that could lead to a measure of similarity as well. For instance, you might want to say a picture of a dog is similar to another picture of a dog, even though they could be in very different poses with different colours that wouldn't be close on a purely pixel-based system.
Evan Czaplicki wrote Elm for his senior thesis.
Elliott Brossard wrote The Elan Programming Language for Field-Programmable Gate Arrays for his senior thesis.
Look at very simple languages like Forth, Lisp and Smalltalk, all of which can be implemented in a few weeks' time in Java.
I would recommend messing about in Visual Studio Code:
https://code.visualstudio.com/
This is a very open ended question though :) Maybe start with some python tutorials and see where that takes you!
Modern computers still use the same “basic theory” that the first computers used.
Introduction to Computing Systems is a good book that will give you what you are looking for.
A bunch of good programming books, ESPECIALLY Code Complete. It's such a great book; I read a few chapters at least every month.
That said, Jeff Atwood has a complete list.
I still swear by my CLRS Introduction to Algorithms book for reviewing data structures and algorithms. It's not a small book (it is a textbook, after all), but it's the most trusted book in my arsenal.
FWIW, I graduated undergrad two and a half years ago and have been working ever since.
+1. It is dense reading, but this is "The Bible" of data structures books used by many universities. If you are patient with yourself, you will learn things.
If you're more of a video guy, try MIT's OpenCourseWare: http://ocw.mit.edu/courses/
You want: 6.006 - Introduction to Algorithms
OP, you're definitely right to want to boost this skill/knowledge area. This is the foundation of a good chunk of problems that are asked in interviews by big tech companies.
I can't help you much with the Java book, but I can confidently tell you that Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein is my personal favorite Data Structures and Algorithms textbook.
Famous books everyone talks about:
Language specific
General:
Java:
Python:
Ruby on rails:
.. there's a bunch of php books too.. and look into fun books about scala, haskell, and clojure.. All are gaining traction.
Screen capturing involves reading the pixels back from the GPU. Most rendering is done on the GPU, which has its own on-board memory. Scanning those pixels out to a VGA or HDMI signal is relatively fast, but bringing them back into your RAM or saving them to disk is comparatively slow: transfers from RAM to GPU memory run around 10 GB/s, while going from GPU memory back to RAM is more like 2 GB/s. That may sound fast, but a 1920x1200 screen at 60 fps is 1920 x 1200 x 4 bytes x 60 ≈ 0.55 GB/s (about 4.4 gigabits per second) of raw pixels. Taking that and encoding it into a simple packed YUV format, like Fraps does, takes up some CPU time, as does actually flushing it to disk. My Samsung disk has a write speed of 1030 Mb/s.
I tend to learn by doing, so I jumped into the deep end on my first Linux machine and installed Arch! You could also try another DIY distro like Gentoo if you'd like.
You will learn very quickly about setting up bootloaders, filesystems, networking, and all the fundamental Linux commands during the installation process. It's good fun, but definitely be careful and make sure you understand what you are doing before you do it.
Projects have occasionally come up to re-create games. It was written in the first place; it could be rewritten. So, we'll consider that the worst-case scenario ("worst" because it would be a lot of work, probably for a team of people).
For Windows 10: It should be possible to watch the communication between the game and Windows 98, then between the game and Windows 10, and figure out where the differences lie. Someone would write a "shim" wrapper library to fix the issue (the Wikipedia article https://en.wikipedia.org/wiki/Shim_(computing) has more about the concept).
For a phone: Phones use a completely different type of CPU than a PC does. So you'd be looking at either rewriting the game, or looking at options for x86+Windows emulation (which would kill your battery, even if it turns out to be practically possible).
Most practically, this site claims that some external tools might help. Other sites suggested running Windows XP in Virtualbox, and installing it within that VM.
Truth. And in general, there are hardware random number generators that do basically the same thing: give you random numbers from the environment, not from software. (You can also get random numbers from random.org if you need any!)
I forgot to mention that git is, indeed, my weapon of choice, but it's also the only one I ever learned.
It being a first-class citizen of my workflow is what allows me to justify using it, even for small or short projects. Of course, learning it in the first place is the point of friction. Unfortunately, I don't have a fantastic resource for learning git quickly and effectively. The book on the official site is where I got a lot of my initial knowledge, but additional knowledge was accrued piecemeal through experience and Google searches. Truth be told, I still Google complicated operations if I ever need to use them, but the basics stick well and work well.
Check out the Mycroft project. It's an entirely open-source platform that is trying to build its own assistant while keeping it free and open. They even sell a hardware device that uses easily available parts (like a Raspberry Pi for the core). The system does work, but it's early days, so they don't offer a lot of functionality yet; still, they are making steady progress.
I would strongly suggest that rather than trying to build your own from scratch, you instead contribute to open platforms like that, so that everyone in the world can someday benefit.
"Linux" is the proper name for a particular piece of software, a mostly-UNIX-compatible kernel: https://github.com/torvalds/linux
You can download the source (with git clone), run make menuconfig, and go through the menu options to see what you can do, or more specifically, what all you can disable. Or even better, run make allnoconfig. Then build it and see what you get.

If you're on an x86 (or x86-64) machine, your kernel will end up in arch/x86/boot/bzImage. You can run it in a VM by installing qemu and running qemu-system-x86_64 -kernel arch/x86/boot/bzImage. No filesystem, so it won't get very far.
Depending on how you want to interpret the name "Linux", grabbing the very first version of Linux from https://www.kernel.org/pub/linux/kernel/Historic/ might get you an even smaller kernel.
First things first: convert the given hex dump to a file. Google "hex to string online", paste the hex dump into one of those sites, and get a string back. Copy-paste it and save it as a file (most likely some characters will be corrupted; download and use any hex editor to correct them and check that everything is fine). Or you could use a hex editor to do the whole thing from the beginning.
Now you have an executable file (e.g. p.com). It most likely doesn't work unless you have Windows 95. Now use a disassembler (OllyDbg or IDA Pro; they both have free demos). This will give you the assembly code for the program. If you don't know assembly and have no idea how registers, the stack, and the CPU work, I can't explain that here; you'll have to study it.
Then you have to understand the assembly code (it's a low-level programming language, so it's sometimes hard to understand what a program does) and write the same thing in your favorite programming language.
There is a shortcut though: C/C++ allow you to use ASM instructions in your C code. You have to wrap the ASM code in an asm block and modify it a bit (see http://www.codeproject.com/Articles/15971/Using-Inline-Assembly-in-C-C ).
I've never done it and I'm not sure how to pass data to the asm block in C++, but there are manuals on the internet. Still, you need a basic understanding of how assembly code works.
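For a taste of what it looks like, here's a toy example using GCC's extended asm syntax (x86-specific and compiler-specific; just a sketch, not production code):

```c
#include <stdio.h>

int add_asm(int a, int b)
{
    int result;
    /* AT&T syntax: "addl src, dst" adds %2 (b) into %0 (result).
     * The constraints bind C variables to registers:
     *   "=r"(result)  output in any general register
     *   "0"(a)        a starts in the same register as operand 0
     *   "r"(b)        b in any general register */
    __asm__("addl %2, %0"
            : "=r"(result)
            : "0"(a), "r"(b));
    return result;
}

int main(void)
{
    printf("%d\n", add_asm(2, 3));  /* prints 5 */
    return 0;
}
```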
Good ol' The C Programming Language for me
Read it junior year of college. It gave me a huge appreciation for the elegance of C and Unix, and made me venture from the Java/IDE/Windows world into the Unix/Linux/terminals world.
My god, The Art of Computer Programming made me want to shoot myself.
You can pretty much sum up your computer science degree if you read 3 books:
Introduction to Algorithms, Structure and Interpretation of Computer Programs, and Compilers: Principles, Techniques, and Tools.
Edit: Adding topics
Look into these topics (there's a small memoization example after this list):

* Dynamic programming and memoization (traveling salesman problem)
* NP-completeness (and how to reduce problems to other problems to prove NP-completeness, and for that matter all the graph theory you can handle)
* Big-O notation (as well as little o, big omega, little omega, and theta)
* How to make quicksort run in O(n log n) time
* MapReduce
* Functional programming languages
* Parallel computing
I'll add more as I can think of them
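As promised above, a minimal sketch of memoization in C (Fibonacci rather than TSP, just to show the idea of caching overlapping subproblems):

```c
#include <stdio.h>

static long long memo[91];  /* zero-initialized; fib(90) fits in long long */

long long fib(int n)
{
    if (n <= 1) return n;
    if (memo[n]) return memo[n];   /* already computed: reuse it */
    return memo[n] = fib(n - 1) + fib(n - 2);
}

int main(void)
{
    /* Naive recursion would take ~2^90 calls; with the cache each
     * subproblem is solved exactly once, so this is instant. */
    printf("%lld\n", fib(90));
    return 0;
}
```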
Not sure if this is what you are looking for, and I doubt they sell books, but they have tons of math videos and lectures, and lots of practice problems too. You can take your own notes for a more intuitive experience if you desire.
They also have a math problem grid where you can start from the most basic math, like addition/subtraction, and work your way up to topics like differential equations.
Did you use Create React App to build your app? It takes a lot of the pain away from learning, as a lot of sane defaults are configured in. If you specify TypeScript (an option for Create React App), it's a lot closer to Java than vanilla JS, and it also helps you write clearer, more type-safe code. Perhaps look at some existing smaller projects using React; there are some good real-world examples for a heap of languages on gothinkster, including React, but those might be overwhelming since they use Redux as well, which is a whole other part of a UI app.
Another option could be to play around with some other JVM languages. Groovy would be a good choice, as it has interop with Java built in and behaves like an interpreted language. It isn't as popular as it once was, I believe.
Clojure is also a very nice JVM language to work with, but it has a fairly steep learning curve coming from Java. Its elegant simplicity can force you into breaking out of a lot of the mindset that Java development can build.
I think the main thing is to be persistent, maybe read up on some fundamentals of the JS runtime event loop and DOM.
Hope that helps, good luck!
> Why does Boeing have a bunch of aerospace engineers when all they're doing is making planes? Don't we already know how to make planes?
> Why does Ford have a bunch of mechanical engineers when they already have factories and designs for cars. I get that they make new models but can't like 2 engineers design that?
> Why does Sherwin Williams have a bunch of chemical engineers? Haven't we already invented paint?
The answer to your question is that they are writing a lot of new code.
YouTube, Reddit, and Facebook have all added many new features since their inception and continue to do so. If they don't keep evolving they'll be replaced by competitors who out-innovate them.
Another part of the answer is that most of the code for these products is not the 'client' code (what you see in your web browser) but servers hosted somewhere else to support that client. Think about the challenge of storing all those videos for YouTube in a way that they're easily accessible when needed.
For websites/apps with the scale of the ones you mentioned, even a small change can be extremely complex. You're dealing with different languages, timezones, number and currency formats, computers with different capabilities (think of how many models of cellphones Facebook needs to support globally, for instance), etc.
Edit:
Most products publish release notes stating what's changed from release to release. Here are the release notes from Visual Studio Code for January for instance.
For sure not too late. I started at around 30 and managed to revitalize my career.
When I was starting out, Udacity had a great free intro-to-programming course with Dave Evans that put me on the right track (might be this; their interface has changed a lot... sorry :\ ). For me, Python was a wonderful place to start, because you can play around with it in the interpreter and see what you can tell a computer to do. Some people will tell you to start with C or Java, but I found Python's syntax liberating, and having it under my belt made learning those other languages easier when I had a reason to later.
From there I sort of learned what application I wanted (web, data science, etc.) and expanded toward that (learned R, learned a bit of JavaScript, built some silly projects).
Oof, I felt this on a personal level. I saw the questions and usually knew what algo/DS to use, but when I went to write the code, I always used to draw a blank (and the fact that my primary language for interviews is C++ doesn't help lmao).
What helped me? The following:
If nothing else works, just try dfs/bfs and hope to god it sticks (I'm only half kidding lmao). It's kinda like hashmap for non-graph problems. This simple mantra - "If graph, throw bfs/dfs, if not, throw hash-map at the problem" has helped me get 2 offers lmao.
They don't store that data in a normal relational database. The clickstream data gets piped through a data-processing pipeline to create aggregate reports, and then it's saved to S3. Here's an example pipeline from AWS.
Unity physics does all the hard work for you, and you use C# as the language. If you know C++ and Python, then C# should not be difficult to pick up, since it uses C++-style syntax.
Follow the Unity Roll-a-ball basic physics game tutorial at http://unity3d.com/learn/tutorials/projects/roll-ball-tutorial
If you can follow along and understand it then you've already seen the most basic of physics based games.
Yes, I see what you're saying - you can't throw money at every loony doomsayer, that just encourages them.
On the other hand, when people say things like "AI will [n]ever be a super intelligence", they sound pretty loony to me. You make an absolute statement about the entirety of the Universe's future history, when historical precedent is against you, physics does not require it, and a lot of very smart people think you're totally wrong. And you bet human survival on it.
>They said we should not ever develop trains because people traveling over 40km/h would result in their organs failing.
BTW, I researched that because it annoyed me.
https://en.wikiquote.org/wiki/Dionysius_Lardner
"While widely quoted as an example of failed predictions about technological progress and attributed to Lardner, there are no known citations of this line prior to 1980 and it does not seem to appear in his published works."
There's actually a free class on Udacity by Sebastian Thrun covering some of the basics at play in the Google cars. Thrun worked with Google developing the original prototypes, though I'm not sure what his level of involvement has been over the past couple of years. It seems like they use a lot of Bayesian modeling along with techniques like particle filters for positioning. They also cover some of the velocity prediction techniques for nearby vehicles at a basic level.
I would recommend using something like Processing. It's a creative programming platform based on Java (it is Java, but the Processing IDE simplifies things heaps). There's also a JavaScript version if you'd rather run it in your browser.
Looks like you created this using Word. If you know a bit of LaTeX, you could take a look at Overleaf's CV templates. Also, your CV does not hold any personal information about you, like name/email/GitHub ID. Include hobbies only if they could be relevant; otherwise there is no point including them, as they don't add any extra value from an interviewer's POV. Here, take a look at these templates:
Good Luck!
Let's take Reddit as an example. If I look up their IP address, I get 151.101.1.140. Heading to https://stat.ripe.net/151.101.1.140#tabId=at-a-glance , we see that 151.101.0.0/16, which you could read as 151.101.*.* if you like, is routed ("announced") by a company called Fastly.
In particular, this is part of 151.101.0.0/22, a sub-block of 151.101.0.0/16, which belongs to the autonomous system AS54113. Here you can see a list of all the IP address ranges that autonomous system owns: https://ipinfo.io/AS54113 .
So if we're using the IP address 151.101.1.140, we can consider the network prefix to be 151.101.0.0/16 and the host identifier to be 0.0.1.140. Note that a single IP address can be partitioned in more than one way, just like a real-life address: you could say (123 John Doe Street), (Some City, Some State, Some Country) is one partition, but so is (123 John Doe Street, Some City), (Some State, Some Country).

In this case, the fact that Fastly owns this autonomous system gives us the chance to say that we have a network at 151.101.0.0/22 and a particular host inside it, which is one of reddit.com's IP addresses. When routing a packet to Reddit, this is the first mask our computers will use, because autonomous systems are the top-level networking masks. But of course, AS54113 will have its own internal partitions of its network; in particular, we saw that it has 151.101.0.0/22 as a subnet of its 151.101.0.0/16. If we use that as our network, we have a prefix of 151.101.0.0/22 and a host identifier again of 0.0.1.140. Note that the network prefix is now longer: it has 6 more bits than a /16.
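Mechanically, the prefix/host split is just a bitwise AND with a mask. A small C sketch using the /22 from above:

```c
#include <stdint.h>
#include <stdio.h>

static void print_ip(const char *label, uint32_t ip)
{
    printf("%s %u.%u.%u.%u\n", label,
           ip >> 24, (ip >> 16) & 255, (ip >> 8) & 255, ip & 255);
}

int main(void)
{
    uint32_t addr = (151u << 24) | (101u << 16) | (1u << 8) | 140u;
    uint32_t mask = 0xFFFFFFFFu << (32 - 22);  /* /22: top 22 bits set */

    print_ip("network:", addr & mask);   /* 151.101.0.0 */
    print_ip("host:   ", addr & ~mask);  /* 0.0.1.140   */
    return 0;
}
```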
In case you aren't aware there are quite a few programs that do this sort of thing for you already.
HyperSpin is very popular, but it's a bit time-consuming to set up and has high system requirements; I have to admit it looks damn good when you get it all set up, though. Personally I like mGalaxy, as it's very easy to set up and has features the others lack.
You might want to check out /r/cade and /r/mame as they are both pretty good sources of info on arcade cabinets.
You can do your own small projects without worrying about it. I like to plug away at Project Euler problems. When I was applying for jobs about a year ago, I impressed the hell out of a few interviewers discussing some of those problems. Besides that, keep in mind that you don't need to release your projects into the wild. Write programs just for the hell of it. Want to play Minesweeper? Fuck the version that comes with Windows, open up your favorite IDE or text editor and write your own. Your company will never know or care. Nor are they even likely to know or care too much about non-profit open source software you release, although I can understand your concern.
Hello. Even if this isn't the appropriate subreddit, here is my answer for you.
Practice typing at http://www.ratatype.com/ and http://play.typeracer.com/.
Accuracy and speed are both important. So spend some time practicing.
I think the things they're referring to as immediately accessible are specifically the likes of @home distributed computing projects (SETI@home, Folding@home), where you wouldn't do anything -- your computer does it all automatically!
For the cleaning-up-data-sets thing... it won't be CS-related at all. It's 110% not as glamorous or fun as you expect. I believe they're referring to the likes of mturk, where people work for pocket change doing menial tasks of any kind. Some tasks on those sites involve things like categorizing the data of data sets so that it can be used for machine learning (the algorithms need to know if they guessed correctly, and labeled training data requires knowing the category up front). I've never used mturk and can't say how easy it would be to focus on a specific type of work or how much work is available. But I can say it's pretty much as unrelated as it gets to the kind of work you'd actually do with a CS degree.
I agree with this. The internet is basically one big network. Professor Messer has a decent Network+ training program for free.
Well, it varies by database, but I believe the most common scheme is a file of slotted pages. They look like this:
http://www.cubrid.org/files/attach/images/220547/497/656/postgresql_data_page_structure.png
The idea is that the variable-length values stack in from one end of the page, and they're indexed by an array of fixed-width offsets that grows in from the other end of the page. The page header often has other info, like a bitmap of which nullable values are present or absent, so that null values in the tuples can be omitted.
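A rough C sketch of that layout (field names invented for illustration; real engines add flags, checksums, LSNs, and so on):

```c
#include <stdint.h>

#define PAGE_SIZE 8192

/* Fixed-size header at the start of the page. */
struct page_header {
    uint16_t slot_count;  /* entries in the slot array      */
    uint16_t free_start;  /* offset where free space begins */
    uint16_t free_end;    /* offset where tuple data begins */
};

/* The slot array grows forward from the header; tuple data stacks
 * in from the end of the page. Free space is the gap in between. */
struct slot {
    uint16_t offset;      /* where this tuple starts in the page */
    uint16_t length;      /* tuple length in bytes               */
};
```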
Other databases make other choices. Another common scheme is to store each column separately. This means that to reconstruct a row you have to read as many pages as there are columns, so it hurts performance for fine-grained update workloads, like, say, the backend of Reddit. But for queries that scan all rows and only care about a subset of columns, it can be much more efficient. So you see this structure commonly in databases targeted at analytics (examples: Vertica, Greenplum, etc.).
Also, Microsoft's research wing has published quite a bit about one of its newest storage engines, the Bw-tree: http://research.microsoft.com/apps/pubs/default.aspx?id=178758
The papers are well written and fairly approachable, even if this is a new area for you. But this is also a state-of-the-art design that uses concepts like lock-free algorithms, which may feel a touch alien if you're unfamiliar. AFAIK nothing else uses a scheme quite like this, but it is shipping in MS products and cloud services (notably the Hekaton main-memory engine in MS SQL Server, and apparently parts of Azure DocumentDB).
It doesn't necessarily need to be packed as bits, but yes. In general it would have a set of flags, and the actual storage isn't very important.
I looked at what Clang's libcxx does, and it does pack bit flags into a single field (pulling just the relevant bits out):
typedef T2 iostate;                // an integer type used as a set of bit flags
static const iostate eofbit = 0x2; // each state flag gets its own bit
iostate _rdstate;                  // the stream's current state
bool ios_base::eof() const { return (_rdstate & eofbit) != 0; } // mask out just the EOF bit
Basically, a Chrome extension is just some HTML, CSS and JS. What you need to learn is the framework used to write Chrome extensions. Here you can find a high-level description of all you need to do. Obviously, to go deeper you need to read the documentation.
You can also find some youtube videos that explain very well how it works.
I'm not an expert in Chrome extensions, because I've written only one, to manage my bookmarks (there was nothing on the market that allowed me to do what I wanted), but I made it from scratch (the framework, that is, not the languages) in an afternoon, so it is not very difficult.
Sure thing!
I know there are a lot of YouTube tutorials for getting into programming, but the most important thing is to practice, practice, practice. You at least have some experience with Matlab, so it's not like you're starting out totally clueless.
Create an account on GitHub and publicly host your personal projects there. Make some simple side projects to keep yourself occupied. Try solving problems on LeetCode when you're more experienced; that site is commonly used by engineers to practice traditional-style software engineering interview problems (the kind you'd see in places like Google, Microsoft, Amazon, etc.).
And always ask questions. Irc channels, Discord channels, Reddit, StackOverflow (although StackOverflow is more for intermediate level people). Feel free to even shoot me PMs from time to time if you have questions, actually. I'd be happy to help as well.
Flask is always my Python web framework recommendation for starting out. I feel like it's a much better idea than Bottle, which /u/skunkwaffle suggested, because the code is just as simple (just look at the example on their website), yet it lets you do more and has a far stronger community.
Maybe you don't need more right now, but between the two, you might as well learn the more useful tool, right? Unless you really need a minimal footprint (in which case, why would you even use Python?), I can't see why you'd use Bottle over Flask.
I've only ever heard of "partial indexes" in the context of database software. For example: http://www.postgresql.org/docs/9.4/static/indexes-partial.html
In a relational database, an index is basically a sorted map from keys (values of the indexed column) to row IDs. It could be implemented as a binary search tree, although there are a lot of technical details that go into making it efficient when stored on disk.
A partial index is an index that only contains a subset of the table's records. The downside is that it's only useful for queries that are filtered to look at just that subset; the upside is that it's smaller and only needs to be updated when that subset is modified. For example (Postgres syntax, with hypothetical table and column names): CREATE INDEX orders_pending_idx ON orders (customer_id) WHERE status = 'pending';
Don't get me wrong, Eclipse really isn't bad. I used it a lot in uni and was overall very happy with it in large Java codebases and Android programming.
My senior year, a buddy showed me JetBrains' IntelliJ IDEA, and I've been completely hooked on it since. Besides being extremely lightweight, it offers excellent compatibility (I had headaches setting some things up in Eclipse that work instantly in IntelliJ), a very customizable interface, a much better auto-complete system, and better plug-in support (I never found a suitable vim plug-in for Eclipse, but IntelliJ supports an excellent one).
CS is the mathematics of computation: for the most part, whether it's possible to complete a task, how best to do it, and how fast it can be done, plus different definitions and models of computation. (Side note: the "computer" in CS refers to ANY computer; it can be an electrical computer, a human computer, a mechanical computer, or an abstract/theoretical computer like a Markov chain or decision tree.)
However, CS degrees look good for programming jobs (and a good CS education is actually useful), so when people want to get a programming job, they get a CS degree. Do note that CS ≠ programming, although most people go into CS to learn how to program; hence why programming courses are so often associated with CS degrees.
SE is designed specifically for those people who want a programming degree (technically, an SE degree is a degree in software creation and maintenance). This, however (in my experience), does tend to be less prestigious and more looked down upon, as it implies that you have no skills other than coding, and so are at best a code monkey.
If you do want to learn to program (I'd recommend it), try looking at Python. It's very neat and an easy way to learn the ropes.
I'm not entirely sure what it is that you're asking about. Are you saying that you have an idea for a new project that would need to be run on a cluster? And that you think the best solution is to gather other contributors to the project who will enhance the code and share computational time?
If so, I don't believe that that is the best approach. Instead of that, I would either:
A) Save up a couple hundred bucks to build a small PC that you can leave running and crunching data (if the project is as parallelizable as you're implying, then something like an 8-core AMD chip might be a good buy).
B) Use Amazon AWS to lease some computational time (it's pretty cheap).
C) Look into GPU programming. If you don't use the GPU in your computer a lot, you can devote it to running this project over its many cores. I have seen GPU programming in Python and Java, and I'm sure you can find libraries to do it in other languages too.
D) If you're a student, you can see if your school's CS department has a cluster that they can let you use. You won't be able to hog the cluster 24/7, but you should be able to get some time on it if your project is worthwhile.
I've always heard Comer and Stevens is sort of the canonical textbook on IP networking. I haven't actually read it, but I pre-date it by about a decade...
https://www.amazon.com/Internetworking-Douglas-Stevens-David-Comer/dp/B000MBT73S
You're going to want to be comfortable with graph theory and boolean logic to the degree that you can understand it, but you don't really need to be an expert in those subjects. Knowing more doesn't hurt, of course.
Mathematical proofs are the bread-and-butter of CS theory. You should be very comfortable proving theorems. Working through a book like Velleman's How to Prove It is strongly recommended if you don't have a background in mathematics.
You should also be comfortable with data structures and algorithms at the undergraduate level. If you can work through Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein, you'll be in a good position for the graduate-level work.
Other fields that it doesn't hurt to be familiar with include Combinatorics, Real Analysis, Algebra and Number Theory, Probability, and Statistics.
If you're in a CS program and plan to code professionally after school outside of academia, spend time studying what they're not going to teach you: software engineering. Most new college grads I interview can rattle off the time complexity of data structures and talk about algorithms just fine, but they've never used source control, or a Unix command line, or a debugger, or...
The standard-issue books I give out to people I mentor are:
I pick these books because they're very approachable and made a huge impact on me personally. While they can give you the raw material, you've got to use it or it won't stick. So think up a project you're interested in working on for a while and use that as your sandbox to work with some of these ideas.
Hope that helps. Feel free to PM me if you want to discuss any of this more.
Kernighan and Ritchie's (K&R) "The C Programming Language", second edition (ANSI C) is precise, wastes few words, and it's still the best intro to C I know.
If you plan to master C, I'd also recommend Harbison and Steele's "C: A Reference Manual", PJ Plauger's "The Standard C Library", and Peter van der Linden's "Expert C Programming: Deep C Secrets". Then, to catch up with the C99 and C11 standards, check out Klemens' "21st Century C", esp. the 2nd edition.
I am familiar with this material, and I think that the way the topic is presented in your lecture slides is confusing and obscure.
I strongly recommend going over the mathematics of probability again on your own, then looking at this material again. The maths will make more sense. Additionally read the relevant sections of any textbooks you were provided for this class. If those explanations don't help, Google for other explanations of the parts you don't understand.
I would also strongly recommend going through the material they gave you, identifying specific things that you don't understand, and bringing those to a tutor, explaining that you're struggling. Unless your university is run vastly differently from the ones I've had experience with, tutors are there for you to go to for help (after taking the appropriate measures to figure things out on your own).
I don't know if you've already read something similar and it didn't help, but I've found the explanations in the chapters about reasoning under uncertainty in the textbook Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (Chapters 8-10, if memory serves) to be helpful. It's a well known textbook, so I'd expect that your library should have a copy.
Code Complete. A copy is given to every new software engineer who starts at our company. If you didn't take a Software Engineering class in college, this gives you really good information about how to do your job that the rest of your classes didn't teach you. If you did take a Software Engineering class, this reiterates the class, and most importantly (in my mind), teaches how to structure your code to be maintainable for years.
The Pragmatic Programmer is a good 'un.
And I've been told The Mythical Man-Month is worth reading, too.
True, but this might also help break something through to them. Even though Scratch is a visual tool for learning to program, it provides enough programming tools to make things you wouldn't guess were possible in Scratch at first glance.
Just seeing that something so childish can be used to make games can help internalize the idea that programming is about problem solving, and is somewhat independent of the exact semantics of the language, as long as it provides you variables, conditionals and loops.
Check this out: https://scratch.mit.edu/studios/810/
Linux "time" will tell you close enough what the cpu run time and wall clock time was:
https://stackoverflow.com/questions/385408/get-program-execution-time-in-the-shell
Do internet searches and read the man pages for more information on the Linux command "time".
If my program were called "cppexperiment", I might type "time ./cppexperiment", and that would output the CPU run time and wall-clock time.
Based on your post history, coding bootcamps might be better for you.
They're especially good for web development.
Also, there are a lot of self-learning resources like freeCodeCamp.
Basically, yeah. My prof linked this paper as an example of what he was talking about.
We share quite a similar interest, especially with regard to creating systems that can deal with non-English/European languages ;) To start, you can head to scholar.google.com and try the keywords "under-resourced language nlp". You'll notice that most, if not all, of the current state-of-the-art methods are statistical machine learning methods, so you'd want to do a Masters in CS with a focus on machine learning/NLP. Be warned that this will involve a lot of math...

The typical machine translation model works by being fed a whole lot of training data in the source language and the corresponding translations in the target language (a grossly oversimplified picture). The model then 'learns' the patterns present in the data and how to produce translations for new and unseen data. The primary difficulty is often in obtaining the parallel training corpus for the language pair you want, especially for an under-resourced language, where you might have to create one yourself. This is an expensive and time-consuming task.
A long time ago, I took this excellent free online course: https://www.coursera.org/course/nlp, which I think is a great intro to the field. It isn't running now, but you can always search around, find its video lectures, and just watch them. You also want to learn programming before you can build anything useful in this field. It's possible to teach yourself most of the material through free resources/books online, but it would take a while, especially if you have zero CS knowledge. Minimally, you need to learn some programming, signal processing, algorithms and data structures, and various other basic NLP tasks (stemming, tokenizing, etc.) before you can be productive in the field, and I'm afraid your current skills in foreign languages won't help much here.
You'll get better answers for this on /r/learnprogramming.
That said, there's a pretty well-regarded free class on Coursera, although it does presume general Java knowledge as a prerequisite.
The first thing I would do is go through Khan Academy's CS videos. It's pretty short but will help you understand everything you subsequently read better. http://www.khanacademy.org/science/computer-science
He's right. It's a hardware exception. See: http://en.wikipedia.org/wiki/Interrupt_descriptor_table
You can see that a lot of those hardware exceptions correspond to specific BSOD causes: http://technet.microsoft.com/en-us/library/Cc750081.bsodd_big(l=en-us).gif
> reflog exists out of the box?
Yes, it's controlled by the core.logAllRefUpdates configuration option. By default it's only enabled for checked-out clones, but you can also manually turn it on for the "bare" repository that's stored on the server.
This assumes you have direct control over the server that the central repository is on. I have no idea whether the same functionality is offered by hosted Git services like GitHub or Bitbucket.
> the problem with that is, for example my IDE does an automatic fetch as soon as I open it and periodically afterwards.
Even if you fetch a version of the repository that has its history wiped, your local reflog will still have a pointer to the previous state of your remote-tracking branch, until it gets garbage-collected.
It's an alternate form called strong induction. Basically, to prove P(n+1), you assume the truth of P for everything from the base cases up to P(n).
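In symbols (ordinary induction's step versus strong induction's, sketched in LaTeX):

```latex
% Ordinary induction step:
\forall n \,\bigl[ P(n) \Rightarrow P(n+1) \bigr]
% Strong induction step: assume every case up to n, not just n itself:
\forall n \,\Bigl[ \bigl( \forall k \le n,\; P(k) \bigr) \Rightarrow P(n+1) \Bigr]
```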
Git is a version control system. While you're programming something, you usually want to keep track of your changes over time, be able to branch off to run temporary experiments, and that kind of thing. Version control systems help you do that.
With version control, you have "code repositories": places that keep a copy of your code and the history of changes. GitHub is one of the services that will let you host a git repository online, and it's one of the biggest. Another popular repository host is GitLab.
As far as getting started, Git has a getting started tutorial, and that might be a good place to begin.
(I'm assuming that when you say "router", you mean the consumer-targeted routers that typically have a few Ethernet ports combined with a Wi-Fi access point. If not, please clarify.)
This is possible, but not really practical. It's fairly easy to install a Linux distribution on many routers and run whatever software you want, but their capabilities tend to be so limited compared to a more general-purpose computer that it's not worth it. (This goes double if you're using older-generation hardware you found in the trash.)
As an example, let's take the TP-Link AC1750 router, the current best-seller on Amazon. It has a Qualcomm Atheros QCA9558 CPU, which has a 750MHz MIPS core, and only 128MB of RAM. Based on these random benchmarks I found, it seems to be anywhere from 10 to 100 times slower than a single core of an Intel Core i7-3770 processor (at least for cryptographic operations, which might not be representative).
Let's be optimistic and assume it's only 10 times slower. That means you would need approximately 80 routers to equal the performance of 8 hyperthreads from a single decade-old Intel CPU. And it would probably cost less to buy that CPU (plus all the other hardware it needs) than the extra money you'd be paying on your electric bill to run 80 routers, even if you got them (and their power supplies and network cables) for free.
On top of that, solving any given problem on a distributed system is typically much slower than on a single machine of equivalent power, because of communication overhead. There are very few problems for which you can solve them 80 times as fast by splitting them across 80 machines. Usually, the individual machines have to coordinate and exchange information in order to make progress. This adds a lot of overhead, which makes your distributed "supercomputer" even less competitive.
Sounds like you're interested in machine learning. While I don't know the specifics of that exact kind of problem, in general your first step should be to learn how probability applies to programs. If you're good with probability, you could start with an online machine learning class; a quick Google search found this: https://www.coursera.org/course/ml. I haven't taken this class myself, but I know a lot of people who have and have enjoyed it; it's Stanford's intro machine learning class online.
Jumping off of this, one of my favorite sources of external entropy has to be Cloudflare's lava lamp wall:
https://www.cloudflare.com/learning/ssl/lava-lamp-encryption/
Big-ass wall of lava lamps. Point a camera at it. Process the image into a random number. Incidentally, some encryption utilities you can run on your own computer will use noise from your webcam the same way.
YES, a degree focusing on Microsoft is a BAD, BAD thing. Horrid. You can apparently already program and want to focus on CS and math, so there's absolutely no need to take a programming major, and definitely not programming in MS environments. It would be like saying "huh, I can program, so I'll get myself another degree in programming, just so I can have an easy degree". Also, MS technologies are mostly for code monkeys, and I doubt you'd much care to be a code monkey. As a good example, look here: https://sites.google.com/site/steveyegge2/five-essential-phone-screen-questions . This guy works at Google, and none of his interview questions are Microsoft-specific; he even goes on a bit about a *nix command (grep). Put lightly, a degree in Microsoft will not get you a job at Facebook or Google, and it might even make your resume look worse to them. Like how some people laugh at "Oracle certified" (#6 of http://steve-yegge.blogspot.com/2007/09/ten-tips-for-slightly-less-awful-resume.html ; same guy, I know, but for lack of other sources off the top of my head).
However, the most important part about learning math and CS, is to understand the fundamentals of them. This is, oddly, something not encouraged by most programs and textbooks, so if you do go into a program of math and CS, MAKE SURE you understand the fundamentals of the class subjects, and not just "what's going on".
Also, if you want, feel free to take some remedial classes first; there's nothing bad about that. It'd be worse to go into a class that builds topics on top of topics you don't know (pretty rare actually in CS, a bit more of an issue in math, though not as much as one might think).
So dude, just go for it. It's not going to kill you. Indecision might.
> Of course it's a joke
Let me introduce you to The Story of Mel. You should read through the whole original story, then check out this article for some additional explanation.
You can use MacDrive (http://www.mediafour.com/products/macdrive) and/or HFSExplorer (http://www.catacombae.org/hfsx.html) to view the contents of HFS formatted drives on Windows. Never used MacDrive myself, but HFSExplorer has never let me down.
Well, what I was trying to say is that on an average computer you have billions of lines of code worth of applications running. It's impossible, without knowing the machine, to tell what is taking up the space.
If you really want to know, look into it. Buy (or install the trial version of) DaisyDisk and look where your disk space is actually going.
Use a bootable distro, like Kali Linux (https://www.kali.org/). You can add persistent storage to it as well, as long as your flash drive is big enough. If you don't want to use a flash drive, grab the iso and run it with VirtualBox.
I'm afraid I'm not your expert, but since no one has answered for a full day: I would say you need to talk to the folks over at the UEFI Forum and see if they can hook you up with some help.
Have you tried asking around your company for an expert? Maybe one of your coworkers has experience in this sort of thing.
You might also try the folks over at Ubuntu and see if anyone has a suggestion (or paid support, depending on the scale of what you're doing).
Otherwise you might need to contact whoever you're going through for boards and see if they can hook you up with a custom solution.
You have primary logic gates and universal logic gates.
Primary Logic gates:
>There are three primary logic gates: AND, OR and NOT. All other logic gates could be represented as a combination of these.
Universal Logic Gates
>Though all logic gates can be built from the primary logic gates, NAND or NOR alone can each implement all of the primary gates. Thus, these are called universal logic gates.
Logic function | Operation | Type
---|---|---
AND | A*B | Primary
OR | A+B | Primary
NOT | ¬A | Primary
NAND | ¬(A*B) | Universal (NOT AND)
NOR | ¬(A+B) | Universal (NOT OR)
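To see that universality concretely, here's a tiny C sketch building NOT, AND, and OR out of NAND alone:

```c
#include <stdio.h>

int nand(int a, int b) { return !(a && b); }

int not_(int a)        { return nand(a, a); }             /* NOT a = a NAND a */
int and_(int a, int b) { return not_(nand(a, b)); }       /* AND = NOT(NAND)  */
int or_(int a, int b)  { return nand(not_(a), not_(b)); } /* De Morgan        */

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d NOT a=%d\n",
                   a, b, and_(a, b), or_(a, b), not_(a));
    return 0;
}
```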
If you're looking for a tool to simulate digital logic designs, and not necessarily a game, there are definitely options out there. Paul Falstad's web-based circuit simulator is a decent entry-level one, and as a bonus it also simulates analog circuits. I've also seen references to Logisim which is apparently popular as a classroom tool.
A professional engineer would probably use a less user-friendly but more powerful simulator such as LTSPICE, or define their logic using a text-based language like Verilog.
You'll probably have more luck asking this question somewhere else, such as /r/ECE.
I've always appreciated Udacity's videos, both for their solid introductory coverage of topics and for the format: a series of short videos and quizzes, instead of the lecture-and-one-arbitrary-question approach that Coursera takes.
Probably a good place to start: https://www.udacity.com/courses#!/data-science
Take a couple CS courses first, learn the basics of SQL and a statistical software package (I suggest R), and you'll be able to find someone willing to take the free labor for an internship.
Data science is pretty loosely defined. Your math + CS degree combined with any sort of actual experience should be a great start, and you'll fall into whatever niche you are best at. I'd recommend taking any additional elective stats courses you can, and maybe some online courses like this data analysis course from Coursera: https://www.coursera.org/course/dataanalysis.
I wouldn't worry about which programming language or dialect of SQL you learn at this point. As long as you're competent in one, you'll be able to learn others. As /u/jasonwatkinspdx says below, Python is a good choice.