Edit: I didn't realize the link was just the first chapter. If you really liked it, I do suggest purchasing it. You can find it all online for free, but I do highly recommend just having this book. It's a fun read.
Here's an excerpt that I really love right from the beginning of the book.
> All programmers are optimists. Perhaps this modern sorcery especially attracts those who believe in happy endings and fairy godmothers. Perhaps the hundreds of nitty frustrations drive away all but those who habitually focus on the end goal. Perhaps it is merely that computers are young, programmers are younger, and the young are always optimists. But however the selection process works, the result is indisputable: "This time it will surely run," or "I just found the last bug."
Here's a link to a physical copy on Amazon if you want it.
edit: Bonus Dilbert Comic
As an embedded firmware developer recently interested in game design, I can maybe give you some pointers.
Arduino is more of a maker tool than an engineering one, but it's definitely good to get you started. Think about Arduino like you think about some simple game engine (like GameMaker Studio?): there are certainly people making impressive stuff with it, but pros won't consider it powerful/flexible enough. (I hope my analogy works; I'm still new to game design haha)
The book Making Embedded Systems by Elecia White is pretty good. It is beginner-friendly and quite good at explaining typical embedded-system trade-offs.
> There's also latency vs bandwidth considerations.
So overlooked! Decoding complexity/power is neither inherently good nor bad. Rather, it is an engineering tradeoff -- compact CISC instructions (some as short as a single byte) use less memory/cache bandwidth and can be packed densely into caches. The RISC/CISC war was ended by the Pentium Pro (aka P6) -- a microcoded RISC micro-architecture implementing a CISC architecture. All of Intel's mainstream desktop/laptop CPUs since then have been P6 derivatives (Atom is a separate story). AMD's Zen micro-architecture (and its derivatives) is a ground-up design based on the same model as P6: a RISC micro-architecture implementing a CISC architecture. So pretty much all RISC/CISC debate since ~1996 has been moot.
The real question is what are the tradeoffs? If I make my caches bigger and reduce my core floorplan, what micro-architectural optimizations am I giving up? Obviously 90% cache / 10% core is wrong. But then 90% core and 10% cache is probably wrong, too. So you can't reason in absolutes. You have to run the numbers on real code, using a credible model of the design and see what the various partitions do to real-world performance. There is no shortcut. And there's a whole textbook written on that.
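To make that concrete, here's the kind of back-of-the-envelope arithmetic you do before firing up a real simulator. The hit times, miss rates, and penalties below are invented purely for illustration:

```python
# Hypothetical numbers -- just to illustrate the kind of tradeoff analysis
# described above, not real silicon data.
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in cycles."""
    return hit_time + miss_rate * miss_penalty

# Candidate A: bigger L1 cache -> lower miss rate, but slower hit time.
print("big cache  :", amat(hit_time=4, miss_rate=0.02, miss_penalty=100))
# Candidate B: smaller L1 cache -> faster hits, more misses.
print("small cache:", amat(hit_time=2, miss_rate=0.05, miss_penalty=100))
```

Which candidate wins depends entirely on the numbers you plug in -- which is the whole point: you have to measure on real workloads, not argue from first principles.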
Computer Architecture: A Quantitative Approach by Hennessy and Patterson is the definitive book used in universities all over the world. Hennessy created MIPS, and Patterson's Berkeley RISC work is what SPARC grew out of.
https://www.amazon.com/Computer-Architecture-Quantitative-Approach-Kaufmann/dp/0128119055/
We used the 2nd edition in school way back in the 90's but it's up to the 6th edition now. We covered branch predictors and Tomasulo's algorithm for out of order execution back then. I'm sure the latest edition has significant updates.
We modeled everything in C, but these days our chip architects model everything in even higher-level languages like Python. It is easier to simulate the performance effects of different cache sizes, numbers of pipeline stages, etc. in a high-level language, using traces of program execution.
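As a toy illustration of what trace-driven modeling looks like (not any real design flow -- the trace and parameters here are made up), a direct-mapped cache replayed against an address trace is only a few lines of Python:

```python
# Toy trace-driven cache model: replay an address trace against a
# direct-mapped cache and count hits/misses. Illustrative only.
def simulate(trace, cache_lines=256, line_size=64):
    tags = [None] * cache_lines          # one tag per direct-mapped line
    hits = misses = 0
    for addr in trace:
        block = addr // line_size
        index = block % cache_lines
        tag = block // cache_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag
    return hits, misses

# Fake trace with a 512-block working set, standing in for a real program trace.
trace = [(i % 512) * 64 for i in range(10000)]
for lines in (256, 1024):
    print(lines, "lines ->", simulate(trace, cache_lines=lines))
```

With 256 lines the working set doesn't fit and everything misses; with 1024 lines almost everything hits after warm-up. Real architects do the same thing with far more detailed models and real traces.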
After the architecture is decided, they write specs, and then the implementation team writes Verilog and goes through the whole process of synthesis, place and route, timing analysis, LVS/DRC, etc.
https://www.amazon.com/Digital-Design-Computer-Architecture-Harris/dp/0123944244
This book teaches you how to implement a 32-bit MIPS processor, in a single-cycle or multi-cycle design, using Verilog.
This was famously used by the lead manager of the IBM OS/360 development project to explain why adding more programmers wouldn't make it possible to deliver the OS sooner.
[Edit: For anyone who's interested, he wrote a book called The Mythical Man-Month on this exact topic.]
A good place to start is the book "The Mythical Man-Month" by Fred Brooks. https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
IT project management is in many ways no different from project management in general, but it does require a solid understanding that the tasks are usually enormously complex by nature.
You will therefore find that your very biggest hurdle is not your team, who handle the execution, but your stakeholders (aka stakeholder management). Without laying the stereotypes on too thick, very few "project sponsors" or similar stakeholders from the business side have even the faintest sense of why IT/software development is hard to estimate in terms of time and budget.
A classic example:
Challenge number two is that your stakeholders / the rest of the business almost always live in a "waterfall" world, even if you are running agile or similar methods.
That means that you as a team can often feel enormously squeezed because the outside world doesn't understand, or can't be bothered to play along with, your best practices. (But it sounds great for the company to proclaim "we are agile".)
Anyway, I should say that I work in product management day to day, so it's been a long time since I lived in a "project-only" world.
Edit: typos
I haven't done that LISP project, but I recommend Crafting Interpreters.
I know that you say you want to build a compiler. And this tripped me up when I first started. But an interpreter and a compiler are very, very similar--to the point of being nearly identical. You can build a compiler by building an interpreter, then swapping out the immediate-calculation functions for functions that generate machine code (or LLVM IR or whatever). You will often want at least a basic interpreter for your language anyway, if for no other purpose than simplifying expressions.
I also recommend the "Dragon Book", which is pretty much the textbook on compiler design. Reading this book was absolutely eye-opening, because it presents a bunch of very basic and general information that everybody in language design just assumes you know.
I really struggled when I started, before I read this book, because so much of this stuff is much simpler than I imagined. For instance, for my first language project, I got stuck on codegen and never finished. I was stuck on the basic question: okay, but how do I know what order to generate the code in? It seemed like a very hard question, requiring DAGs and dependency analysis and all sorts of shit like that, and I got discouraged and quit. Especially because so many tutorials are like "now that you've parsed your language, interpreting it is easy and left as an exercise for the reader". Guess what? It is easy. It's just a brain-dead post-order traversal. But literally the only place that ever said it that simply and directly was the Dragon Book.
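For what it's worth, here's a minimal sketch of what I mean (hypothetical AST classes, not from either book): the same post-order walk either computes the value or emits stack-machine instructions, which is exactly the interpreter-to-compiler swap described above.

```python
# Hypothetical, stripped-down AST for arithmetic expressions.
class Num:
    def __init__(self, value): self.value = value

class Add:
    def __init__(self, left, right): self.left, self.right = left, right

# Interpreter: post-order traversal that computes values immediately.
def evaluate(node):
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)   # children first, then the op

# "Compiler": the exact same traversal, but emitting stack-machine
# instructions instead of computing the result.
def compile_expr(node, code):
    if isinstance(node, Num):
        code.append(("PUSH", node.value))
        return code
    compile_expr(node.left, code)
    compile_expr(node.right, code)
    code.append(("ADD",))    # operands are already on the stack in the right order
    return code

expr = Add(Num(1), Add(Num(2), Num(3)))
print(evaluate(expr))            # 6
print(compile_expr(expr, []))    # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('ADD',), ('ADD',)]
```

Swap the tuples for real machine code or LLVM IR and you have the skeleton of a compiler.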
Fair point.
I should also have posted a reference to the Black Magic book, which covers this in great detail.
Amazon link: High Speed Digital Design: A Handbook of Black Magic
> Well, multiple devs have came forward with a release month but ultimately they keep say it'll be further down in the future.
Nothing has actually been officially stated. Nothing. Anyone who tells you differently and isn't pointing to something from the official Bioware blog is wrong. Period.
> For team size, its unusually small and saying that they'd step on each others toes is a bit disrespectful to other games makers.
I am a software engineer. I in fact do know what I'm talking about here. In this case, they don't need a large team to explore different concepts for how to revamp things. When the problem is one of experimentation, adding more development effort won't help. And putting in anything but mostly placeholder art is a waste of time if that concept doesn't get used, and if it does get used, placeholder art is fine until you need to actually fully design and develop the system.
> Small games have taken hundreds more of people.
Small games can be built by fewer than a dozen people. I've done it before; I was part of a startup that did just that. Team size does not correlate with game size--small games can be built by huge teams and huge games can be built by small teams. It's a matter of how much of the work can be parallelized, and how much speed-up you gain by growing the team vs. how much slow-down you take on from the increased need for communication between team members. You should read The Mythical Man Month to get a better understanding of why this is.
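If you want a feel for why more people can actually make things slower, here's a deliberately crude model (every constant here is invented, just to show the shape of the curve): the parallelizable work shrinks with team size, but coordination cost grows with the number of pairwise communication channels, so total time has a sweet spot and then climbs again.

```python
# Toy model: time = serial work + parallel work / n + per-channel coordination cost.
# All constants are made up purely to illustrate the tradeoff.
def project_time(n, serial=10.0, parallel=100.0, per_channel=0.2):
    channels = n * (n - 1) / 2          # pairwise communication paths
    return serial + parallel / n + per_channel * channels

for n in (2, 5, 10, 20, 40):
    print(f"{n:2d} devs -> {project_time(n):6.1f} time units")
```

Past a certain team size, the communication term dominates and the schedule gets worse, not better.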
> The choice they're making with this small team shows either that its a concentrated group or that they no longer care about the game as much as they should.
And you didn't once stop to think that maybe it's actually that they wanted a concentrated group?
Apologies, it looks like I got the name wrong from memory, it’s actually “Making Embedded Systems” and here’s the amazon link: Making Embedded Systems: Design Patterns for Great Software https://www.amazon.com/dp/1449302149/ref=cm_sw_r_cp_api_glc_fabc_3o9cGbZG63W6N?_encoding=UTF8&psc=1
I agree, particularly with the first paragraph.
Ben's CPU videos coupled with Harris and Harris https://www.amazon.co.uk/Digital-Design-Computer-Architecture-Harris/dp/0123944244 is a great way to get junior EEs started with FPGAs.
I would start from there (CPU), but you won't be missing or struggling with anything if you start from somewhere else, or jump around a bit.
To be fair, Ben Eater's 8-bit computer uses microcoded ROM for control of the processor, which is just one of many control schemes for a processor. Though, that's outside the scope of this comment.
OP, check out Digital Design and Computer Architecture for a good intro after you watch through some of Ben Eater's stuff. I'd HIGHLY recommend doing the questions at the end of each chapter to make sure you actually understand the stuff.
The book mostly covers question 3 (and I guess 4) above. 1 and 2 are, as others have stated, more VLSI questions.
On that front (again, briefly): the chip designers use hardware description languages (like Verilog and VHDL) to describe the functionality of the chip at the register-transfer level (RTL). Synthesis tools then convert that RTL description into a gate-level netlist. They then "lay out" the chip (describe how they want things positioned with respect to one another), and the chip fabricator then uses its libraries to turn the layout into a series of photolithography masks… you know what, this process is too damned complex to go through right now. Sam Zeloof has a great series of videos on this. He built a chip fabrication setup in his folks' garage when he was in high school. He's insane.
But How Do It Know? is a really good book that explains how computers work in a very simple way. There’s also a short YouTube video that sums up the concepts in the book.
But How Do It Know? - The Basic Principles of Computers for Everyone https://www.amazon.com/dp/0615303765/ref=cm_sw_r_cp_api_glt_fabc_WM1AQKMRCGNFPYW5T2F5
Find it here on Amazon. It's actually a pretty good textbook and worth reading even if you're mostly a software developer.
The "Pillar" book. "Computer Architecture: A Quantitative Approach" 6th Edition by John L. Hennessy (Author), David A. Patterson (Author). It's kinda the bible of high performance computing. (the "downside" is that you always need the newest edition, because they need to update it after every couple of years).
There's also this one, but it tackles much more complex problems; the other one fit better with the topics you mentioned, I think. Kind of like "Vietile Sfintilor" (The Lives of the Saints) vs. the Bible :p
I also wouldn't hesitate to recommend Pong Chu's Verilog/VHDL books, which are very practical.
Digital Design and Computer Architecture, Second Edition is another book I recommend because it takes primitives and then successively builds components that are finally integrated to produce a RISC processor (bottom-up).
https://www.amazon.com/Digital-Design-Computer-Architecture-Second/dp/0123944244/
Other links you may want to check out:
https://www.reddit.com/r/ECE/comments/50tlkl/fpga_bookresource_reccomendations/d7c08i8/
If you plan on doing more of this, here is one of the better textbooks on the subject. It takes a very practical approach rather than pure theory. It is geared towards higher speeds than what you are working with, but the design techniques still apply.
http://www.amazon.com/High-Speed-Digital-Design-Handbook/dp/0133957241
You don't need architecture per se unless you want high performance. If you want high performance, you pretty much want to learn the equivalent of a class in computer organization and computer architecture. So if you aren't dealing with raw pointers directly, advanced multithreading, vector/SIMD math, HPC, 3D graphics, GPU cards, etc., you really don't need computer architecture. If any of those things interest you, you should probably learn it, and you can definitely learn it as you learn C++. If you are mainly dealing with the C++ standard library and GUI stuff, you will rarely encounter computer architecture topics.
The standard texts (and my favorites) are
http://www.amazon.com/Computer-Organization-Design-Fifth-Edition/dp/0124077269/ (Patterson and Hennessy).
http://www.amazon.com/Computer-Architecture-Fifth-Edition-Quantitative/dp/012383872X (Computer Architecture, Fifth Edition: A Quantitative Approach, Hennessy and Patterson)
Those are not referral links, and I recommend they be read in that order.
Some people might say those books are a bit rough for a beginner, so try to get a sample or borrow one to see if you like them. There are some other easier computer org /arch books out there but I'm not aware of any really good legally free ones off the top of my head.
I can't tell if you are joking or not. There is a famous book already written about this called The Mythical Man-Month: https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
Well, depending on how you look at it, it's either Boolean algebra or black magic.
TL;DR: You can click through to play with it and see what happens ;)
The idea is that you set one of the A/B/C inputs and one of the X/Y/Z inputs.
There are 9 outputs corresponding to 9 different score values for a specific game.
Once you set the inputs, one of those outputs will be set to 1, denoting the score.
The circuit uses fairly basic logic gates - AND and OR, plus one NOT.
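If it helps to see it outside the simulator, here's a rough software model of that kind of circuit (the wiring is my guess from the description above, not the actual schematic): each of the 9 outputs is just the AND of one row-select input and one column-select input.

```python
# Rough software model of the circuit described above: 9 outputs, each the AND
# of one A/B/C input with one X/Y/Z input. The wiring here is a guess from the
# description, not the actual schematic.
def score_outputs(a, b, c, x, y, z):
    rows = [a, b, c]
    cols = [x, y, z]
    # Output [i][j] is 1 only when row i AND column j are both selected.
    return [[r & q for q in cols] for r in rows]

# Select B and Z: exactly one of the nine outputs goes to 1.
for row in score_outputs(0, 1, 0, 0, 0, 1):
    print(row)
```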
> They should just mass hire and get shit done.
https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
Mass hiring would literally slow them down and make it take longer. Please consider reading the book above so you can stop saying such insane things.
>Where can I read more about how modern CPUs actually work? I write application software in C++ but there isn’t any need for low level optimizations. However I’m curious though.
This is actually a really good book.
Sounds like cool stuff.
For Zynq specifics and great examples on using Vivado for Zynq SoC development, have you seen the Zynq Book? There is a lot of great information in there.
For bottom-up courses on FPGAs I'm not super familiar with any. A great textbook which covers the introductory details of hardware design, FPGAs, and hardware description languages is Digital Design and Computer Architecture by Harris and Harris. Not all chapters of the book are relevant to what you're interested in, but some contain the introductory information I think you're looking for. If you're uninterested in paying for the book, it shouldn't be too difficult to find it online 😉.
I don't know if this is even still relevant to your question, but I always wondered how a PC works and I stumbled over this YouTube video: How a CPU works.
Of course it is only the most basic computer you can build, but with modern computers the basics are the same; you just have some more things that help do other things faster.
The video is based on this book: But How Do It Know?, which I have read and really recommend if you want to know a little more about the topic.
Another thing to consider is that some of these problems are bottlenecked by *how many* people can work on them at once. There's a book they made us read as part of my degree called "The Mythical Man-Month": basically the adage that you can't pay 9 women to birth a baby in one month.
A lot of complex systems can be this way too: there is one main system you are building, debugging, testing, etc., and at some point it takes more time to coordinate and communicate lots of complex information than you save by splitting the work between more people.
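Brooks's back-of-the-envelope version of this: with n people, the number of pairwise communication channels is n(n-1)/2, so coordination cost grows much faster than headcount. For example:

```python
# Pairwise communication channels grow quadratically with team size,
# while (perfectly divisible) work only shrinks linearly.
def channels(n):
    return n * (n - 1) // 2

for n in (3, 6, 12, 24):
    print(f"{n:2d} people -> {channels(n):3d} communication channels")
# 3 people -> 3 channels; 24 people -> 276 channels
```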
Agree with Patterson & Hennessy, and as it happens they also wrote the graduate textbook:
It’s called “Computer Architecture: A Quantitative Approach”.
The seminal textbook is this. Usually called “Hennessy and Patterson”.
Nand2Tetris is also a neat way to learn about it.