There is a tool called Compiler Explorer that shows you the disassembly of your program and matches source lines to asm lines. It is very well known in the C++ community and makes this workflow quite painless.
A word of advice: I'm a 25+ years C veteran. It's lightning fast, and it's mandatory when you plan to wander into Linux kernel or embedded development.
But it's also a monster, full of unspecified behaviour, dark corners and sources of the most dubious bugs. If you plan to use C professionally I'd highly recommend taking a look at the specs, learning what is specified and what is not, and maybe also exploring how C code is translated by different compilers.
godbolt.org is the holy grail here
It could be due to thread safety. C++ guarantees thread-safe initialization of static local variables, and that inserts a chunk of code. -fno-threadsafe-statics will get rid of this (but then your code might not be thread safe, so be careful).
There are other reasons a static will introduce a bit of overhead. For example, if the object has a constructor, the compiler needs to make sure it only gets called the first time the function is called.
If you need to dive into this a bit further, Compiler Explorer is a good tool.
So, for the sake of clarity, the website http://godbolt.org lets you type in some source code in C, and it then compiles that into the machine code for many common processor architectures, including RISC-V.
Nobody has remarked on your uint256 types. Are they really 256-bit unsigned?
If so, is this directly supported by the language (and which one)? Because these would be handled somewhat differently from common types like uint64.
Also, such optimisations are usually done by the compiler. Big-integer types make things more complicated and can make optimising harder.
If I write your example in C:
#include <stdio.h>

extern int z;

int main() {
    int x = z * 100 / 1000;
    int y = z * 1 / 10;
    int s = z / 10;
    printf("%d %d %d %d\n", z, x, y, s);
}
Then gcc with -O3 will combine those three divide-by-10 computations into one, and turn the division into a kind of multiply (something like * 0.1, but for ints). You can put the code into http://godbolt.org and try different compilers and options.
But it will all be inside the compiler: not inside the language (which is an abstract thing), nor the OS (only used to launch the program and display the output), nor the ALU, which simply does the operations it's told (the multiplies and shifts the compiler outputs).
If you haven't done assembly before, I'd probably start with Paul Carter's <u>PC Assembly Language</u>, which is 32-bit x86.
Once you've gone at least some way through the book, I'd strongly recommend compiling and disassembling some simple functions, e.g. using Matt Godbolt's excellent Compiler Explorer (use -m32 as a compiler option to get 32-bit code), and seeing if you can step through the resulting assembly and understand at least some of it. You're not expected to know every last opcode (you can Google them), but you should at least have a sense of how to read assembly syntax (note that there are two popular syntaxes for x86; Carter's book and godbolt.org default to Intel syntax, but GNU tools default to AT&T syntax), recognize common things like mov, ret and eax, and follow along enough to figure out the rest as needed.
Doesn't it make sense to examine your code with the optimizer turned on? It may also be helpful to look at the assembler output of a simplified version. For C++, Compiler Explorer is very popular for exactly this reason.
Italian can't be understood by everyone here 🤷🏻‍♂️
For C++ I suggest 3 tools.

Compiler Explorer (godbolt.org):
- has a lot of compilers and versions
- can view the assembly
- can execute code
- can use external libraries by cloning the repo from git
- doesn't support mobile; it doesn't really work on phones

C++ Insights (cppinsights.io):
- translates the C++ syntax sugar into its verbose counterpart
- what you see vs what ~~she sees~~ the compiler sees
- very good for learning the language