Your code will be a little bit faster. That's all.
For GCC developers, this work cleans up some really hairy code (described as "Satan" on the GCC wiki) and makes GCC more maintainable, but that doesn't affect users directly.
It's popular these days to bitch about how much of a mess the binutils/gcc code is, but when you see an article like this or you realize that gcc currently supports around 40 backends, all from the one unified codebase, you start to get an appreciation for why things are done the way they are. Yes, there's a lot of build system magic and macro magic, but it's there for a reason.
> The first half of the article is just lamenting that C++, a language that itself predated C99, doesn't allow C99-style designated initialization of structures.
Pretty poor excuse, considering C++11 came 12 years after and still lacks the feature …
> It's also worth pointing out that even C99 has some fairly complicated acceptable syntax for that feature too:
> http://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
Most of those are GNU extensions and not part of the standard. C99 only accepts designators with the .field = value syntax and [index] = value syntax.
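For reference, a minimal sketch of the two designator forms C99 itself blesses (the struct and array types here are purely illustrative):

struct point { int x, y; };

struct point origin = { .x = 0, .y = 0 };             /* .field = value */
int small_primes[10] = { [0] = 2, [1] = 3, [2] = 5 }; /* [index] = value */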
>Why am I being downvoted? This is an honest question, I'm really interested in what benefits GCC has over LLVM & Clang.
Well, that's not what you actually asked. You said "is there really a good reason to continue to use GCC for projects?" (Emphasis mine.) Tone doesn't translate well over text, but I can see how people would view your statement as denigrating gcc.
For what it's worth, the answer is yes. For one, gcc's optimizer often outperforms clang's. For another, gcc supports a lot more languages than anything else.
Even restricting ourselves to just C and stipulating that clang is universally better feature and performance wise than gcc (which it might well be one day), there are still good reasons to use gcc at least some of the time:
Comparing the results against each other to check for both bugs in the compilers and bugs in a program. Undefined behavior can be revealed when two different compilers give different results. Much of that can be automated.
Fostering competition in the F/OSS compiler community. I think that clang's presence and achievements have driven the gcc team to improve in ways they wouldn't have otherwise.
You wish to use other tools that plug into gcc but not clang.
There are probably a few more.
Edit: One big one I forgot: Architecture support. gcc supports (or at least attempts to) a lot of architectures. A. Fucking. Lot.
> Hoping to avoid the need to write the whole compiler myself, I obtained the source code for the Pastel compiler, which was a multi-platform compiler developed at Lawrence Livermore Lab. It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.
>
> I then realized that the Pastel compiler functioned by parsing the entire input file into a syntax tree, converting the whole syntax tree into a chain of "instructions", and then generating the whole output file, without ever freeing any storage. At this point, I concluded I would have to write a new compiler from scratch. That new compiler is now known as GCC; none of the Pastel compiler is used in it, but I managed to adapt and use the C front end that I had written.
> Personally, I would prefer to live in a world where the Clang people were willing to contribute their improvements to GCC
Except that this is the world we lived in before Clang arrived. EGCS was another separate project (forked off GCC) that eventually was merged into GCC and became the official version of that software.
It seems to me that we have a couple of high profile cases (GCC vs. Clang, Firefox vs. Chrome) of new open source projects stirring up innovation from established "monopolies". Personally I think we should let that situation play out a little longer before deciding that merging is the best way.
I think in most instances of this originally happening they were due to compiler bugs. gcc, for instance, removes all comments very early on: http://gcc.gnu.org/onlinedocs/gcc-2.95.3/cpp_1.html (section 1.1).
If you ever find a case where a comment is breaking your code, you've likely just found a compiler bug and should either submit a ticket or try to fix it yourself for sweet "I contributed to <core language tool>" cred.
llvm-gcc is replaced by dragonegg. Basically a re-write of llvm-gcc to take advantage of the new plugin capability of GCC 4.5 and later.
"You have been involved with the GNU project long enough to be well aware that this kind of crowbar approach does not lead to much more than headlines about Free Software infighting." -- David Kastrup http://gcc.gnu.org/ml/gcc/2014-01/msg00178.html
> It's worth pointing out that this part of C99 was implemented in GCC 4.5.
Relevant link: http://gcc.gnu.org/gcc-4.5/changes.html#x86
> GCC now supports handling floating-point excess precision arising from use of the x87 floating-point unit in a way that conforms to ISO C99. This is enabled with -fexcess-precision=standard and with standards conformance options such as -std=c99, and may be disabled using -fexcess-precision=fast.
I am a library author. The problem with your request is that warnings "up there" (i.e., higher levels) are at best questionable and at worst utter nonsense. Such warnings can usually be fixed either by uglifying the source code (adding extra casts, unnecessary calls, etc) or by suppressing such warnings if the compiler supports this. Sometimes, however, it is simply impossible to get rid of a warning. The now famous example of this case is the "base class should be explicitly initialized in the copy constructor" warning in GCC:
struct base { base (); base (const base&); };
struct derived: base { derived (const derived&) { } };
g++ -W -c test.cxx
test.cxx: In copy constructor ‘derived::derived(const derived&)’:
test.cxx:9:3: warning: base class ‘struct base’ should be initialized in the copy constructor [-Wextra]
While in this case it is easy to suppress by adding an explicit default constructor call, there are cases where this is not possible (for details see the bug report for this warning). There is also no mechanism in GCC prior to 4.6 to selectively suppress this warning.
For what is actually involved to make this work, you want to read this wiki page: http://gcc.gnu.org/wiki/Better_Diagnostics
Macro expansion is the big item, along with tracking accurate line, column, and range information and preserving it across passes. Typedef preservation is also tricky.
I can't think of what you might be referring to. If you can find what it was that you read, that'd be interesting.
The GHC stack is fairly traditional and has the usual performance characteristics. It grows dynamically, and like gcc's "split stack" it allocates it in non-contiguous chunks, to make the allocation cheaper.
It's perfectly determinate if your operands are both computed to the same precision. Since comparing floating point values for equality is well-known to be a highly unstable operation, they need to have ensured that both operands were computed to the same precision. The comments in their own code reflected this.
They could easily have used any number of flags to ensure this. The behavior is well-documented under the -mfpmath flag here, in the manual. Instead, they tossed in a volatile keyword in a manner that also relies on unspecified behavior.
I don't know how it could be any clearer that the PHP developers didn't and still do not understand how this code works, and that is why this bug arose.
> I'm quite sure GPL licenced code can be incorporated into GCC standard library
The GCC Runtime Library Exception (http://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html) means that's false for general GPL code.
> did you really, honestly, expect a C code base that reaches back more than a decade to be under surveillance of ideal unit-tests, by modern standards?
Maybe not ideal, but some very old code bases do have unit tests, e.g. GCC.
Also, this seems an issue related to "testing" in general, not specifically "unit".
It is, anyway, very much a cultural problem.
It may be possible to fix it with LD_PRELOAD too, something like this that you can preload by hacking your /etc/init.d/httpd:
#define X87CW_PC53   (2 << 8)
#define X87CW_PCMASK (3 << 8)

static void __attribute__((constructor))
set_precision (void)
{
  unsigned short int cwd;
  asm volatile ("fstcw\t%0" : "=m" (cwd));
  cwd &= ~X87CW_PCMASK;
  cwd |= X87CW_PC53;
  asm volatile ("fldcw\t%0" : : "m" (cwd));
}
It is the infamous GCC bug 323 (which is actually a feature request, so you're right in saying it is a PHP bug).
http://gcc.gnu.org/ml/gcc/2005-01/msg00008.html
From: Richard Stallman <rms at gnu dot org>
Date: Sat, 01 Jan 2005 00:25:40 -0500

> is there a reason for not making the front ends dynamic libraries which could be linked by any program that wants to parse source code?

RMS: One of our main goals for GCC is to prevent any parts of it from being used together with non-free software. Thus, we have deliberately avoided many things that might possibly have the effect of facilitating such usage, even if that consequence wasn't a certainty.
We're looking for new methods now to try to prevent this, and the outcome of this search would be very important in our decision of what to do.
It's not really a big deal. When GCC is bootstrapped on a new architecture it is almost always cross-compiled for it. As far as I know GCC isn't even completely written in ANSI standard C now as it is. I'm pretty sure GCC requires GCC to build. I could be wrong, but I think if you tried to build GCC with Clang, tcc, icc, pcc, or any other C compiler, you'd have a hard time anyway.
A quick perusal of their website says the following:
> To build all languages in a cross-compiler or other configuration where 3-stage bootstrap is not performed, you need to start with an existing GCC binary (version 2.95 or later) because source code for language frontends other than C might use GCC extensions.
It does appear that the C compiler can be compiled with any conformant ISO C90 compiler, until the C++ switch at least. Still, like I said earlier...you're better off just cross-compiling GCC anyway.
GCC is already playing catch-up with Clang, from what I hear. GCC 4.6 will have the ability to, if it sees a symbol it hasn't seen before, offer suggestions of what symbol you may have meant, a feature taken directly from Clang.
Edit: I can't seem to find that change mentioned on the http://gcc.gnu.org/gcc-4.6/changes.html so I may be wrong about this. It does still seem that Clang's appearance on the scene has caused the GCC project to pick up its game a fair bit though.
>strtod_l isn't available on all platforms.
Then you use stock strtod(). I seriously doubt there's a non-POSIX platform that has a localized version of strtod().
>I'm sure you can show me the place in the gcc manual where the implementation choice for the semantics of doubles is documented.
http://gcc.gnu.org/wiki/FloatingPointMath
>It's also not at odds with ANSI C to open a root shell once an integer overflows.
A childish and/or moronic argument. Floating point math has never worked the same on all platforms, especially not in C, which was specified long before IEEE 754. If you think that C is portable with regards to how floating-point rounding is done, you just don't know the language.
>I think something like a defined semantics that's reliable and unsurprising for the programmer would be a good thing too
As if anyone said otherwise. They also added that with C99.
Not that much of a gain in Go, more like syntactic sugar. You still have to manage mutexes and avoid deadlocks with some manual work. At least a branch of GNU C++ has thread annotations that do some static checking to detect deadlocks. GCC 4.7 now supports experimental transactional memory, which seems to be the holy grail of not just concurrency but also parallel programming.
EDIT: Come on, people. This is /r/programming not /r/circlejerk. Don't just downvote, refute what I said!
Actually this article is slightly incorrect.
The builtin function strlen has __attribute__((pure)) declared in the header.
This attribute means it always returns the same value when called on the same input, so optimizations like this are correct.
See http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html
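To illustrate, here's a minimal sketch of declaring a pure function (my_strlen is a made-up name, not the actual glibc declaration):

#include <stddef.h>

/* Promise the compiler the result depends only on the argument and the
   memory it reads, with no side effects, so repeated calls can be merged. */
size_t my_strlen (const char *s) __attribute__ ((pure));

size_t my_strlen (const char *s)
{
    size_t n = 0;
    while (s[n])
        n++;
    return n;
}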
They're talking about explicit register variables (local is one kind) which are a GCC extension used in the Linux kernel (not the C 'register' keyword). These are naturally sensitive pieces of code as they can also invite exploits or other vulnerabilities if not treated carefully.
A while back, GCC switched from Bison/YACC-generated parsers to hand-written recursive descent parsers for some of its front ends (C++ in 3.4, C and Objective-C in 4.1). Clang also went that way. I'm fairly sure that quite a few other production compilers use hand-written parsers that are probably closer to LL than anything else.
Interesting list. I've recently written my own manual, scannerless, recursive descent parser for SQL without any tool. I found this to be much more straightforward.
Here's some feedback by the two main maintainers of H2 on the subject (which also uses a hand-written recursive descent parser): https://github.com/h2database/h2database/issues/484#issuecomment-290641025
Interesting link to the C parser, which also favours this approach: http://gcc.gnu.org/wiki/New_C_Parser
Not everyone agrees, though: https://twitter.com/1ovthafew/status/735196845899669504
There are two parts to this. First, I do believe that using "advanced" instructions is relatively rare in the real world. Heck, even 64-bit binaries took some time to catch on. This is also one reason 64-bit is beneficial: it gives a new baseline, since all x86_64 CPUs support SSE2.
The second part is that there is the CPUID functionality, which allows code to query the capabilities of the CPU. This allows selecting the appropriate version of a function at runtime, e.g. to have an SSE4-optimized version and a fallback. http://gcc.gnu.org/wiki/FunctionMultiVersioning
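As a rough sketch of the manual flavour of this (using GCC's __builtin_cpu_supports rather than the attribute machinery the wiki page describes; the function names are made up):

#include <stdio.h>

static void work_sse42 (void)   { puts ("SSE4.2-optimized path"); }
static void work_generic (void) { puts ("generic fallback"); }

int main (void)
{
    /* Dispatch once at runtime based on what the CPU reports. */
    if (__builtin_cpu_supports ("sse4.2"))
        work_sse42 ();
    else
        work_generic ();
    return 0;
}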
No. From the article:
>$ gcc -o uninit-gcc uninit.c -O3 -Wall -pedantic
-Wall implies -Wuninitialized
Documentation is here for you to read.
Compilers can indeed sometimes turn a nested loop inside-out to attempt to improve data access locality. It can't always, though, because there's a limit to how complex optimizers want to be (they have to strike that magical balance between good results and finishing compilation within a reasonable time), so many compilers will not analyze the code deeply enough as to find each and every opportunity.
GCC's Graphite library is an implementation of such an optimization strategy, which gives very good results.
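A toy illustration of what such an interchange does (nothing GCC-specific, just the change in access pattern):

#define N 1024
static double a[N][N], b[N][N];

/* Column-major traversal: large strides, poor cache locality. */
void add_column_major (void)
{
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] += b[i][j];
}

/* The interchanged, row-major form the optimizer aims for. */
void add_row_major (void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] += b[i][j];
}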
ViewVC != Sourceforge
ViewVC just happens to be one of the more popular web front-ends of Subversion repositories. Many, many projects use it (random example: gcc). I'm not sure what your rant has to do with Sourceforge, other than the fact that they apparently chose to use the default ViewVC stylesheet which offends you greatly.
I'm happy that the referenced GCC bug is not a compiler bug at all, but just a consequence of the way x86 FPUs work. The long answer is (at least) to skim What Every Computer Scientist Should Know About Floating-Point Arithmetic; the short answer is to regard tests for equality of floating-point values as bugs in their own right.
The author claims to have eliminated race conditions, but he really hasn't. He's not introduced any protection on his flag variable other than wrapping one reference to it in an interrupt subroutine. This subroutine doesn't disable interrupts, so even here the variable is not well-protected. An air-tight solution would be to wrap all references with some sort of locking mechanism.
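Something along these lines is what I mean (a sketch only; disable_interrupts/enable_interrupts stand in for whatever the platform actually provides, e.g. cli()/sei() on AVR):

#include <stdbool.h>

extern void disable_interrupts (void);   /* hypothetical platform primitive */
extern void enable_interrupts (void);    /* hypothetical platform primitive */

volatile bool data_ready;                /* set by the ISR */

void my_isr (void)
{
    data_ready = true;
}

/* Read-and-clear the flag atomically with respect to the ISR. */
bool consume_data_ready (void)
{
    disable_interrupts ();
    bool was_set = data_ready;
    data_ready = false;
    enable_interrupts ();
    return was_set;
}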
As a side note, the "8 years experience in aerospace/defense" claimed by the author is a red flag for me... having worked in these industries I've seen the most horrendous programming being passed as production code, simply for the fact that these industries live in a vacuum of innovation. Due to incredibly costly and time-consuming validation processes required on everything that touches production code, new tools and technologies are shunned. These industries are veritable time capsules of technology, often employing programmers who'd prefer COBOL or PASCAL over anything else...
What's more interesting than this blog post are the comments on computed gotos, a feature I've never seen used before: http://eli.thegreenplace.net/2012/07/12/computed-goto-for-efficient-dispatch-tables/
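For anyone curious, here's a tiny sketch of the technique (GCC's labels-as-values extension; the "opcodes" are made up):

#include <stdio.h>

static int run (const unsigned char *code)
{
    /* Table of label addresses, indexed by opcode. */
    static void *dispatch[] = { &&op_halt, &&op_inc, &&op_dec };
    int acc = 0;

#define NEXT goto *dispatch[*code++]
    NEXT;

op_inc:  acc++; NEXT;
op_dec:  acc--; NEXT;
op_halt: return acc;
}

int main (void)
{
    unsigned char prog[] = { 1, 1, 1, 2, 0 };  /* inc, inc, inc, dec, halt */
    printf ("%d\n", run (prog));               /* prints 2 */
    return 0;
}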
Many of the most used C++ compilers today already implement a lot of C++11 functionality - for example, g++.
For some specific examples of new things:
For setting up (default) or disabling (delete) default constructors/destructors/copy assignment/copy constructors:
ExampleClass() = default;
~ExampleClass() = default;
ExampleClass& operator=(const ExampleClass& rhs) = delete;
Auto keyword:
//std::map<Key, Value>::const_iterator it = map.begin();
auto it = map.begin();
This can be used with many other things as well, eg:
auto i = 1; auto s = "string";
But is most helpful with things like iterators which can be really cumbersome.
Initialization Lists:
vector<int> vec = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Lambda Functions:
transform(vec.begin(), vec.end(), vec.begin(), [](int i) { return i * 2; });
Range based for loop:
for(int x : vec) { ... }
for(int& x : vec) { ... } //By reference
This is just scratching the surface of some of it. Also, there are new things in the standard library (like unordered_map, unordered_set, tuples, regular expressions, improved random number generation). A lot of it is very similar to the boost functionality for these things. There's a full list on wikipedia.
Interestingly, GCC has an experimental branch supporting STM. (I don't know anything about it except that it exists.)
Really, though, I think the answer to, "Which default implementation?" is "The least interesting one." I think they should have looked around at existing accepted, best-in-breed implementations, ironed out the warts and idiosyncrasies, maybe simplified them down to what looks like an irreducible core and released it to the world. The C standard isn't really a place to test "exotic" ideas (however proven they may be in other languages).
Apparently the C standard only requires implementations to support 4096 characters in a single logical source line. If your program is longer than that, it might be non-portable.
There are numerous reasons why texinfo is superior. Just to name a few: table of contents, indexes, hyperlinks, multiple sections.
Most people seem to conflate the info viewer with Texinfo the format, but they are very much not the same thing. Yes, info is a piece of shit and it's a real pain to have lines wrapped at 80 columns even though your terminal is much wider, and to have to learn a bunch of new keystrokes, and to not have anything bolded or underlined, which at least man has figured out. However, Texinfo is not info. Texinfo is a device-independent markup language, and from the same .texi source file you can create .info, HTML, DVI, PS, and PDF. This means you can easily read the documentation with your web browser online, because most projects post their Texinfo manual as HTML on their site.
Take the gcc manual for example. You can view it entirely online in HTML and never have to deal with info, despite the fact that it's written in Texinfo. And there are actual sections, with actual hyperlinks, and indexes! Here's the option index and here's the keyword index. A man page cannot do that. In fact the gcc man page is humongous at over 80,000 words, but that is only a summary of the various command line options, which is only a small fraction of the overall gcc manual. If you only know about gcc from the man page, you are missing a ton of very important and useful information. If the whole gcc manual were a man page, it would be completely and totally unusable. The printed PDF version is currently 730 pages; how in the world do you expect to manage that as a man page?
This is misleading, GCC 4.9 has not been released yet. It's just RC1. According to the e-mail, gcc 4.9.0 release is expected to happen on 21 April.
Actually, it is an ANSI-C compiler, just not C99. You can check that here: http://msdn.microsoft.com/en-us/library/sk54f3f5.aspx
Also, technically, GCC isn't fully C99 compliant either, as shown here: http://gcc.gnu.org/c99status.html
But, really, if the OP can't even find a C compiler in the first place, do you think full standards compliance is going to matter all that much to them? I figured Visual Studio would be an easy way to get a full editor/compiler/debugger suite together to start writing some code without trying to get Cygwin, MinGW, or Linux environment loaded to use GCC.
Well, you might be right about some stuff being missing, and I've tested hardly anything, but that page doesn't list thread support at all, since those are library features. It doesn't talk about tuple<>s either, which I know are there and work. The 'thread', 'mutex', and 'condition_variable' headers are all in place, and this works at least basically:
st@shade:~/projects/c++-play$ cat thread.cpp
#include <thread>
#include <iostream>
#include <unistd.h>
#include <stdlib.h>
using namespace std;
void go() { cout << "yeah!" << endl; }
int main()
{
    for (int i = 0; i < 3; ++i) {
        std::thread* pth = new std::thread(go);
        ::sleep(1);
    }
}

st@shade:~/projects/c++-play$ g++ -std=c++0x -pthread thread.cpp
st@shade:~/projects/c++-play$ ./a.out
yeah!
yeah!
yeah!
edit: thread stuff was added in 4.4.
But hmm, indeed it's incomplete.
> but then replacing every instance of tar -xJf $myfile -C $mydir is a pain nobody was interested in.
Personally, i would be. I really hate the trend of "backwards compatibility."
Sure, backwards compatibility is nice, but not for 40 damn years. Not when it means you can never improve, or even fix bugs, because they're now relied upon. Not when everything is so interconnected and interdependent on specific versions and bugs that you're locked into them forever. For an OS preaching choice and user customization, it sure locks you in a lot.
Imagine, if you will, a world where you can install two different versions of a program on linux, without having to resort to weird hacks. Imagine not having a global system-wide directory for installing all software, for all users, making upgrades downright dangerous.
Imagine something sane, a folder for programs, and each program in its own folder. Hell, another folder for versions!
Imagine software installation as simple as moving into /Programs/$Name/$Version or /Users/$User/Programs/$Name/$Version, dependency management as simple as one little file saying "I depend on $Version of $Name", and build systems as simple as adding those directories to the PATH (you don't even have to rewrite the build systems! Backwards compatibility!).
> Basically Unix is an extreme mess, and we're stuck with it.
:(
The arduino is just some fancy stuff surrounding an AVR microcontroller, so you can build your code with avr-gcc: http://gcc.gnu.org/wiki/avr-gcc
Once you have your output files you can load them straight into your arduino using avrdude: http://www.nongnu.org/avrdude/
This blog has a helpful paragraph at the end: http://digitalfanatics.org/2012/02/arduino-toolchain/
I personally use the GNU GCC compiler given the choice:
gedit is actually just a text editor with some features for source code, it doesn't do the compilation. If you're taking a course, I would just use whatever compiler is available or whichever one the instructor wants you to use.
EDIT: I feel like the question you really had is, should you use a text editor to input your C code, or use a full-featured IDE? Both approaches have their respective advantages and disadvantages, my personal opinion is to stick with the text editor, it lets you focus on learning the code without all the bells and whistles in the way.
> look at templates in C++: few people would argue that for the most part heavily templated code is write-only, and pursued for runtime performance at the cost of comprehensibilty of code.
To be fair, they were originally only intended to provide pre-made cookie-cut chunks of code that you can just pick up and use without having to throw away type safety. The whole compile-time metaprogramming aspect came to be used and abused quickly but, for the most part, was incidental.
It's also worth pointing out that proprietary extensions to the C preprocessor have allowed for static analysis of code and template-like features - specifically typeof(), variadic macros and some of the other built-ins. You can look at the Linux kernel source tree to see where this is used at the expense of (compiler) portability.
That said, being able to dump the preprocessed output of C code (e.g. with gcc -E) is damn handy at times.
I read it on the GCC mailing lists sometime between 1999 and 2003. Roughly 3.1 era if I remember correctly. This may be the thread "named warnings & individual warning control" but I didn't have time to look through it. As I recall the assertion that tools were more important than humans came from Mark Mitchell so I regarded it as authoritative.
I thought about doing this as a teenager, but the parts were very expensive - about a buck for a 7400, and several for MSI chips like the 7475 quad D flip-flop.
Do you realize gcc is portable - you can define register transfer instructions and build yourself a cross compiler, so you don't have to muck around with interpreted languages.
I don't have a copy of MSVS on this machine, but gcc's vector does not normally use a for loop to initialize a vector. It uses std::uninitialized_copy, which uses memmove for PoD types. It only uses a for loop to init vectors of non-PoD types, or for vectors using nonstandard allocators. I would be very surprised if MSVS's implementation did not also behave this way.
Source for gcc's vector. Line 296 begins the range-based constructor. It routes through _M_initialize_dispatch. Eventually, _M_range_initialize is called, which starts on line 1022, which calls std::__uninitialized_copy_a.
Source for std::__uninitialized_copy_a. Line 230 begins the for loop version and even has a doxygen comment about when it is used. Line 248 begins the PoD version.
This author should not be speaking with such authority on this matter, which is surprising, considering what he's built.
> the invalid code is dynamically unreachable but we can't prove that it is dead
I think any compiler can face a situation where it deems it best to emit a warning, but it is wrong and after a while irritating, due to the quoted reason. That's why I find local suppression of warnings with e.g. #pragma GCC diagnostic so useful.
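For instance, something like this (a minimal sketch; the unused variable is only there to provoke the warning):

void demo (void)
{
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
    int scratch;   /* would normally warn under -Wall */
#pragma GCC diagnostic pop
}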
It's defined to be implementation defined behavior (meaning that the compiler must chose and document its behavior) - see section 3.3.2.3 in the C89 standard.
gcc for example, documents its behavior here: http://gcc.gnu.org/onlinedocs/gcc/Structures-unions-enumerations-and-bit_002dfields-implementation.html
Running XCode the IDE wouldn't work anyway because it needs Apple's proprietary libraries to be installed. But Apple can't really control what you do about stuff built with this compiler or this compiler even if it links with all of this stuff and was built on their platform. And that's what this project does.
4.5.2 to 4.6.0 is quite ok, usual number of problems (GCC stopped including stddef.h in another header and code relying on it now has to explicitly include it, stuff like that). But enabling LTO exposed a lot of issues, some in LTO and some in user code. This is more severe than usual GCC upgrade troubles.
Stuff like this: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48207 makes icu, chromium, qt, kde etc. not compile.
Libtool strips CFLAGS and CXXFLAGS during link stage, but of course with LTO link stage is when the code is compiled. So unless you patch libtool or specify your flags in CC and CXX instead of CFLAGS and CXXFLAGS, your binaries will be compiled at -O0.
With 387 as the -mfpmath on GCC, at the optimization levels chosen by PHP, floating point operations are implemented with x87 instructions. The citation is in the same section as I linked previously. I quote from the GCC manual:
> 387
>
> Use the standard 387 floating point coprocessor present majority of chips and emulated otherwise. Code compiled with this option will run almost everywhere. The temporary results are computed in 80bit precision instead of precision specified by the type resulting in slightly different results compared to most of other chips. See -ffloat-store for more detailed description.
>
> This is the default choice for i386 compiler.
These are very simple flags that are part of the tools being used. They ignored them. That's PHP's fault, not GCC's.
For GCC, the produced ABI depends on the -fabi-version switch, see here: http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html Using the same switch will retain compatibility, but the default of 0 will change with GCC versions and produce incompatible code.
Between GCC and MSVC, there was never any form of compatibility, and there cannot be any, as not even the mangling is the same.
Between msvc versions, there is no compatibility either, as long as they use a different runtime library version. VS2015 and 2017 are explicitly compatible though. Using a different runtime library will already break things like new and delete.
Interfaces - that is to say pure abstract classes that don't transfer ownership - have to remain compatible on Windows, since that's what COM APIs use, but that's a result specific to MSVC. This assertion holds in practice for GCC as well but there's no official guarantee for it. The Steamworks API uses this method in cross platform applications and it seems to have panned out for them so far. This still won't obtain GCC<->MSVC compatible code though.
New?... it was a thing back in 2002, and probably before, since that's as far back as the Wayback Machine goes for GCC documentation.
>I have to wonder why the PGP library situation is so bad.
Same reason why gcc will never be a library: http://gcc.gnu.org/ml/gcc/2014-01/msg00209.html (Read the whole email. It summarizes a decade of strife)
Both GnuPG and gcc are controlled by the FSF.
In the DOS days, busy-loops like that were used to control timing, so there's no way compilers could optimize those out.
I was curious if they do now, and it turns out the answer is pretty complicated! (see the 'Deleting "empty" loops' section)
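For context, the kind of loop in question looks something like this (illustrative only); without the volatile, an optimizer is free to delete it entirely, since it has no observable effect:

/* DOS-era style delay: burn time by counting. */
void delay (void)
{
    for (volatile long i = 0; i < 1000000L; i++)
        ;   /* busy-wait */
}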
Wut?
> The length of an array is computed once when the storage is allocated and is remembered for the scope of the array in case you access it with sizeof.
Interesting feature. I'm not sure if I'd ever use it but it is a neat shortcut.
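A quick sketch of what that buys you (plain C99, nothing compiler-specific assumed):

#include <stdio.h>

int main (void)
{
    int n = 7;
    int a[n];                                   /* variable-length array */
    /* sizeof is evaluated at run time using the remembered length. */
    printf ("%zu\n", sizeof a / sizeof a[0]);   /* prints 7 */
    return 0;
}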
It's not new. GCC comes with full Go 1 support since version 4.7.1:
http://gcc.gnu.org/gcc-4.7/changes.html "GCC 4.7 implements the Go 1 language standard. The library support in 4.7.0 is not quite complete, due to release timing. Release 4.7.1 includes complete support for Go 1. The Go library is from the Go 1.0.1 release."
What's new is Go 1.2.1 support.
Hey there,
Linux seems like a good choice for your laptop. I'd personally recommend Ubuntu since there is a new, long-term support version that's just around the corner (14.04, which will be released on the 17th of April.)
Most drivers will typically come with the OS, however some wireless and graphics card drivers have to be installed manually. In that case, these drivers can be installed via the 'Proprietary Drivers' tool that can be found in the settings menu.
Also, to address your question about compilers, most Linux developers typically use GCC or Clang. These compilers (or in fact most popular open-source software) can be downloaded and installed via Ubuntu's package management system using the Ubuntu Software Center, a graphical tool that lets you access the thousands of free software packages in Ubuntu's software repositories.
Finally, in regards to your last question, there isn't really one resource you can use to learn the whole of Linux as it is constantly changing. I personally just asked questions and used Google, but there is a wealth of written documentation on the Internet if you choose to use that.
There are also multiple sub-reddits where you can ask questions, like /r/LinuxQuestions /r/Linux4noobs and maybe even /r/Ubuntu if it's an Ubuntu-specific question, and remember, don't be afraid to ask questions.
How to make a setup program? Using a setup-program-maker, in general. Makefiles are a completely different (but not really platform-dependent) issue.
A dynamic link library is a library that is linked in dynamically. You interact with it by including the relevant header files and telling the linker to link against it (e.g. using -l). Finding and using it is the OS's (actually, the dynamic linker's) problem, and you have almost certainly been using them already - libc is handy if you want anything built-in to C to actually work. Making one mostly involves producing a file without an entry point that exposes symbols.
System calls: see the documentation, such as there is any.
How the C compiler finds header files: it follows the standard search path. When using <filename> it searches the system directories; when using "filename" it searches the current directory as well as the system directories. For GCC, see the documentation.
Header files aren't magic. The content of the headers is dumped into the relevant location with very little thought about it (besides some attempt at recursion detection). This is why there is a relationship between headers and compile time. (There is some magic that can be done with precompiled headers to make this less of a problem).
In short, most of your questions aren't really platform dependent. The only ones that are would be "how to make an installer" (which, uniquely, also varies between Linux distributions) and "using system calls". The remainder can be trivially answered through use of Google or your compiler's manual.
> However, one of his points is that when a header is included twice, even though it's got a guard around the whole thing, the compiler still has to preprocess it again a second time in good faith, only to know that it was safe to discard everything.
See e.g. http://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html
The compiler can check the first time it visits a file and see if it's safe to discard in the future and what the guard macro is called. Then when it sees another #include of that file, it just has to check if the guard macro is still defined.
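In other words, headers written in the usual guarded style (file and macro names made up here) are exactly what that optimization recognizes:

#ifndef UTIL_H
#define UTIL_H

/* The entire header body lives inside the single guard. */
int util_add (int a, int b);

#endif /* UTIL_H */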
I think it's the furthest along: http://gcc.gnu.org/projects/cxx0x.html
It's bizarre to me that they standardize on features that nobody has implemented. Why isn't there a reference implementation in one of the open source compilers?
LLVM won't help here. I'm an LLVM developer, and this comment in the bug report is extremely poignant:
volatiles on single structure members is of course under- (or even un-)specified. Consider
struct x { int i : 1; volatile int j : 1; };
Where we clearly cannot access i without modifying j (but it's still valid C). So I don't think that a volatile member inside a non-volatile struct guarantees anything.
It's impossible for the compiler to guarantee correctly generated code in all cases, so it's therefore impossible to fix the compiler.
You can special-case it, but which special cases do you do and which not?
Looks a lot like a typical std::streambuf implementation to me. (Maybe just the 'read' half of it anyway).
http://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-4.2/streambuf-source.html#l00195
00195       char_type*        _M_in_beg;     // Start of get area.
00196       char_type*        _M_in_cur;     // Current read area.
00197       char_type*        _M_in_end;     // End of get area.
The error/eof state is stored in the file or string stream object however.
There's also decimal floating point (DFP). Some platforms like POWER6 support this in hardware, and it's usable from C with the IBM XL compiler and gcc.
This is not the same as fixed-point arithmetic. You still get the vast range of floating point, but without the base conversion: the number is still stored in base 10. So you no longer have the issue where you start with an input number that can be exactly represented in base 10, but then it's converted to base 2 which doesn't have an exact representation for that number, so it must be rounded. However, this does not mean that you are free of roundoff errors: after all, you still have a fixed number of digits. And no matter the base there are always an infinite number of values that can't be represented with a fixed number of digits (e.g. 1/3 in base 10 repeats forever, 0.1 in base 2 repeats forever.)
To get around that you have to use a different representation of numbers altogether, such as storing them as a ratio. If you represent 1/3 as having a numerator of 1 and denominator of 3, you can represent it exactly without the roundoff of having to store it as 0.33333333333 where the digits eventually stop. This is the principle behind bignum/bigfloat libraries, which let you do arithmetic in arbitrary precision without any rounding. The downside is that it's much slower compared to decimal float, which is itself slower compared to binary float, unless you have hardware support.
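To make the base-conversion point concrete, here's the classic binary-float example (standard C; on IEEE 754 doubles it prints a value just shy of 1):

#include <stdio.h>

int main (void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;                 /* 0.1 is not exact in base 2 */
    printf ("%.17g\n", sum);        /* something like 0.99999999999999989 */
    printf ("%d\n", sum == 1.0);    /* 0: not exactly equal */
    return 0;
}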
It depends. If your compiler is a commercial product, the code may not be available. The code for the GCC libraries are all open source (see http://gcc.gnu.org ) but examining them is not a good way to learn C, though once you have learned it they are interesting and instructive.
But there are GCC plugins: http://gcc.gnu.org/wiki/plugins
This is distinct from being a library: plugins allows other libraries to be pulled into GCC, being a library allows Clang to be pulled into other programs.
For those interested, here are the C99 feature-implementation in GCC: http://gcc.gnu.org/c99status.html
Does anyone know why implementing these things into the compiler is harder than it sounds? (Well, to me at least-- Disclaimer: I am just a very amateur programmer.)
Manual optimization: See Redshift64's response.
Automatic optimization: Most modern C/C++ compilers can do automatic optimization, for example in gcc just throw -O1 to -O3 (the level of optimization). The assembly code generated will contain these optimizations. Read http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html to get an idea of what they do.
To avoid both the UB and the repetition, one may use an unnamed union:
typedef struct pixel pixel;
struct pixel {
    void (*shader)(float out[], pixel* left, pixel* right, double u);
    union {
        float ambient;
        float color[3];
        float normal[3];
    };
};

#define LINEAR(a,b,u) ((1.0-(u))*(a) + (u)*(b))

void ambient_shader(float out[], pixel* left, pixel* right, double u)
{
    out[0] = out[1] = out[2] = LINEAR(left->ambient, right->ambient, u);
}

void gouraud_shader(float out[], pixel* left, pixel* right, double u)
{
    int i;
    for (i = 0; i < 3; i++)
        out[i] = LINEAR(left->color[i], right->color[i], u);
}

void phong_shader(float out[], pixel* left, pixel* right, double u)
{
    int i;
    for (i = 0; i < 3; i++)
        out[i] = magic_use_of_normal(LINEAR(left->normal[i], right->normal[i], u));
}

void rasterize_scanline(pixel* left, pixel* right, int x1, int x2, int y)
{
    float d = 1.0/(x2 - x1), p = 0.0;
    int x;
    float out[3];
    for (x = x1; x < x2; x++) {
        left->shader(out, left, right, p);
        set_pixel(x, y, out);
        p += d;
    }
}
Yet, run-time overhead remains; but the code can be read without having to know anything about inheritance or templates.
Although non-standard, this extension is supported by gcc or kencc for instance.
Not quite. i686 is x86. x86 is used to denote processors which evolved from Intel's 8086. In the context of 32-bit vs. 64-bit instruction sets, the term x86 is often abused to imply 32-bit, even though corresponding 64-bit architecture is also technically x86. As a term it's pretty heavily overloaded.
edit: Here is a list of architectural options for gcc. The command line option -m64 will force 64-bit code generation. But under normal circumstances I don't expect that you will need it, as it should be enabled by default in a 64-bit environment.
It does produce safe code, assuming you pay any attention to your compiler flags. From the GCC manual for -mfpmath:
> 387
>
> Use the standard 387 floating point coprocessor present majority of chips and emulated otherwise. Code compiled with this option will run almost everywhere. The temporary results are computed in 80bit precision instead of precision specified by the type resulting in slightly different results compared to most of other chips. (emphasis mine) See -ffloat-store for more detailed description.
>
> This is the default choice for i386 compiler.
>
> sse
>
> Use scalar floating point instructions present in the SSE instruction set. This instruction set is supported by Pentium3 and newer chips, in the AMD line by Athlon-4, Athlon-xp and Athlon-mp chips. The earlier version of SSE instruction set supports only single precision arithmetics, thus the double and extended precision arithmetics is still done using 387. Later version, present only in Pentium4 and the future AMD x86-64 chips supports double precision arithmetics too.
>
> For the i386 compiler, you need to use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default.
>
> The resulting code should be considerably faster in the majority of cases and avoid the numerical instability problems of 387 code, but may break some existing code that expects temporaries to be 80bit. (emphasis mine)
>
>This is the default choice for the x86-64 compiler.
-mfpmath=sse solves it as well, and is a better (temporary) solution than -ffloat-store.
There are two possible options (as of gcc 4.x)^1 for -mfpmath: 387 or sse.
>`387'
>Use the standard 387 floating point coprocessor present majority of chips and emulated otherwise. Code compiled with this option will run almost everywhere. The temporary results are computed in 80bit precision instead of precision specified by the type resulting in slightly different results compared to most of other chips. See -ffloat-store for more detailed description.
>This is the default choice for i386 compiler.
>`sse'
>Use scalar floating point instructions present in the SSE instruction set. This instruction set is supported by Pentium3 and newer chips, in the AMD line by Athlon-4, Athlon-xp and Athlon-mp chips. The earlier version of SSE instruction set supports only single precision arithmetics, thus the double and extended precision arithmetics is still done using 387. Later version, present only in Pentium4 and the future AMD x86-64 chips supports double precision arithmetics too.
>For the i386 compiler, you need to use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default.
>The resulting code should be considerably faster in the majority of cases and avoid the numerical instability problems of 387 code, but may break some existing code that expects temporaries to be 80bit.
>This is the default choice for the x86-64 compiler.
You mostly should be using -mfpmath=sse along with a -march tuned to your CPU if your machine is not too old. This is something source-based gnu/linux distro users usually find out.
> The Google Style Guide was written in the early 2000's when the compiler used was gcc 2.95.x. Are you saying C++ wasn't standardized then?
C++ was standardised in 98, that is one and a half years at most to get everything standards compliant - from this page gcc 2.95.x still predates compliance.
> The goal of this guide is to manage this complexity by describing in detail the dos and don'ts of writing C++ code. These rules exist to keep the code base manageable while still allowing coders to use C++ language features productively."
The problem with that is that at the time the style guide was written there was no good C++, and by now it deviates in several important sections from what are considered the dos and don'ts - hell, some decisions put the "standard" classes in quotes, which either means these rules are horribly out of date or whoever wrote them is horribly full of himself/herself.
> The purpose is quite clear - it's to manage complexity, not to deal with a buggy, unstandardized C++.
Disproven by the first point. GCC 2.95.x through GCC 2.96 were still considered to predate adoption of the standard.
You said that the Free Software movement no longer contributes any valuable software. GCC was made by the free software movement.
And I quote from GCC's official page >Supporting the goals of the GNU project, as defined by the FSF.
And again >Copyrights for the compilers are to be held by the FSF.
So rephrase what you mean, because the FSF has contributed a lot to the Linux community and still does.
Well, here's an interesting little project for you - write a program that writes a program that contains increasing levels of indirection, and then compiles it with the GCC compiler. When the compilation fails (I guess it will at some point, but I doubt the level is documented) you have your answer! Alternatively, examine the GCC source code - but I think the experiment will get results quicker (or at least demonstrate that the limit is very large) as the indirection limit doesn't seem to be mentioned in the GCC limits page.
I got interested and found this link on using gcc to compile through a pipe (apparently a feature as of gcc 4).
from the link:
echo 'int main() { printf ("Hello, world\n"); }' | gcc -xc - -o outfile
gcc has this as a nice extension, called "conditionals with omitted operands". It's a bit more flexible than C# because it handles any conditional expression, not just nullable types.
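For the curious, a small sketch of the extension (the variable names are made up):

#include <stdio.h>

int main (void)
{
    int configured_port = 0;
    /* a ?: b behaves like a ? a : b, but evaluates a only once. */
    int port = configured_port ?: 8080;
    printf ("%d\n", port);   /* prints 8080 */
    return 0;
}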
Still, recompilation will not speed up the JavaScript interpreter in any meaningful way. Writing a better JavaScript interpreter will.
Also, using -O6 will not net you anything, as -O3 is the highest level that does anything, as per http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

And then there is -funroll-all-loops, which is often touted as the holy grail in some Gentoo threads. Beware: that one makes your code slower because it increases code size and creates more cache misses.
In the end: recompiling a normal user program will not speed anything up beyond some arcane benchmarks.
Only when you are into HPC or mass video transcoding are there benefits to be gained by using optimized libraries and compiling for the target arch instead of a generic baseline CPU model.
I am not sure that kind=-1 has ever been a part of the language. The gnu documentation (link below) for the kind statement does a good job of describing the acceptable values.
http://gcc.gnu.org/onlinedocs/gfortran/KIND-Type-Parameters.html
It might interest you to know that sometimes people sacrifice even more precision for speed: >-ffast-math Sets -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range.
>This option causes the preprocessor macro FAST_MATH to be defined. This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
For a lot of applications (games spring to mind) exact numerical results are not so important.
gcc 4.7 was the first version released after the standard was ratified. Prior to that version, the option was spelled -std=c++0x because it was not known which year the standard would be completed, and as you can tell, the expectation was that it would be in a single-digit year prior to 2010, i.e. the 'x' represents an unknown digit.

C++14 is on track to be ratified very soon, but it still technically has not been passed yet. Current compilers accept -std=c++1y to indicate C++14 mode, reflecting the same uncertainty that it will be ratified in 2014. (But in this case it's a much less invasive standard and it would be extremely surprising for it not to pass this year.)

With all that out of the way, the fact that you're using gcc <= 4.6 means that you won't be getting a complete C++11 experience. There is a chart here. gcc was not feature-complete in language conformance until 4.8.1, and it wasn't feature-complete in library conformance until 4.9.0. (<regex> is the notable last library feature that doesn't exist in working form in 4.8.)
That would be slow as fuck as a malloc costs up to a thousand cycles plus a possible system call if it needs to allocate more memory. malloc() also has to scan noncontinuous chunks of memory to find free space, fucking up your caches in the process.
What exists is a technique known as split stacks. Each function contains a small prologue that checks if the stack space is sufficient and allocates more (albeit in a different location) if needed. This is currently used in the Go language reference implementation, although the developers are going back to continuous stacks wherever possible as split stacks have a not-so-small performance penalty.
TL;DR malloc()'ing your stack-frames is like using a HDD for your memory – you can do it, but it's slow as fuck.
> -grecord-gcc-switches
Yep, and there is also -frecord-gcc-switches. Either way, cnile creates a parsable csv list. Not that the plugin is any better or worse, it's just another way to accomplish the same goal. In fact, the plugin can also serve as a nice little example for anyone wanting to explore gcc plugin development.
__INCLUDE_LEVEL__ is not part of the standard C++ language itself. It is a GNU extension. See this page:

http://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html

__INCLUDE_LEVEL__ does not count the number of #include directives in the file containing the __INCLUDE_LEVEL__; it keeps track of how deeply the file containing the __INCLUDE_LEVEL__ is #included by other files. For example:

If the file containing the __INCLUDE_LEVEL__ is being read directly by the compiler, because you gave that file name to the compiler on the command line, then __INCLUDE_LEVEL__ will be 0.

If the compiler is reading the file because it was #included by another file that was specified on the command line, then __INCLUDE_LEVEL__ will be 1.

If the compiler is reading the file because it was #included by another file that itself was #included by a file specified on the command line, then __INCLUDE_LEVEL__ will be 2.
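A tiny sketch (GCC-compatible preprocessors only, since this is a GNU extension):

#include <stdio.h>

int main (void)
{
    /* 0 here, because this file is handed to the compiler directly. */
    printf ("include level: %d\n", __INCLUDE_LEVEL__);
    return 0;
}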
For a while, distros were compiled optimized for i686, but would run on i386. More recently, though, I think the x86 Ubuntu just doesn't care about anything less recent than i686. For example:
I may be a bit off on this, though. The docs say:
> While picking a specific cpu-type will schedule things appropriately for that particular chip, the compiler will not generate any code that does not run on the default machine type without the -march=cpu-type option being used. For example, if GCC is configured for i686-pc-linux-gnu then -mtune=pentium4 will generate code that is tuned for Pentium4 but will still run on i686 machines.
This was also one of the strengths of Gentoo, back in the day. If you had a Pentium 4, you could tell it to compile code optimized for the Pentium 4, without worrying about compatibility. Similarly, I'd compile code that was only guaranteed to run on the athlon-xp I had at the time.
I switched to Ubuntu when I noticed that any improvement was marginal, and certainly the first batch of x86_64 processors weren't different enough for it to matter -- running a 64-bit Ubuntu meant you got all the optimizations. But maybe it's relevant again now?
Oh, and:
> Core saw the introduction of x86-64, and the i*86 naming scheme died.
Was that the first time it was called x86-64, or are you just completely forgetting about amd64? AMD was first on this one. It's actually one of the reasons Intel and AMD are eternally locked in patent MAD -- x86 is mostly Intel's tech, but AMD developed x86_64, and there have been other things (MMX, SSE, etc) that were developed by one or the other.
Well, no... it's written in C that also happens to be valid C++ code (previously it was written in C, some of which wasn't valid C++ code).
Now that it's all valid C++ and compiles with g++, they could start writing code that's valid C++ but no longer valid C. But just because it compiles with g++ doesn't mean it won't still compile with gcc...
Edit I'm wrong. My understanding of the gcc-in-cxx branch was about polyglot - compiling in both C and C++. This merge is from the cxx-conversion branch, however, which is about using a sane subset of C++ for implementing gcc.
gcc's not portable? See here for an incomplete list of supported platforms: http://gcc.gnu.org/install/specific.html
Also, gcc certainly helps to write standard C software, since it embraces new C standards quite rapidly, thus removing the need to resort to non-portable extensions.
As for gcc specific extensions, their use was not mandatory last time I checked.
Linky - they're called "designated initialisers".
EDIT: An actual reference. A shame - they're not in C++11.
EDIT2: Whoa:
int whitespace[256] = { [' '] = 1, ['\t'] = 1, ['\h'] = 1, ['\f'] = 1, ['\n'] = 1, ['\r'] = 1 };
That's a good question, and I decided to look it up. It appears to have originated as a gcc extension and is now part of the C99 standard. You can read more about it here
Oooh! True! Maybe Microsoft has some secret Java VM project which runs on CLR - but we might never hear about such weird projects. ...well, *actually,* it's not a secret project at all. With this, you can theoretically run stuff on any platform that runs C#.
Though, of course, it might be cool if you could somehow compile Java to native code. Bet that's not at all possible. ...well, *actually*, it's quite possible. Just that Sun always snorted at the idea, because bytecode is just good enough and is more portable (yeah, well, there's this one quibble about the 360 not running it, but still.) Besides, it's quite difficult to write such a compiler. It's not like you could get multiple different people working on that doomed idea. ...well, *actually*, it's becoming a pretty popular idea.
Random edit: I have absolutely no idea how well these things work. I've successfully managed to get a "hello world" program compiled to a native binary in GCJ. Five years ago, or whatever. This is a considerably less complex program than the Minecraft application. Still, I hope this is enough to point out that it's likely that a suitable way of getting the code to the 360 has been found. It's not as impossible as it sounds.
I don't know about that. If you read some of the plugin discussion then it's pretty clear that they're explicitly attempting to block proprietary plugins, and they're willing to make their developers jump through hoops in order to do so.
Also, the EFF doesn't have an opinion on which software licenses people should use, you're thinking of the GNU Project :)
gcc refers to this optimization as "tail call by accumulation." There's no real documentation on it, but there's extensive comments in the relevant file.
Essentially it has some code that recognizes and handles recursive functions that return something of the form m + n * f(x). It does this by introducing "accumulator" variables m and n, which then become parameters, which can then be used to transform the function to tail-recursive form, i.e. you can think of it as rewriting factorial like so:
int factorial(int x, int accum = 1) { return x <= 1 ? accum : factorial(x-1, x * accum); }
which is now subject to TCO. This allows it to optimize functions like factorial, hashes, LCGs, or even simple "counting" functions, when written in a recursive style.
Yes. Here is the current status http://gcc.gnu.org/projects/cxx0x.html
here is a comparison of different compilers (but I'm not sure if it's really kept up to date) http://wiki.apache.org/stdcxx/C++0xCompilerSupport
You should look at one of the bug's duplicates and think again. Very many sorting algorithms (including qsort from libc) rely on a variable being equal to itself (note: not the result of some manipulation to it, but just itself), as they take in just one comparing function ("<") and no separate one for "==". That bug may crash them all. Good thing is that "-fexcess-precision=standard" does the job for gcc-4.5 and I hope it will become default as widely as possible, otherwise the duplicate list of the bug will just extend ad nauseam.
Good discussion on the GCC mailing list of this:
http://gcc.gnu.org/ml/gcc/2003-08/msg01195.html
Anyone interested in the actual bug report can find it here: http://bugs.php.net/53632
Along with the most useful comment from Rasmus here:
> Guys, we already know the problem. We are hitting an annoying x87 FPU design flaw. You can read all about it here if you are interested:
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
> And there is a good paper on it here:
> http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf
> We don't need any more compile reports. If you are on an architecture that uses the x87 FPU and you haven't forced SSE or float-store then you will see this problem.
According to this page, -funroll-loops is only enabled by -fprofile-use. I don't see any options enabled by -O3 that would be loop unrolling. (Loop peeling is, but that's different.)

Further, we can confirm this using godbolt: compare the output when -O3 -funroll-loops is passed versus just -O3.
Kinda. Only GPL-licensed plugins. From http://gcc.gnu.org/onlinedocs/gccint/Plugin-API.html#Plugin-API
> Every plugin should define the global symbol plugin_is_GPL_compatible to assert that it has been licensed under a GPL-compatible license. If this symbol does not exist, the compiler will emit a fatal error and exit with the error message:
> double the memory usage for pointers and ints
While pointers are definitely 64 bits long because of address space, integers are not necessarily 64 bits on an x86-64 machine. GCC, a compiler for C, makes ints 32 bits even on 64-bit systems:
> The 32-bit environment sets int, long and pointer to 32 bits and generates code that runs on any i386 system. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture.
To store 64-bit integer values, put them in a variable of type long. The reason for the difference is to conserve memory: if you know for sure you're not going above 2 billion, why not save the 32 bits, or even store it in a short (2 bytes) or char (1 byte) if you're only going to store very small values in it.
I hate to be pedantic about this, but since the int is used often and integers take up a big fraction of the memory used by a program, it's good to know that 64-bit programs are not necessarily that memory inefficient. And while x86-64 certainly has some disadvantages, it also has general performance advantages over x86: having access to more general purpose registers, being able to operate on 64-bit integers directly and some other low-level advantages.