You aren't going to like hearing this, but most IDEs that do C also do C++ because of their interoperability. One of the better ones I've found is actually Qt Creator, which you can install sans the whole Qt SDK. Visual Studio Code is actually pretty good and does all those things as well.
Good Luck.
This isn't specifically for C (in fact the examples are mostly in Java), but Herlihy and Shavit's The Art of Multiprocessor Programming was a fantastic resource for me for learning parallel algorithm and data structure design. It does a really good job of walking you through a bunch of examples of how to use and write everything from locked data structures to lock-free and wait-free techniques. You'll just have to separately look up how to use mutexes or atomic instructions (`<stdatomic.h>`) in C, but the techniques they go over are fairly language agnostic.
There is a 30 year old book that imho has never been bettered: Numerical Recipes in C. You should be able to pick up a 2nd hand copy fairly cheaply. e.g.
The C Programming Language (aka "The Old Testament", "K&R") is still one of the better books on C. The later edition that was updated to describe ANSI C (aka "The New Testament", "K&R2") should, of course, be preferred now.
Another good book that you might want to consider is The Practice of Programming by Kernighan and Pike. It's not about C specifically, but most of the examples are in C, and it's a fantastic book.
I don't have time to do a real review, but I did read the online version of the book and I have some things to say about it.
I've been writing C off-and-on for almost 20 years now. I'm also the author of The Silver Searcher (aka ag), a popular search tool written in C. One might agree that I have some experience in this field. So please believe me when I say that Zed Shaw's LCTHW is a waste of time regardless of your skill level. It contains many inaccuracies. Worse, it doesn't contain information on important aspects of the C language and its ecosystem of tooling. The best criticism of the book is probably this post by Tim Hentenaar. The post also shows Zed's unprofessional behavior in response to criticism.
I cannot stress it enough: Avoid this book.
Sure, it's quite a bad idea, but it happens. Debian OpenSSL, for example, reads uninitialized memory for extra entropy. It caused a pretty well-known security bug back in 2008, too: link.
It depends on what system you are developing for, to an extent. The de-facto standard for most OSS is GTK+. There are plenty of tutorials on how to get started with the API, and most of the basic functionality is intuitive enough.
I’ve had great success with PortAudio
They have a couple of different modes of operation: one where you just do blocking IO and send audio directly to the device as needed, and another where you write a callback that runs in its own thread and constantly returns audio samples.
It’s a great library and fully cross platform.
No, not really. LFS is a manual for assembling your own Linux system. As far as I know, it doesn't actually teach you a lot about how the components work; it's just a list of steps to follow.
Perhaps read learn C the hard way or just The C Programming Language and then see how far this brings you. I can also not stress enough the importance of reading other people's code to learn how to write good C code.
I feel like C is most useful when you are programming directly to an OS and its resources, rather than through a framework or library. And you don't often need to use the most elegant data structures to accomplish a simple task.
The Linux Programming Interface is still one of the best introductions to Linux programming.
C is not a derivative of UNIX. It was created by the creators of UNIX so that they could write a new version of UNIX in a language that is higher-level than assembly.
While the Windows kernel is also written in C, learning C on windows is painful because the C standard libraries on Windows are slightly off compared to the C standard libraries on POSIX-like systems, not to mention that the Win32 API is a clusterfuck compared to the POSIX API.
I suggest either installing Cygwin or setting up an Ubuntu virtual machine in VirtualBox.
Typically you have to open the correct device files under /dev and start writing to them directly.
There are several competing sound driver "infrastructures" in Linux (OSS and ALSA being the most notable) and it has evolved over time. You'll need to determine what sort of drivers your kernel has been compiled with, and then go from there to determine which devices you need to flog.
I'm no expert in sound card handling so I can't give more specifics than that. You may want to start by going over the kernel documentation on sound drivers:
What kind of projects do you want to do? Do you already know the basics of C?
I just finished writing a book with many C networking projects in it (a program to do a DNS request, send an email, download a web page, a web server, etc).
Edit: It was just published last month, and it's called Hands on Network Programming with C. It's my first book, so I'd love to hear any feedback.
Chris Wellons blog series is good for minimalist C library design, and he has excellent examples on his Github too: https://nullprogram.com/blog/2018/06/10/
For a more opaque approach, check out: https://www.amazon.com/Interfaces-Implementations-Techniques-Creating-Reusable/dp/0201498413
A NULL pointer must compare equal to zero, but it is not guaranteed to have all zero bits; the actual value may vary between platforms. As per C99 standard 7.20.3 and 7.17
> "NULL which expands to an implementation-defined null pointer constant"
and 6.3.2.3.
> "An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function."
You see they are trying to avoid mentioning actual value of a NULL pointer, because it is not fixed for all platforms.
Discussed here https://stackoverflow.com/questions/9894013/is-null-always-zero-in-c
As I benchmarked a few years ago, my single-header khash library is faster and much more lightweight than uthash. Even if you do not like my library, there are more decent C hash table libraries (e.g. glib and stb.h) than uthash, which I believe took the wrong approach. Generic programming should not come at the cost of speed or memory.
"The C Programming Language: 2nd Edition" is 189 pages without the appendixes, so learning the language isn't hard at all, IMO. However, understanding the intricacies like how pointers work can take a lifetime. Read the linked article and see how long it takes to wrap your head around what Linus says. Blew me away when I first read it; I had to work it out on paper just so I could visualize what was being described.
Note on content: `name4` in your video is a char array, not a pointer to char. In most cases the two can be interchanged, but not always. See this SO question.

Notes on video:
I like the change to no face camera, it did nothing but distract from the content of the video.
It sounds like your audio quality has improved from previous videos, but could still use a little improvement. I found that when developing videos that cutting it up into "scenes" and multiple takes along with cutting out breaths improved the quality quite a bit.
It's not luck, it's endianness. The x86 architecture is little-endian, so the low-order bytes of the '?' (converted to an int) are stored first, and the rest are zero. Check.
1) It's almost certainly not worth switching operating systems just to learn a programming language. If you want a linux-like environment on Windows, you can use something like Cygwin or MinGW.
2) Linux and Windows use different executable formats, so an executable compiled for one will not, generally speaking, run on the other. However, you can write C code that will compile on both Linux and Windows, as long as you avoid relying on OS-specific libraries.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
Pike's rules 1 and 2 restate Tony Hoare's famous maxim "Premature optimization is the root of all evil." Ken Thompson rephrased Pike's rules 3 and 4 as "When in doubt, use brute force.". Rules 3 and 4 are instances of the design philosophy KISS. Rule 5 was previously stated by Fred Brooks in The Mythical Man-Month. Rule 5 is often shortened to "write stupid code that uses smart objects".
WRT C and C++, Eclipse is just an IDE, not a compiler. It can use several compiler suites, and is typically used with gcc and/or g++ (depending upon which language or languages you want to use).
Netbeans works quite well as a C and C++ IDE, as well, and can also be configured for different compilers.
I've used both professionally, and both work reasonably well (debugger interface, language aware symbol and source navigation and auto-complete, static analysis, etc.). That said, I do most of my professional C programming at the command line. If I were forced to do more C++, I would break out the IDE, though.
JetBrains also has their new C and C++ IDE CLion in early access, though I've not yet used it in anger. Given the quality of their Java IDE IntelliJ IDEA, though, I have no doubt that it will be excellent. From what I've read so far, it is somewhat limited for now, though.
Of course, especially since you're new to C, you might be better served simply using your compiler (whether gcc, clang, or <shudder> cl) and an editor (doesn't matter which at this point -- I prefer Vim, but, given the size programs you'll write at first, any that you're comfortable with will do).
Once you progress past simple programs, you'll be well served learning at least one professional quality editor that supports the extensions that are helpful for C programming (e.g., Vim, emacs, Sublime, etc.) and a debugger (standalone, such as gdb, or in an IDE).
I assume you are on Windows (since you mention .exes) so you can actually install Visual Studio Community edition - https://visualstudio.microsoft.com/vs/community/
If you want something that is cross-platform I highly recommend QtCreator - https://www.qt.io/download-qt-installer
Both of those come with everything you need for C/C++.
One nice thing: choose a coding convention (linux kernel coding style comes to mind) and stick to it for a while.
You may think that paying attention to the way you place your brackets is the most useless thing ever and in a sense you are right, but respecting a coding style means introducing order into the code. It means posing rules and respecting them. It's useful in the same sense that learning to write on a line is needed to learn how to write at all.
What coding style you choose doesn't matter much, to be honest. All projects have their own after all, and there will always be parts that you like and parts that you don't. What is important is getting into the habit of writing good-looking code. That's the first step toward code aesthetics.
Then you'll move on to more general principles about not the way the code is written but the way it is structured. KISS, DRY, early returns... Those are all great generic rules of thumb. In time you'll learn about design patterns (don't be scared by what others may write about them; they're just known solutions to known problems). This is the generic programming side, the most important one.
On the C side... I don't see much to say. If you provide code we can surely help you declutter it, but only programming and reading a lot can teach you good C. And honestly I think this is the least important of the three, because it is the most specific to C. That said, it creeps me out when I see badly written code, of course.
Not much meat in here, but I think that needed to be said :)
Hi.
This is the second, or third, post on concurrency/parallelism related subjects that I see you posting.
I think maybe you'll find this material interesting: https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
You could write a packet capturing/analyzing tool with pcap. I did that for a class this semester, and learned a fair bit about structuring C code and a cool application of structs (see the link in the next sentence for that). You can get started with this: http://www.tcpdump.org/pcap.html. Plus, knowing that kind of stuff could be useful in the IT sector (particularly in security, but I imagine it can be useful and just a good thing to have at least a rudimentary understanding of in general IT).
You could use `getchar()` to read one character and then check what has been entered. Here's a slightly more extensive version of your program: http://codepad.org/XMOK475V
Some notes:
Do not use `gets()`, for this reason (from its manpage):
> Never use gets(). Because it is impossible to tell without knowing the data in advance how many characters gets() will read, and because gets() will continue to store characters past the end of the buffer, it is extremely dangerous to use. It has been used to break computer security. Use fgets() instead.
You will notice that I used `sizeof`. This operator returns the size of a variable, i.e. how many bytes it can store. It comes in handy when using functions like `fgets()` that need to know how large a buffer is.
The `strrchr()` bits in my code kill the trailing newline that `fgets()` keeps from the input it reads. In C, strings are terminated with a null byte, so we just put one where the newline is (`strrchr()` returns a pointer to the last occurrence of a character in a string, or NULL if it isn't found).
Never do something like `printf(my_data)`! `my_data` can contain conversion specifiers (like `%s`), and an attacker could use them to gain access to sensitive data that your program keeps in memory. Not important for a small test program like this one, but important for other projects. Use `printf("%s", my_data)` instead.
C is much more static than you're expecting. The latest revision was published in 2011, and K&R ("The C Programming Language"), written for the first version of the language, is still considered an excellent resource for learning C. Definitely check out some C99 / C11 resources, because the revisions are powerful, but you don't have to worry about resources being out of date.
Your matrix multiplication is suboptimal. By changing the order of the loops you can get much faster code. https://stackoverflow.com/questions/7395556/why-does-the-order-of-loops-in-a-matrix-multiply-algorithm-affect-performance
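To make the linked answer concrete, here is a sketch of the two loop orders. Both compute the same product, but the i-k-j version streams through memory row by row (the size N and the function names are arbitrary choices for illustration):

```c
#define N 64  /* hypothetical size; real code would take it as a parameter */

/* i-j-k order: the inner loop walks b column-wise, striding N doubles
   between accesses, which thrashes the cache for row-major arrays. */
void matmul_ijk(double a[N][N], double b[N][N], double c[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
}

/* i-k-j order: the inner loop walks both b and c row-wise, so memory is
   accessed sequentially; same result, much friendlier to the cache. */
void matmul_ikj(double a[N][N], double b[N][N], double c[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = 0.0;
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double aik = a[i][k];
            for (int j = 0; j < N; j++)
                c[i][j] += aik * b[k][j];
        }
}
```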
I don't know why you were downvoted; if you want to write 2D or 3D games, SDL has a simple API.
Having to install SDL_TTF, SDL_image, and SDL_mixer in addition to SDL is a pain in the ass, though, and most SDL tutorials are written in C++ rather than C.
Have you tried Valgrind?
Your problem is probably that you are exceeding the boundaries of the memory you have allocated. malloc can catch this the next time you call it, but the actual error will be somewhere before that call. The simplest example of this would look like:

    char *p = malloc(10);
    /* Lots of code here */
    p[8000] = 'a';
    /* Lots more code */
    char *d = malloc(20); /* Fails, but where was the error? */
If you use Valgrind, it should be able to tell you where in your program you overflow memory, not just where malloc caught it.
I like Geany. Technically it's a text editor I guess but it has lots of IDE-like components, like auto-completion or whatever else, although I hardly use them. It just organizes code nicely, is light-weight and FOSS.
Just install Visual Studio Community. During the installation process choose the C++ feature. When you make your programs save the file with .c at the end instead of .cpp.
It's a lot easier to set up than Visual Studio Code, so it's a lot less discouraging for first-time users. You'll probably use it more in your job, too.
https://www.amazon.com/Programming-Language-2nd-Brian-Kernighan/dp/0131103628. Do as many problems as you can along the way.
And take my golden advice, if you can't seem to understand something, don't bitch out until you've put in considerable effort.
Probably The C Programming Language by Dennis Ritchie and Brian Kernighan aka K&R. It was many, many peoples' introduction to C. See the first link under Resources on the sidebar to the right.
But if that book feels impossible, then I recommend C Programming: A Modern Approach by K.N. King. This book has more modern practices, and goes much more in depth over many chapters, exercises, and programming projects. It's a comprehensive college textbook whereas K&R is dated and terse.
I'd suggest chapter 1 of Structure and Interpretation of Computer Programs, but that's a completely different language (albeit one well worth learning, and one of the best programming textbooks ever written). Perhaps you can tell us what it is you don't understand?
Read the sidebar.
If you haven't had programming knowledge before, then The C Programming Language by Dennis M. Ritchie and Brian W. Kernighan is right up your alley.
The best thing you can do is probably reading a good book. Most people recommend "The C Programming Language" (K&R). You should read it carefully and do the exercises. The book is from 1988 and minor details have changed such as "int main(void)" instead of "main()" but it's still relevant.
As for your Java knowledge, I would say try to forget it while writing C. In C, things work quite differently compared to Java.
    $ apropos precedence
    operator (7)         - C operator precedence and order of evaluation
    $ PAGER=cat man 7 operator
    OPERATOR(7)           Linux Programmer's Manual            OPERATOR(7)

    NAME
           operator - C operator precedence and order of evaluation

    DESCRIPTION
           This manual page lists C operators and their precedence in
           evaluation.

           Operator                                      Associativity
           () [] -> .                                    left to right
           ! ~ ++ -- + - (type) * & sizeof               right to left
           * / %                                         left to right
           + -                                           left to right
           << >>                                         left to right
           < <= > >=                                     left to right
           == !=                                         left to right
           &                                             left to right
           ^                                             left to right
           |                                             left to right
           &&                                            left to right
           ||                                            left to right
           ?:                                            right to left
           = += -= *= /= %= <<= >>= &= ^= |=             right to left
           ,                                             left to right

    COLOPHON
           This page is part of release 4.04 of the Linux man-pages
           project. A description of the project, information about
           reporting bugs, and the latest version of this page, can be
           found at http://www.kernel.org/doc/man-pages/.

    Linux                        2011-09-09                  OPERATOR(7)
Why not work in a virtual machine online? Just SSH in and write all your code in that, then download it when you are done. Digital Ocean have VMs for $5 a month, and you get a web shell so all you need is a browser.
The C standard does not guarantee that all bits in the representation of a type are used in the representation of values; "padding bits" are allowed.
> Your statement is incorrect. The _Bool type may be larger than one bit
Yes, but the standard says:

> 6.3.1.2 Boolean type
>
> When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.

So `(_Bool)2 == 1` yields 1. Also:

> 6.7.2.1 Structure and union specifiers
>
> 122) While the number of bits in a _Bool object is at least CHAR_BIT, the width (number of sign and value bits) of a _Bool may be just 1 bit.
The maximum width of a _Bool bit-field is 1 with GCC (8 with LLVM).
Computational mathematician here. Are you sure you want to write this program? Here are the reasons I ask:
There are a number of libraries out there that will compute the inverse of a matrix much more efficiently than the naïve method you're proposing. See, for example, this StackExchange question on using the highly-optimized LAPACK routines.
Computing the determinant of an n x n matrix using the "direct" cofactor expansion requires O(n!) operations. More clever methods can compute the inverse in O(n^3) time (or even O(n^2.373...)).

Real-world problems rarely require a full matrix inverse. Solving a linear system, for example, needs only O(n^2) time for the back-substitution step once the matrix is factored. You don't have to compute the inverse.
If you are still feeling masochistic, then you can follow the rather nice blog post here.
By default, Microsoft's compiler will compile `.c` files as C code, and `.cpp` files as C++ code. You can force the compiler to compile as C or C++, regardless of file extension. For more information, see here.
Visual Studio comes with its own C / C++ compiler, so MinGW (for example) is not required. If you don't like the idea of installing the monster that is Visual Studio, then there is also the option to install only the build tools.
Based on this subreddit, many developers seem to prefer a Linux-like development experience on Windows, but don't be fooled into thinking that there's something wrong with not going that route. If you have good reason to develop in a Linux-like way on Windows, then luckily there are plenty of options nowadays, but if you don't have a good reason for it, then don't go out of your way to set it up, as there is nothing objectively superior about it.
The main benefit of having a Linux-like development environment on Windows is so that you don't have to deal with different compilers when switching between a Linux-based OS, and Windows. If you mostly use / develop for Linux, then it is worth looking into.
I'm the lead developer of https://github.com/uTox/uTox We're exclusively str8c. And will trade deeper knowledge of safe str8c, (as it pertains to a crypto/security project) for code donations. I.e., we'll teach you to write better/safer C if you wanna help us close some issues. :D
The only way you're going to overcome that issue is by writing code. You can't learn everything just by reading about it.
Some guides (such as the tried and true book 'The C Programming Language') have exercises which you can work through to solidify your understanding. I would highly recommend a book such as this.
A good place to start would be getting the latest set of XCode Commandline tools. Check this link for a quick rundown. If you have already installed the brew package manager on your machine, you already have this. This will provide you with all the executables you need to make your program.
Not sure what you’re doing in C that requires environment virtualization. If you could explain your situation in more detail I might be able to help.
>behaviour of 'malloc', which most (afaik all) implementations targeting linux do not uphold
The "don't overcommit" option (2) makes malloc behave as the C standard specifies.
And, to state the obvious, it's not a Linux-specific thing either.
The entire virtual memory layout of a process is documented on the kernel.org website
I'm not sure what you mean by 'kernel code'. There is space in this layout for the kernel to store important things in memory, but if you mean the currently running kernel code, then no, that doesn't get mapped into a process's virtual address space.
You technically can create a pointer to whatever address you want, but if you try to use a custom pointer to some location outside of your process's address space you're going to have a bad time.
> my friend mistakenly set the first char array to have size 10 instead of 20, you would expect it to fail
That program has undefined behavior, which means you have absolutely no guarantees about the behavior of any part of the program.
You should not "expect it to fail," nor should you "expect it to succeed." You have absolutely no guarantees about its behavior in any way. It may work some of the time on some systems under some compilers, and it may fail in other cases.
The C standard is a contract between the programmer and the compiler. The compiler is obligated to produce an executable program that behaves in a specified way as long as the programmer follows the rules in the standard. By writing beyond the bounds of an array, you, the programmer, have violated the rules in the standard, and so that contract is no longer in force. The compiler is under no obligations to produce a program that behaves in any particular way.
I use kernel style, which combines both. Functions are done like this:

    int main()
    {
        ;
    }

while switches, structs, loops, conditionals, etc. are done like this:

    switch (i) {
    case 1:
        break;
    }
I learned about it here, and thought it made the most elegant code.
Cool! Here's what I noticed with the first couple things I tried:
    $ ./simplify --isolate x 'x^2 = 4'
    x = 2

What about -2?

    $ ./simplify --isolate x 'x^2 - x - 1 = 0'
    x = (1 + x) \ 2

Where did you get the `n \ 2` syntax for the square root of `n` from? Never seen that before.

And obviously that's not the absolute simplest expression for `x`. Might be nice to recognize and solve quadratics (and maybe cubics and quartics).
The documentation of Maxima's solve function, for example, gives an overview of some possible solution approaches for equations.
A text editor (the one you prefer)
A compiler: gcc or clang
A debugger: gdb or lldb
Additionally, Valgrind to check for memory issues.
I work on GNU/Linux and sometimes on Windows.
On Windows I usually install msys2, which lets me use the same tools as on Linux (although gdb is not as smooth as on Linux, and I can't remember if I ever got Valgrind to work there either).
What OP wanted to say is that you do not actually get the high-level concept. You say you get the 'high level' concept of data structures, but are not able to use malloc correctly. These are unrelated concepts; you do not have to use malloc() and free() with data structures. There is no direct correlation.
Start writing your code in small sections and unit test before moving forward. Learn to use valgrind and gdb. Enable stack-smashing protection when you compile code (-fstack-protector, -fstack-protector-strong, -fstack-protector-all). Try using clang instead of gcc. There is no catch-all solution to segfault issues, just ways to catch them earlier and more often, or to learn from your mistakes and spot the obvious ones.
C is a lot less forgiving than Java/Python and relies on the programmer to know how to correctly handle memory allocation and release.
To me, Go very much is the C philosophy of programming, but lifted into a different domain, i.e. one where a runtime is fine. The thing about Go, though, is that in C I will find myself writing runtime-enabled generic code, while Go doesn't even feature such things. So it also feels like it's trying to recreate C too much.
D I think suffers from not getting a solid launch, and essentially having other alternatives beat it to the punch. C++11/14/17 basically takes away the flame of D's original mission statement. Go brings back the simple static natively-compiled language to the world of interpreters, dynamic typing, and vms. Rust raises the bar on static compiled languages and arguably solves the more foundational problems of that language domain. So D wasn't really chasing the most critical shortcomings of C++, a reason why people just waited for C++ to get better.
The one cool feature of D that I'd really like a reason to play around with, is -betterC. Which has been intriguing me ever since I first heard of it.
Rust has already been mentioned, so I'll root for D.
D, if used with its runtime, has about the same expressive power as C#, but being compiled, it can easily solve most high-level problems much faster while still being very safe.
Without its runtime it can be used as a safer C: it has the exact same expression level and can be just as unsafe if need be, but provides templates, bounds checking, and other nice compile-time features to make programming easier and less error-prone while retaining the speed.
This Stack Overflow post asks for tools similar to valgrind on Windows.
1350 lines isn't that much; consider putting it in a gist so we can have a look at it too. You could also try to run your program on Linux, where you can use valgrind (which is really easy to use).
Modern Operating Systems by Andrew Tanenbaum is fantastic. https://www.amazon.com/Modern-Operating-Systems-Andrew-Tanenbaum/dp/013359162X/ref=sr_1_1?crid=3GFWMJB7LKL2F&dchild=1&keywords=tannenbaum+operating+systems&qid=1600412939&sprefix=tannenbaum+%2Caps%2C223&sr=8-1
Well when I was learning there was just the one book - 1st Edition K&R. The C Programming Language. The book had only been out a year. I guess that's still my favorite C book. After reading it I didn't need to read any others!
Semi-interesting side story. I got my first job as a C programmer having only read that book. Had never actually compiled a line of it as the University didn't have a compiler for it.
It is an awesome book, both as a tutorial, and for insight into the "style" or "flavor" of the language. Uniquely, it could be your only book on C, assuming you're ok using the older standard, as it is a fair (if not great) reference, as well.
That would not be ideal, though.
A reference, preferably covering at least C99, would also be good. I strongly recommend the most recent edition of Harbison and Steele.
To not suck, you'll also need to do two very important things: complete lots of programs, and expand your knowledge of programming technique.
For the latter, you could get a decent start with another of Kernighan's books: The Practice of Programming.
To really master C, I suggest also reading and understanding Deep C Secrets, as well.
Of course, all of that is just the start - there is no substitute for reading and writing code. Honestly, both quantity and quality are important.
> int strlen(char *s)
>
> int strcpy(char *s1, char *s2)
>
> int sanitise(char *s1, char *s2)
Broken APIs. In general, you shouldn't pass an array to a function in C without also passing its length.
> char string1[50], string2[50];
What if we need to handle strings longer than 49 characters?
> /* first we check we have space available for both strings */
> if(malloc(100) == 0)
This is just complete nonsense. Even if we did `free` this, we'd still be allocating the same memory twice, which is an anti-pattern.
> string3 = (int *)malloc(50);
> string4 = (int *)malloc(50);
Here, on the other hand, we call `malloc` twice without checking its return value. Even if you can get away with it, this is bad practice.

Also, the cast to `(int *)` is wrong, since `string3` and `string4` are `char *`s.
> printf("Please enter your name: ");
> gets(string3);
> printf("Please enter the format to print your name: ");
> gets(*string4);

> printf(string1, string3);
`printf` will treat `string1` as a format string, and (attempt to) substitute the value of `string3` (a `char *`) wherever it sees a conversion specifier (`%s`, `%d`, etc.). I'm not sure what this line is intended to do, but I'm guessing it's not that.
> /* delay before returning so user can see output */

Better to just use `sleep`.
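For contrast, here's a sketch of what a bounds-aware version of such an API can look like (`safe_copy` is a hypothetical name; it follows the general idea of "the caller passes the destination's size", as BSD's strlcpy does):

```c
#include <stddef.h>

/* Copy src into dst without ever writing past dst's capacity, always
   null-terminating. Returns the number of characters copied, excluding
   the null byte. (safe_copy is a made-up name for illustration.) */
size_t safe_copy(char *dst, size_t dst_size, const char *src) {
    size_t i = 0;
    if (dst_size == 0)
        return 0;
    while (i < dst_size - 1 && src[i] != '\0') {
        dst[i] = src[i];
        i++;
    }
    dst[i] = '\0';
    return i;
}
```

The caller writes `safe_copy(buf, sizeof buf, input)`, so the function cannot overrun `buf` no matter how long the input is.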
Oh my fucking god! If you're gonna paste code, indent it by 4 spaces to get proper code formatting.

    int main(int argc, char *argv[])
    {
        // like this.
        return 0;
    }

If your code is that long, please paste it to https://gists.github.com/ (you should have a github account anyways) or on https://hastebin.com if you don't understand `git`.
I find the Linux kernel style very appealing with some added Plan9/BSD style extensions such as declaring structs used outside their source file as extern.
https://www.kernel.org/doc/html/v4.10/process/coding-style.html
Nice approach!
What I usually do is something similar, but using some extra macro trickery to create a 'with'-statement akin to what Python has:
https://repl.it/repls/RoughCircularPerimeter
This allows you to be more free with your scopes. I'm sure this idea can be extended to introduce a keyword that auto-unwinds all nested blocks (so you can do an 'early return', which currently isn't possible).
As with most of Windows' shortcomings, I assumed it was simple unwillingness, at least until this past June. Microsoft is putting resources into building something better, and failing badly at it. Is that "unable", as in they lack sufficient expertise within the organization? Or are they just "unwilling" to put the appropriate expertise on the problem because it's needed more elsewhere (i.e. kernel work), leaving the console update in incapable hands?
I don't see how your question is related to the C programming language.
Since you're using a Linux-based OS, you can use dd to easily read/write an ISO image from/to any device.
If you really need to write this program in C (and I'll assume you know what C is and how to build applications with it), start by studying the POSIX Programmer's Manual, ISO 9660, and libcdio.
I would disagree, since the question is explicitly about good style in traditional C. While you are definitely going to find CamelCase in some code bases, it is definitely not good traditional style.
As an example from the Linux Kernel style guide:
> HOWEVER, while mixed-case names are frowned upon, descriptive names for global variables are a must. To call a global function foo is a shooting offense.
While I don't agree with everything they ask for, it's still an amazingly good starting point. https://www.kernel.org/doc/html/v4.10/process/coding-style.html
If you are new to programming, check out the Linux kernel CodingStyle document: https://www.kernel.org/doc/html/v4.10/process/coding-style.html
It will help you avoid developing too many bad habits.
*Edit: And the document also has good guidelines about function length and complexity.
C++ programmers tend to be embarrassed about their C origins, and what would be the right way to do things in C is often poor style in C++. It has several features to help you avoid making simple mistakes, and adds a few other convenient features. But this leads to one big difference between C and C++ which is that C++ code tends to be more over-engineered than the idiomatic C equivalent would be. The C++ language allows you to specify many details in the language itself, and it seems to be hard to know where to stop. For instance, I don't know if this is a joke or not: http://www.boost.org/doc/libs/1_57_0/libs/geometry/doc/html/geometry/design.html
A lot of people are going to recommend Vim. If you don't like to tinker with things, Vim is for you.
If you want to customize your text editor, I suggest you do not torture yourself with VimScript. Instead, you can get the best of both Vim and Emacs: https://github.com/hlissner/doom-emacs
Doom Emacs is an optimized configuration of a set of packages for Emacs. The defaults are all great out of the box, and making changes is simple (you will need to understand basic lisp, though).
Not sure why CLion doesn't have a free edition. But I get it for free for being an open source project.
The `&&` operator is a logical AND. It returns true only if both operands (on the left and right) are individually true. While you're learning, you might find it helpful to add extra parentheses to make things clearer. You should also know that a value of 0 is considered false (in an if statement) and anything else is considered true; a comparison operator returns 1 to represent true.
Your original statement worked like this (assuming `double score = 0.95`):

    if (0.91 <= score <= 1.0)
    if ( (0.91 <= 0.95) <= 1.0 )
    if ( (1) <= (1.0) )           // true
Or assuming `double score = 0.5`:

    if (0.91 <= score <= 1.0)
    if ( (0.91 <= 0.5) <= 1.0 )
    if ( (0) <= (1.0) )           // true
As you can see, the condition is met in either case which is why you're always seeing that line print.
Using the `&&` operator will evaluate the two conditions individually (there's also something called short-circuit evaluation, but you can ignore that for this example).
    if (0.91 <= score && score <= 1.0)
    if ( (0.91 <= 0.95) && (0.95 <= 1.0) )
    if ( (1) && (1) )             // true
And with `0.85`:

    if (0.91 <= score && score <= 1.0)
    if ( (0.91 <= 0.85) && (0.85 <= 1.0) )
    if ( (0) && (1) )             // false
You only need one resource to really get started with C: The C Programming Language, by Kernighan & Ritchie. It's clear, concise, and written by the stewards of C itself, one of whom is no longer with us.
This book is such a mainstay of software culture that many developers simply refer to it as "K&R", after its authors' last initials. I read it once a year even if I'm not writing any C, just to remind myself what great technical writing looks like.
K&R is not necessarily an easy book to read. It's concise because it is dense with knowledge, but that makes it perfect for self-education, because you can re-read the first chapter a dozen times and still learn something new each time.
Other books that shaped my career: Clean Code, The Design of Everyday Things, Game Programming Patterns, and Ruby Under a Microscope (being written in C, Ruby is a great resource for learning about how to push C, and by extension computing, to its limits).
Welcome to the world of programming, and remember: Everything you see on the screen is an illusion! C is the closest we get in modern times to talking directly to the hardware, a privilege quite lost to the newest generation of framework programmers (for better or worse–no judgment here!). Enjoy it, respect it, explore it!
I wouldn't say you are missing anything. The main book to read is K&R (The C Programming Language). That book in particular gives a lot of great detail about C, and is really all that you need to get started. You can certainly get more in depth in certain areas by reading the other books, though.
You made my day with "apocrypha".
The Practice of Programming as mentioned above is pretty good. The UNIX Programming Environment is another. Reading the debates on brace styles is surprisingly informative.
The wonderful thing about C is that it's small enough that the whole language can fit inside one programmer's head, so the number of questions about best practices is small. :)
Just get The C Programming Language by Kernighan and Ritchie and work through it; it will easily be enough for undergrad university courses. Most C material is pretty easy; the biggest hurdle students face is understanding pointers and memory management.
I had no real programming experience before I started C as part of a university course. Thankfully, my first course was on the language itself so the problems were all based around understanding the fundamentals of C itself (not on the theory of algorithms).
I know some people here are criticising it as a first language choice, and I'll admit that I am in two minds about recommending it. However, I'm struggling to think of a better choice. So I'll assume that isn't up for debate or maybe even the class recommends it?
Unfortunately I have never taken a relevant MOOC in this area. I have looked at several books however:
I can't avoid mentioning The C Programming Language by Kernighan and Ritchie, aka K&R, if only to say I don't recommend it. I own a copy and feel it's not really appropriate for a complete beginner. It's not a book you can sit down and read through. It's basically a reference manual, to be used to help provide context for a particular problem. Unfortunately, at the start I would know what I wanted to achieve but didn't know the correct terminology to describe it, in which case the index becomes useless (even Google can't work these types of miracles).
Absolute Beginner's Guide to C by Greg Perry is, however, very readable. It's probably a little too verbose for someone who has prior programming experience; as the title suggests, it's written for the absolute beginner. It should be ideal for you, however. If you are prepared to go for a book rather than a MOOC, I highly recommend it.
The second edition of The C Programming Language. A classic. Sadly it's a bit dated, so there isn't anything about, for example, C99 in that book. It has a lot of suggested projects along the way (many small programs that were useful back then). I think it's a book everyone should read at least once.
Maybe a bit out of context, but if you're on a Linux machine, "The Linux Programming Interface" is an incredible book. Yes, you will learn the Linux interface, but more importantly you will get real-world experience writing C if you follow the samples and do the suggested exercises.
DroidEdit (free code editor) - search for "DroidEdit" on the Play Store
bleep bloop I am not a bot. Apparently this subreddit doesn't have the app robot that /r/Android has.
It's an opinion. Follow your coding standard.
The Linux kernel says no.
OpenSSL says yes (Chapter 5)
I would not use gotos here; in this case I'd return right in the guard clauses, because there is nothing to clean up before returning (no free, etc.) and the code depth is shallow. I just forgot to replace the gotos with returns, since the logging module was quite different before; in its very first version it had more responsibilities than it does now.
By the way, I believe centralized exiting is a common practice, not only in firmware but in C generally. Let me leave a link to the Linux kernel coding style's section on it: https://www.kernel.org/doc/html/latest/process/coding-style.html#centralized-exiting-of-functions
Looking at the `scanf` page here: http://www.tutorialspoint.com/c_standard_library/c_function_scanf.htm

It returns the number of input items successfully matched and assigned, not the number of characters read. Even if you enter `-1`, scanf will still return a number greater than 0, because one integer was matched.
`if(scanf("%d",&i) != 0)` is true both when a value was written to the variable (return value 1) and when scanf hits end of input (it returns EOF, which is -1), so it isn't checking what you probably intended.
You haven't said any of the problems when you ran it but I suspect this will be one of them.
Also important to know:
`malloc(...)` allocates memory on the heap, which makes it dynamically "resizable" (via `realloc`), while the array declaration puts the memory on the stack. Memory allocated on the heap ALWAYS has to be freed via `free(...)` when it is not in use anymore. Also, as a best practice, always check malloc'ed pointers against `NULL`, first on creation and then before each use; this will save you a lot of time trying to debug bad pointer dereferences.
A further discussion on stack Vs heap can be found here: https://www.hackerearth.com/practice/notes/memory-layout-of-c-program/
I built this and took a couple of screenshots, for those of you who are curious like me:
This is a cool project, but I have a hard time calling it a clone. "Clone" to me suggests something that closely imitates the original. This actually feels more like a Twitter prototype than Facebook.
The code is nice though.
This works for C: http://cppcheck.sourceforge.net
There aren't a lot of good free tools out there that I know of, especially ones that check for MISRA compliance.
That would be slow as fuck, as a malloc costs up to a thousand cycles, plus a possible system call if it needs to allocate more memory. malloc() also has to scan noncontiguous chunks of memory to find free space, fucking up your caches in the process.

What exists is a technique known as split stacks. Each function contains a small prologue that checks whether the stack space is sufficient and allocates more (albeit in a different location) if needed. This is currently used in the Go language reference implementation, although the developers are going back to contiguous stacks wherever possible, as split stacks carry a not-so-small performance penalty.
TL;DR malloc()'ing your stack-frames is like using a HDD for your memory – you can do it, but it's slow as fuck.
For the IDE: I would recommend CLion, hands down the best general-purpose C/C++ IDE I've used, but if you'd rather stick with something not so "high profile", just use Emacs.

For the distribution platform, it's a little more finicky. C doesn't really have a widely renowned distribution platform; the closest you're going to get is using CMake to distribute libraries.
It does exactly that. Though you should have a good working knowledge of C beforehand, as the author, in my opinion, misses his goal of readable code in places.
You can't go wrong owning the Bible (K&R's The C Programming Language), but that's really more of a reference book than a step-by-step instruction manual.
I had decent results from an Amazon Kindle book, Linux + C. There were some errors introduced by formatting, apparently, but there's also a link to the original code which compiles fine. Using books like that alongside K&R works pretty well to teach you the basics.
After that, you just need practice and experience. There are a lot of ways to really mess up your code and ways to make life easier that just come from experience.
It's actually all right. But I still recommend this approach for beginners:
Start by reading "Absolute Beginner's Guide to C". // Read it through in one sitting. It's easily digestible, not long, and gives you a perfect start on understanding what C is. Optionally you can write out the sample code shown, for better understanding. There are no exercises, but it's well put together.
Next, "The C Programming Language, 2nd edition" (Kernighan, Ritchie). // Go through it slowly and with attention. Do EVERY exercise, then do them again, and if you can't do one in your sleep, do that exercise again. When you're done with this, which can take some time, you'll have a sound foundation.
I really liked C Primer Plus by Stephen Prata. I would like to know other posters opinions on this book. I also purchased his C++ Primer Plus book and am enjoying it.
I feel like I learned a lot from the first book I mentioned, the pace and way he explained things was just right for me, enough so that I got his other book. I learned a lot not just about C, but about computers and how they work.
But the most important thing is to do every exercise and review question etc.
EDIT: I'm sorry!! I didn't even see that you needed a book in German. My apologies.
I'm currently going through http://c.learncodethehardway.org/book. It's a bit boring but it's trying to teach you a mindset too.
If you want to learn it together, send a pm. We could make a chat and help each other understand better.
The C Programming Language must be good too.
It's not all that C specific, but have you read Microsoft Press's Code Complete? Some handy tips for not tripping over yourself as a project scales up. I think they do use C in the book (it's been a while), but the tips are more general.
The Joel on Software blog has some good stuff, too.
Design patterns tend not to be language specific. The big gotcha with C is that any OO is not a formal part of the language, so it can take a bit of creativity to use object-centric patterns.
I agree with whoever said that looking at other successful large projects is a good idea, and would suggest that the BSD source codes tend to be cleaner and easier to follow than their Linux and GNU counterparts.
https://stackoverflow.com/a/8185382 answers it pretty well. Basically, you can cut a collection of n items in half log2(n) times.
Not necessarily. You can describe the complexity of an algorithm using any function, really; it's just that the ones you listed are really common, because of the way big-O notation works. A function is O(n), for example, if you can come up with a function y = kn for some constant k where y is greater than or equal to the number of operations of the algorithm. y doesn't have to bound the operation count for every n, only for all n beyond some threshold value.
My mistake, seems like they aren't online anymore (they were called QtC, but I can't find a repo to them anywhere -- just the odd reference on some old forums).
Nowadays using Qt with C involves writing the non-GUI logic in C, a GUI layer in C++, and then probably using C++'s C foreign-function interface (extern "C") to integrate one with the other.
Shameless self-plug: I wrote a simple JSON serializer for cases like that:
https://sr.ht/~rkta/microtojson/
You need to define the JSON beforehand, though.
Fabrice Bellard's Tiny C Compiler can do this out of the box. You just do `tcc -run myprogram.c`. No makefiles or extra shell scripts required. It's fast enough that they even have a related project that can compile and then boot the Linux kernel from source.
This may be a relevant resource: https://www.kernel.org/doc/html/v4.10/process/coding-style.html
If you're interested in writing anything related to Linux, it's probably best to follow the kernel coding style.