Most of the memory-management nightmare in C/C++ comes from dynamic memory (objects created with malloc/new and destroyed with free/delete). The biggest source of memory leaks is programmers forgetting to destroy those objects. Well, like all problems in computer science, this one can be solved with a level of indirection: these heap objects (dynamic objects usually go on the heap unless specified otherwise; with placement new you can actually specify an address at which to construct an object, which is very useful for embedded devices) can be wrapped in what are called smart pointers. The original implementation was auto_ptr; it was flawed because back then there was no rvalue/move concept. Newer code should use unique_ptr, shared_ptr, or Boost's scoped_ptr. The point of RAII is that programmers shouldn't have to worry about resource allocation and deallocation; it happens automatically through stack unwinding, which runs destructors even while an exception propagates (destructors, in turn, should never let exceptions escape).
C++ is one of the few languages that does not support this first-class, but lets you elegantly accomplish it (IMHO). That's because it is one of the only languages (and the only popular language, depending on your definition of popular) that supports non-type template parameters. That is, you can make a compile-time integer part of your type, which is really what is needed in order to support this properly.
For a library that actually implements this in C++, see Boost.Units. Though it was written a while ago and more elegant implementations are likely possible now: http://www.boost.org/doc/libs/1_61_0/doc/html/boost_units.html.
Yep. Sorry, should have read more into what you meant. Also sorry if I came across a little crass. You forget people are on the other side of the computer screen sometimes. Other than my nitpicking, it's a pretty good article. You responded to my criticism with more maturity than I probably would have, had I been in your shoes.
To explain my problem with Rust a little more precisely, I'm not all that bothered by the strict memory safety. I suppose it's the limitedness of the library. I got too used to things like this in C++: http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/overview/networking/iostreams.html
Now I'm back to making a byte buffer and trying to deal with that. I feel so dirty.
You may not like the shift operators. I find them to be pretty controversial among the programmers I know. But either way, you have to admit that's a lot easier than Rust's Read/Write streams. (At least it seems so to me, unless there's a trick I'm missing.)
So I hate to be the guy who starts talking about math at a beach party, but...this boils down to math. The server absolutely does generate a truly random shuffle. We use the Fisher–Yates algorithm and Boost's high-quality random number generators.
Over the years, we've had a number of discussions around how random that approach is, and where we've landed is:
I use Plex for shuffling music all the time, so please don't think we don't take music seriously here at Plex, or, for that matter, shuffling. We took pains to ensure that we could shuffle 50,000 tracks in about 50ms, and maintain the shuffle even across media being added or removed.
I think I qualify as a professional template metaprogrammer, being the author of Boost.MultiIndex, a library which lets the user specify a container as a composition of indices with very rich compile-time configuration options.
C++ TMP is admittedly hard to grok (somewhat less so now with variadic templates, which considerably simplify dealing with one of the most cumbersome aspects of TMP, namely type lists), but more because of the contrived syntax than the underlying complexity per se.
To me, the most rewarding aspect of TMP is how you can use it to increase the amount of compile-time intelligence so as to provide the user with terse, powerful interfaces where all this internal trickery is not exposed. This is the "magic" of TMP that appeals to users and lib programmers alike, IMHO.
Boost.Program_Options if I already use boost. All other cases, it's this header-only library: cxxopts.
I would strongly recommend using Asio or its Boost version. It is on its way into the standard, so I consider it the default choice, and one should have strong reasons to use something else for network code.
Every resource I've seen online has basically said fire up a number of threads based on the number of cores the OS says is available and then feed them bite-sized tasks. I don't know where the heck you're getting "making a number of changes to the source code, then make changes in the compiler scripts, then run it again."
Heck, with Boost::thread (which made its way into std::thread) it boils down to a handful of function calls to set up the threads for 1-10000000 cores. Granted, it's up to the developer to design their code to use it efficiently, but the "you have to use multiple builds for different core counts" is bupkis.
Indeed, Boost.Geometry is now used by MySQL. This thread is basically a meaningless circlejerk bashing Boost. The example page for Boost.Geometry has a very concise listing showing how to use the library to efficiently run a k-NN query.
This just seems like flamebait, starting from the unfounded assertion of "In theory, the two languages should be equivalent." No, C does not have templates and thus it's much harder to get the degree of inlining that C++ and its standard library is able to achieve. If you compare std::sort() with qsort(), there's a definite difference.
And then it goes on to deconstruct this strawman of the non-intrusive linked list container. Well, if you want an intrusive linked list in C++ (i.e. one that works just like the C version with the links as part of the node) you can do that too. Boost has a very rich library of intrusive containers of all kinds including singly and doubly linked lists. The standard library one is non-intrusive because it offers maximum flexibility, but you don't have to use it -- after all, the comparison is against hand-coded C so "it's not in stdlib" cannot possibly be a strike against C++.
And then there's some preposterous assertion that item erasure is always O(n) using std::list, but it's not. You can call std::list::erase() on an iterator pointing at the item you want to erase and it's O(1), just like the C example. You can even keep on iterating by saving the return value, because only the iterators of the deleted items are invalidated.
I can't fathom how anyone who claims to know C++ would write this dreck.
I don't see why this matters. If you're selecting a non-cryptographically secure PRNG, there are always tradeoffs in quality, speed, memory use and convenience. The PCG website shows a number of TestU01 failures across commonly used generators, including xorshift variants:
http://www.pcg-random.org/statistical-tests.html
At the end of the day, most people working on things like Monte Carlo simulation are most likely going to pick MT19937, or maybe another generator conveniently available in <random> or Boost.Random, only falling back to xorshift for performance-critical applications.
You basically embed it in your application the same way as with Lua: registering function calls and objects with the Python environment and then running scripts.
That said, the embedding code is a bit more verbose than when dealing with Lua, and Python is probably considerably slower. Don't let that put you off though; it can and has been done.
Some starting info here:
http://docs.python.org/extending/embedding.html
Also, Boost.Python does it all for you in C++, but personally I think the learning experience of embedding a scripting language and working in C makes it worth doing yourself.
http://www.boost.org/doc/libs/1_49_0/libs/python/doc/tutorial/doc/html/python/embedding.html
>Tango is based on D1's Phobos and had to adapt its license (BSD). When D2 surfaced, Walter changed the Phobos license from BSD to the Boost license, which made the licenses of both projects incompatible.
As I've told you before, Phobos never used the BSD license, nor did I ever suggest that Tango use BSD. Most of Phobos1 was public domain. The complete source code history of Phobos is up on github.
The Boost license was selected for Phobos2 because it is the least restrictive open source license we could find. As such, Boost licensed code can be included with BSD without affecting the aggregate, but not the other way around.
Many former contributors to Tango have relicensed their code to Boost and have incorporated it into Phobos2. Phobos2 is rapidly growing in capability and quality.
Regarding only needing some specific header-only boost libraries: Have you tried bcp? It's a little tool shipped with Boost to extract only specific boost libraries, including all dependent files.
I've used it with success for our own projects with both compiled and header-only boost libraries. For compiled libraries it's a little tricky, but Google helped me all the way.
boost::variant is already implemented as a tagged union, basically the same as a Rust enum. It can be initialized from any of the possible contained data types. For type-safe matching, use boost::apply_visitor. See here for an example of that. The main drawback is that you need to use a separate functor for the match body (so the "match" arms can't implicitly access the outer scope).
boost::variant won't always be as efficient as Rust enums (boost::variant is sometimes forced to heap-allocate, and the underlying representation might be bigger because C++ allows null pointers), but as far as I'm aware, it's more or less as good as it can be with current versions of C++.
Well the problem of a single line is well known so you can solve that with a library.
Or use Boost.ScopeExit: http://www.boost.org/doc/libs/1_48_0/libs/scope_exit/doc/html/index.html#scope_exit.intro
Bjarne Stroustrup, inventor of C++, said this about programming languages:
> There are only two kinds of languages: the ones people complain about and the ones nobody uses
Amateur critics just complain; professionals actually DO something about it, or present workarounds.
Technically, C++ isn't the (entire) problem. It is the mind-set of the amateurs who lack discipline and tend to over-engineer EVERYTHING instead of K.I.S.S.
The classic complete clusterfuck is CRC in the Boost library. In the over-zealous attempt to use templates and keep everything "generic" we have 1,109 lines of over-engineered C++ crap for a simple CRC32 function instead of a mere 25 lines of C code!?!?! People defend the template usage, but honestly, when was the last time you actually had to roll your OWN crc#() function?? If we were actually solving the problem, one would instead use a different hashing algorithm such as FNV-1a, SHA, or MD5 that:
As Mike Acton says, solve today's problem -- not tomorrow's maybe problem.
You'll love C++17 then:
template <auto v>
struct integral_constant {
    static constexpr auto value = v;
    // [...]
};
But boy will you love Boost.Hana.
N.B. Unless I've missed one, there are four C++ submissions for regex-dna, and none of them use glibc. Two of them use RE2 (one parallel, one not), one uses Boost.Regex and the other Boost.Xpressive. Rust's regex engine has an implementation similar to RE2's (but does better on this benchmark because of more aggressive literal optimizations). IIRC, Boost's regex implementation uses backtracking, but I'm not familiar with the kinds of optimizations it does.
At that point you would need to use an arbitrary precision math package. GMP is the most commonly used. Boost.Multiprecision is also nice, as you can choose between a slower but header-only backend implementation for convenience, or it can use GMP for maximum performance if you have that built and installed.
There is a time and a space for Boost.
However, most of the time it is typical C++ over-engineering that bogs down with slow compile times.
1109 lines of over-engineered C++ crap for a simple CRC32 function when 21^* lines will do the same job!?!?
I mean, how many times do you NEED to parameterize a polynomial function?
^* Note: These 21 lines will solve the same problem 99.99% of the time
const unsigned int CRC32_REVERSE = 0xEDB88320; // reverse = shift right
const unsigned int CRC32_VERIFY  = 0xCBF43926; // "123456789" -> 0xCBF43926

unsigned int CRC32_Table[256] = { 0 }; // i.e. 0x00000000, 0x77073096, ...

void CRC32_Init()
{
    for( unsigned short byte = 0; byte < 256; byte++ )
    {
        unsigned int crc = (unsigned int) byte;
        for( char bit = 0; bit < 8; bit++ )
            if( crc & 1 ) crc = (crc >> 1) ^ CRC32_REVERSE; // reverse/reflected form
            else          crc = (crc >> 1);
        CRC32_Table[ byte ] = crc;
    }
    if( CRC32_Table[8] != (CRC32_REVERSE >> 4) )
        printf("ERROR: CRC32 Table not initialized properly!\n");
}

unsigned int crc32_buffer( const unsigned char *pData, int nLength )
{
    unsigned int crc = -1; // Optimization: crc = CRC32_INIT;
    while( nLength-- > 0 )
        crc = CRC32_Table[ (crc ^ *pData++) & 0xFF ] ^ (crc >> 8);
    return ~crc; // Optimization: crc ^= CRC32_DONE
}
Knowing when to use, and when not to use 3rd party libraries is what makes an experienced developer great.
Your comparison operator is O(n). That makes insertion O(n log n). That's not good. You might want to have a look at Boost.MultiIndex.
In general, if I were going to skip Boost.MultiIndex and roll my own version of this type of thing using STL containers, I'd use a vector to store the values and a hash map from keys to vector indices (using form 2 of this). Then you get amortized O(1) insertions while still having the properties of uniqueness (by key) and iteration in insertion order. But really, I'd just use Boost.MultiIndex because the problem has already been solved.
Hi there. The error is actually pretty simple if you know how to look at it. The reason for that error is that Boost.Phoenix V2 is gone with Boost 1.56; Phoenix V3 is now the default. The problem here is that Phoenix V2 and Phoenix V3 use different result type deduction protocols.

Phoenix V2 uses the one described here: http://www.boost.org/doc/libs/1_47_0/libs/spirit/phoenix/doc/html/phoenix/composite.html#phoenix.composite.function

> "A nested metafunction result<A1, ... AN> that takes the types of the N arguments to the function and returns the result type of the function. (There is a special case for function objects that accept no arguments. Such nullary functors are only required to define a typedef result_type that reflects the return type of its operator().)"

While Phoenix V3 uses the much newer and better result_of protocol documented here: http://www.boost.org/doc/libs/1_56_0/libs/utility/utility.htm#result_of
So, the author of Wt just has to adapt his nested result types. One example is here: https://github.com/kdeforche/wt/blob/master/src/Wt/Render/CssParser.C#L165
You can force spirit to use Phoenix V3 for older than 1.56 by defining BOOST_SPIRIT_USE_PHOENIX_V3 before any spirit includes.
C++ overload resolution can be used to perform compile-time pattern matching on types. It can also perform arbitrary compile-time calculations as part of this pattern matching. For example, a different overload can be selected based on whether an integer type parameter is prime. Since C++11, the calculations can be performed via compile-time function evaluation, rather than instantiating types recursively. C++14 (which Clang implements) relaxes the restrictions on CTFE quite a bit too, so it no longer requires strict adherence to recursive algorithms.
The power of templates can be used to encode complex invariants like arbitrary combinations of units with arbitrary dimensions, type-safe generic tagged unions and even features like pattern matching. Rust is explicitly designed to deny this kind of power by forcing the usage of trait bounds. It has more features baked into the language itself to make up for this (sum types, pattern matching), but libraries have much less power.
Rust will likely gain support for neat features already in C++ like compile-time function evaluation, associated types, associated constants and integer type parameters. It will never have the same kind of power in the type system though, because it's not going to have this kind of type pattern matching via template instantiation and overload resolution.
> attribute
People don't want C++ attributes to alter program behavior. Metaclasses might do the job, but they're still at an early stage. Another choice is CRTP (e.g. boost::totally_ordered in <boost/operators.hpp>), but it's intrusive, as one needs to modify the class definition.
Isn't the correct solution to use a proper parser combinator, such as Boost.Spirit or PEGTL, without rewriting everything? That's more important than the host language, IMHO.
They've been using the terminology for some time. Changing it now is silly. That you, presumably a non-C++ programmer, are inconvenienced is irrelevant next to inconveniencing existing users. They're doing it for themselves, not for you. And it's already that way anyway.
No, that's the intended use. Also there's std::ignore, so if a function returns multiple values but you don't care about all of them, then you don't need any declarations for the ones you don't care about.
bool b;
std::tie(std::ignore, b) = mymap.insert(std::make_pair(23, 42));
if (b)
    ...
else
    ...
I wouldn't call using std::tie() to implement comparisons as you describe an 'abuse', but as far as I can tell it wasn't an intended use. The two intended uses I find in the related proposals are returning multiple values and decomposing a tuple returned from a function. The boost documentation on tie() indicates the same thing:
> A tuple unpacking operation like this is found for example in ML and Python. It is convenient when calling functions which return tuples.
If you're going to pull Boost into your project, Boost.Hana is also worth a look.
struct Cat {
    BOOST_HANA_DEFINE_STRUCT(Cat,
        (std::string, name),
        (int, age),
        (std::string, furColour)
    );
};

Cat cat = {"Stimpy", 5, "Red"};

const auto& a = [](const auto& pair) {
    std::cout << boost::hana::first(pair) << ": "
              << boost::hana::second(pair) << "\n";
};

boost::hana::for_each(boost::hana::accessors(cat), a);

const auto& b = [](const auto& x) { std::cout << x << "\n"; };

boost::hana::for_each(boost::hana::keys(cat), b);
boost::hana::for_each(boost::hana::members(cat), b);
Although I like C++ and boost, I once came across a template which replicated lambdas that suddenly stopped compiling.
I considered that it would probably take me half a day to debug the problem, or I could write a for loop in two minutes. So I wrote the for loop.
I do like the new lambdas in C++11 that are actually part of the language.
Probably the most widely used candidates are Boost.Test and Google Test.
Note that Boost.Test can be used header-only and is heavily templated, so it can cause your build times to increase.
Google Test is statically linked, so it will typically build faster than the equivalent tests in Boost.Test.
Depending on your usage, the additional build overhead with Boost.Test may not even be noticeable, so it is not necessarily an issue. Particularly if you're just starting out, I wouldn't worry too much about this.
If you're looking for something really simple, Catch is a good starting point: it's not as feature-rich as Boost or Google, but for most simple use cases you don't need the extra features.
http://www.boost.org/doc/libs/1_46_1/boost/optional/optional.hpp
That's the header for Boost.Optional.
Do I want boost assert, config, and all the other supporting libraries? Generally no. I think boost is a great testbed for new techniques and constructs, but as a dependency, including any of it in isolation is inconvenient at best and untenable at worst.
Also, unless you need performance improvements for specific use cases, I'd prefer wrapping the class within boost::synchronized_value : http://www.boost.org/doc/libs/1_62_0/doc/html/thread/sds.html
This makes it possible to guarantee unsynchronized access cannot happen by mistake.
Yes, for example consider how the Boost Graph Library handles this:
> How do I perform an early exit from an algorithm such as BFS?
> Create a visitor that throws an exception when you want to cut off the search, then put your call to breadth_first_search inside of an appropriate try/catch block. This strikes many programmers as a misuse of exceptions; however, much thought was put into the decision to have exceptions as the preferred way to exit early. See the Boost mailing list discussions for more details.
The simplest solution:
void print_grid(int* grid, int rows, int columns) {
    /* ... */
    grid[row * columns + column];
}
However, this is inflexible and error-prone. You don't want to use raw arrays unless you have to; it's better to use std::vector unless you have a compelling reason not to. So a better (but still simple) solution would be:
void print_grid(vector<vector<int> > grid) { /* ... */ }
This is a "jagged" 2D array, i.e. it allows each row to be of a different length which is not what you want for a grid.
Even better but more complicated solution:
void print_grid(boost::multi_array<int, 2> grid) {
    /* ... */
    grid[row][column];
}
Boost.MultiArray is a general multi-dimensional array library, but is a slightly more complicated solution than just using a vector of a vector.
Haskell is designed in such a way as to impose a rather strict set of constraints on you by default. In particular, "all operations are referentially transparent." So much so that the language gets away with lazy evaluation by default, because unexpected side-effects can't screw you up.
That's good news and bad news. The good news: unexpected side-effects can't screw you up. The bad news: your intuition for writing everything to be referentially transparent is probably really, really poor, just like everyone else's (that isn't already a Haskell, Clean, or Mercury programmer). :-) Also, Haskell puts a much greater emphasis on the role of static types as provers-of-correctness than the popular languages do, so I suppose, bad news: you'll have a lot more trouble getting your code to even compile at first than you do in most other languages. Good news: when it does compile, it's significantly less likely to be buggy than it would be in most languages.
So what's the payoff? The payoff is that you'll develop habits of using a good static type system and referential transparency to develop correct code. Here's the kicker:
You can write referentially transparent code that adheres to a good type system in any language.
Now, as a practical matter, you'll probably want to use a language that provides explicit support for this. If you're a Microsoft developer, I strongly recommend F#, although recent C# will do in a pinch. If you're anything else, I strongly recommend Scala if you can afford to run on the JVM, and either Haskell or OCaml if you can't. But honestly, you can do the same things in Java or even C++ if you absolutely must.
Like somebody already mentioned, the STL has a surprising amount of functional programming constructs. I don't mean that you shouldn't do something like this for shits and giggles, but if you're looking for "production-quality" standardized code, the STL is definitely the way to go. Boost also has a huge heap of all kinds of fun bits like Boost Lambda or the insanely cool Boost Metaprogramming Library which will definitely melt your brain.
The GNU multiprecision library (GMP) is one option. The Boost multiprecision library is another.
boost::container::flat_set
boost::container::flat_map
boost::container::static_vector
boost::container::small_vector
default_init_t overloads for all vector types in Boost.Container, allowing default-initialization of elements when constructing or resizing rather than always value-initializing.

Not in Boost, but also noteworthy is Howard Hinnant's short_alloc allocator, which can make allocation overhead and locality issues of any container completely moot.
First, before you write your own library, check out what is available in the C++ standard library and boost (www.boost.org). It looks like you are doing stuff with probability so check out http://www.boost.org/doc/libs/1_60_0/libs/math/doc/html/dist.html to make sure it is not already done.
Use templates. The Concepts that people are talking about is based on templates.
If you are using templates a lot, why not just use a .h (or .hpp) header file. Header only libraries are actually the easiest for other people to use. If you just have a single .cpp file, I personally would not bother to try to make a library and instead just tell the user to incorporate probability.cpp into your build. Pugixml (http://pugixml.org/docs/quickstart.html) does this.
For small libraries, I prefer header-only and if not possible statically linked. Especially, if you are doing a lot of math, the header-only may provide some performance improvement as the compiler can better inline stuff.
> I just did not want to use boost as an external dependency
I think the fact that people just say this without qualification reveals a lot of the things that are wrong with the C++ community at large.
Why would you not want to depend on a widely-supported, heavily peer reviewed library, which is mostly header-only, and which has been the basis/testing ground for so much of the standard library, and instead roll your own?
Do you think you can write code for your one-off project that's of higher quality, better tested/reviewed, et cetera than the code in Boost?
If you said that you were writing the library as a hobby project and just wanted to see how something like a reactor/proactor was implemented (which you clearly wouldn't get to do just consuming one) that'd be one thing, but your comment indicates that you actually built this software for actual use somewhere. Why then would you forego an already built, heavily reviewed and tested, and widely praised piece of software (Boost.Asio) in favor of creating your own?
If you want to use futures Boost.Asio can do that out of the box with no glue code.
If you like the reactor model better than the proactor model it's easy enough to write some code on top of Asio that makes it behave like a reactor.
One nitpick with the Boost.DLL example:
my_plugin::name() returns a std::string. Unless the plugin and the host are built with the same compiler and runtime, you really don't want to return a std::string: you run the risk of your DLL using a different implementation of std::string than the client code uses. Additionally, you're allocating the string memory in the DLL and freeing it in client code. In the case of Windows, this can blow up if the two sides use different CRTs.
I realize that they address this in the "Misuses" section, but there should really be a disclaimer with this example.
EDIT: I'm specifically referring to the example in http://www.boost.org/doc/libs/1_61_0/doc/html/boost_dll/tutorial.html
coroutines let them chain naturally: http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/overview/core/spawn.html and http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/overview/core/coroutine.html
(coming to standard C++, too)
For an overview of the boost non-standard containers:
http://www.boost.org/doc/libs/1_59_0/doc/html/container/non_standard_containers.html
An unsigned, 64-bit integer can store numbers between 0 and 18,446,744,073,709,551,615.
If you need even more space, you can use 128-bit integers, which can store numbers up to 340,282,366,920,938,463,463,374,607,431,768,211,455. You can even do that without any special hardware or a special compiler, though I'd assume there would be a significant performance loss.
For just one example: there's an idiom in C++ called base_from_member.
Base classes are constructed prior to derived classes, so if you want a base class to be given an object that is defined inside your derived class, you also inherit from base_from_member and bind the object there, and then the base class can access it from the base_from_member class.
You may not have had a need for that, but others have. Never confuse "I don't need this feature" with "nobody needs this feature", because there will always be a scenario you can't anticipate.
EDIT: for a second example, say you have a class C that wants to inherit properties from two disparate base classes, A and B. But A has no relation to B, and B has no relation to A. It would make no sense for one to inherit the other. But it does make sense for C to inherit both.
Yes, you could say class C inherits class A { class B b; }, but now all of your accesses to b have to be prefixed, and class C isn't directly polymorphic with functions that take class B, so you would have to expose the b object externally. You end up adding a lot of boilerplate for no reason. You would also lose dynamic casting to C inside functions that ask for B references/pointers, since C is no longer derived from B.
Professional game developer for the past 4+ years.
Yes, I/we use the STL. Reinventing the wheel is fine as an exercise, but there is no reason to do it in production code if you already have a good implementation, which most standard STL implementations are. Sometimes you need to write your own, but don't do it unless you're sure you have to. Be warned: writing robust and performant containers is a lot harder than you think, which is another good reason to just use the STL.
All the time. Templates are not difficult to understand, and they make your code a million times easier to read and work with. Learn to understand templates! You don't have to know all the intricacies of template metaprogramming for 99% of your tasks, but the basics are so simple and will help you a lot.
Yes! C++11 is the way forward. Learn it inside and out. It will make you more productive and open brand new ways of thinking if you're used to the old standard.
Don't get discouraged and don't listen to people who insist it's too difficult. Also get acquainted with Boost, which often goes hand in hand with the STL in modern C++ productions, although be aware that C++11 obsoletes part of Boost! (Those parts that have been absorbed into the standard library.)
Boost and STLport provide compatible implementations of hashed associative containers that the author could have used.
ETA: He could have even used the std::unordered_map provided by Microsoft.
Using boost::enable_if you can move the check into the function signature:
template <typename T>
void sort(std::vector<T>& vec,
          typename boost::enable_if<boost::is_base_of<Comparable, T>, void*>::type = NULL);
With a little macro it'll even look somewhat nice:
#define assert_extends(Cl_, Base_) \
    typename boost::enable_if<boost::is_base_of<Base_, Cl_>, void*>::type = NULL

template <typename T>
void sort(std::vector<T>& vec, assert_extends(T, Comparable));
The sort function is only available for types that extend Comparable, without any additional check in the function body.
The only drawback is the error message, e.g. MSVC 2010 gives
> error C2893: Failed to specialize function template 'void sort(std::vector<T> &,boost::enable_if<boost::is_base_of<Comparable,T>,void*>::type)'
>
> With the following template arguments:
>
> 'IsNotComparable'
That's cool feedback, thanks!
<..> vs. ".." is one of many needless C++ idiosyncrasies, but I see your point.

max is constexpr, while std::max is not until C++14.

Use std::unique_ptr whenever possible. You are right that I should merge make_tagged_array and make_tagged_ptr, but I think it is fair if make_tagged_array does more than make_unique for the array case. The restricted interface of make_unique for arrays does not make sense to me.

It comes from Boost.Filesystem. Non-member begin() and end() for directory_iterator were added in 1.51 in 2012 via this feature request. A duplicate ticket proposed directory_range() instead.

I don't know why they went for the former approach. I agree that treating an iterator as a range is confusing.
Yes, I asked the question a couple of months ago but I still don't really know for sure why a similar concept was not added to std::unordered_map.
But if you can use external libraries, you may try the tsl::hopscotch_map I did, or boost::unordered_map with a CompatibleKey and CompatibleHash.
I'd rather it didn't. I want to see thoughtful, well-designed, generic, and efficient async support go in first. The proposed std::future and std::promise changes, integrated well with coroutines, would get us there I think. Once that's done we can start from scratch on networking and I/O (the amount of platform-specific code that needs to be written for this isn't huge).
Boost ASIO, even though it vaguely supports futures and coroutines, is too callback-centric and has too much weirdness. For instance, you have to create a new socket and bind it to your AcceptHandler[0] before calling async_accept(), instead of the framework doing the sane thing of creating it for you later and passing it in. This basically forces you to heap allocate the socket, even when you're using coroutines, because you can't guarantee that the scope of the async_accept caller outlives the scope of the handler.
It's also a mission to extend ASIO in any meaningful way.
[0] See example here http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
If you didn't notice: Boost.Container now contains an implementation of Polymorphic Memory Resources (which is part of the C++17 standard library extensions draft)! (Or should it rather be "!!!" instead?)
IMHO those are the second best thing right after the C++ Modules TS and probably every game programmer's wet dream... Well, actually getting rid of the current abomination that is called "Allocator" should be everyone's wet dream, but see for yourselves:
EDIT: After reading more about them I'm actually kinda unhappy with them since they violate the RAII principle. Read more about it here: https://www.reddit.com/r/cpp/comments/3xa0m5/boost_version_1600/cy4es2k
> boost::variant is sometimes forced to heap-allocate
It seems scary when expressed like that...
boost::variant never heap-allocates behind the programmer's back. Only if you need to use recursive variants (i.e., variants referring to themselves) do you have to use a specific construct which may heap allocate; however, referring to itself has the same issue whether you use a variant or not, so it's not exactly surprising.
You need to use a library for that. Boost.Multiprecision is one example of a common one. This can use its own internal dependency-less implementation, or it can use GMP if you have that installed. Here's an example that multiplies ten primes, resulting in an integer with 93 digits:
#include <iostream>
#include <vector>
#include <numeric>
#include <functional>
#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::cpp_int;

int main() {
    std::vector<cpp_int> primes {
        1968335053, 1730834921, 1438477871, 1941229991, 1180536913,
        1757605079, 1869803783, 1377664501, 1602778811, 1830635993
    };
    cpp_int prod = std::accumulate(begin(primes), end(primes), cpp_int{1},
                                   std::multiplies<cpp_int>{});
    std::cout << prod << '\n';
}
The thing you have to be careful about when using a library like this is that literals are still plain integers. If you try to write something like
cpp_int foo = 2870943857394857934857029348573957;
...you will not get the result you want. (It will most likely be a compilation error.) To express literals that are bigger than native types, you'd need to use strings:
cpp_int foo {"2870943857394857934857029348573957"};
Something like pipable can be found in Boost (no surprise - Boost contains everything ;)): boost::range adaptors. Recently Eric Niebler wrote a standard proposal which also includes the above.
About infix, there's also a library available on GitHub. Although I'd probably kill someone who'd use either of these. ;)
One big use case is portability across different compilers. Consider this example from Boost's config.hpp:
// BOOST_FORCEINLINE ---------------------------------------------//
// Macro to use in place of 'inline' to force a function to be inline
#if !defined(BOOST_FORCEINLINE)
#  if defined(_MSC_VER)
#    define BOOST_FORCEINLINE __forceinline
#  elif defined(__GNUC__) && __GNUC__ > 3
     // Clang also defines __GNUC__ (as 4)
#    define BOOST_FORCEINLINE inline __attribute__((always_inline))
#  else
#    define BOOST_FORCEINLINE inline
#  endif
#endif
Standard inline
is just a hint to the compiler that you think a function should be inlined; however, the compiler is free to ignore you. On the other hand, a number of compilers include non-standard extensions that allow you to force the compiler to inline a function. If you want to write code which works with multiple compilers then you need to use a trick like BOOST_FORCEINLINE
.
A second usage is so library users can enable or disable certain behaviours through compile-time #defines. Consider this example from GLEW:
/*
 * GLEW_STATIC needs to be set when using the static version.
 * GLEW_BUILD is set when building the DLL version.
 */
#ifdef GLEW_STATIC
#  define GLEWAPI extern
#else
#  ifdef GLEW_BUILD
#    define GLEWAPI extern __declspec(dllexport)
#  else
#    define GLEWAPI extern __declspec(dllimport)
#  endif
#endif
If you are building the library as a dynamic library then some functions will need the non-standard dllexport/import attributes. However, if you are building a static library then these can be ignored. The user can specify which behaviour they want by #defining GLEW_STATIC.
> In essence, do what you learnt in kindergarten: reduce the possible causes of mistakes and catch them early in development.
That seems like a very good reason to use a fairly small subset of C++. The earliest time to catch errors is at compile time, and a stronger type system reduces the possible causes of mistakes. C's type system is incredibly weak. Even something as simple as C + boost::unit would be a big improvement, but C + templates + assorted boost libraries would significantly aid catching errors early, as well as make code reviews easier.
On the other hand, template metaprogramming in C++ tends to make large executables, so perhaps they can't afford it.
It sounds like you're describing boost::optional.
boost::optional also has familiar-looking syntax (like a possibly-NULL pointer) and doesn't construct an object in the "no object" scenario.
First, the speed of bit operations is quite likely not an issue at all in your program. You should always write the clearest, most obviously correct code using algorithms whose overall O() running time is good - and then profile your application to see which areas are consuming most of the CPU time, so you know where to optimize.
I'm not sure why your bosses are using boost::dynamic_bitset instead of std::bitset - it's very rare indeed that you need to change the size of a bitset, which is the only advantage the Boost version offers.
std::bitset should be faster than boost::dynamic_bitset in many cases, as it has one less level of indirection and doesn't have to carry the size of the bitset around at runtime.
Overall, if I were you, I'd not worry about this issue unless you are seeing performance issues, you have profiled them, and the bit operations are in the top ten consumers of CPU cycles. I'll bet you any amount this will never happen...
Libraries like Boost make great effort and compromises to maximize compatibility with compilers. Just look at the version spread for 1.61. This support can lead to ugly code. I'm not sure that's the golden standard, at least not to everyone's eyes.
> these chunks of code that were linked to were last touched 4+ years ago, which is just when C++11 support was getting to be mainstream among compilers.
Smart pointers have been around for almost 20 years: http://www.boost.org/users/history/version_1_10_3.html
> My point exactly is that they don't solve all problems. [...] Tackling one set of problems does not make a panacea. It just solves a set of problems.
I might have been mistaken about the exact definition of the term 'panacea', I concede that.
> They do have overhead, however small.
Nope. Replacing an owning raw pointer and new/delete with std::make_unique has zero overhead. std::shared_ptr does have significant overhead, but that's a semantically different construct altogether.
> It is easy to write safe Rust.
I've had a suspicion that you're just trying to shill for rust from the very start...
> Even something as simple as an index going beyond the end of an array.
You have no idea what you're talking about. std::array::at
I just looked up tag dispatching. It's this, right? If so, wow, what a ridiculous hack. Abusing function overloading with unused parameters to implement what in D is a trivial static if... and wow, I can imagine how terrible the error messages must be.
Ali didn't mention it, but his first example does not compile because the method is completely absent from the type. Here's what the error message looks like:
test.d(7,12): Error: no [] operator overload for type Take!(ByLine!(char, char))
One line, and it's obvious what the problem is.
> So basically, you are a rank beginner in C++,
Please watch your tone. Personal attacks achieve nothing and are against the reddiquette.
I downvoted, here is why
In 1940 they ran a war economy (neither socialist nor capitalist), because they were being invaded. It's a bit much to complain about the lack of progressive vision, in the middle of utter mayhem.
> "Planning" is not an alternative to "markets"
there are 4 economies
traditional -> unchanging hereditary occupation
command economy -> also called blueprint economy with this distribution
market economy -> also called scramble economy with this distribution
gift economy -> no money (currently an example would be free software's Darwinian design; there are more sophisticated concepts with a more guided & intentional design)
Most economies are a mix.
communism wants to eventually have a pure gift economy.
Soviet central planning bureaucracy is not equivalent to corporate monopoly capitalism; the crucial difference is that the Soviets had very nearly full employment (low precarity/uncertainty) while the capitalists have lots of unemployment (high precarity/uncertainty). You may equally dislike both for their hierarchy; however, their methods are different, and the consequences are as well.
> The dialectical materialism thing is complete bullshit... because crude education and lots of ignorance
That's not a reasonable argument. Please make a separate post where you deconstruct the misconceptions people have about dialectics. Try to dial down the aggressive sentiment; I could almost smell the smoke coming from your keyboard...
What do you think about using the parallax-view version of explaining dialectics - is that better?
I was thinking along the same lines as tending - operate on integral_constant-like objects. Take a look at boost::hana, and in particular how it handles integral_constant. It defines arithmetic operators that are constexpr and return other integral_constants. If you need the value at runtime, there are constexpr implicit conversion operators to the underlying types. If you REALLY want to make sure a computation only happens at compile time, figure it out inside an unevaluated context such as decltype. E.g.
using namespace boost::hana::literals;
auto runtimeValue = decltype(42_c + 23_c)::value;
Of course the expression inside the decltype can be an arbitrary constexpr function that works with integral_constants.
I am from C++. To observe the deduced type from the compiler, there are different options that I am aware of:
Rust seems simpler!
boost::operators is a good library to use for automatically generating these kinds of operator interdependencies. It's very useful for avoiding a lot of boilerplate code and subtle bugs.
property_tree is not meant to be your go-to JSON library: http://www.boost.org/doc/libs/1_55_0/doc/html/boost_propertytree/parsers.html#boost_propertytree.parsers.json_parser
AFAIK it only offers partial JSON support; in particular, arrays are not supported by the internal format used by property_tree.
Boost is a large collection of high-quality C++ libraries. Each library is carefully reviewed before being admitted to Boost. The HTTP library will be reviewed starting this Friday.
There are different ways this can be done; one is having these as separate programs that communicate with one another using named pipes, sockets, or something else similar.
Another similar option is that if the C++ program is a command line application and the output is what you're after, you can simply execute the C++ program and read the output, parse it, and do whatever you want with it.
The other option is to integrate the two as one program; you can find libraries that help you do this like this one http://www.boost.org/doc/libs/1_49_0/libs/python/doc/ - this route is a bit more complicated, but definitely possible.
No, that's not the purpose of the conversion. The conversion is to provide a fallback for when the function isn't callable. You can inject it into any function object through inheritance (it doesn't matter if it has state, since it's only used in an unevaluated context):
template<typename Fun>
struct funwrap2 : Fun
{
    funwrap2();
    typedef private_type const &(*pointer_to_function)(dont_care, dont_care);
    operator pointer_to_function() const;
};
Then you can tell if the function is callable by checking if the result type is a private_type:
template<typename Fun, typename A, typename B>
struct can_be_called
: not_<std::is_same<private_type,
      decltype(std::declval<funwrap2<Fun>>()(std::declval<A>(), std::declval<B>()))
  >>
{};
Eric Niebler explains this in detail here, although he uses only C++03, so his solution requires more tricks (such as overloading the comma operator).
EDIT: Here is a more generalized variadic template version:
template<typename T>
struct never_care
{
    typedef dont_care type;
};

template<typename Fun, typename... Ts>
struct funwrap : Fun
{
    funwrap();
    typedef private_type const &(*pointer_to_function)(typename never_care<Ts>::type...);
    operator pointer_to_function() const;
};

template<typename Fun, typename... Ts>
struct can_be_called
: not_<std::is_same<private_type,
      decltype(std::declval<funwrap<Fun, Ts...>>()(std::declval<Ts>()...))
  >>
{};
I imagine that the relational operator was defined for the purposes of being used as a key for a std::map or a std::set.
I'm not really forgiving that usage, I sort of agree that they shouldn't be comparable like that, but I'm guessing that was the motive.
EDIT:
> The goal is to have any semantically correct default ordering in order for optional<T> to be usable in ordered associative containers (wherever T is usable)
Boost chose to return the container (remove_erase_if). Maybe useful for chaining calls?
Either way looks like a welcome addition.
> The stdlib is very fond of things like bounds checking, too.
No, it isn't. Stop the FUD and read a book; the standard library is suitable in most cases, and even when it isn't, the solution is usually best expressed as a similar structure anyway (see for example Boost.Container).
I really like Boost in theory. The things that always really bother me in practice are:
Boost is so interconnected that whenever I want to use a single class, I end up #including half of Boost.
C++ compilers do not handle type-based programming well at all. Including a single boost header means I'm increasing compile time by an order of magnitude or two (this is further hampered by point 1).
As you alluded, the template system in C++ is atrocious. Compilers practically give you a core dump any time you have even the simplest template error, which makes debugging them really difficult. The one thing I've really loved in my adventures in D is that its template system is done right. It seems like the template system C++ wants to have.
These points always cause a conundrum for me. Say I want something as simple as a circular buffer. I could use Boost's one - it's stupidly well tested, efficient, and versatile. But I usually end up rolling my own because using Boost seems akin to calling in a carrier battle group when I need a small fishing boat.
Regarding his original question, no, vector.erase() won't delete the original pointer. It will only call the destructor on the type. Since T* doesn't have a destructor, you have to do it manually. If you're using unique_ptr then yes, the underlying pointer will be freed correctly since the destructor for unique_ptr deletes the underlying pointer.
It's usually a bad idea to store pointers in arrays directly these days - unique_ptr is the idiomatic way to represent a uniquely owned pointer. shared_ptr has more overhead - as the name implies ownership is shared & there's more bookkeeping required to accomplish that.
Boost has a ptr_vector! you can use if you can't use unique_ptr instead of doing a vector of shared_ptr.
You can remove much more easily than shown by Kranar by utilizing the STL (& it's simpler to validate the logic):
C++11
auto bulletIsDeletable = [](const unique_ptr<Bullet>& b) {
    return b->isActive(); // returns true if should be erased
};
container.erase(remove_if(begin(container), end(container), bulletIsDeletable),
                end(container));
C++03 w/ boost ptr_vector:
static bool bulletIsDeletable(const Bullet* b) {
    return b->isActive();
}

container.erase(remove_if(begin(container), end(container), bulletIsDeletable),
                end(container));
remove_if and begin/end are part of std::, but std:: is omitted for clarity. <algorithm> and <iterator> are the respective headers.
The JavaScript implementation has a highly tuned regex engine. It's written in C++ and could probably be used by a C++ solution, which would probably be even faster. Instead, the C++ solution uses a Boost library - arguably a more authentic C++ solution - and is 4 times slower.
The C solution, on the other hand, uses Tcl's regex implementation and has a note that says "Is this a C program or is this a Tcl program?", and takes about 1/3 of the time of the C++ solution to run. If Tcl were in the shootout, it would probably be only about the same speed as the C solution.
> I was hoping Qt4 would finally bring us namespaces.
And why...? They are perfectly fine as-is, prefixing everything with Q.
> maybe use of boost-libs for everyday things
While I partially agree, Qt libraries are big enough already; another 20MB of libraries to ship? No thanks.
> like signals/slots
It's impossible to implement Qt-like signals/slots with standard C++, hence they use MOC to generate them. Moreover, boost signals lack important features that Qt needs, like thread safety, which Qt signals provide.
> NIH-mentality that duplicates allmost all of the of the C++ standard lib
In some cases this is perfectly warranted though, e.g. just compare the awesome QString with the crappy std::string. (Hint: std::string doesn't really support UTF-8.)
Switching to the STL wouldn't have many benefits, and it would only bring quite a few problems to the table. Oh, and keep in mind that Qt is already compatible with the STL, so there is nothing stopping you from using STL's std::vector and such.
Boost noncopyable is your friend (Source if you just want to copy and paste it).
Boost Asio has a steeper learning curve than Poco if you are not familiar with the Boost ecosystem. Boost has better documentation, more examples, and a lot of secondary libraries built on top of it. Another advantage of Boost Asio is that a small part of it is going to be part of C++20. So, I would recommend you use the Boost library.
For Boost, the documentation is:
* http://www.boost.org/doc/libs/master/doc/html/boost_asio.html
* https://www.amazon.com/Boost-Asio-Network-Programming-John-Torjo/dp/1782163263
Good luck !!
Boost.Spirit.Karma is my go-to (though having a Boost dependency is a given for me, unlike many).
EDIT: Here are the results of this benchmark on my ageing Win10 system:
MSVC v19.11.25617 x64:
sprintf:        0.454s
iostreams:      1.294s
format:         1.786s
karma:          0.097s
karma (string): 0.112s
karma (rule):   0.129s
karma (direct): 0.090s
Clang 6.0.0 (trunk, r311150) x64:
sprintf:        0.458s
iostreams:      1.258s
format:         1.758s
karma:          0.090s
karma (string): 0.096s
karma (rule):   0.120s
karma (direct): 0.086s
That's really cool, but scrolling through it I cannot see how this is unique to Rust. You can probably implement the same in C++98 even.
There is Boost meta state machine for example. I don't know about potential performance hits, but allegedly it has been used to implement MQTT on a Cortex M0.
What I am mostly missing is the equivalent of D's pure, or at least C++'s constexpr. I saw one dimensional-unit library for Rust that used the equivalent of std::integral_constant, with all its shortcomings. I think this is in the works; after that I will give Rust another closer look.
It sure is.
Check it out.
http://www.boost.org/doc/libs/1_57_0/doc/html/boost_asio/reference/ssl__verify_mode.html
Now maybe you want to know what verify peer does? Go ahead. Click on that.
'Verify the peer.'
Definitely encourages reading the code instead of the "documentation".
Yes, you can use boost::thread - this has the added advantage of being cross-platform too (e.g. works on Windows with native Windows threads).
I'm not sure your comment is relevant to the other two in this string, but I think the docs have a good answer to your question:
http://www.boost.org/doc/libs/1_63_0/doc/html/array.html
They even suggest using std::array<>:
>The differences between boost::array and std::array are minimal. If you are using C++11, you should consider using std::array instead of boost::array.
It should be pretty clear that boost::array is basically as good as std::array, but I wouldn't equate them to a struct with a few member functions.
> CImg, is C
No, CImg is C++.
But I think what you're looking for is ImageMagick:
https://www.imagemagick.org/Magick++/tutorial/Magick++_tutorial.pdf
Ya, you're going to need to link to some stuff and set up includes, but that's C++. If you're using CMake, it's a lot easier these days.
For something header only, there's boost.gil, but it's more about abstracting pixel formats so you won't be rendering text on those images, for example, using Gil.
http://www.boost.org/doc/libs/1_63_0/libs/gil/doc/index.html
You should definitely take a look at the changes from the C++11 standard.
Network libraries for production use from the top of my head:
I'm not aware of any special data structures (as in collections/containers) being planned for C++17. You can look at the C++17 proposals on the isocpp.org website.
Boost might have something for you. See here: http://www.boost.org/doc/libs/
> you're not going to have RAII manage your server sockets for you either.
Why not? boost::asio provides RAII for sockets:
Heh, you say "C++" and think "C++ language and the standard library".
The standard library in C++ is minimalistic, mostly for reasons of performance, just as is the case with C. So it is nowhere near close to the standard libraries of other languages.
However, if by "C++" you mean "C++ ecosystem", then there's a plethora of split() functions for you. I personally prefer the boost split.
Now, if you look at that Boost thing, you see something interesting: it splits virtually anything using anything as separators. Standard libraries of other mainstream languages will give you split on strings. From that standpoint, I hope that C++ never adds a split function to std::string (or a standalone one that gives you a vector).
I think this page is a bit more helpful as it has examples.
Massive pedantry: isn't the argument to the macro strictly an expression rather than a statement?
Looks cool though. Presumably it builds up some clever expression template to get the diagnostic message.
> Where are the boost XML, unicode, GUI, graphics, database, etc... libraries?
If you can avoid this construct, it would be better to do so. If you must do it like this, it's best to use an existing library, like Boost.Any.
You can write computationally expensive parts of your code in C++ and interface it with Python using Boost.Python. It's still easier than writing the entire application in C++.
C++ programmers tend to be embarrassed about their C origins, and what would be the right way to do things in C is often poor style in C++. It has several features to help you avoid making simple mistakes, and adds a few other convenient features. But this leads to one big difference between C and C++ which is that C++ code tends to be more over-engineered than the idiomatic C equivalent would be. The C++ language allows you to specify many details in the language itself, and it seems to be hard to know where to stop. For instance, I don't know if this is a joke or not: http://www.boost.org/doc/libs/1_57_0/libs/geometry/doc/html/geometry/design.html
I'm a major detractor here in that I prefer to always have my preprocessor things defined as either 0
or 1
like so:
#ifdef _DEBUG
#  define MYAPP_DEBUG 1
#else
#  define MYAPP_DEBUG 0
#endif
For your example, my code would look like this:
bool BasicApp::initOGRE()
{
    if (MYAPP_DEBUG) {
        resources_cfg = "resources_d.cfg";
        plugins_cfg   = "plugins_d.cfg";
    } else {
        resources_cfg = "resources.cfg";
        plugins_cfg   = "plugins.cfg";
    }

    root = new Ogre::Root(plugins_cfg);
I do this for a number of reasons; chief among them is that the compiler still sees and type-checks the code in the 0 branch.
> bii_find_boost(COMPONENTS system coroutine context thread REQUIRED)
So it looks like it doesn't directly handle the dependencies of boost yet. It looks like a good start.
Perhaps in a newer version it can take advantage of autolink.hpp to automatically link in the libraries. A patch has been proposed to this header before to build the pkg-config files. Biicode could patch the files to spit out the cmake targets or components.
Being open source makes this more feasible with help from the community.
> * templates, template metaprogramming, CRTP,
not even a mention of SFINAE? I don't think you know what template (meta)programming is about.
My all-time favorite of crazy C++ template thingies: Boost Spirit, a parser generation library.
Seriously, have a look at it. They have a 250-line example file that can parse simple XML (everything without attributes or <start-and-close-tags />, AFAICT).