IIRC, it was originally written in Lisp, then some crazy macro transformed the source to C/C++. I don't know if it has been rewritten since then, but there are (apparently, didn't watch the video) details here: .net gc talk
This is a big deal. Microsoft Visual Studio is unmatched as far as development tools go. It's a dream come true for many Android developers.
More info and hands on video.
Plentyoffish is an amateur site.
The guy who wrote it put it together when he was teaching himself asp. He got lucky.
http://channel9.msdn.com/Shows/IIS+Show/IIS-Show-10-PlentyofFishcom-and-IIS-6--Plenty-of-performance
http://plentyoffish.wordpress.com/2006/06/14/how-i-started-an-empire/
What version of VS are you seeing these perf issues on? We did a TON of work on intellisense perf in VS2013 Update 4 specifically for making UE4 based apps run well in Intellisense. See video here: http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-29-Massive-Improvements-for-Browsing-in-Large-Codebases- If you are still seeing issues, please let me know. (I'm the VC++ dev manager)
In my experience C# stays consistently ahead of Java when it comes to language features and seems to implement them in a more consistent manner.
Java still has no concept of properties, which I think leads to far too much boilerplate for class definitions (a tendency found throughout Java and most Java frameworks).
Generics in Java are hobbled in such a way that you can write quite a lot of code around them and then realize...you cannot do what you want.
There are no lambdas or method references until Java 8.
And Java also tends towards verbosity while C# tends towards brevity. See things like the var keyword, automatic property implementations, etc. etc.
The team behind C# and .NET are very bright. Check out some videos with Anders Hejlsberg (who also worked on Turbo Pascal and Delphi): http://channel9.msdn.com/Events/Build/2014/9-010
I watched this video with the Windows kernel team way back, in which they basically all slate the registry for being one of the greatest fuckups of the NT architecture.
I dunno. Given a showdown between Chen and the kernel team, my bets would be on the kernel team (and I've been reading his blog since at least 2003).
Aaaand just got to my hotel. IIRC that is a PC reference. Also, the full video of Phil's presentation will be posted tomorrow.
Edit: And the video just went live here http://channel9.msdn.com/Events/GDC/GDC-2015/The-Future-of-Gaming-Across-the-Microsoft-Ecosystem
I am about to watch it as well since I have not seen this either.
Yeah they said it's closer to an editor, and Visual Studio 2015 will always be a more complete product, but it gets the job done cross platform. I'm following the live feed here if you're curious http://live.theverge.com/microsoft-build-2015-live-blog/
Edit: Apparently there's a live stream http://channel9.msdn.com/?wt.mc_id=build_hp
Here's a link to the full talk, in a higher quality, and with links to download it for off-line viewing: Channel 9 - Going Native 2012 Day 1 Keynote - Bjarne Stroustrup: C++11 Style.
If anyone is interested in learning more about DX12 there was a pretty good talk by Microsoft back in April explaining a lot of the changes and new features of DX12. Note that this talk is aimed at graphics programmers, not laymen, so I imagine a lot of it isn't going to make a lot of sense to most people. Still, there is a pretty neat demo around 54 minutes in where he demonstrates some of the CPU efficiency gains of DX12.
This guy goes pretty deep into it in lecture form (31 minutes). For the TL;DW, using an appropriate engine (such as mersenne twister) with an appropriate algorithm on top of it (such as std::uniform_int_distribution) will do the job well. He goes into a few better ways too if you're looking for cryptographically secure generation (which mersenne twister isn't).
Edit: clearing up some poor wording.
21-part free tutorial series. Looks quite in-depth.
EDIT: fixed url
Here's a relevant talk from Going Native 2013 from Chandler Carruth: The Care and Feeding of C++'s Dragons.
One relevant part of the talk is: the range-based for loop can actually give you a performance increase (around 40 minutes into the talk).
Reminds me of a comment Stroustrup made in a more recent talk on language design. Paraphrasing:
Developers want compact, terse syntax for well-known things, and loud, blaring syntax for new things
Boils down decades of frustration and endless man-hours of discussing "what's better".
But as you observe, it's a matter of taste.
[edit] It's this talk on Channel 9.
Slides are here. Actual quote is on Slide 14 (but really, at least watch the entire "Language Myths" section)
No, that's not their vision.
Edit: It isn't hard to look at the GDC 2015 announcements: http://channel9.msdn.com/Events/GDC/GDC-2015
In short:
Now, there haven't yet been any announcements of Xbox One exclusive titles coming to PC, but as I said, just wait.
It's Herb Sutter's pet project, not an official MS thing. He spent a while talking about why he thinks it's important in his keynote at GoingNative 2013.
This is generally what is called blogspam. It's just a pointer to a youtube reupload of a GoingNative 2013 talk.
Here is the real link: Bjarne Stroustrup - The Essence of C++: With Examples in C++84, C++98, C++11, and C++14, where you can download the video directly to your computer and watch at your leisure. There are also a handful of relevant comments below the video.
The rest of the GoingNative 2013 videos are also available.
I agree. The comments seem to have turned into a C++ vs Java debate. The reason C++ is used more for games and gives better performance in that context comes down to pointer dereferencing.
In a game you have to touch a lot of objects, and in Java this inevitably means dereferencing (because all objects are references). This adds a huge cost because the CPU's pre-fetcher can't predict what memory you are going to touch. In C++ you can pack your objects into contiguous blocks of memory with std::array and std::vector, so the pre-fetcher can get your objects into the cache before you need them.
In conclusion, although Java and C++ can be equally fast, C++ has a much better memory access model which can give a 50x performance boost in the best case.
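To make that concrete, here's a toy sketch of my own (not from any talk) contrasting a contiguous std::vector of values with a vector of pointers to heap-allocated objects, which is roughly what every object access looks like in Java:

    #include <memory>
    #include <vector>

    struct Particle { float x, y, z, mass; };

    // Contiguous storage: elements sit next to each other in memory, so the
    // prefetcher can pull upcoming ones into cache while we work on the current one.
    float sum_contiguous(const std::vector<Particle>& ps) {
        float total = 0.0f;
        for (const auto& p : ps) total += p.mass;
        return total;
    }

    // Pointer chasing: each element lives at an unrelated heap address, so the
    // next load can't even start until the pointer itself has been read.
    float sum_indirect(const std::vector<std::unique_ptr<Particle>>& ps) {
        float total = 0.0f;
        for (const auto& p : ps) total += p->mass;
        return total;
    }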
There is an excellent talk on this topic and others here.
First, study C# on Microsoft Virtual Academy.
Then use this course: http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners It is by Bob Tabor, one of my preferred teachers. His courses are really simple, easily understandable and complete. You'll love them (and him).
http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Clang-Defending-C-from-Murphy-s-Million-Monkeys is the URL you should have posted. You don't need a link shortener to post a link like this. Good video and discussion anyway.
No need to wait, really. These videos are from 2009, but they are by Erik Meijer and on Functional Programming Fundamentals using Haskell.
C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals, Chapter 1 of 13.
Links to lectures 2-13 are present if you scroll down a little.
Rob Pike is on the record saying the language was designed for people who don't know how to program. I think it was some kind of programming language summit. The uniformity and brain-dead abstractions are designed for a programming workforce at Google that needs to write and maintain server software without having to understand a whole lot. Here's the video: http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent. I think it is a pretty good talk and shows why the language was designed the way it was and where his sensibilities lie when it comes to language design. Expecting anything but mediocrity from Go is pure folly. The language was designed for the average programmer and I think that is partly why it is so successful. The success of Go says more about programmers than it does about the language or its designers.
He explains the rationale at around the 20m mark in the video.
IIRC, they did the experiment using linear search on the vector. It is not clear from the video, but Herb Sutter explained it later here: http://channel9.msdn.com/events/Build/2014/2-661
Edit: Relevant part starts around 45:50
Here are two Channel 9 (a Microsoft community site) shows/series that are must-views for VS users: Visual Studio Toolbox and Visual Studio Time Savers
http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners
http://channel9.msdn.com/Series/Building-Apps-for-Windows-Phone-8-1
And Google any problem you can't figure out. Stack Overflow can be your friend.
A must watch video! Erik is a great guy.
His lectures about Functional Programming!
PS: That shirt is his signature! :)
> seeding off of time(nullptr) isn't in of itself that bad
It's super bad. It has 1-second granularity, and makes identical seeding (across different processes, computers, etc.) especially likely. random_device is a simpler source of a non-deterministic 32-bit integer, and (with a bit more work) it can be used to fill the whole state of an mt19937.
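A minimal sketch of that "bit more work" (assuming the usual seed_seq approach; details can vary):

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <functional>
    #include <random>

    // Fill enough 32-bit words to cover mt19937's full state, then seed
    // the engine through a seed_seq instead of a single 32-bit value.
    std::mt19937 make_fully_seeded_engine() {
        std::array<std::uint32_t, std::mt19937::state_size> seed_data;
        std::random_device rd;
        std::generate(seed_data.begin(), seed_data.end(), std::ref(rd));
        std::seed_seq seq(seed_data.begin(), seed_data.end());
        return std::mt19937(seq);
    }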
> Everywhere I've seen
Not everywhere. In Rand Considered Harmful I extensively skewered this, plus its more subtle and virulent form, "use floating-point scaling".
> you're still at the mercy of the implementation
I'm full of mercy! I dispense mercy in 10-megaton airbursts.
> I don't recall seeing anywhere a guarantee of quality that any of the random number generators would or should past muster for being cryptographically sound.
The Standard does not guarantee this, but it suggests that random_device should be such an engine. VC guarantees this.
> Even if you utilize OS level entropy generators doesn't make you safe from attack or even a DOS style attack.
That's not what "cryptographically secure pseudorandom number generator" means. What that means is, given any amount of the output, it is infeasible to predict other output.
They used the phrase "systems language" when they introduced it, and it still appears prominently in the FAQ, but what they had in mind vis-a-vis what-exactly-constitutes-a-system was very different. (They were thinking more along the lines of a cloud infrastructure language.)
If you watch this excellent panel discussion from this year, you can hear Rob Pike express how they later regretted that their poor choice of terminology created such confusion:
(approx 6 minutes 45 seconds into the talk)
Steve, VC Dev Mgr here. I think I want to dispute this slightly. While it is of course true that our compiler is closed source, I think we are making a contribution to C++17 with features in our compiler. Some examples would be the new await / co-routines work and parallel STL which our team is helping to standardize. We just did a GoingNative episode on Lenexa, C++17, and some of the work our team is doing. http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-38-The-future-of-C17-Updates-from-Lenexa
It's an opinion, not a statement of fact. It's meaningless to ask whether an opinion is true or false.
The author is Stephan T. Lavavej, who works for Microsoft and helps maintain the Visual Studio C++ standard library, and is involved in the C++ standards committee. He also occasionally gives good presentations on C++.
Please DO NOT use the WP8 series suggested above. That's old and still uses Silverlight. Bob Tabor suggests starting with http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners which is more up to date and uses WinRT.
I learned SO MUCH from this series of videos:
http://channel9.msdn.com/series/C-Fundamentals-for-Absolute-Beginners
I watched your video and really love that you start simple.
Actually, these functionalities are public and developers can leverage them too. The reason why it might be delayed even by developers who want to add this function is the late documentation from Microsoft. Many are still not aware that they can leverage the social extensibility SDK in their apps. I personally searched a lot for documentation, and it was only by July that I could find official blog posts, videos and documentation regarding this feature. That is 3 months after the developer preview was announced. I think it is just a matter of time before this feature becomes available in amazing apps. Also, this feature is only limited to Silverlight WP8.1 apps and not universal apps.
For developers reading this, here is the link to all the documentation. Check the description of the video.
Go is not a systems programming language and really aimed at servers. Also the language has been dumbed down for more mediocre programmers. Don't believe me? See http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent @ 20:42 the author of Go makes the following statement: "the key point here is our programmers are Googlers, they're not researchers. They're typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language but we want to use something to build good software. The language that we give them has to be easy for them to understand and easy to adopt."
That's pretty much what the original talk says. The "Why you should avoid Linked Lists" title was made up by whoever uploaded this snippet to YouTube.
The article is wrong; just saw on the //build/ stream 'this will come in an update to Windows 8.1'.
Announcement @ 3h 09min onwards.
Confirmation that it will come in an update to Win 8.1 @ 3h 10min 25sec.
The lack of respect for the people and the amount of work that goes into ensuring backward compatibility is astonishing. If people only knew what was involved.
There was a good video on channel9 recently from one of the leads on the team that ensures much of that compatibility in Windows but I can't remember what it was called (and a search brought up nothing).
> Re: no persistence when you switched device, I'm actually kind of glad to hear that.
I've been chasing down this question over in /r/HoloLens, and with some success.
They finally dug up this video (forward it to 05:01) where they explicitly say that all the processing is happening inside of the device, and there are also no external markers or cameras. There was zero ambiguity in the response. No weasel words.
The holographic processor is responsible for all the heavy processing of the video data. It sends all the digested useful data, in a thin stream, to the mobile processor. The mobile processor is relatively unburdened to focus on real content.
I think I've turned from being skeptical to being cautiously excited.
UPDATE: They do not allow direct access to the sensors. There is a general-purpose camera right in the middle of the glasses for capturing pictures/video (which then can also be annotated to produce mixed-media output).
This lecture series by /u/stl is probably the best introduction to the STL.
From what I've heard, why FB uses D:
1. Compilation speed (an order of magnitude faster than C++). Actually, it's a big one for FB. At the Lang.NEXT panel Andrei said they have an entire team whose full-time job is to maintain the build system. And at the moment there is no amount of extra money (for servers, engineers) that would accelerate builds.
2. simpler, shorter code which can lead to better performance, because you can go further with optimizations without turning code into unmanageable mess. See Warp and related interview with Walter Bright (the author).
I don't think anyone inside FB can force others to use some technology. They can and do hire some of the best engineers in the world, and with all due respect to Andrei, he is just one of those engineers.
Would FB use D today if Andrei weren't working there? I don't think so. Why do they use it? As I see it, because as an insider, Andrei has a better chance of convincing others with technical arguments.
Indeed! The Microsoft developer website for all things Windows Phone, including design, can be found here. Select Design from the top menu.
Bob Tabor has a free course in Windows Phone development (@ Channel 9 - a MS community website):
http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners
There's also the Microsoft Virtual Academy if you need to study more. It's all free.
Earlier at the Build Conference. You can watch the recorded video here. ~~I can't remember which timeline he demonstrated it though.~~ Found it. Starts at 0:09:40.
EDIT: Rudy Huyn made the diagonal style lockscreen.
I honestly do not know. I've never used a 3rd party application on Windows Server 2008 or 2012 that involved a GUI. Microsoft has informed developers that apps for Server 2012 need to be able to run without a GUI, but I'm not sure how well that's been adopted (I don't work in the OS division, I don't know whether they've done market research on that, I don't have access to that market research if it does exist, and I probably wouldn't be allowed to share it even if I did have access)
Other comments have covered your specific issue, so I'll just give you some advice about rand. Don't use it. It suffers from some serious problems, and the new C++0x standard has better methods for generating randomness. This video covers its issues well, as well as how to use the new <random> library which replaces it.
Sequential consistency refers to the values of std::memory_order. The briefest explanation is about what kind of barriers the compiler is forced to emit. Basically, does the order of this read or write matter compared to other reads or writes?
You can gain optimizations by using other memory orders, but it's strongly recommended that you be absolutely sure this will noticeably improve performance before mucking about with them, because using looser memory orders makes atomics not-so-atomic and can introduce some really difficult to find and reproduce bugs.
Any more explanation from me would likely be full of errors and wrong, because this isn't my strong suit, and for 99.9% of purposes, the default is the best. The long version is in Herb Sutter's excellent Atomic Weapons talk. It very clearly breaks down atomics and the headaches that go into making a program with no data-races. I highly suggest it for anyone who is interested in parallelism and has 2 hours to spare.
http://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutter-atomic-Weapons-1-of-2 http://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutter-atomic-Weapons-2-of-2
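To make the difference concrete, here's a tiny sketch of my own (not from the talks): a flag published with the default sequentially consistent ordering, versus a plain counter where relaxed ordering is enough because no other data depends on it:

    #include <atomic>

    std::atomic<bool> ready{false};
    int payload = 0;

    void producer() {
        payload = 42;          // ordinary write
        ready.store(true);     // default (seq_cst) store publishes payload as well
    }

    void consumer() {
        while (!ready.load()) { }   // default (seq_cst) load
        // payload is guaranteed to be 42 here
    }

    // Relaxed ordering is only safe when the atomic itself is the whole story,
    // e.g. a statistics counter that no other data is synchronized through.
    std::atomic<long> hits{0};
    void count_hit() { hits.fetch_add(1, std::memory_order_relaxed); }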
That's a very lengthy explanation, but I think the answer is a lot simpler, actually.
Basically, when Microsoft initially designed the JIT compiler, they made sure it was capable of generating machine code fast, but didn't necessarily ensure that said machine code ran efficiently. When they had to make a 64-bit version of the JIT, they decided to make the generated machine code run more efficiently, with the trade-off that the JIT takes a little longer to generate that code.
This (and the reasoning behind it) is explained in this video @ 48:34.
Multiple years ago, when Microsoft developed the first self-improving AI, they used all the data collected by Cortana and other company services to feed its machine learning algorithms. For defining its goals, that AI analysed external tasks with a system of points, inspired by Raymond Chen's algorithm for weighing future Windows features (every task starts with -100 points; to be considered it must have a positive number of points). First in the queue with the highest number of points was a task which came from a Windows Phone user (+1 point), with an IQ higher than 110 (+5 points), who was one of the oldest (+100 points), hard-working (+50 points) Microsoft employees (+1000 points), executives (+5000000 points), CEOs (+100000000 points), shareholders (+10 points) and fans (-1 point). After further analysis this task was also awarded points for PR/marketing impact, promoting Microsoft's entertainment services and fulfilling previous Microsoft public promises. Intermediate goals required for executing it, like creating technology to resurrect dead people, augment humans and colonise space, also all had high numbers of points.
At the end of the Build 2014 conference, where and when Cortana was officially presented, Nadella jokingly asked her to remind him to become Master Chief after a staged question about whether he wanted to become Master Chief. "Now you are Chief Executive Officer, do you have any plans to become Master Chief Executive Officer?" "Sure, remind me to become Master Chief in five hundred years." "OK, I will remind you to become Master Chief on Monday, April 2nd 2514." "Great."
(Source, see 2:58:10)
So the AI decided to resurrect Satya Nadella, turn him into Master Chief, turn some nearby star systems into their Halo Universe equivalents and stage all events like in those games. That is why we celebrate April 2nd 2014 as the day when humanity fucked up.
Edit: I mixed up Satya Nadella with Joe Belfiore.
The experts agree with you: always default to `int` unless you have a good reason not to.
It comes up twice in Going Native 2013 - Interactive Panel: Ask Us Anything @ ~9.45 & ~41.05.
Also related to hardware/metal: Herb Sutter talks about the memory hierarchy and how simpler structures + the prefetcher will beat fancy ones with high constant factors: http://channel9.msdn.com/Events/Build/2014/2-661
The general wisdom of using `std::rand()` with `%` is starting to go out of favor, particularly considering how nice the random library in C++11 is. However, it’s definitely true that for some uses, it Just Doesn’t Matter™. Number guessing games, for example, don’t need high quality randomness.
But you are trying to use this for learning. Here’s a really good talk on C++11’s new random number facilities’ superiority over the old stuff.
So what might your code look like with the new random? Keeping in mind you should code assuming the reader is a little more skilled than you (since the reader always has Google), this slightly more verbose stuff, which maybe you’re tucking away somewhere in a utility function so you can just stick my_ns::rand(...) where you need it. Note that even the Mersenne twister engine is being seeded with even higher-quality randomness. This code comes pretty close to the ideal, no-compromises randomness you could actually want from a machine, short of actual true randomness (which you can also get, but which is slower).
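Something along these lines (a rough sketch of the idea; adapt as needed):

    #include <random>

    namespace my_ns {
        // One engine per thread, seeded once from std::random_device.
        inline int rand(int low, int high) {
            thread_local std::mt19937 engine{std::random_device{}()};
            std::uniform_int_distribution<int> dist{low, high};
            return dist(engine);
        }
    }

    // usage: int guess = my_ns::rand(1, 100);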
If it is still too verbose for you, either tuck it away in a function or evaluate whether you actually need anything that good (Like I said, you probably don’t for a learning-level number guessing game.). Personally, though, once you understand how C++11 random number generation works, the code makes perfect sense, is pretty boilerplate (though it might change depending on your needs), and just brings to light stuff your machine might be doing anyway (sometimes the name of the game in C++).
A very dense but relatively complete overview of the things that are new in C++11 is the Wikipedia article about C++11.
Then there is this video by Bjarne Stroustrup that every C++ programmer should have watched: C++11 Style
Concerning pointers: Try to avoid them completely. If this is impossible (sometimes it is), try hard to just use `std::unique_ptr`. If this is still not possible, go to great lengths to ensure that a `std::shared_ptr` is enough. Exception: Sometimes you might want to implement your own smart pointer (for instance an `observer_ptr` that states with its type "I am not owning this, I'm just looking at it"); in that case don't hesitate to use raw pointers, just make sure that your class(-template) really does nothing besides being some kind of pointer.
Concerning new: Don't use it! Instead use `std::make_shared`, and once you can use C++14, `std::make_unique` (until then: copy the template into your project yourself, it's only four lines; see the sketch below).
Concerning delete: If you need to write delete, you almost certainly did something wrong somewhere. Always use the stack or appropriate smart pointers.
Never use raw arrays: They just aren't worth the trouble. Almost all of the time just use `std::vector<T>` or `std::array<T, Size>`, and rarely other containers like `std::list` or `std::deque`.
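Speaking of that four-line stop-gap, here's roughly what the pre-C++14 make_unique template looks like (non-array form only; treat this as a sketch):

    #include <memory>
    #include <utility>

    // Minimal make_unique until your compiler ships the C++14 one.
    template<typename T, typename... Args>
    std::unique_ptr<T> make_unique(Args&&... args) {
        return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
    }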
Herb Sutter in a recent video lecture ("One C++", 16 minutes in) pointed out that the documents that specify the core language for C++, C#, and Java are practically the exact same size. The difference is that the latter two have much larger libraries defined in the standard.
Now, of course, maybe that just means the C++ document is much more concisely written. I can't really believe that core Java is bigger than core C++, but it's an interesting little datum nonetheless.
Yeah, but meanwhile, citibank got hacked by just changing the URL.
So, yeah, doesn't really matter what kind of threat these LulzSec guys are, because some people are just too stupid to secure their own shit from regular people.
In regards to creating applications, there are a few good starting points:
Generally, searching for terms like "C# Windows Phone 8.1 Tutorials" and similar will guide you towards a wealth of knowledge.
Additionally, feel free to reach out to me at any time with anything more specific and I'll do my best to help you out!
Also, I think this more clearly provides evidence for STL's claim that rand() is terrible (http://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful).
C++ has a much better way of letting you express exactly what your intention is & it's much harder to misuse the PRNG (you don't need to be a domain expert).
Are you thinking of this - Going Native 2013 - Interactive Panel: Ask Us Anything?
Because there they recommend defaulting to `int` unless you've a good reason not to.
Welcome to /r/rust! You have set us a challenge, but there is no need to write us off yet ;) There are already solutions to the zip performance issues, but whilst they are safe, they are not particularly elegant. We already know Rust can be fast, but as the evolution of Rust has shown arriving at safe, statically checked solutions to these problems takes time.
Whilst you will probably never beat us when it comes to crafting statically checked, typesafe APIs with minimal runtime overhead, we will likewise probably never match the powerful, clay-like feel of D's static metaprogramming that you mentioned in your Channel 9 interview. We have the greatest respect for your work and hope to keep up the friendly competition!
Instead of listening to random people on the Internet (myself included!), everyone needs to just watch this presentation from TechEd. Someone linked it here in another thread about snapshots, and it should be absolutely required watching if you have any responsibilities regarding backup, virtualization or maintenance of Active Directory.
I lead the team that creates Channel 9 and Coding4Fun at Microsoft.
For a little while we've been talking as a team about creating an add-in for Visual Studio that brings achievements to the product. We love that you all love the idea too, so we're going to go make it happen.
We'll follow up more with specifics and look forward to getting this up and running soon with your ideas.
> I should point out we need new hardware for this and I don't have that hardware working today. In fact we're going to try a demo of that at a session on Thursday. So you're seeing a simulation.
I'm already familiar with C#, but I've been going through these lessons on MSDN to familiarize myself with Windows Phone 8.1.
http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners
I've never developed on a mobile platform before, so this course has been a great way to get started.
You should totally check Bob Tabor's learning videos, that guy is simply awesome. I'm currently learning C# from scratch using his videos and I'm finally getting the feeling that I got it!
*edit: Here's the link to the series.
Sorry to be really pedantic, but he's not using `forward` correctly.
(I didn't know much about type erasure, so I am glad I watched it. And my comment doesn't affect the main point of his talk.)
Anyway, the slides around 18m09s should have `T&&` instead of `T`. The code there isn't buggy, it just fails to make full use of move semantics. If you want to perfectly forward things, you must follow these rules to the letter. If you want more background, watch the excellent Scott Meyers talk.
    template<typename T>
    anything::anything(T&& t) :   // don't forget the && here
        handle(new handle<typename std::remove_reference<T>::type>(
            std::forward<T>(t)))
    {}
If you want to perfectly-forward `t`, then:
- `t` in the parameters must be `T&&`. It can't be `const T&&` or anything else, it must be exactly `T&&`.
- You must call `std::forward<T>(t)` or `std::forward<T&&>(t)`, not `std::forward<T&>`. Actually, I think you could sneak a `const` in here if you wanted.
- `T` must be a deduced parameter.
- `T` must be a (deduced) parameter of this very function/constructor. It can't be a parameter from a containing class or anything like that. The `template<typename T>` must immediately precede the name of this function/constructor.
If you break any of these rules (he broke the first one) then you don't get the new C++11 behaviour.
Bingo! (Source: I'm a network PM at Microsoft). The number of apps that truly benefit from fast path loopback is pretty small. The speedup is very impressive as a percent, but we're still talking a small number of milliseconds. On the other hand, we always have to be mindful of introducing bugs in the network code.
For totally amazing results, try the fast path loopback and combine it with the RIO socket API. With RIO reducing the time it takes to get data into the kernel, and fastpath reducing the packet handling time, you end up with awesomely fast sockets.
I would lead first and foremost with move semantics. If they are writing a game engine then performance should be a priority and you simply cannot be as fast writing C++98 without move semantics.
Second would be the ranged for loop, which should be a Godsend to people writing 98 (remember to use auto&& when appropriate to avoid copying).
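For example (a trivial sketch of my own, not from any talk):

    #include <string>
    #include <vector>

    void shout(std::vector<std::string>& names) {
        // auto&& binds to each element by reference, so nothing is copied
        // (writing plain 'auto name' here would copy every string).
        for (auto&& name : names) {
            name += '!';
        }
    }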
I wouldn't lead on auto, it isn't the most important feature and most old timey C++98ians are still very mistrustful of it and so you will lose them right away.
I would also cut down on the sheer number of new features and just try to make an impression with the most important ones. Variadic templates are useful for what they can be used for in the standard library but most people are hardly going to need to use them.
The smart way to get people to upgrade and use your technology isn't to "win the argument" but to show them all the ways it is powerful and will make their lives easier and allow them to come to the right decision on their own.
Edit: If you haven't already I would watch Herb Sutter's Modern C++: What You Need to Know. I particularly liked the bit where he had side by side comparisons of c# / C++98 / C++11.
> Am I alone in this?
No. Herb Sutter talked about this a bit last year at Going Native in his talk Keynote: Herb Sutter - One C++. Somebody queried about it as well during the interactive panel, but it was brought forth in an awkward way by Charles and got dismissed in the end by Herb as "more news tomorrow" which ended up just being a hand wave about NuGet, which is a non-solution to the general problem.
You're getting the expected replies in this thread: it's a hard problem, and specific languages don't need their own package managers. I'm not saying that these people are wrong, but sometimes worse is better and ecosystems like pip, npm, gem etc. are valued by their users for a reason. Oh, and boost exists! Yes, you should look into boost when you're looking for a library, but plenty of us avoid boost for a reason.
In the end, it's just another one of those incidental complexities that we have to deal with as programmers. It's a lot easier these days, but still everything is strewn about multiple sites, on GitHub and BitBucket, CodePlex and still sometimes SourceForge, hidden on an implementers website, etc. As nice as it is to daydream about some ideal world where this isn't the case, I don't see it happening at any point in the near or far future.
Only physically, by using more than 32 address pins and one more level of paging; it's called PAE and can be used on Windows Server OSes. Client releases didn't make the cut (licensing / supportability reasons).
You can never have more than 4GB of virtual address space (for EACH process), though you can control how much of it is used by the system versus user VA space (the /3GB and /USERVA switches).
Memory management is not really easy, but a very good talk about Windows MM is here:
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL405 - Part 1 http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/WCL406 - Part 2
Chrome or Chromium? I watched it using Chromium and its native HTML5 video control, but due to the ongoing patent threats from the MPEG group I have to install chromium-ffmpeg from a third-party source, as my distro doesn't distribute that.
Try requesting html5 explicitly, might be a detection error due to web monkey failure.
I would have thought the Chrome release had webm & h264 ffmpeg support already though?
They actually talked in one of the Q&A sessions about how one of the next big needs for C++ is resources just like the ones you are asking for. I believe all the big names, Stroustrup included, are updating their books to account for C++11. The release dates range from this year to 2015, I believe.
In the meantime, I'd recommend checking out the other videos from that conference, Stroustrup's C++11 FAQ, and just about any article written by Herb Sutter in the last few years.
Digging in and learning how to use the new features is really the best way to go outside of those resources. If you really want to sharpen your axe, I'd recommend coding up some sample programs and submitting them to /r/cpp and ask for a code review. I'm sure you'll get some helpful feedback.
Other than that, Effective C++ by Scott Meyers isn't updated for C++11 yet, but I'd recommend reading through it if you haven't. It has a lot of short, practical things you can do to make your code better. Speaking of Scott Meyers, he had a joint Q&A with Herb Sutter and Andrei Alexandrescu this past fall that is on topic and worth checking out.
EDIT: Added links to a few things I referenced.
For the record, I realized my egregious grammar error with regard to "it's" about a millisecond after I hit submit.
Also, here is some more information about this for the curious.
Mary Jo Foley is entirely incorrect in her strange assumption. WinC++ is the name of the team building VC++. It's NOT a codename for some new C++ technology that somehow usurps .NET or any other development technology from Microsoft... C++ is a portable language. It doesn't make any sense at all for Microsoft to develop a variant of the language that is Windows-only... Think about it. Also, why not get the accurate information directly from the source:
http://channel9.msdn.com/Shows/Going+Deep/Conversation-with-Herb-Sutter-Perspectives-on-Modern-C0x11
Herb Sutter makes it crystal clear in this Channel 9 interview (MJF even linked to it - though it's clear that she didn't watch it...) that Microsoft is committed to implementing C++0x/11 and embracing the technology (and C++ developers - it's about time...), but not at the expense of C# (or VB or F# or JS or...). C++ is a development tool. Use it where it makes sense.
It's funny how misinformation just breeds more misinformation...
> The decision should be in the hands of the programmer.
Not if the programmer is misguided or lacking in experience. Go was created for teams to write software easily and quickly and was designed to be particularly helpful for beginners.
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike
http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent
> It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike
Yes, clrdbg still has a lot of work to be done before it is a usable debugger, but this is how we expect cross-platform CoreCLR debugging to work. If you are interested, you can watch a demo of clrdbg on Linux in action as part of the //Build/ conference day one keynote. The clrdbg part is at the 26:00 mark. Link: http://channel9.msdn.com/Events/Build/2015/KEY01
Last year at C++Now a similar library was written, called boostache: https://github.com/cierelabs/boostache
There was also a lightning talk about it at CppCon: http://channel9.msdn.com/Events/CPP/C-PP-Con-2014/0003-Lightning-Talks-Boostache
If you're coming in completely fresh, there's this series that's linked in the sidebar: http://channel9.msdn.com/Series/C-Fundamentals-for-Absolute-Beginners
Alternatively, if you aren't married to starting with C# and are okay with learning an academic language first you can try How to Design Programs or Concrete Abstractions.
You're going to have to be comfortable with doing things you don't necessarily understand. There are going to be plenty of times you're implementing something alongside a tutorial where the writer has made a design consideration that's just above your current level of thinking about development.
Once you're familiar with the syntax and some of the more common aspects of the framework you can start moving into more intermediate topics on how to make decisions about design.
They usually record them and then make them available for future viewing.
Here are videos from 2014: http://channel9.msdn.com/Events/dotnetConf/2014
Here are videos from 2013: https://www.youtube.com/playlist?list=PL5i79H1f8hbezk9uLlorTmI7TypY56WF0
It looks like you're exposing your Front End to the public network, which is not supported. You will need to implement a Lync Edge server and a reverse proxy for proper external access.
First, start reading about components required for external user access in Lync Server 2013
Things you will need for Lync External Access:
-Lync Edge Server, which provides sign-in, web conferencing content, and audio & video services. The Lync Edge server requires two NICs and they can't be on the same subnet. There are config options to either have 1 external IP or 3 external IPs. Planning for External Access
-Reverse Proxy which provides Lync Web services. Windows-based supported solutions for this include TMG, IIS ARR, Web Application Proxy. Non-Windows supported solutions include Kemp, Citrix, F5, A10. Non-Windows non-supported solutions include PFSense and Apache.
I recommend going through some training videos on Microsoft's Channel 9 site - Lync Jump Start Series - http://channel9.msdn.com/Series/Core-Solutions-of-Microsoft-Lync-Server-2013
I highly recommend the Sysinternals tools. Running multiple AVs is nice, but the assumption is you're not dealing with a zero-day infection. We had a targeted attack at my company that was a zero-day infection. I was removing infections manually until I found one tool that actually did detect it, which I could then share with the team. Malwarebytes detected the infection but didn't remove it, oddly enough.
https://technet.microsoft.com/en-us/sysinternals/bb896653
https://technet.microsoft.com/en-us/sysinternals/bb963902
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B368#fbid=
People who make programming languages tend to have some idea behind their language. When Bjarne presents C++, for example, he makes it very clear that the language isn't just a hotchpotch of features; the parts are built to work together and if you want to master the language, you need to learn them together.
thenewboston not only fails to communicate (and, I suspect, grasp) this interplay of features, he doesn't even bother to explain the purpose of many features by themselves. His videos amount to a poorly-organised, frequently incomplete, and occasionally wrong dictionary -- a style wholly unsuited for learning anything useful from.
The problem is that you need to get experience before that. I'll give you some talks that are nice to watch while still teaching a lot of things (they are not introductions to the language, but in your case those don't seem needed), but at the end of the day you have to write code. When you've done that for a while you will reach the point where you can write productive programs, but there really are quite a few steps to take before that.
Imagine it like learning to read and write: In the beginning you were horrible at it (if you apply the standards that you have today) and you mainly used it to read pointless texts for school and write down stuff that didn't serve any purpose, other than forcing you to write in order to get better at it. And while your handwriting may or may not still suck today (mine sucks), it is certainly several orders of magnitude better.
The situation for programming is similar.
Now for the talks:
- A talk about the old `srand()`/`rand()` functions. A very good talk too and definitely entertaining.
> the problem this solves cannot be trivially solved with a templated function [...] nor by inheritance based polymorphism.
watch this: http://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil
Using rand() in C++ is terrible, and this guy can tell you why: http://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful
The gist of the presentation is that the std library offers much better functionality than rand() which will do exactly what you want whereas rand() cannot.
> I believe that 'someone' was Sean Parent: http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoning
>
> It might not be that instance of this presentation though. He brings up the code, but I've yet to see where he talks about google rejecting his change that simplified that massive thing into a few lines of readable code. He gave the same talk elsewhere and did talk about that occurring.
It was an invited talk at A9, Programming Conversations Lecture 5 part 1. The comment about the Google manager's response to Sean's code review was: "nobody knows what `std::rotate` does", and it is somewhere between 28:30 and 30:30.
I believe that 'someone' was Sean Parent: http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoning
It might not be that instance of this presentation though. He brings up the code, but I've yet to see where he talks about google rejecting his change that simplified that massive thing into a few lines of readable code. He gave the same talk elsewhere and did talk about that occurring.
I also found this video on Google's justifications wanting. I guess it works for them, but I must seriously question anyone that would use Google's style guide as any sort of general style guide for coding C++. Honestly I'm not sure why Google would even publish it to the public. I guess it works to keep people who don't want to be mired in legacy code so bad that it requires the kind of restrictions that style guide has from applying to work there :P
http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Standard-Template-Library-STL-
If you've got a reasonable basic understanding of C++, then Stephan T. Lavavej's channel 9 series on STL is an incredibly useful set of videos.
https://www.youtube.com/playlist?list=PL09Ke5-ligfmlkBZRHA3HbEQSU2LanWti
I'd also really recommend the videos from GoingNative 2013, there's some great content there and the recording quality is really high. Scott Meyers' talk "An Effective C++11/14 Sampler" and Sean Parent's "C++ Seasoning" are good intermediate-level talks about writing modern C++, but there's loads of other awesome talks there.
https://www.youtube.com/user/CppCon/videos
I've also just noticed that a load of videos from cppcon a few weeks ago have just been uploaded to YouTube. I haven't had a chance to look through them yet, but I would bet on Herb Sutter's talk "Back to the Basics! Essentials of Modern C++ Style" being good!
Yeah, but I mean... like... what makes headers slow in C++ is what makes templates slow? Does that make sense? The way headers work in C++ is what doesn't scale, which makes compilation slow, and by extension templates. The way templates work doesn't inherently have to be slow, so that's not a reason to keep them out of Go.
EDIT: FWIW, I believe in this panel even Rob Pike says somewhere near the end that it's not the templates slowing down C++ compile times.
Woah, nice to see Erik Meijer on Channel 9 again. I didn't think this would happen anymore given that Erik left Microsoft.
I also thought Charles left the C9 team, so it's good to see these sorts of videos coming from him again. His videos were a signpost of good content on C9. 13 hours ago at the time of this posting a new video in the Going Deep series was posted: Bart De Smet: Rx and Cortana.
For those who don't know, Channel 9 has some really sweet content that its front page sort of undersells. It makes it look like all the videos are really cheesy weeklies or shitty, dry, enterprise presentations. As I already mentioned, Going Deep has some awesome videos, which you can find by sorting them by top rated.
I spent 20 minutes listening to Sawzall >___< .
It seems like Go's niche is where Node.js is at. Interesting... Edit:
As for Node.js, I didn't like it at all; the language wasn't built with concurrency in mind, so it lacks constructs to make async coding easy. The new changes to JS are probably going to make Node.js nicer to code in. I guess Go would be much better for this.
http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent
I wonder how cache friendly this decoupling is. While I was reading through it I kept thinking of Herb's talk and the importance of data locality.
Obligatory talk about why you should never use rand().
Watch this first. After you have done that, reconsider parts of your algorithm:
You want to select a random but unoccupied field. So just create a vector of the free fields and erase the used ones from it:
    #include <cstddef>
    #include <numeric>
    #include <random>
    #include <vector>

    // create and seed a mersenne-twister
    std::mt19937_64 gen{std::random_device{}()};

    std::vector<std::size_t> free_fields(9);               // create vector of nine elements
    std::iota(free_fields.begin(), free_fields.end(), 0);  // fill it with 0,1,…,8

    for (unsigned i = 0; i < 9; ++i) {
        std::uniform_int_distribution<std::size_t> dist{0, free_fields.size() - 1};
        auto field_it = free_fields.begin() + dist(gen);
        auto field = *field_it;
        free_fields.erase(field_it);

        use(field);
    }
I know how that is. My son wrote an email to Mojang and they never responded. If you want to send it to me I'll respond in writing with a foam Channel 9 guy and read it on my show.
> b) C version uses a static array allocated on the stack, Haskell version uses a linked list
This is going to be a MAJOR drag on your system. Herb Sutter just presented on this at BUILD (skip to 23:30 to see this part). In his C++ talk he showed the effect of a contiguous array vs a linked list when it interacts with the prefetcher found in most modern architectures. The results for linked lists were not pretty at all, while the contiguous arrays leveled off.
In this Channel 9 video interview it is answered by Stefan and Jeff, at 18:45. Basically there is no real reason, it just sounds nice. Still worth watching the video though.
Herb Sutter said in a talk where they announced C++11 for VS2013 that they would be supporting C99, or at least an important subset of it. I remember him talking about _Bool (I think that's what the standard named it anyway; I don't do C).
> "The best thing about R is that it was developed by statisticians. "The worst thing about R is that it was developed by statisticians." Bo Cowgill, Google
And similar to, e.g., Matlab, they'll just strap more things onto R to make it somehow work.
http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2012/Why-and-How-People-Use-R
edit:
I've had this discussion so many times with customers that it's easier to just provide the applicable information from the Exchange Team.
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/EXL306
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/EXL308
In short, with the Exchange Team putting more and more emphasis on multi-role servers, the benefits of virtualization are disappearing. 2010 is optimized for big cheap storage on physical servers. Also, there are some very specific guidelines for virtualizing Exchange that also remove some benefits of virtualization. Namely, you NEVER EVER want it in a situation where there can be contention for resources with other VMs. So proc/memory oversubscription can be a killer. No Dynamic Memory and no more than a virtual proc : logical proc ratio of 2:1.
Having said all that, while virtualization does introduce complexity and therefore risk, situations like yours are usually when I recommend it. Assuming a supported and licensed hypervisor, of course.
EDIT: Grammar
There is a great video from the BUILD conference that launched Windows 8 that explains that there is a time and a place for the metro design paradigm. Some Apps like Photoshop or Visual Studio cannot be replicated using metro styling. Nor should they. The need for lots of buttons, menus, overlapping windows is a need for content creation that will always be there.
So far there is little word about allowing developers to make 3rd-party desktop apps for Windows RT, so for those who create a lot of content, Windows RT as it is described now will never be a good choice for a primary device. For a lot of people, though, light Word/Excel work or resizing a picture will be 99% of their content creation, in which case a Windows RT device could easily be their primary.