I guess it is possible to have only contributed assembly code to the kernel before, or to have contributed device trees on ARM, or to have improved build scripts, or to have submitted some sample Berkeley Packet Filters, where someone would never have to touch C. According to Ohloh, 5% of the kernel is written in something other than C.
But yeah, if you're doing stuff like that, then it really is a safe bet that you already know C. There are no guarantees, though, and it would be remarkable if you didn't. However, you do need to remember that requirements are typically written up by human resources, who tend not to have a clue about what it takes to actually do a particular job, but just want to see a bunch of bullet points being met.
At least they didn't ask for 25 years of experience with the linux kernel.
Yes.
The whole point of UNIX-like operating systems is to have everything as a separate, independent piece of software. This allows modularity and better security (it's easier to maintain multiple smaller packages of quality that interact with each other than one big sack of shit like Windows or SystemD).
The controversy comes from the fact that SystemD does the exact opposite: its developer (Lennart Poettering) wants it to become as important as Linux and GNU itself.
To give you an idea of what I mean by "big sack of shit", let's compare the number of lines of code of SystemD and other init systems that used to be popular before SystemD came along.
There are other init systems (init is the first process in all Linux systems; if it crashes, the kernel triggers a panic) that are even lighter, and thus less prone to crashing, while leaving people freedom of choice.
Also, in my opinion PulseAudio (another of Lennart Poettering's "inventions") is another piece of shit. Like, wtf, over 100 ms of audio latency by default? Why can't it detect 24-bit sound cards and automatically switch to them? Why is there no GUI (even a hidden one) to change that, while Windows and ALSA have one?
A few possible reasons:
1) it is pretty good but is still in the phase of getting known; the investment it is getting (in terms of number of commits and contributors) has been growing organically for years (source).
2) it is forked from Nix/NixOS (which is apparently much more popular) and does not seem to have a lot of clear advantages over it (it can be used to "bundle" apps like you can with AppImage, but that can also be done in Nix using nix-bundle), so people might be opting for "the original". It also uses Lisp, which some people might be put off by: if you are a programmer with a degree, there is a good chance you took a course on Lisp/Scheme and got annoyed with all those parentheses (and counting them, and them maybe being harder for you to read than more syntax-rich languages like C/Java/Python and most other languages).
Not lead, but 4th contributor ranked by commits in the past year https://www.openhub.net/p/gimp/contributors?query=&time_span=&sort=twelve_month_commits
u/alexlg the rest of you should also set up a patreon or similar.
Slow?? We average about 100 commits a week from nearly 300 contributors, which is "one of the largest open-source teams in the world". Comments like yours are not only absolutely baseless and wrong, but they're incredibly demeaning to the people writing code for free. You should consider taking a break from Reddit, and instead opening your Monero wallet the next time a fundraiser comes around.
Similar to LibreOffice and OpenOffice. Most of the developers switched to Nextcloud so it gets most of the development.
Looks like Openhub has got stuck reading the recent git history, but you can see the difference in the 12 month stats at https://www.openhub.net/p/_compare?project_0=Nextcloud&project_1=ownCloud
There are a couple of contributors. The statistics done by Open Hub provide nice insights into this: https://www.openhub.net/p/gimp (they do this, or at least try to, for all publicly available source code).
This metric excludes people who contribute other things besides source code, though - above all, their spare time.
Yuck, you are wrong. Though that's more github's fault than it is yours. In particular:
- github only includes contributors that have claimed their contributions with a github account. The vast majority of GTK commits are not claimed on github.
- Carlos Garnacho works for Red Hat these days. Cosimo Cecchi worked for Red Hat for a long time - and wrote many of his patches during his time there.
- You're looking at all-time commits, not at somewhat recent stats. And I don't think commits from before 2010 are very indicative of who maintains GTK.
- Finally, openhub has way better statistics. In particular, it has the 5-year trend and allows you to order by 12 month commits, which gives you an idea who contributes to GTK today.
I have more than 2,800 commits on 28 open source projects, and 70 public repositories on GitHub.
Additionally notmuch-vim is an official part of notmuch.
When I'm interested in feedback on how to maintain code, I'll let you know.
Cheers.
> removing the negatives like team developer centralisation
You repeat their narrative verbatim. They needed to find some reasons to justify their money grab (as they include a large premine) and came up with, among other crap, what is maybe the most wrong statement they could find.
As per https://www.openhub.net/p/monero: "Monero has a very large, active development team. Over the past twelve months, 205 developers contributed new code to Monero. This is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub." Monero has had 394 contributors overall.
Note that most contributors to Monero work on a voluntary basis. And those that get paid (via community support) are sustained by donations of other volunteers.
In comparison, the MoneroV scam is proposed by a handful of persons with no track record (or could be one individual for all we know), who right away assign themselves a large premine.
Do not take their statements for granted. Do not take my words for granted either. Just educate yourself to see through narratives, and always question the incentives behind.
> In contrast, Zcash developers have a built-in, predictable funding source. As a result, I expect to see Zcash development proceed at a faster pace.
As comparison, you can check the 12 month summaries on these pages:
OpenOffice is to Libreoffice as XFree86 is to Xorg.
It is as KHTML is to Webkit.
It's limping along a bit before it dies.
How do I know that? Compare OpenOffice activity with Libreoffice activity:
Libreoffice had almost as many contributors (88) last month as OpenOffice had in the history of the project (94).
Libreoffice has as many commits per day (62) as OpenOffice has per month (65).
And to give a sense of the scope of the work, the Libreoffice team has touched more than 10x as many files in the last month (20053) as OpenOffice has in the last year (1621).
I'm surprised it took you this long. I argue the community has come a very long way in sharing responsibilities. In no particular order:
Luigi took on the role of maintaining the monero-gui (previously monero-core) and monero-site repositories. Fluffypony previously did all of this.
Though fluffypony still speaks about Monero at conferences, he's no longer the only one doing it. I have spoken about Monero at over 20 meetups this year, and others have spoken about Monero too.
Fluffypony is less active on Reddit than he used to be, now that the community has grown large. He no longer answers the most basic questions since most have been answered on StackExchange or can be answered by others.
The development community has grown. Over 300 contributors have worked on Monero.
Taiga has allowed people to more easily organize projects. There are now dozens of other projects, some development and others promotional, that have started without any "approval" from fluffypony or anyone for that matter.
When MoneroLink was published, I wrote the unofficial response. No one, including fluffypony, asked for me to write this.
While I think we can all agree that fluffypony is often outspoken, he has a decreasing amount of influence in the project. This is not to say that his role is minor; he has made substantial contributions. However, the risk of one person "dominating" the project is the lowest it has ever been, thanks in large part to many decisions fluffypony himself has made.
And yet it's already better than the Groovy tooling for Eclipse. Totally open source, so anybody who wants to change something about it can. And if you think OpenHub's COCOMO model means anything, 4 person-years of effort.
Compare the levels of activity:
So LO is roughly 7x more active than AOO.
It’s a great write up! The only thing I wish it had was some of the impressive stats from the past 5 years, such as:
- Monero code has had 190 years' worth of effort put into it (estimated by the COCOMO model) by 547 all-time contributors. Source: https://www.openhub.net/p/monero
- Monero has attracted the third-highest code contributor count of all cryptocurrencies, led only by Bitcoin and Ethereum. Source: https://www.coingecko.com/en?view=developer (filter list by Contributors)
Jehan is one of the top GIMP devs https://www.openhub.net/p/gimp/contributors?query=&time_span=&sort=twelve_month_commits
This film project has already led to big improvements to the GIMP animation tools: https://www.patreon.com/posts/gimp-motion-part-13623338 https://www.patreon.com/posts/gimp-motion-part-13811099
Where do you get that impression from?
192 commits from 19 different contributors just in the past 30 days alone seems like the project is moving along just fine. ;)
For a summary: https://www.openhub.net/p/gimp
Or the definitive source (pun intended): https://git.gnome.org/browse/gimp/log/
Use tools like cloc
which will give you results similar to:
Or instead have a look at websites like openhub:
> Linux is mostly written in C with an average number of source code comments. Across all C projects on Open Hub, 19% of all source code lines are comments. This holds true for Linux Kernel as well. It contains the same ratio of comment lines to code lines as the majority of C projects in Open Hub. https://www.openhub.net/p/linux
> systemd is mostly written in C with a very low number of source code comments. Across all C projects on Open Hub, 19% of all source code lines are comments. For systemd, this figure is 7%. This lack of comments puts systemd among the lowest 10% of all C projects on Open Hub. https://www.openhub.net/p/systemd
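For a rough idea of where figures like that 19% come from, here's a toy sketch of the line classification tools like cloc and Open Hub perform. Real tools handle far more edge cases (comments inside strings, per-language rules, etc.); the C snippet and the naive rules below are made up for illustration:

```python
# Naive classifier: whole-line // and /* */ comments only.
def classify(source: str) -> dict:
    counts = {"code": 0, "comment": 0, "blank": 0}
    in_block = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:
            counts["comment"] += 1
            if "*/" in stripped:
                in_block = False
        elif not stripped:
            counts["blank"] += 1
        elif stripped.startswith("//"):
            counts["comment"] += 1
        elif stripped.startswith("/*"):
            counts["comment"] += 1
            in_block = "*/" not in stripped
        else:
            counts["code"] += 1
    return counts

c_snippet = """\
/* doubles its argument */
int twice(int x)
{
    // no overflow check
    return 2 * x;
}
"""

counts = classify(c_snippet)
ratio = counts["comment"] / (counts["comment"] + counts["code"])
print(counts, f"comment ratio: {ratio:.0%}")  # 2 comment vs 4 code lines -> 33%
```

The comment-ratio factoids quoted above are just this kind of count, aggregated per language across all projects Open Hub indexes.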
haha. let's be objective for a moment.
Go and Ruby have approximately the same number of commits/month, which is insane considering the hype and youth of Go.
Scala isn't even in the same ballpark.
I concede that Firefox and Chromium come close to Linux's scale, and they handle things differently. (But they still are less: See https://www.openhub.net/p/linux and https://www.openhub.net/p/firefox)
They are trusted because of good code in the past. Although I wonder how someone becomes a maintainer without knowing the concept of C error codes and why we have multiple. (Referring to Mauro v. Torvalds).
But have you realized that most people base their opinion of Linus on a few (~10-15) very sensationalized threads? They only get to hear about the flame wars, and what does one find there? Profanity and sarcasm (Oh God! /s). Not a word about the daily polite threads.
Another comparison: https://www.openhub.net/p/libreoffice vs https://www.openhub.net/p/openoffice says 3 and 12 month committers LO: 67 and 290, AOO: 9 and 36. Furthermore LO can include AOO code but not vice versa.
http://www.datamation.com/open-source/libreoffice-vs.-openoffice-why-libreoffice-wins-1.html (from October) covers many of the resulting ways LO is better.
The only advantage AOO has over LO is the OpenOffice brand, which perhaps explains in part why neither site mentions the other. Just ignore AOO, and spread the LO brand. :)
Added: Also, I find it typical and unfortunate that sites for software don't mention their substitutes/competitors. It just makes it hard for a new visitor to figure out how to place the project and implies fear. The exceptions tend to be tiny projects, it seems to me.
Using copyright files which no one bothers to update regularly gets you an inaccurate metric.
I was talking about contributors and for that you have to use "git blame" and that currently yields 574 contributors.
So, no, I am not spreading FUD, smartass!
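To illustrate why the two methods disagree: a contributor count depends entirely on how author identities get merged. Here's a toy sketch in Python; all the names and the mailmap entries are invented for illustration:

```python
# A contributor count is only as good as its identity merging.
# git's .mailmap mechanism maps alternate identities to a canonical one;
# copyright headers do no merging at all and also go stale.

raw_authors = [
    "Alice Example <alice@example.com>",
    "Alice Example <alice@work.example>",   # same person, work address
    "Bob Builder <bob@example.org>",
    "bob <bob@example.org>",                # same person, sloppy name
    "Carol <carol@example.net>",
]

mailmap = {
    "Alice Example <alice@work.example>": "Alice Example <alice@example.com>",
    "bob <bob@example.org>": "Bob Builder <bob@example.org>",
}

naive_count = len(set(raw_authors))
canonical = {mailmap.get(a, a) for a in raw_authors}

print(f"naive identities: {naive_count}")    # 5
print(f"after mailmap:    {len(canonical)}") # 3
```

So "574 contributors" from git blame and a smaller number from copyright files can both be honest outputs of different identity-merging rules.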
> The more users it has without anybody doing that, the more likely it is that it is, in fact, okay.
I think there are more end-users that are not developers than there are developers so I'm not entirely convinced that increased user count correlates directly to the trustworthiness of any particular piece of software. According to Mozilla there are 400k people contributing to Mozilla via Bugzilla but only ~1k people contribute source code. Firefox has ~32m SLOC and comprises 47 different languages. I don't know the depth of involvement of people reporting via Bugzilla or the likelihood that they all possess the level of technical expertise necessary to blow any whistles but I certainly don't think ~1k volunteer developers have the time or inclination to police that much code.
Something I have come to understand is that unless a program is verifiable it will have a degree of risk associated with running it. And "provably correct" isn't a slogan you hear from the marketing teams at Facebook or Firefox or Google. It is possible to write code that looks fine upon inspection but does something completely unexpected.
I think one of the side-effects of FOSS can be like you say — that more eyes would ideally correlate to fewer bugs, fewer security risks, etc. I am on board with FOSS but I am just not convinced it's safe to assume a popular FOSS project is necessarily safer than any other piece of software (regardless of licensing model) solely based on the size of the userbase.
In what way exactly? Lines of code? Memory usage? (note: I don't know what that graph shows exactly (start-up memory usage?), I was just looking for some more recent data now AWSY got retired)
Microsoft Research is pretty awesome.
I know their involvement with Haskell - they've paid notable Haskell folk like Simon Peyton Jones and Simon Marlow to work full time on Haskell.
CT scans generally come as DICOM (Digital Imaging and Communications in Medicine) data.
You can get example DICOM data from here and other places to practise with.
Your school likely has software that is capable of turning this DICOM data into a 3d file that can be used in solidworks but if it doesn't there is free software available.
DeVide is one. InVesalius is another.
Eh... Lennart is the creator of Avahi (in anything that resembles its current form). He didn't hijack it or drive it into the ground. The problem is precisely that no one else is interested in picking up its maintenance.
The openoffice team is pretty small. Anything useful they do already gets ported across. The only advantage that openoffice has is a slightly better known brand. A merge would basically be the last few OO devs switching to LO, and then renaming LO.
>I'm not sure what your project is, but 10 man years of development in perl might not be the best investment.
Language is a fickle thing. Presumably you'd agree that 10 man years of development in perl might be the best investment, right?
>Sadly, perl is in decline.
OpenHub's stats show Perl's relative share among the languages used in all the projects they track has remained roughly stable since 2014. Here are their charts comparing C#, Go, Haskell, Perl, Rust, and Scala and C, Perl, PHP and Python commit activity over the last decade for around 25 billion lines of open source code.
>Languages like Python and Go and Javascript are much better supported by the current workforce, and you'll be more future proof with those languages.
Unless you go bankrupt because you're stupid enough to cavalierly abandon a great language (perl) perfectly suited to your project (who knows given that we haven't heard what it is?) with a working codebase (working now is worth almost infinitely more than "let's rewrite!") all because someone ignorantly repeated the two decade old mantra "perl is in decline".
I didn't downvote you because I presume you sincerely meant what you wrote. But please reflect on your lack of wisdom.
I hope you realize that it's 100% untrue, like most other replies. Look here for actual data. Note that pretty much everything listed as < 1% of the total is a false positive; when OpenHub can't identify the language of a file by its name, it tries to guess, and it's not very good at it.
https://wiki.debian.org/Debate/initsystem/upstart has more info about it than the systemd page of the same wiki
PS: also, that debian page you linked is false in a few ways
"The OpenRC statement has removed its systemd criticism, only keeping the vague “problems with Systemd have been debated a lot” statement. Since there is a lot of this criticism being spread by OpenRC or sysvinit advocates, it is still worth commenting on some incorrect, although widespread, beliefs. "
while the openrc page clearly states otherwise: https://wiki.debian.org/Debate/initsystem/openrc#OpenRC_vs_Systemd
and as for the claim that the source code is commented, ohloh says otherwise https://www.openhub.net/p/systemd
i checked the code; the comments are mostly reminders for the authors
and comparing LoC to the linux kernel is completely stupid...
and so on
You can check out some stats here:
It would probably be more illustrative to compare it to Nimrod, however, which was mainly built by a sole developer:
Of course this doesn't really give the module-by-module stats that you wish for. But generally, little, dynamically typed, interpreted languages are comparatively easy to implement, whereas statically typed languages with type inference and parametric polymorphism are orders of magnitude harder.
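To make the "easy to implement" point concrete: a complete evaluator for a tiny dynamically typed expression language fits in a couple dozen lines of Python. The language below is invented purely for illustration; a static type checker with inference would dwarf this:

```python
# Tiny tree-walking interpreter: numbers, variables, +, *, and let-binding.
# Dynamic typing means "evaluate and see"; no type checker needed.
def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr                      # literals evaluate to themselves
    if isinstance(expr, str):
        return env[expr]                 # variable lookup
    op, *args = expr                     # ("+", a, b) style tuple nodes
    if op == "let":                      # ("let", name, value, body)
        name, value, body = args
        return evaluate(body, {**env, name: evaluate(value, env)})
    a, b = (evaluate(x, env) for x in args)
    if op == "+":
        return a + b
    if op == "*":
        return a * b
    raise ValueError(f"unknown operator: {op}")

# (let x 3 (* x (+ x 1)))  ->  3 * 4  ->  12
program = ("let", "x", 3, ("*", "x", ("+", "x", 1)))
print(evaluate(program, {}))  # 12
```

Adding type inference and parametric polymorphism would mean a unification engine, generalization rules, and so on; that's where the order-of-magnitude difference comes from.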
It looks like there are 3 really active people
https://www.openhub.net/p/gimp/contributors?query=&time_span=&sort=commits
Perhaps the first thing that 3 people don't need is to spend more of their time running a 501c instead of developing and someone to take 10% of donations off the top.
Per https://www.openhub.net/p/monero/factoids#FactoidTeamSizeVeryLarge -
> Over the past twelve months, 253 developers contributed new code to Monero. This is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.
>
> For this measurement, Open Hub considers only recent changes to the code. Over the entire history of the project, 503 developers have contributed.
Who is this 1 entity that controls 503 developers?
> NetBSD over FBSD because last I checked, FBSD devs are leaving in droves or being kicked out by SJWs due to the new CoC.
Looking at the openhub page, it looks like FreeBSD is maintaining developer interest, unlike NetBSD, which seems to be experiencing a slow yet consistent decline in contributor interest.
Does seem a bit high. Looks like it's a conclusion from the following source:
"...Monero...has had 12,119 commits made by 314 contributors representing 520,291 lines of code..."
https://www.openhub.net/p/monero
But your implication is probably right - 314 contributors to the Monero code probably does not equal "300 developers working on Monero".
Check out the factoids they cite based on the GitHub stats:
https://www.openhub.net/p/OpenBazaar/factoids#
Over the past twelve months, 75 developers contributed new code to OpenBazaar. This is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.
OpenBazaar is written mostly in JavaScript.
Across all JavaScript projects on Open Hub, 21% of all source code lines are comments. For OpenBazaar, this figure is 33%.
This high number of comments puts OpenBazaar among the highest one-third of all JavaScript projects on Open Hub.
A high number of comments might indicate that the code is well-documented and organized, and could be a sign of a helpful and disciplined development team.
The source code repository for OpenBazaar has less than a year of continuous activity. This likely is a relatively new project.
A short history is not necessarily a bad thing (all projects have to start somewhere!) but often, newer projects are changing rapidly and are thus less stable. They are also often innovative and exciting!
As this project matures, a longer source control history in conjunction with recent activity might indicate that the project has enough merit to hold contributors' interest for a long time. It might indicate a mature and relatively bug-free code base, and can be a sign of an organized, dedicated development team.
> i want to program my own package manager.
Have fun! Package managers aren't exactly trivial. According to https://www.openhub.net/p/dpkg, Debian's dpkg consists of over 74k lines of code. And that doesn't include all the infrastructure for making packages.
> Aren't packages installed from source compatible with everything?
I'm not sure I understand what that's supposed to mean. Source code, e.g. in the form of upstream tarball releases or tags in git repos, is not the same as a package. Certain information is necessary to create a package from that source code. If you write your own package manager, you'll have to also write that information (or you could take it from an existing distro, but if you do that, what's the point of doing this yourself in the first place?). And the packages are then specific to one package manager. You can't just take e.g. an .rpm and install it using dpkg.
> From what i learned every program or package that exists needs to contain a source code.
That's not true and it also doesn't really matter here.
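To sketch what that "certain information" looks like, here's a hypothetical, minimal package format in Python. The field names are invented; real formats (.deb control files, RPM specs) differ, but carry the same kinds of data: metadata, dependencies, and file placement:

```python
# A toy package: what a package manager knows beyond the raw source.
package = {
    "name": "hello",
    "version": "2.10-1",
    "depends": ["libc"],           # what must be installed first
    "files": {                     # where built artifacts get installed
        "hello": "/usr/bin/hello",
        "hello.1": "/usr/share/man/man1/hello.1",
    },
}

def install_order(packages, name, seen=None):
    """Resolve dependencies depth-first so prerequisites come first."""
    seen = seen if seen is not None else []
    for dep in packages[name]["depends"]:
        if dep not in seen:
            install_order(packages, dep, seen)
    if name not in seen:
        seen.append(name)
    return seen

repo = {
    "libc": {"depends": [], "files": {}},
    "hello": package,
}
print(install_order(repo, "hello"))  # ['libc', 'hello']
```

None of this exists in an upstream tarball, which is why "installing from source" and "installing a package" are different things, and why packages are specific to one package manager.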
> There was some hope in Way-Cooler but its author dropped the development earlier this year
He tried to focus on wlroots-rs (wlroots Rust bindings), but at some point abandoned the whole approach (instead just writing the compositor in C). You can see the details on the blog; way-cooler today still has fairly decent development activity.
> I want to learn Lisp, but on my own I somehow can't concentrate on studying
> I have something I want to build in Lisp, but on my own I somehow slack off
> I want to tinker with Lisp, and when I don't understand something I'd like to have people around to ask
> I'm going to do something with Lisp, but doing it alone just feels bad
As for me, working hard alone is a strength. Even though my nine years' worth of work has few users, I don't care, and I intend to keep pushing forward no matter what.
If this was the case then nobody should be using Microsoft Windows product line. The problem with being widely popular is that you're more likely to run into these types of security issues.
1.) You have an extremely huge enterprise customer base.
To overcome this technical debt will require a ton of resources ($$$) and by then something new and better will have come along.
2.) Struts is still actively maintained and developed on by volunteers.
3.) The grass is not greener on the other side. Practicing good security practices (auditing, scanning, updating, etc.) is the key to good security (looking at you, Equifax), not adopting a new technology.
Synopsys (formerly Black Duck) has a good article that says way more than I ever could on the subject:
https://www.synopsys.com/blogs/software-security/replace-apache-struts-maybe/
Also for reference here is the chart of Struts security vulnerabilities and how they've gone down over the years: https://www.openhub.net/p/struts/security?filter%5Bmajor_version%5D=2&filter%5Bperiod%5D=&filter%5Bversion%5D=3510896&filter%5Bseverity%5D=
That said, any new project should not be using Struts 2 and eventually development resources will shift away.
OpenStack "has had 931,080 commits made by 10,618 contributors representing 9,092,446 lines of code."
75% of that is in Python. And Microsoft counts among the contributors.
Is that scalable enough for you?
To be fair, jemalloc, which I've often seen recommended over the default choice of glibc especially for speed/fragmentation reasons (Firefox and Rust default to it), is 68,000 lines of code.
Firstly, there has been over 200 contributors to the project over the years. At any given moment, there are many contributors actively improving Monero. Secondly, unprivileged 3rd parties can not ascertain private information just from looking at the blockchain itself.
I'm working on ITK and OTB C++ code bases and on proprietary C++ software with vim (often with a single running vim instance).
Vim is still inferior to other solutions regarding refactoring C++ code bases, and debugging (I don't find pyclewn very ergonomic). On other topics it works very well.
Note that I use my personal set of plugins and not some more trendy ones on several topics like snippets. See a more elaborate answer on Quora.
> 1 1456 Bjoern Michaelsen <>
I noticed that the openhub.net website gives 2041 commits for bjoern_michaelsen. It doesn't really change anything, it's just a number, but still, a different one.
But it really is. It's slow and poorly maintained, and the code is a colossal mess: it's over a million lines of code with very few features to show for it, and lots of copy-pasted methods everywhere.
> you can literally check the code from end to end if you want -- no trust required!
Can you? Firefox has 17 million lines of code and comments in 34 programming languages. It has more /lines/ of C++ comment than there are /words/ in War and Peace. And five times that many source code lines. And the same again in Javascript. And again in C.
https://www.openhub.net/p/firefox/analyses/latest/languages_summary
Even if you are a skilled C, C++ and Javascript programmer, are you also a skilled cryptographer and could audit that there's no subtle weakening of any of the encryption or integration via a deliberately incorrect floating point rounding or misleading comment or 'badly' chosen default?
And even if you have the skill and the time and put in the effort, it's still not good enough - you still have to trust, because the code could be completely clean and have a malicious bug inserted by the compiler. (And the compiler could have completely clean code as well - it could just be 'in the system' from a previous generation - http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ ).
It's trust all the way down.
3 1/2 year old article, worthless analogy, not good music criticism, not good programming language criticism, cherry-picking to generalizations
He mentions the TIOBE index 3 times, an index that only counts the number of hits returned by popular search engines on queries of language name + "programming". Much better indexes: http://langpop.corger.nl/ https://www.openhub.net/
As for the hello world examples comparing languages by memory and thread use, does any programmer or sysadmin not know that Java is bloated? I thought even average users knew that.
The redeeming value of this article is as a sort of art piece of doing programming language criticism in the style of bad music criticism. The subtitle "A Polemic" is the nicest touch in that respect.
the HaskForce plugin seems to be more actively developed
you should be able to see it here, after the statistics catch up with the latest development:
and there is also intellij-haskell, but that seems quite new
> Do employers actually value contributions to open source projects as past experience?
The enlightened ones do. Of course, the quality of your contributions matters just as much as quantity.
> Wouldn't it be a little hard to quantify what an applicant actually contributed to a project?
Actually it's very easy to see what an applicant contributed to open source. In fact, there's an app for that. Here are my contributions as an example.
Oh, and if you're interested in contributing to the Mozilla project, I would be happy to assist. :-)
>I'm sure it would be similar (my guess would be 95% or higher).
You guess wrong. Zbigniew Jędrzejewski-Szmek alone has contributed more than that and he doesn't work for Red Hat (he's the second most active systemd developer if you don't count udev). I'd imagine that David Herrmann has also already passed the 5% point. The Ohloh statistics should give some idea.
Sorry, I know this always sounds a bit passive aggressive or something. "Nope, I had no time" is of course a totally acceptable reply. Sometimes one can hope for a "Good idea, I will make that my next side project."
Trac may be big but it is very modular, has a very friendly (not very big) community, and it's easy to contribute to.
> ....are working on the main Monero project?
Not easy to estimate, but you can see who committed to the main Monero repositories https://www.openhub.net/p/monero/contributors/summary
> are working on Monero based projects excluding the main?
I guess you have to look for projects on GitHub and check their statistics
if this is to be trusted
https://www.openhub.net/p/chrome/analyses/latest/languages_summary
25,670,051 lines of code is a lot!
then you have javascript, which might act weirdly under certain circumstances. let's over-simplify and say that a command could be fired twice for some reason, which might be an untested case; that's fine for a website, less so in this case
now, when it comes to c++, it's a monster! u/PlayboySkeleton gave a good tl;dr
I'm, of course, over-exaggerating, but not by that much
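For what it's worth, the standard defense against a command firing twice is an idempotency guard: tag each command with an id and drop duplicates. A minimal sketch (the class and the command strings are made up for illustration):

```python
# Idempotent command handling: a duplicate delivery becomes a no-op.
class CommandBus:
    def __init__(self):
        self.processed = set()
        self.log = []

    def handle(self, command_id, action):
        if command_id in self.processed:   # duplicate: drop silently
            return False
        self.processed.add(command_id)
        self.log.append(action)            # stand-in for real side effects
        return True

bus = CommandBus()
print(bus.handle("cmd-1", "fire thruster"))   # True  -> executed
print(bus.handle("cmd-1", "fire thruster"))   # False -> duplicate ignored
```

Websites often get away without this; anything driving hardware can't.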
This might be the most actively developed open source game ever created (at least as measured by the number of contributors and commits). I have been playing this for a while and was surprised by how well put together it is. If anybody is interested, this review shows what's so unusually good about this game.
Yeah, that doesn't sound right. I am about to graduate from uni in computer science and engineering, specializing in real-time interactive simulation, plus I have a background in robotics. Sure it's complicated, but an AI itself isn't all that large in terms of lines of code. It actually just takes a bit of memory and some time to train, because an AI is essentially a very fancy lookup table for calculating odds. On the other side, it seems like the industry standard for anything robotics-based is to use WAY more lines of code than you actually need... Looking at you, ROS. Still, 300,000,000 seems overly excessive, even including all the lines from all the libraries that were used.
Edit: For some perspective, there are about 1.9 million lines of code in Tensorflow (a popular library to use when coding AI) and 7.3 million in ROS, the industry standard environment for robotics. That only makes up 9.2 million, only 3% of what they claim is used for the car. With tf I can write a basic AI to do image recognition for obstacle detection in less than 500 lines of code. With another 500 I can make the robot move accordingly. Obviously this doesn't have nearly all the bells and whistles a completely automated car would need, but my point is that the bulk of the program would be from the libraries, which barely cover 3% of the claimed quantity. There's no way an automated car requires that many lines. Plus there's also the fact that libraries are often comment-heavy and well spaced out, meaning a lot of those lines are actually useless anyway. I'm not buying it unless I see some sources. Here are mine: https://www.openhub.net/p/tensorflow http://wiki.ros.org/code_quality
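Taking "a very fancy lookup table for calculating odds" literally, here's a minimal frequency-count classifier in plain Python. The training data is invented for illustration; real models are this idea plus a lot of math and scale:

```python
# Minimal "lookup table" classifier: tally how often each feature value
# co-occurred with each label during training, then sum the counts per
# label at prediction time and pick the biggest.
from collections import Counter

def train(samples):
    table = Counter()
    for features, label in samples:
        for f in features:
            table[(f, label)] += 1
    return table

def predict(table, features, labels):
    scores = {
        label: sum(table[(f, label)] for f in features) for label in labels
    }
    return max(scores, key=scores.get)

training = [
    ({"round", "red"}, "ball"),
    ({"round", "orange"}, "ball"),
    ({"square", "red"}, "box"),
    ({"square", "brown"}, "box"),
]
table = train(training)
print(predict(table, {"round", "brown"}, ["ball", "box"]))  # ball
```

Everything "smart" here is the table plus a handful of lines of arithmetic, which is the point: the model's intelligence lives in trained data, not in hundreds of millions of lines of hand-written code.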
I like checking the graphs on https://www.openhub.net/p/gentoo but usually they're about 6 months out of date. The number of monthly contributors seems to be growing steadily since the transition to git in Aug 2015.
Thank you! I really should have added that Monero "is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub."
https://www.openhub.net/p/monero/factoids#FactoidTeamSizeVeryLarge
I think most people never heard about the fork and so just keep downloading and using OpenOffice.
OpenOffice claims to still get 100,000 downloads per day. LibreOffice gets about half that, 300,000 per week. Even among the developers on OpenHub, OpenOffice seems to have more users.
It's a shame people are judging the quality of open source by OpenOffice.
> Well, it's just disappointing because toolbox is otherwise a mostly good tool
I get that it is disappointing; you get to be disappointed. What is not cool is brushing away arguments we have put a lot of thought into (not just now, but over a longer period) while throwing in underhanded remarks.
> and your insistence on inaction.

> Hopefully someone will fill the void you are creating.
I am still not sure why you insist on putting in unneeded snark like that. It really feels like childish complaining about something you probably never gave a second thought, while we have literally put thousands of hours of consideration into it. And I wish I was kidding about that number of hours, but I am not.
Well-thought-out feedback is always welcome, but in this entire conversation you have just complained that we are not doing what the admins have said, when that is merely your interpretation. All the while you ignore me, frankly rather patiently, trying to explain that this specific functionality is one we have discussed and weighed the pros and cons of for much longer, as it isn't without issues.
Anyway, I have made my standpoint rather clear, even though you insist I did not. So I am done here, as I don't see this conversation going much further.
Might be an arbitrary decision by the GNOME Git admins to allow for the past four years.
The most comprehensive source for statistics is OpenHub, https://www.openhub.net/p/gimp should provide everything you need.
Nagios CORE isn't even maintained any more... ...and the manager listed on OpenHub was actually kicked out (along with everyone else outside of Ethan Galstad's company) years ago. https://www.openhub.net/p/nagios
That said, there are a wealth of resources for Nagios, and reinventing the wheel isn't such an amazing idea. Naemon is looking pretty choice...http://www.naemon.org/
Ok let me take a crack at this.
First, a few disclaimers:
- source code is notoriously difficult to measure, because there are a lot of tools available to make the code take more space or less space, while still accomplishing the same purpose.
- very few projects, even open-source ones, openly brag about their code size.
- after being in computer-related industries for 10+ years, I can count on one hand the number of times I've actually had a good reason to print my source code on a printer. It's not a common thing to do.
But I found one project that does brag about its size, so let's use that as a benchmark. It has 1.2 million lines of code and produces a compressed binary of about 25 megs.
Assuming 40 lines to a page, that's 30k pages for 25 megs of compressed binary, or about 1200 pages per meg.
Now, how many megs of compressed binary are in a modern operating system?
I ran the following in my copy of windows 7 pro:
C:\Windows>dir *.exe *.dll /s
This cranked for quite a while and then produced:
Total Files Listed:
24707 File(s) 15,442,779,808 bytes
Now, to be fair, there's a lot of code duplication going on here. There's a lot of parallel directories that contain copies of the same libraries, and various other redundancies.
But, following the same logic of "one meg of compressed binary = 1200 pages", that gives us about 18 million pages for windows 7.
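The arithmetic above can be replayed directly (all figures come from this thread, not from my own measurements):

```python
# 1.2M lines at 40 lines/page, for a 25 MB compressed binary
lines, lines_per_page, binary_mb = 1_200_000, 40, 25
pages_per_mb = (lines / lines_per_page) / binary_mb
print(pages_per_mb)  # → 1200.0 pages per meg of compressed binary

# Scale up to the Windows 7 figure from `dir *.exe *.dll /s`
win7_bytes = 15_442_779_808
win7_pages = win7_bytes / (1024 * 1024) * pages_per_mb
print(round(win7_pages / 1_000_000, 1))  # → 17.7, i.e. roughly 18 million pages
```

So the "about 18 million pages" figure checks out, with the stated caveat that the byte count includes a lot of duplicated libraries.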
>If you don’t want to struggle with a desktop mapping software ...
I stumbled upon the OpenJump project and have found their desktop software to be a very easy install (Java-based) and not a "struggle," so before buying the author's premise that installing software on your desktop is a struggle (which I recall some others have done), consider OpenJump.
I've worked with FreeMercator and OpenBravo - can recommend.
http://floreant.org/ looks interesting - wasn't around when I was mucking around with POS software last.
You'll need a barcode scanner at a minimum, which acts just like a keyboard would. PS/2 'wedge' or USB - doesn't matter. There may be some basic setup required - programming the scanner to read UPC-A barcodes, for example. Should be clear in the instructions.
It doesn't sound like you need a cash-drawer that kicks out when a sale is tendered, if you're most concerned about the proper change.
Give FreeMercator a try: https://www.openhub.net/p/freemercator
Download it, run the admin.sh script, see if it does what you need. Then stop it, and make a copy of the item database import script, changing it to suit your location. (You can also add them from the admin utility)
<?xml version="1.0"?>
<mercatordata>
  <tableinsert table="ITEM">
    <item>
      <col name="ITEM_ID">0</col>
      <col name="CONFIG_NO">0</col>
      <col name="SKU">8858643086</col>
      <col name="SHORT_DESC">Wine</col>
      <col name="AMOUNT">1299.0</col>
      <col name="DEPT">1</col>
      <col name="TAX_GROUP">0</col>
      <col name="PRICING_OPT">1</col>
      <col name="ACT_DATE">null</col>
      <col name="DEACT_DATE">null</col>
      <col name="TAX_INCLUSIVE">0</col>
      <col name="TAX_EXEMPT">0</col>
      <col name="LOCKED">0</col>
    </item>
  </tableinsert>
</mercatordata>
Your UPC will replace the SKU, etc.
Since you mentioned the IDE situation: I don't think it's too bad. For example, we now even have 3 -- yes, that's right, not just one or two, but three -- plugins just for IntelliJ IDEA:
The pace of development is not earthshaking, but it's happening, and I personally have had a good experience with HaskForce.
The world could probably use a mature open-source Tournament Management product too. There are some products out there, like this https://www.openhub.net/p/tournsoft but they don't really handle the end-to-end tournament management: signing up as an athlete, paying your registration fee online, building brackets, publishing results, etc...all the way through the whole lifecycle of a tournament.
I see nothing in the links you provided that would support your conspiracy theories. Roche merely says Pivotal will no longer pay the Groovy or Grails teams to work on Groovy or Grails. Pivotal would most likely keep all these people working at Pivotal, but exclusively on other projects, as they have already been doing for much of their time at Pivotal.
He says that if any other company would want to finance Groovy and Grails (by hiring the core devs currently working for Pivotal), that would be a major investment and thus not an easy decision for another company.
As to the role of Guillaume, OpenHub shows he is active on the codebase as a committer: https://www.openhub.net/p/groovy/contributors?sort=latest_commit&time_span=12+months though less than the others, and I think many commits are on documentation and testing rather than new features (from a glance at the actual commits from the last 12 months).
You'd also probably be surprised what code costs to write.
Now, this is in no way totally accurate, but if you analyse the cost of software projects using industry-standard models, you get around 250 million dollars for a web browser (take the industry-standard rate per line of code, multiply by the number of lines, and that's the price, though much of this code was open sourced, so the figure is rough).
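As a rough sanity check on that figure, here is the per-line version of the arithmetic. Both numbers below are my assumptions for illustration, not the parent's actual model: the line count is an assumed order of magnitude for a modern browser, and published per-line cost estimates are often quoted somewhere in the $10-$40 range.

```python
loc = 25_000_000       # assumed order of magnitude for a modern browser engine
cost_per_line = 10     # assumed low-end rate, in dollars per delivered line
print(loc * cost_per_line)  # → 250000000, i.e. about $250 million
```

Even at the low end of the commonly quoted rates, the $250M order of magnitude is easy to reach.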
If you look at the Ohloh entry for systemd, you'll see that it "has a very large development team". And it has a very lively mailing list. Yet none of the developers and no one on the mailing list deemed this feature important enough to keep it from bitrotting; exactly one person responded to the call for help, and not by saying "oh, I'll do it".
In such circumstances ditching it was the right thing. You don't want half-baked solutions inside your source. At least I don't want that in my sources.
BTW, the readahead logic wasn't tapping into any of systemd's internals (e.g. the internal C API). You could very well resurrect it outside of systemd, or use an existing solution. Good readahead support doesn't need to be baked into the init daemon; running it from a .service and .target is all that's needed.
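As a sketch of that idea, a standalone readahead tool could be wired in with a plain unit file. Everything below (the unit name, binary path, and flag) is hypothetical and illustrative, not systemd's removed implementation:

```ini
[Unit]
Description=Standalone boot readahead (illustrative example)
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my-readahead --replay

[Install]
WantedBy=sysinit.target
```

The point is that nothing here requires hooks inside PID 1; ordering before sysinit.target is ordinary unit-file plumbing.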
Their site is down; I ended up using Google to get a cached version of the blog post. You can read it below:
> Ekiga 5 has progressed a lot lately. OpenHub is reporting High Activity for the project. The main reason behind this is that I am again dedicating much of my spare time to the project. Unfortunately, we are again facing a lack of contributions. Most probably (among other reasons) because the project has been silent for several years.
> Recent changes include a complete port to GTK+3, with a brand new user interface ported to GAction and GSettings. But there are also many changes regarding codecs: Opus, VP8, H.264, … have either been added or improved. TCP support has also been added (TLS and SRTP are next on the TODO).
> Are We Ready ?
> Unfortunately not. Some parts of our engine need to be rewritten because they interact badly with Opal. Actually, multiple inheritance and shared pointers have become a nightmare. Things need to be simplified and generally speaking the engine needs to be simplified to attract more developers.
> Instant messaging needs to be reimplemented. We should support MSRP.
> The Ekiga.net platform needs to be upgraded, and improved. I would like the platform to handle gracefully NAT problems. I would like a new website, more attractive, more modern. I would like new features regarding user subscription, account modification and deletion.
> I would like many things to make Ekiga attractive.
> Are you interested in helping? Ekiga needs you! Spread the word…
Krita's a lot more oriented towards actual painting techniques, whereas Illustrator is more about drawing. I'm guessing it would have a lot of overlap with Photoshop, but in the art-creation rather than photo-editing arena. It's more like a Corel Painter style of application.
It does have RAW import (see Open Hub), but I don't know how good it is (I'm guessing there are a lot of complexities to RAW support, like demosaicing algorithms and probably camera-specific differences, etc). I know it uses (or at least used to use) libkdcraw, which was created by the digiKam team.
Hi, thanks for the reply.
I'd suspect that you are not alone in wanting better support in WINE for adobe software, but it will take some effort to convince 1000s of people that you can solve the problem. $250k is a lot of money.
Have a look at some of the successes and failures in open source crowdfunding (this thread has a good collection: http://ubuntuforums.org/showthread.php?t=1946197 ). I'd say in general it helps for the campaign to be initiated by the project, not an independent developer (though there are counterexamples: http://www.phoronix.com/scan.php?page=news_item&px=MTUxMTQ ).
If you have worked on other open source projects, have you set up a page at https://www.openhub.net/ ? It's a good way to show contributions.
> From a cost perspective, it doesn't matter which one you're doing, you have to pay for silicon. That's not going to change. Very clearly places like x86, like Intel, you're paying for both the design and for the chip.
I am a little disappointed to hear that from the RISC-V CTO; there are a lot of open source designs that can act as a basis for development and reduce cost (similar to how Esperanto used BOOM, and Sony used FreeBSD for the PlayStation).
For example, if I look at XiangShan, OpenHub estimates it took ten years of effort to build. Assuming one year costs about $100k, that's a million dollars' worth of IP anyone can use, and as time passes it and other open source designs will be worth more and more.
https://www.openhub.net/p/d2bot-with-kolbot
In a Nutshell, d2bot-with-kolbot...

- has had 557 commits made by 34 contributors, representing 49,615 lines of code
- is mostly written in JavaScript, with a very low number of source code comments
- has a well established, mature codebase maintained by nobody, with decreasing Y-O-Y commits
- took an estimated 12 years of effort (COCOMO model), starting with its first commit in February 2012 and ending with its most recent commit over 1 year ago
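OpenHub's "estimated effort" factoid comes from the basic COCOMO model, and the numbers above actually reproduce under it. A quick check, where 2.4 and 1.05 are the standard coefficients for COCOMO's "organic mode" and the 49,615-line figure is from the summary above:

```python
# Basic COCOMO, organic mode: effort in person-months = 2.4 * KLOC ** 1.05
kloc = 49_615 / 1000          # OpenHub's reported line count, in thousands
effort_pm = 2.4 * kloc ** 1.05
print(round(effort_pm / 12))  # → 12 person-years, matching the factoid
```

Worth remembering that this model was calibrated on 1970s-80s waterfall projects, so "12 years of effort" for a 50k-line bot script is best read as a curiosity, not a costing.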
Start with Java. Good luck.
Check this out: https://www.openhub.net/p/firefox/analyses/latest/languages_summary
It says Rust code is 2,394,774 lines, or 9.2% of total.
I feel like the outcome is basically the same as what we have right now though: a few web browsers that do everything, and a lot of poorly-maintained open-source products that only support a few things.
Instead of the xkcd you thought you were getting, here's a different one that I think is more applicable to this case.
Browsers are practically the size of OSes at this point! WebKit alone is 19 million lines of code (26 million lines counting everything), and that's just WebKit! This article claims Chrome is another 6.7 million.
Because the problem is the entire web, it's too bloated and unsustainable. Chromium has more lines of code than the Linux kernel.
https://www.openhub.net/p/chrome/analyses/latest/languages_summary
https://www.openhub.net/p/linux/analyses/latest/languages_summary
Uff man, what's your issue? Seriously.
Chromium (which becomes Chrome once Google adds its few tweaks) doesn't use Rust in the codebase: https://www.openhub.net/p/chromium-blink/analyses/latest/languages_summary If they had used Rust instead of C++, most of those CVEs would be gone, since Rust doesn't allow most of them.
Rust is a blessing, and as an "ex" C++ developer I am happy we got something like it. We deserve better than an 80s experience full of pain and ghosts to chase. Rust is the light and the answer for many of us: we can get the same performance with only 5% of the suffering, in 20% of the time C/C++ requires. It is not perfect, but it is a huge step in the right direction; any C/C++ coder can tell you that. If you come from Java/.NET/Python, of course it is not the THING, but that was never the main target of the tool. All are welcome, but Rust was always designed to be an alternative to C/C++.
Well, we are talking about a system that likely has as many lines of code (about a million) as the average entire modern implementation of Smalltalk.

I take that back: 25 times as many lines of code.

Smalltalk is estimated to be 5 times as productive as C, and there are hundreds, if not thousands, of contributors to Chrome, while at best only 1/50 (maybe 1/500) that number contribute to Squeak or Pharo.

Your dream is a tad unrealistic.
Hacking? As in, making a disk image?
I somehow doubt the laptop had whole-disk encryption turned on.
Also, msnbc.com is run by nginx -- https://sitereport.netcraft.com/?url=msnbc.com
Top contributors to nginx include a lot of Russians.... https://www.openhub.net/p/nginx/contributors
How many lines of code (including comments and blank lines) are in Firefox browser ? In how many languages ?
About 31 million, and 48, according to https://www.openhub.net/p/firefox/analyses/latest/languages_summary
> at the very least I'd expect some people would keep on top of bugs, especially vulnerabilities
Have you any idea what kind of actual paying jobs one can get when one has enough knowledge to "keep on top of bugs, especially vulnerabilities" of a project as big as Firefox? It's 31 million lines of code.
Have a few more:
https://www.openhub.net/p/chrome/analyses/latest/languages_summary
https://www.openhub.net/p/gnome/analyses/latest/languages_summary
https://www.openhub.net/p/kde/analyses/latest/languages_summary
The last two are collections of modules and apps, not single apps. But you're probably running large chunks of them if you're running a DE that is based on them.
> More likely that it's tens of millions of lines of codes.
Firefox and Linux kernel each are around 30 million lines of code. See for example https://www.phoronix.com/scan.php?page=news_item&px=Linux-Git-Stats-EOY2019 and https://www.openhub.net/p/firefox/analyses/latest/languages_summary
> I doubt most end users have actually dug into their OS code and made sure everything was good.
This is literally impossible. A single human being could not read and comprehend the code fast enough, and the code changes frequently; on top of that, the behavior depends on configuration and on interactions with other systems.
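To put a number on "could not read fast enough" (the reviewer speed and working hours here are my assumptions, chosen generously):

```python
# Assumptions: a careful reviewer audits ~200 lines/hour, 2,000 hours/year;
# the codebase is ~30 million lines (the Firefox/Linux scale mentioned above).
lines = 30_000_000
years = lines / 200 / 2_000
print(round(years))  # → 75 years for a single pass, ignoring ongoing churn
```

And that single pass would be obsolete long before it finished, since these projects merge thousands of changes per release cycle.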
Or you can set the author when you commit their change
https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---authorltauthorgt
An advantage of having this information in the revision control system is that it will show up in places like https://www.openhub.net/ too, if you ever need to work out who wrote a specific line of code.
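For instance, here's a sketch of the --author workflow in a throwaway repo (the names and message are made up for illustration):

```shell
set -e
# Demo in a temporary repository
repo=$(mktemp -d) && cd "$repo" && git init -q

# Commit a change on someone else's behalf: --author records them as the
# author, while you remain the committer.
git -c user.name="Maintainer" -c user.email="maint@example.com" \
    commit --allow-empty -q \
    --author="Jane Doe <jane@example.com>" \
    -m "Fix reported by Jane"

# Both identities are preserved in the history:
git log -1 --format='author: %an, committer: %cn'
# author: Jane Doe, committer: Maintainer
```

Tools that compute contributor statistics generally read the author field, which is why setting it correctly gives the original contributor credit.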
I don't actually own a copy of Photoshop so it's difficult for me to evaluate it, but I would place a very high man-hour estimate on a similar program .
GIMP is the closest project to Photoshop that you can feasibly pull metrics for, and it doesn't even have all the features and functionality of Photoshop. The estimate given by OpenHub puts it at 241 years, or roughly 480,000 man-hours.
There's a little over 12 million lines of C++ code and just under another 2.5 million lines of C code in Chrome. There's only 2.3 million lines of JavaScript. I would be shocked if all of that JavaScript was tests which is probably what it would take to accurately test Chrome. I would need to see some stats that you have to show what percentage of tests are in what language because I imagine doing all of the tests at the highest level wouldn't be efficient.
https://www.openhub.net/p/chrome/analyses/latest/languages_summary
Having interop between Rust and C/C++ makes the problem easier but not easy. You still block development and you have to justify the financial cost of all that work. How much would it cost to convert all of that code and how much money would it make? Is the cost less than what would be gained? If not then it's not going to happen.
Interestingly, OpenHub estimates that Deluge took around 20 person-years to develop to its current state. OP may not need anything quite that fancy, and it might also be worth comparing simpler projects - for example, the Python Kademlia library is estimated at one person-year. But of course, it's a widely observed fact that projects take longer than you expect, even when you factor in that projects take longer than you expect.
Two suggestions:
He's a self-deprecating and bashful soul who doesn't want to steal the limelight from the other 588 contributors to the Monero codebase, so he's distancing himself from the whole “lead developer of Monero” narrative.
Bitcoin attracts many talented developers. Obviously it's tragic that these beautiful people are lost in such a dismal swamp, and we all earnestly hope they'll escape soon to the sunlit uplands of Monero. But Bitcoin is where they are, and they may well come up with some good ideas while they're there.
Perhaps I am a contrarian with my view, but assuming 1. the founders' reward expires in 2020 AND 2. ECC hands over (at least partial, but preferably full) control of the Zcash trademark to the Zcash Foundation, development will SPEED up.
I've personally met multiple developers with code contributions to Bitcoin and Monero (among other FOSS work) who seem technically interested in Zcash development but so far have chosen not to contribute for philosophical reasons.
ECC's greed and centralized control have so far had a severe negative impact on the size of the Zcash volunteer development community. I don't doubt that some current ECC employees will continue to work on a volunteer basis (at least those who are already rich) after the founders' reward expires. However, a far larger developer community might be interested in working for free if the above 2 issues are resolved.
If Zcash ever hopes to have as many volunteer devs as Bitcoin or Monero the culture will need to change. That will require the Zcash community to stand up to the ECC more so than what we have seen so far.
https://www.openhub.net/p/bitcoin https://www.openhub.net/p/monero https://www.openhub.net/p/zcash
There's an older version from when the project was open source. I'm not sure if you need more recent features, but this might be your answer.
Disclaimer: I have not tested or tried to install that version. I have no clue how useable it is.
> Firefox has also largely been re-written in Rust.
I think this is inaccurate; a number of components have been rewritten in Rust, but most of Firefox is still in C++.
https://wiki.mozilla.org/Oxidation#Within_Firefox https://www.openhub.net/p/firefox/analyses/latest/languages_summary
That shows about 5-6 times as much C++ as Rust (which is more than I imagined on the Rust side).
Interesting, I had no idea that mp3 files contained embedded artwork. Do you happen to know when this started? I believe a lot of my collection is quite old, like early 00s.
And yes it is kinda interesting and surprising with the automatic artist art in FOSS apps. It apparently gets it from last.fm, which I heard of but didn't think was open source.
Apparently some aspects of last.fm are open source. Not sure which, though. Maybe their database of albums, photos of artists, or artist/band descriptions.
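On the embedded-artwork question above: cover art lives in the mp3's ID3v2 tag as an attached-picture frame, and ID3v2 has supported pictures since the late 90s (the PIC frame in v2.2, renamed APIC in v2.3 around 1999), so even an early-00s collection can carry it. A minimal sketch of spotting one, assuming an ID3v2.3 tag with plain big-endian frame sizes (real files have more cases, e.g. v2.4 synchsafe frame sizes):

```python
import struct

def _synchsafe(b: bytes) -> int:
    # ID3v2 header size: 28 useful bits spread over 4 bytes (bit 7 of each unused)
    return (b[0] << 21) | (b[1] << 14) | (b[2] << 7) | b[3]

def has_embedded_art(data: bytes) -> bool:
    """True if an ID3v2.3 tag at the start of `data` contains an APIC (picture) frame."""
    if data[:3] != b"ID3":
        return False
    end = 10 + _synchsafe(data[6:10])      # 10-byte tag header + declared tag size
    pos = 10
    while pos + 10 <= min(end, len(data)):
        frame_id = data[pos:pos + 4]
        if frame_id == b"\x00\x00\x00\x00":  # hit the padding area
            break
        if frame_id == b"APIC":
            return True
        size = struct.unpack(">I", data[pos + 4:pos + 8])[0]  # v2.3: plain big-endian
        pos += 10 + size                     # skip 10-byte frame header + body
    return False

# Build a tiny fake ID3v2.3 tag containing one APIC frame to try it out:
frame = b"APIC" + struct.pack(">I", 5) + b"\x00\x00" + b"12345"
tag = b"ID3\x03\x00\x00" + bytes([0, 0, 0, len(frame)]) + frame
print(has_embedded_art(tag))                          # → True
print(has_embedded_art(b"\xff\xfb" + b"\x00" * 32))   # bare MPEG frame, → False
```

For real files you would normally reach for a tagging library rather than parsing by hand; this just shows where the artwork actually sits in the file.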