>The Idle Detection API is subject to a user permission, configurable in Chrome 94's settings. The user can specify whether sites are allowed to ask "to know when you're actively using your device". A concern with such settings, though, is that sites may try to coerce the user by blocking content unless the permission is granted.
Exactly. We're already seeing abusive, misleading prompts ("press allow notifications to verify that you are not a robot") about notifications. The same will happen here.
Every added opt-in prompt also worsens alert fatigue, where people just keep pressing Allow until they get to the site.
To recap: the minified version of <code>flatmap-stream</code> version 0.1.1 (and possibly 0.1.2) contained obfuscated and encrypted malicious code. The code was designed to decrypt itself only when a package with the description "A Secure Bitcoin Wallet" was installed as a dependency. Effectively, they were targeting the users of <code>copay</code> (an open-source Bitcoin wallet) and its derivatives.
If both packages were installed, the malicious code would try to steal the user's bitcoins.
Reminds me of the post Linus made introducing the world to Linux:
> Be warned, it will probably only ever work on AT-harddisks. :-(
One thing that frustrates me about the response to the OpenSSL bug is how everyone is suddenly angry at the developers:
But this "core internet infrastructure" that secures countless billions of dollars worth of customer data is still a volunteer project developed by about 20 enthusiasts in their spare time left over from consulting and academic work: http://www.openssl.org/about/
I've yet to see anyone actually volunteer to start writing tests, start contributing their expertise in static verification, start refactoring the code to be more readable, or, hell, just assign a few employees to work on OpenSSL (or an alternative SSL library) full time. It seems wrong to demand perfection from a volunteer team because the project is critical to you, while being unwilling to contribute anything back.
I've recently discovered https://invidio.us/ as an alternative frontend for YouTube. It still needs some work, but it is so much faster, and you can have subscriptions without needing a Google account.
I hope it lasts longer than https://hooktube.com, which recently got shut down by Google.
from the ToS:
>57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
This has very little to do with np++. The scilexer.dll mentioned is part of the Scintilla open source project, which is where most of np++'s functionality comes from. There are several other editors out there that use the Scintilla toolset, all of which are subject to the same vulnerability as np++.
If you're worried/find you have a modified dll, just go compile your own from Scintilla source. That's what the CIA did...
The discussion over at lobste.rs had someone actually pull the indictment, and it seems he's charged with conspiring with someone who wrote a keylogger. So The Daily Beast is misrepresenting the story at least a little bit.
Link to discussion: https://lobste.rs/s/ncnsli/fbi_arrests_hacker_who_hacked_no_one
I've been following the ReactOS project for over a decade.
A few misconceptions and points to clean up:
It started off trying to match NT4 for functionality, and that bar has steadily moved up. Right now the state is that if the application you want to run can work in Windows 2000 it's probably fine, and lots of XP applications work.
In terms of gaming, don't expect much. ReactX (the DirectX implementation) is at a pretty early stage. It's based largely on the DirectX -> OpenGL wrapper that Wine uses to run games on Linux. Very few GPU drivers actually work, beyond a handful of older cards with older drivers.
Some people have gotten OpenGL 3D working (With GLQuake) using nVidia drivers. But that's pretty much the peak of what it can do in that respect.
List of working games - You get Quake, C&C Red Alert, and Sim City. What more do you need? The interesting one on that list is 'Banished' which is a comparatively recent game, using the OpenGL renderer.
tl;dr:
When you set target="_blank"
but you don't set rel="noopener"
then bad things happen:
The new window/tab gets a reference, window.opener, to the original window/tab. Using this reference, the newly opened page can do nefarious things like redirect the user's original tab to wherever it wants!
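The fix is one attribute on the link. A markup sketch (the URL is a placeholder; `noreferrer` is included as a fallback for older browsers that don't understand `noopener`):

```html
<!-- Without rel="noopener", the opened page can script window.opener,
     e.g. window.opener.location = "https://phishing.example" -->
<a href="https://example.com" target="_blank" rel="noopener noreferrer">
  safe external link
</a>
```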
Several decades ago, IBM considered putting software manuals on tape so that field engineers might learn while driving to customers' sites. They hired a voice talent to make the recordings. Here is a failed attempt. You must listen at least to the 90 second mark. I got this from a UI researcher who got it from IBM.
I've read this story, I think it was in "The Design of Everyday Things". It goes like this:
It was the 1930s. Engineers were asked to design faster elevators for the new skyscrapers towering over Manhattan, as wait times for elevator cars had become unbearably long, and many occupants were complaining. The engineers could only make the cars move so fast without compromising safety, so it seemed like a really tough engineering problem to tackle.
Then one day a few interior designers installed mirrors in the elevator waiting lobby. Suddenly, all complaints about slow elevators disappeared, seemingly overnight. Everyone thought the engineers had somehow overcome their limitations and designed faster elevators, when in fact everyone was just too damned busy staring at themselves in the mirror to notice the wait.
tl;dr installing mirrors made elevator wait time seem much more bearable.
Keepass, storing the .kdbx files on Google Drive or Dropbox.
Actual packaged release previews will come by summer. We plan to distribute via the Windows Store, and possibly also as packages on our GitHub for those who have enabled Developer Mode on their Windows machine to sideload apps.
We're still working on this part. There were a lot of moving pieces to get this far by TODAY, and this is one of the things we'll get back to tackling after the Build conference ends later this week!
Right now, you can get it by building it yourself from our GitHub at https://github.com/microsoft/terminal.
> Hey there! Another PM on Visual Studio Live Share here. Security is absolutely something we are designing for. Microsoft will not be collecting data on the code. The code is not stored or uploaded in the cloud in any way. Rather, it is just a connection that is established between you and the teammate you are sharing with.
> There's more details in the FAQ here: https://code.visualstudio.com/docs/supporting/live-share-faq
The discussion about this article on Hacker News is interesting, and given that the author and another native Bengali speaker don't agree on how the character in question should even be represented, it's not particularly surprising the Unicode Consortium hasn't figured it out either.
> Yip
Hi, we're releasing an important project. Would you please consider changing your comment? I'd hate to have to take ownership of it.
(No slight intended to Yip messenger. I've never heard of you before, you just turned up in a google search.)
No kidding. The number of subdirs isn't the only problem, but it's such an obviously wrong approach that they shouldn't need Github to tell them it's a scalability issue.
Pretty much every filesystem in existence scales poorly as the number of entries in a single directory grows. Some handle it better than others, of course, but it's still a terrible practice to stuff everything into one directory and assume you won't have any problems.
And it's such an easy thing to fix - hash the filename or subdirectory name, take the first 2 hex characters, and use that as an intermediate folder name. Now you have at most 256 top-level subdirectories, and with 16k entries each of those has ~64 children.
If you look at .git/objects/ in any Git repo, you'll see this is exactly what Git does internally.
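One way to sketch that scheme in Python (SHA-1 mirrors Git's choice; any stable hash works, and the function name here is my own):

```python
import hashlib

def shard_path(name: str) -> str:
    # Hash the name and use the first two hex characters as an
    # intermediate directory: at most 256 top-level subdirectories,
    # exactly like .git/objects/.
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
    return f"{digest[:2]}/{name}"
```

So `shard_path("hello")` lands the entry under `aa/`, and siblings spread roughly evenly across the 256 buckets.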
Since I can't possibly say this better than the book I just read I'll just quote it. From The Pragmatic Programmer:
>When woodworkers are faced with the task of producing the same thing over and over, they cheat. They build themselves a jig or a template. If they get the jig right once, they can reproduce a piece of work time after time. The jig takes away complexity and reduces the chances of making mistakes, leaving the craftsman free to concentrate on quality.
>As programmers, we often find ourselves in a similar position. We need to achieve the same functionality, but in different contexts. We need to repeat information in different places. Sometimes we just need to protect ourselves from carpal tunnel syndrome by cutting down on repetitive typing.
> In the same way a woodworker invests the time in a jig, a programmer can build a code generator. Once built, it can be used throughout the life of the project at virtually no cost.
>Tip 29: Write Code That Writes Code
I just installed it this week.
For those who don't know:
Software you can install on most OSes, though usually run on a Raspberry Pi, that blocks ads at the DNS level (you make it your network's DNS provider). It uses more or less the same blocklists as uBlock.
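The core idea is just a sinkhole at resolution time. A toy Python sketch (the blocklist entries and function names are made up for illustration):

```python
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(domain: str, upstream) -> str:
    # Pi-hole-style DNS sinkhole: answer blocklisted names with 0.0.0.0
    # so the client never contacts the ad server; forward everything
    # else to the real resolver.
    return "0.0.0.0" if domain.lower() in BLOCKLIST else upstream(domain)
```

Because every device on the network asks this resolver, ads get blocked even on smart TVs and phone apps where you can't install a browser extension.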
Pretty disappointing. As this comment from HN says, Oracle themselves are guilty of this.
I hope this goes up to the Supreme Court. I imagine that Google are sure to appeal this.
When I installed Windows 8 on a Windows 7 machine I had configured to triple boot with Windows on the main partition and two Linux distros on other partitions, you patched my bootloader to only recognize Windows.
I've never forgiven you. (It wasn't tough to fix, actually, but it was just like "wow, really, Microsoft?")
BONUS EDIT: Chocolatey has been around for years and you still haven't given it first-class support or provided an alternative. I want a Windows package manager - let me set up a new box with a PowerShell script without needing to manually install other software first.
The interesting part is why the default in systemd (i.e. whether or not to kill user processes) was changed in the first place. As per this comment, it seems to be because of some lingering processes from Gnome login sessions. The commit which actually makes the change doesn't cite any other reasons besides the generic "much cleaner and provides better control".
It is possible the actual reasons are sound and well thought out, but the references provided by the person creating the issue fail to provide sufficient rationale. It's not hard to understand the reluctance on the part of tmux's maintainers, especially given how much is being asked of them to accommodate another project's change, one which can very well be seen as frivolous.
'rayiner at HN made a good comment that's worth repeating here:
>Offices are a really great example of the push to keep programmers from thinking of themselves as professionals, either by treating them like IT or tech support, or like college kids. Google or Facebook's revenue per engineer is probably 3x that of a law firm or consulting firm, but the overwhelming practice in the latter sorts of places is for each professional to have an office with a door.
>When you're a growing startup, having private offices costs you flexibility as well as cash, because open plan is easier to reconfigure as you grow. If you're at the point where you're commissioning a Gehry, you're well past that excuse.
I don't dispute that these were all very important books at the time of their release. For that reason, they are historically important to our profession.
The thing I am wondering is whether each of these books is still the best way to learn about the subject it specializes in. Maybe the "state of the art" has advanced since "Refactoring" was published.
I know it's not in the list and I know it's not a software engineering book, but "The C Programming Language" (aka K&R) is treated like a bible. Yes it was important, but maybe it's not the best book to recommend to people who want to learn about C programming in 2015.
Fair question. Historically, there have been few people hired to work on PHP. A lot of work on PHP is done by volunteers. There are currently two people hired to work on PHP (that I know of). One of them is Dmitry Stogov who mostly does work on performance. The other is Nikita Popov, hired by Jetbrains, who has been the driving force behind PHP for the last 5+ years. Sadly, he's recently decided to focus his work on LLVM which prompted the PHP foundation to be created literally just a few days ago. https://opencollective.com/phpfoundation Anyway, enums were actually implemented by me and specified by Larry Garfield and me. I'm hoping to be working on PHP professionally very very soon.
Haha. Super useful, but easy to get into sticky situations. If you want to be a power user, I highly suggest learning a bit of how Git works underneath. Once you know things like: branches are just pointers to commits, cloning a repo literally copies everything over, and pulling a branch first updates a remote-tracking branch before merging it into the target branch, you'll be able to use Git a lot better and understand what's going on in all those sticky situations.
This book is good: https://git-scm.com/book/en/v2
You also have to consider that the push to ensure all web traffic is encrypted comes from many places, like the Electronic Frontier Foundation (HTTPS Everywhere) and the greater web community. It's not handed down from on high by Google. Lots of people have been clamoring for this, demanding that big sites like Facebook switch to 100% HTTPS, and so forth. The issue of whether to bake encryption into HTTP/2 was also hotly contested.
I made a compiler. GNU people insist that Linus has to suck Stallman's cock because he uses gcc.
Here's my compiler. http://www.templeos.org/Wb/Compiler/
I don't have to suck Stallman's cock. God's dignity would never allow His third temple to suck atheist cock.
I wrote all the code from scratch. Yeah, no networking, Internet or GPU support. It's a toy, just like a C64 was a toy.
A lot of drama because of a fork from FFmpeg called "libav". I think there was a lot of aggression between the two forks and Michael was under a ton of stress from both sides. HERE is a thread from earlier today on HN, and one of the top level comments is from the president of VideoLAN, and that draws a lot of fire as well.
FFmpeg is amazing software, but I don't blame Michael for not wanting to work in that kind of environment.
I keep thinking about going solo - except I don't yet have enough funds and I have a family to support. On the other hand my job sucks and as I look out into the market all jobs equally suck. If I had to continue my career like this for 3 more years I would probably get depressed.
I have to wonder though, is it really a good idea to just try a few small random projects and see what sticks? That might make sense if you are just selling things made by other people, but when you are the person making the product, I don't think it's a good idea.
Good software products take a long time to create. Trying many things out means you will not spend enough time on it and thus won't have a good product to sell. So when it doesn't sell, you can't tell why: is it because no one wants such a thing, or is it because the product is incomplete? It might very well be that the product could sell if you had just spent the time to make it really good.
I don't think the product even has to be novel. It might be just fine to pick a market that already exists, as long as you can be confident you can produce something significantly better than everything out there. I don't mean cloning something and trying to compete on the entire list of features; that is foolish. Instead, focus on solving the core problem in that market, and do it 10x better than the competition. Leave the other fluff for later. (See "Build Less" by 37signals.)
My favorite feature of C is how main doesn't have to be a function :)
https://repl.it/repls/AbsoluteSorrowfulDevicedriver
EDIT: If anyone's curious as to why the hell this works, http://jroweboy.github.io/c/asm/2015/01/26/when-is-main-not-a-function.html
looks like:
"Everyone is welcome to contribute to Swift. Contributing doesn’t just mean submitting pull requests—there are many different ways for you to get involved, including answering questions on the mailing lists, reporting or triaging bugs, and participating in the Swift evolution process."
>Hutchins is due to appear in court later on Friday, when he could plead guilty or not guilty. If he pleads guilty he could be sentenced to a short prison sentence or supervised release. If he pleads not guilty, he will be moved to Wisconsin, where the charges have been brought, to face trial, which could start any time between three months and three years, Ekeland said.
This is the main reason we have so many people in prison.
"You can go to trial and do thirty years if we convict you, or you can plead guilty and do three years."
"If you plead not guilty, we are going to hold you in jail for three years until your trial."
EDIT
This is something every American needs to see:
> The title of Ava DuVernay's extraordinary and galvanizing documentary refers to the 13th Amendment to the Constitution, which reads "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States." The progression from that second qualifying clause to the horrors of mass criminalization and the sprawling American prison industry is laid out by DuVernay with bracing lucidity. With a potent mixture of archival footage and testimony from a dazzling array of activists, politicians, historians, and formerly incarcerated women and men, DuVernay creates a work of grand historical synthesis.
Watch 13th on Netflix to see where this all has led.
They added IDN support last month.
Fair warning: if you're trying to get a certificate for a domain with an IDN TLD (e.g. example.ак.срб), you'll run into a bug preventing issuance. The fix for that will probably be deployed by the end of next week.
Issuance for something like пример.com works right now.
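Under the hood these names are just Punycode. Python's built-in `idna` codec (it implements the older IDNA 2003 rules, which suffice here) shows the ASCII form the CA and DNS actually see:

```python
def to_ascii(domain: str) -> str:
    # IDNA ToASCII: each non-ASCII label becomes an xn-- Punycode label;
    # plain ASCII labels pass through unchanged.
    return domain.encode("idna").decode("ascii")

# to_ascii("пример.com") -> "xn--e1afmkfd.com"
```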
doesn't scale
user base can be no more than 10-100
any more than that and the forums wage war on you as the spammer you are
having said that, there's mechanical turk
https://www.mturk.com/mturk/welcome
i've often wondered when contemplating google's self-driving car and the problems that project faces, if they could just stream all of the telemetry to some guy sitting in a cube farm, and have him drive the car. kind of like what the US military does with drones
There is a tool called Compiler Explorer that shows you the disassembly of your program and matches source lines to asm lines. It is very well known in the C++ community and makes this workflow quite painless.
Factorio doesn't iterate through each item every tick; it's *much* more efficient than that. In the ideal case it only tracks the space in front of the leading item. If there's no space in front of the lead item, nothing moves, hence the behavior seen here. Any fix for this has to be at least about as performant as the current algorithm.
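A toy Python sketch of that optimization (my own illustration, not Factorio's actual code): a fully compressed run of items is stored implicitly, so each tick only the lead item's free space is checked.

```python
class Belt:
    """Toy transport-belt segment. Items behind the head are assumed
    fully compressed, so only the head's position needs storing."""

    def __init__(self, length: int):
        self.length = length  # tiles on the belt
        self.head = 0         # position of the leading item
        self.count = 1        # items in the compressed run behind it

    def tick(self) -> None:
        # O(1) per tick regardless of item count: if the head has space,
        # the whole run conceptually advances; if not, nothing moves.
        if self.head < self.length - 1:
            self.head += 1
```

Once the head reaches the end of the segment, further ticks are no-ops, which is exactly the "everything stops when the front is blocked" behavior described above.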
Isn't this argument kind of a strawman?
Who says that self-documenting code means absolutely no comments? Even the biggest champion of self-documenting code, Uncle Bob, devotes an entire chapter in Clean Code to effective commenting practices.
The idea of "self-documenting code" is that comments are, at best, a crutch to explain a bad design and, at worst, lies. Especially as the code changes and you then have to update those comments, which becomes extremely tedious if the comments are at too low a level of detail.
Thus, while code should be self-documenting, comments should be sparse and have demonstrable value when present. This is in line with the Agile philosophy that working code is more important than documentation, but that doesn't mean that documentation isn't important. Whatever documents are created should prove themselves necessary instead of busy work that no one will refer to later.
Uncle Bob presents categories of "good comments":
Some examples of "bad comments":
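To make the distinction concrete, a contrived Python sketch (the function names and the cited contract clause are invented for illustration):

```python
from datetime import date

# Bad comment: it merely restates what the code already says.
def add_one(i: int) -> int:
    return i + 1  # add one to i

# Good comment: it records intent the code cannot express on its own.
def billable_days(start: date, end: date) -> int:
    # Billing periods are inclusive of both endpoints per the
    # (hypothetical) contract clause 4.2 -- hence the +1 that would
    # otherwise look like an off-by-one bug.
    return (end - start).days + 1
```

The second comment earns its keep: delete it and a future maintainer will "fix" the +1.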
"Reverse engineered" is a bit of a stretch. You can compile CUDA with Clang/LLVM. LLVM also supports emitting SPIR, OpenCL's intermediate language. While it may not be trivial to emit SPIR from a CUDA frontend, it probably does not involve much "reverse" engineering.
And then there is this quote.
> While there is an independent GPGPU standard dubbed OpenCL, it isn’t necessarily as good as CUDA, Otoy believes.
CUDA colloquially refers to both the language and the toolkit NVIDIA supports. The quote does not specify which part he is talking about. The reason one might consider CUDA "good" is not the language (it is fairly similar to OpenCL); it is the toolkit. Implementing a cross compiler does not make the CUDA libraries (such as cuBLAS, cuFFT, cuDNN) portable. They are still closed source and cannot be supported by this compiler.
Then there are issues with performance portability. Just because it runs on all the GPUs does not mean it is going to be good across all of them. This is a problem we constantly see with OpenCL as well.
This article reads like a PR post with little to no understanding of the GPU compute ecosystem.
Reminds me of this question about basic water rendering from Vladimir Kajalin, who is now a lead developer at a big game studio and who invented a real-time SSAO algorithm in 2007 that pretty much every game has used since.
For those that prefer a browser: https://devdocs.io
EDIT: In Chrome at least, the page will work offline once you've accessed it. Be sure to configure the documentation sets you want available in the options menu next to the search bar.
If you know the history of shuffling algorithms, this is all really funny, because Knuth wrote about ALL these problems (as well as eek04's problem elsewhere on this page) in the second volume of "The Art of Computer Programming" - in 1968!
There's a lesson here, and that lesson is that anything related to security is extremely, extremely difficult, because unlike anything else in your programming, you have many intelligent humans looking for weaknesses in your code.
Therefore, if you do anything that involves security, you should never try to reinvent the wheel - get really familiar with the known problems in the literature, and if at all possible, get an expert involved!
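For reference, the unbiased shuffle Knuth describes (Algorithm P in TAOCP vol. 2, usually credited to Fisher and Yates) is only a few lines; a Python sketch:

```python
import random

def fisher_yates(items, rng=random):
    # Walk from the end; swap position i with a uniformly chosen j in 0..i.
    # Every one of the n! permutations is equally likely. The naive
    # "swap each i with a random j in 0..n-1" variant is subtly biased.
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a
```

The known problems are all in that one line choosing `j`: pick it from the wrong range, or derive it from a weak generator, and the bias is exploitable.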
TAOCP is actually pretty funny on randomness! It of course includes Von Neumann's quote:
> Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. (Von Neumann)
...but then Knuth goes on to show the results of a failed experiment of his, where he had multiple random number generators, and at each step he chose which generator to use at random.
Astonishingly, the first time he ran this "random random" number generator, it converged quite fast to repeating the same number over and over again! He then proves a theorem stating that a "random" random number generator is actually very likely to "quickly" converge to a "small" loop of numbers, and the quotable here is:
> Random numbers should not be generated with a method chosen at random. (Knuth)
Really, I'm pretty amazed that the writers of the code didn't know this. If you were a computer science geek during the 70s, 80s or 90s, you almost certainly had a copy of TAOCP and knew at least the table of contents by heart, even if you were just posing. They could have just asked an old guy... :-D
This comment on HN on the subject, from another ex-Amazon employee who was also sued, is pretty informative:
https://news.ycombinator.com/item?id=7975428
tldr "Amazon has pursued this particular non-compete "hundreds" of times, and has never [...] prevailed once"; they theorize it's to frighten other employees who consider leaving.
I've done my part. I've written utf8rewind, which is a free and open source library, written in C, that handles common UTF-8 string operations like case conversion, converting to and from UTF-8 and normalization.
It requires no initialization and it allocates no memory on the heap. Adding support for UTF-8 can be as easy as compiling the library, linking it to your application, including its header and calling its functions.
Just for the note...
10 or so years ago I did an experiment, creating a standalone Java VM bundled with basic graphics and UI primitives.
The final GUI executable contains that JVM (80 KB) plus the bundled .class files, combined into a single exe file without external dependencies. The whole project was quite promising until the Sun-MS Java wars, sigh.
More on that story: https://sciter.com/10-years-road-to-sciter/
Being able to add collaborators to your repo without confirmation has been abused twice in the history of the site. The first time it happened, the abuser was banned. I wish Zed had contacted our support to bring this issue to our attention prior to taking matters into his own hands.
I'm happy to address the specifics, but there's no conspiracy here. The bottom line is we're already working on a blocking feature because a troll decided to ruin it for everyone.
Update: This comment didn't climb nearly high enough, but hopefully people will see this: https://github.com/blog/862-block-the-bullies
I tried Waterfox and it is exactly like Firefox with these changes:
For me, one of the biggest problems with Slack (right after the fact that it's proprietary and controlled by a company I don't trust) is that you need to pay if you want full access to chat history. In theory you could use the API to write a bot that stores history somewhere, but that seems to be against the terms of service (point 6.5).
Also, I still remember how Slack suddenly changed their ToS and took my data hostage. There was no option to decline the new ToS and take my current chat history away. Fuck them.
>while as black, it finds forcing draws.
Which is unfortunately common in high level chess.
The London Chess Classic is this week and the first 19 games ended in draws.
Comparing apples to apples, Microsoft also got it right with their IDEs. They have made great strides with their Visual Studio products in recent years. They created Visual Studio Code, which is free and available across platforms. They have Visual Studio Community Edition, which is the full version of Visual Studio (sans some professional-level features like a testing suite, I believe), free for up to 5 users in an organization that makes less than a million dollars a year.
Want to buy a license for Visual Studio as a business? Great, you can get that for some real money, because you are a business with an income and you are using Microsoft as a main tool to make that income. I'd be more than happy to shell out $1,199 a year for Visual Studio and a bunch of auxiliary tools if my team is making more than a million bucks.
Aaannnd Linux is finally beating OS X in usage amongst developers. Surprised it took so long.
2017: https://stackoverflow.com/insights/survey/2017#technology-platforms
2016: https://stackoverflow.com/insights/survey/2016#technology-desktop-operating-system
In fact, Linux looks to be the only major OS gaining in popularity; OS X and Windows both lost users.
I don't like Go either. That said, I have some feedback for the author. Meta: please timestamp blog posts, at least with the month and year; in this case, February 2018.
Anyway...
> Rob's resistance to the idea has successfully kept Go's official site and docs highlighting-free as of this writing.
This is mostly true but the Go Tour does have optional syntax highlighting.
> Java can now emit this warning for switches over enum types. Other languages - including ... Elixir ... similarly warn where possible.
Elixir doesn't actually. It's a dynamically-typed language and it doesn't do exhaustivity checking.
> higher-order functions that generalize across more than a single concrete type,
I believe the author is referring to parametrically polymorphic functions. Higher-order functions are ones that accept and/or return functions, and Go has first-class functions so it follows it has HOFs as well, e.g. https://golang.org/doc/codewalk/functions/
> the Go team's response of "vendor everything" amounts to refusing to help developers communicate with one another about their code. ... I can respect the position the Go team has taken, which is that it's not their problem,
Actually, I don't think that's it. Go's primary 'client' is Google, and Google source code famously vendors everything. Go is designed from the ground up to enable that strategy. Its suitability to others is a secondary consideration.
> The use of a single monolithic path for all sources makes version conflicts between dependencies nearly unavoidable. ... Again, the Go team's "not our problem" response is disappointing and frustrating.
But again funnily, it's perfectly suited for Google's monorepo.
> Go has no tuples
True, but it does have multiple return values, which is a use case for tuples. This specialization can be considered good or bad (imho, bad).
> Right now, you can get it by building it yourself from our GitHub at https://github.com/microsoft/terminal
Waouh, I was not expecting that, and this is great!
(and WIL seems cool too!)
Thanks!
Their target for an IPO is 2020, at 100M revenue (https://about.gitlab.com/company/strategy/). I'd guess the next couple of years, they're going to continue to grow and spend like mad.
This is where things get interesting. They're now at a point where it becomes a massive task for any one person to remember who is working where. If they grow to 2-3x the size, there will now be a constant stream of people leaving and joining different teams. Not to mention groups that include "everyone" can be a firehose of information.
Once they IPO, they'll have to pay very close attention to open communication. You can run afoul of SEC rules very rapidly if you're just shooting from the hip about strategy in public.
I hope they succeed, because I've preferred working remotely myself.
> Restart Apache in every 10 requests? :) Oh Lord.
You laugh, but Apache actually has first-class support for this feature: MaxRequestsPerChild.
That right there probably solves some 99% of all issues people have with node. But, you know, muh async and web scale, so no Apache.
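For reference, the directive is a one-liner (renamed MaxConnectionsPerChild in Apache 2.4, with the old name kept as an alias; the value 1000 below is just an example):

```apacheconf
# Recycle each worker process after it has served 1000 requests.
# 0 means never recycle; a finite value papers over slow memory leaks.
MaxRequestsPerChild 1000
```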
tl;dr: The compromised version is eslint-scope 3.7.2, released about three hours ago. 3.7.1 and 4.0.0 are safe. If you've done npm install today, reset your NPM token and npm install again. You are affected if you've used eslint-scope 3.7.2, ESLint 4, or any version of Babel-ESLint (which hasn't updated to 4.0.0 yet).
It seems the malware itself reads the .npmrc file in order to get more tokens to compromise, and to spread itself.
Edit: NPM has now responded here with a liveticker. All login tokens created in the last ~40h were revoked.
Edit 2: Official Postmortem.
>The maintainer whose account was compromised had reused their npm password on several other sites and did not have two-factor authentication enabled on their npm account.
Moral of the story, that one IT sec nerd in the office trying to get us all to stop entering our passwords everywhere was right after all, I guess.
I've never ever deliberately tried to search the web from within the Windows start menu, and you probably shouldn't, either. Typing in a local file's name that has someone's name in it, or a confidential subject? You're sending off that data to Bing for autocompletion.
Use ShutUp10 to disable those results, and preferably Cortana as well.
Also this has been around for a while now in the form of instant.io. Seems like the only difference is that if your file is below 5 GB, wormhole stores it on their server for a limited time, while instant.io is peer to peer only.
It's funny that he mentions financial services as an industry with a lot of COBOL code. Although he doesn't say it, the implication is that they're getting along just fine that way, and the only problem is a looming shortage of developers.
Personally, I'd say the financial services industry (in the US, anyway) is a fucking embarrassment from a technological perspective. ACH transfers take days. Checks deposited in person at a bank office take days to clear. Credit card transactions post immediately but aren't cleared for days. Banks in the US deliver _exactly_ the kind of customer experience you'd expect from an industry that hasn't updated its technology in fifty years, which is why we now have companies like Apple, Google, Samsung, Venmo, Square, and Simple building what amounts to a parallel financial services industry backed by modern technology.
The problem is often described as too much COBOL code, but my experience has always been that any halfway competent developer can learn a new language without much difficulty, so COBOL itself can't be the problem. The real problem seems to be that all the legacy COBOL code is a bunch of god-awful spaghetti with little or no documentation or unit tests. Companies have sunk a huge amount of money into developing code that's always going to be horrifically expensive to maintain because it was written back when nobody had the first clue about how to write maintainable software.
If history is any guide, most of the companies that rely on reams of COBOL code will never update their technology because they will never be willing to invest in technology upgrades that will take years or decades to pay off. They'll just keep limping along doing business as usual until they're put out of their misery by competitors that aren't living in the past.
This is hardly news. It was being discussed in engineering circles in the '80s. One of the standard anecdotes was Wozniak's description of the development of the Apple ][ disk drive (circa 1977): he could not make progress unless he got twelve uninterrupted hours at a time. It would not surprise me to learn it is in The Mythical Man-Month (1975).
There is a 1-byte-per-pixel image in memory that the graphics routines write to. They obey a window z-order to keep the correct windows on top.
30 times a second, the window manager task redraws the screen by converting the 1-byte-per-pixel image to 4 bit planes. It copies all four bit planes to VGA 0xA0000-0xBFFFF memory, but actually uses a cache because VGA memory is so slow: it only writes to VGA memory if the contents have changed.
When it draws the screen, it starts by drawing the text layer, then calls a callback for each task. Finally, it draws a persistent graphics layer on top. The persistent layer is usually mostly transparent pixels.
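The byte-per-pixel to planar conversion described above can be sketched roughly like this (my own toy paraphrase in Python, not the project's actual code):

```python
def to_bitplanes(pixels):
    """Convert a 1-byte-per-pixel buffer (16 colors, values 0-15) into
    4 separate bit planes, VGA-style: plane p holds bit p of every
    pixel, packed 8 pixels per byte, most significant bit first."""
    planes = [bytearray((len(pixels) + 7) // 8) for _ in range(4)]
    for i, px in enumerate(pixels):
        byte, bit = divmod(i, 8)
        for p in range(4):
            if (px >> p) & 1:
                planes[p][byte] |= 0x80 >> bit
    return planes
```

The "only write if changed" cache then amounts to comparing each converted byte against a shadow copy before touching slow VGA memory.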
You shouldn't call removeAllObjects before releasing a dictionary/array. If the release is actually dealloc'ing the object, it will remove all items anyway. If not, somebody else is retaining the object and you are probably messing with them.
I would use the member variable directly instead of the property when reading. E.g.:

// [self.allEntries objectForKey:key];
[allEntries objectForKey:key];

That way you avoid a method call when possible.
I don't see the point of having a @protocol that's only implemented by one class... just use SudokubotViewController* instead of id <RootViewDelegate>.
For - (UITableViewCell*) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
I'd recommend using a xib file and loading the cell from it instead of creating views in code, which is pretty hard to maintain.
Not about Java internals. About a hash function that has since found its place inside Java internals.
From the cited Bloch’s text:
> While this class of hash function is recommended in The Dragon Book (P(65599)) and Kernighan and Ritchie's "The C Programming Language, 2 Ed." (P(31)), it is not attributed in either of these books.
So: Java copied an algorithm described in a few different sources (among them K&R's book), and Bloch tried to find the original author of the algorithm.
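The P(31) scheme in question is the polynomial hash that ended up as Java's String.hashCode(). A quick sketch (Java's result is a signed 32-bit int; this returns the unsigned equivalent):

```python
def p31_hash(s: str) -> int:
    """Polynomial hash with multiplier 31, as in Java's String.hashCode().
    Java uses a wrapping 32-bit int, so we mask to 32 bits."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h
```

A small multiplier makes collisions easy to construct by hand: "Aa" and "BB" both hash to 2112 under this scheme (and under Java's real hashCode).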
Was it ever any different though? I remember seeing the same low-quality articles there from the day it was announced.
Also, a bit suspicious that this was posted on qvault.io, which also hosts quite a few articles geared towards beginners. Hmmm
Because almost 60% of Windows users still use it:
EDIT: A little more recent - just above 50%:
http://www.ghacks.net/2010/12/22/december-market-share-xp-drops-windows-7-gains/
IIRC, it was originally written in Lisp, then some crazy macro transformed the source to C/C++. I don't know if it has been rewritten since then, but there are (apparently, didn't watch the video) details here: .net gc talk
Points #3 (get a code sample) and #4 (don't give homework) appear to contradict one another. Not all developers have the time or the inclination to spend anywhere from 5-20 hours per week doing programming on personal projects. Those developers won't have code samples they can share. The only way of getting a code sample from those developers is to give them a homework assignment.
Before anyone pipes up, saying, "Well, if they don't code outside of work hours, they're no good," I have to say that the best damn C++ developer I knew was a dude who came in at 8, coded in a focused manner for 8 hours (i.e. no reddit, no hacker news, just Google and The C++ Programming Language), and then went home. On his free time, he worked on his motorcycle and went hiking.
Sorry it was unclear. Our frequent readers know of our API directory, which was started in 2005 and now has over 3,500 APIs. We're sourcing the data from our own research in our publicly accessible directory of APIs.
All XML APIs: http://www.programmableweb.com/apis/directory/?format=XML
All JSON APIs: http://www.programmableweb.com/apis/directory/?format=JSON
> I don't plan to add network functionality to this (even though you totally can), so no clone or push.
Git clone and push also work with local repositories; they don't require network functionality.
There's lots of stuff I want back for extensions but it all comes down to the ability to modify the browser chrome like they used to. Because that would give extensions:
The ability to create native-looking UIs (FoxyProxy's configuration window, Greasemonkey's integration into about:addons). In general, better integration with the browser.
Styling the browser chrome without having to manually hack on some CSS file that might need to be updated for every other Firefox version. (With Classic Theme Restorer, an experienced developer took care of that for me.)
Hiding existing browser toolbars (like the tabs) and adding new toolbars.
Multi-line tab bar or moving the tab bar below the address bar or even below the content view
Reordering context menus and removing unwanted items
Better interoperability between add-ons. As an example, TreeStyleTab didn't implement a new tab bar, it simply restyled the existing one to be vertical. That meant other add-ons that would e.g. color tabs on some condition or add items to the tab context menu would still work.
A consistent experience for mouse gestures. With WebExtensions they have to be detected using injected content scripts. As I understand it, websites might try to mess with that, plus they do not work on internal pages.
EDIT: Not to forget that the most powerful add-ons like Tab Mix Plus, Tab Groups, Vimperator and Pentadactyl were only possible because of that.
Also from the article:

> The items above represent some of the bigger changes
An API for toggling reader mode is one of the bigger changes? That has to be a joke.
Good point! One should also count references to "the Camel Book" for Programming Perl and "the Dragon Book" for Compilers: Principles, Techniques, and Tools.
I'm sure there are others too, but those are the ones that come immediately to mind just now.
The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie.
http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)
This one. ABSOLUTELY.
It's comprehensive, the best written, and has withstood the test of time.
> I downloaded a trial of your software and it’s awesome, it works a lot like your last product and I love your company (I even have its logo tattooed on my back). When I go online to buy the collector’s edition of your app I realize that since it’s a physical item they won’t ship it to my country. I try to go to several app stores and none of them will take my money since my credit card is not from the U.S. I go through a bunch of hoops to get my friend’s American credit card so I can buy your stuff, because I really love what you do. Once I get it I go back to the app store and it turns out that since my IP is not in the States, I can’t buy your product. I pay $20 for StrongVPN just so I can fake my IP address and buy your product. I finally buy it. A couple of days later I run into a friend who’s also into your products and he tells me that he was able to find your application online, 1 month before the “official” launch, and he just clicked two links to download it. His copy also includes a high resolution version of all the assets and the artwork in your manual. I just stand there, wondering for how long I’m going to keep going through so much bullshit just so you can go and release another application next year.
This comment said it best.
I've read 11 of them over the last 20 years, and frankly, they're all overhyped.
Pretty much all of these books are great at a specific point in a programmer's education, but will come across as either obtuse or facile if read too early or too late. Pretty much everything useful from all of these has permeated industry.
Two exceptions:
The Art of Computer Programming should not be read, it should be referenced. Trying to read it cover-to-cover is like trying to read the encyclopedia from A to Z.
Design Patterns should not be read. The one good idea it contained (creating a shared language for common design patterns) is overshadowed by a sense that you should stuff these specific patterns into your code. The list of patterns is obsolete. It unnecessarily limits itself to capital-O Object Oriented systems. It's about 5 times longer than it should be. Seriously, no one should ever read Design Patterns; learn about design patterns somewhere else.
Yay, more fragmentation in the IM ecosystem.
Listen, it's good that it's open source and self-hosted, but the problem with all the messaging "products" doesn't come only from the fact that they're proprietary; it comes from focusing on single-client solutions. IRC and XMPP have a lot of issues, but they are protocols, so anybody can write the clients they want (graphical, command line, bots, ...) and use them with any service that speaks those protocols. By creating new domain-specific client-server protocols, you're just giving people less choice.
Right now, the only approach that seems to go in the right direction is Matrix.org. But to be honest, there are a lot of modern XMPP and IRC clients around now, so if you want to write a fancy UI, why don't you just write a good client for those?
> 🍿🍿🍿 - https://github.com/microsoft/terminal/issues/10623
I understand Muratori's point with the whole refterm saga, and I think it was handled very badly by the windows terminal team inside github issues.
But this dude is just straight up riding the wave against the Windows Terminal without providing anything relevant besides parroting what was said by Muratori in his videos, which at this point I assume the Windows Terminal team has already seen.
Basically:
Full documentation and a BSD source release of the graphics stack on the BCM21553. This is not the BCM2835, which is in the rPi, but it's apparently close enough that they've sponsored a contest to give $10K to the first person who can port the stack over and show Quake III running on the open-source stack.
This is 3D code, apparently, and some of the video codec stuff still needs blobs for licensing requirements. Not ideal, but enough to get Wayland running, and a good chance we can implement some video stuff ourselves. Or that they'll release more later.
/u/bakuretsu is talking about avoiding programming altogether, not about rewriting this in something that compiles natively. This project could be handled by writing extra man pages and using the standard man command to read them.
If you want to do that, you could use Pandoc to convert the markdown tl;dr's already written in your project to groff, which man can display. You could put the pages in their own section, called tldr, say, so that they could be used as
man tldr tar
I'm curious what would happen because I'd imagine GitLab would have at least as difficult a time managing the project as GitHub. In fact I don't think they even have 5 servers they can spare for a project like this. A post from last year mentions they have only two.
> Here are 5 Awesome Books for Learning C/C++
Stop saying C/C++. They are not at all alike; they are quite different languages and need separate study if you're going to understand either.
That said, every serious programmer should understand C. Living in fear of the lower level is fine if you want to be a line-of-business Jira jockey, but if you're doing serious work, you need to understand how compilers and computers actually work.
For my part, though, I'd recommend Ben Klemens's <em>21st Century C</em>. "C is punk rock."
Top comment on Hacker News by rgbrenner:
> > We need a break. We need an opportunity to learn to the features we already have responsibly — without tools! Also, we need the time for a fundamental conversation about where we want to push the web forward to.
>
> How about this.. YOU take a break. Stop trying to keep up with every little new thing that comes out. Wait a while.
>
> And you'll get exactly what you want.
>
> The tools that it turns out were a bad idea will die. And those that are good will thrive.
>
> And you'll get more time to learn the actually useful ones... and you'll get to learn from the mistakes others made early on.
>
> Yes, you'll be a bit behind.. but you're apparently already ok with that anyway.
> I really hope there aren't still people in the world who get excited about Java
http://www.goodreads.com/quotes/226225-there-are-only-two-kinds-of-languages-the-ones-people
>“There are only two kinds of languages: the ones people complain about and the ones nobody uses.” ― Bjarne Stroustrup, The C++ Programming Language
Seriously. The stuff we use at work, Linux and SLURM and ten thousand system libraries... yeah, that's all made by professionals paid to solve boring professional problems.
But basically all the FOSS I use outside of work is developed and maintained by hobbyists. Nobody's writing Forbes articles about Betaflight or MyPaint.
They have a salary calculator if anyone is curious: https://about.gitlab.com/2018/03/23/gitlabs-global-compensation-calculator-the-next-iteration/ It's low if you live in a major city, but reasonable everywhere else.
Where do you draw the line though? <code>var j = DELAY / i;</code> looks suspicious right away. If <code>j</code> is the intercept z, and <code>i % 2 - 1</code> is the view plane x, then <code>(i % 2 - 1) * j</code> (or <code>(i % 2) * j - j</code>) is the intercept x, with <code>n / DELAY</code> for a bit of scrolling. The rest is just a normal checkerboard <code>(x&1) != (z&1)</code>. The only difference between the original and this raycasty version is that both values are packed into <code>i</code> so the divide ends up slightly different, but floor it beforehand and it looks like this, which looks pretty similar to me.
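For readers who haven't seen the trick: the perspective-checkerboard idea being argued about boils down to something like this (my own paraphrase in Python; <code>DELAY</code>, <code>checker_row</code> and the constants are made up, not the original one-liner):

```python
DELAY = 64  # arbitrary scale constant standing in for the demo's timing value

def checker_row(y, width, scroll=0.0):
    """Render one scanline of a perspective checkerboard 'floor'.
    y counts scanlines down from the horizon; larger y is closer."""
    z = DELAY / (y + 1)                  # intercept depth: rows near the horizon are far away
    row = ""
    for sx in range(-width // 2, width // 2):
        x = sx * z / width + scroll      # project screen x onto the floor plane
        row += "#" if (int(x) & 1) != (int(z) & 1) else " "
    return row
```

The per-pixel work is just one divide per row, one multiply per pixel, and the <code>(x&1) != (z&1)</code> parity test, which is why it packs into a code-golf one-liner so well.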
Hah, there's also a good one buried in the half life SDK
float m_frictionFraction; // Sorry, couldn't resist this name :)
http://www.koders.com/cpp/fidC1756C2A41DF1C0DFE70937D7FECE46FB0B7FF4D.aspx?s=zombie line 56
Jason Scott just updated with two tweets:
> Mark Pilgrim is alive/annoyed we called the police. Please stand down and give the man privacy and space, and thanks everyone for caring.
> The communication was specifically verified, it was him, and that's that. That was the single hardest decision I've had to make this year.
On this occasion I have to say that, despite the fact that this is Rasmus we're talking about, I still believe there is a bug, although it may just be a documentation bug.
The documentation mentions nothing about undefined behaviour, a warning, notice or error being issued when a non-numeric value is passed to this function. In this case, one would expect, given PHP's type juggling behaviour, that the normal rules about implicit conversion to float should apply, and a non-numeric value should convert to float as 0.0. Any departure from the normal type-juggling rules is normally noted in the documentation with a notice saying that you may encounter "undefined behaviour" if you supply a certain type. There isn't such a warning in the documentation (there may be a PHP warning issued, I don't know - but this is normally mentioned in the documentation too).
The documentation also mentions nothing about the return value being anything other than a formatted string at any time.
If returning NULL for some inputs is the correct behaviour, this should be documented, by swapping this:

> Return Values
> A formatted version of number.

with this:

> Return Values
> A formatted version of number, or NULL if $number was non-numeric.
As with many technical developments - marketing and (bad) luck.
Edit: Because I myself loathe the typical one-liners you get on reddit (thanks to /u/nugatty for providing more insight!), here is a Stack Overflow thread that contains more information.
Bootcamps require a lot of condensed time, dedication and hard work. This doesn't end up working for everyone.
I'd argue that anyone who can or did succeed at a bootcamp could have done just as well teaching themselves with free material in the exact same timeframe. Anyone who wouldn't succeed will save themselves a $10k lesson.
Something like https://www.freecodecamp.org/ is free; it gives a nice, mostly linear path to web development (plus other content) and makes people solve problems that train them for the real world. There are many other options available at the same price point.
Anyone thinking of learning to program should at the very least start by going through content like this before dropping thousands of dollars. Any experienced programmer can tell you we are also professional researchers, we often need to find documentation or solutions to problems on the internet and we do that via the massive amount of free content available on the internet.
>Other than the culture of generous acceptance of pulling in dependancies and ease of use is it really all that different from downloading a library for say C++ and using it?
The problem is not just pulling in dependencies and ease of use. Other languages have that (Maven, NuGet, Composer, etc.). The problem is the "micropackage" culture. Let's look at mail parsing. The package <code>mailparser</code> depends on 8 modules that in turn depend on more than 10 modules. I won't check the whole graph, so let's stop at that level. In the .NET world you can use MimeKit for mail parsing, which depends on BouncyCastle, which in turn doesn't depend on anything that's not included in the standard library/framework. This means that if I use MimeKit I only need to check that both MimeKit and BouncyCastle do not include malicious code, or at least I only have to trust the people behind two packages. If I use <code>mailparser</code>, however, I would need to check well over 10 packages (again, I didn't check the whole graph) or trust the people behind more than 10 packages.
That's just one example. Node.js projects end up with a ridiculous quantity of third-party dependencies. In the best-case scenario you can only hope no module includes malicious code.
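The thing you actually have to audit is the transitive closure of the dependency graph, not the direct dependency list. A toy illustration (all package names here are made up):

```python
# Toy dependency graph: package -> direct dependencies.
deps = {
    "mailparser-ish": ["a", "b", "c", "d", "e", "f", "g", "h"],
    "a": ["i", "j"], "b": ["k"], "c": ["l", "m"], "d": [], "e": ["n"],
    "f": [], "g": ["o"], "h": ["p"], "i": [], "j": [], "k": [], "l": [],
    "m": [], "n": [], "o": [], "p": [],
    "MimeKit-ish": ["BouncyCastle-ish"],
    "BouncyCastle-ish": [],
}

def trust_surface(pkg):
    """Every package you must audit (or trust) if you install pkg,
    including pkg itself: a depth-first walk of the graph."""
    seen, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(deps.get(p, []))
    return seen
```

Here <code>trust_surface("MimeKit-ish")</code> has 2 members while <code>trust_surface("mailparser-ish")</code> has 17, which is the whole argument in two numbers.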
I found that this comment on HN summarizes the major points.
> Case-sensitivity is the easiest thing - you take a bytestring from userspace, you search for it exactly in the filesystem. Difficult to get wrong.
> Case-insensitivity for ASCII is slightly more complex - thanks to the clever people who designed ASCII, you can convert lower-case to upper-case by clearing a single bit. You don't want to always clear that bit, or else you'd get weirdness like "`" being the lowercase form of "@", so there's a couple of corner-cases to check.
> Case-insensitivity for Unicode is a giant mud-ball by comparison. There's no simple bit flip to apply, just a 66KB table of mappings[1] you have to hard-code. And that's not all! Changing the case of a Unicode string can change its length (ß -> SS), sometimes lower -> upper -> lower is not a round-trip conversion (ß -> SS -> ss), and some case-folding rules depend on locale (In Turkish, uppercase LATIN SMALL LETTER I is LATIN CAPITAL LETTER I WITH DOT ABOVE, not LATIN CAPITAL LETTER I like it is in ASCII). Oh, and since Unicode requires that LATIN SMALL LETTER E + COMBINING ACUTE ACCENT should be treated the same way as LATIN SMALL LETTER E WITH ACUTE, you also need to bring in the Unicode normalisation tables too. And keep them up-to-date with each new release of Unicode.
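Most of the quoted pitfalls are easy to demonstrate from a Python prompt:

```python
import unicodedata

# Case changes can change string length...
assert "ß".upper() == "SS"
# ...and upper -> lower does not round-trip back to "ß".
assert "SS".lower() == "ss"
# casefold() exists precisely because lower() isn't enough for matching.
assert "ß".casefold() == "ss"

# Precomposed vs decomposed: same text, different code point counts.
e1 = "é"                                # U+00E9
e2 = unicodedata.normalize("NFD", e1)   # 'e' + U+0301 combining accent
assert e1 != e2 and len(e1) == 1 and len(e2) == 2
assert unicodedata.normalize("NFC", e2) == e1
```

(Python's built-in str methods don't implement the locale-dependent Turkish dotted/dotless-i rules at all; for that you need a full ICU binding.)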
I think the author doesn't understand what hybrid-analysis is doing when you send it a javascript file.
If you look at the analysis details, their virtual machine is starting up a copy of WScript.exe to interpret the javascript code. The analysis is then based on watching the WScript process to see what is accessed. This means that whatever WScript does by default will show up in the report, and as it turns out, every item the author has marked as a concern is just the default behavior when starting up the script engine.
This can be verified by analyzing a dummy javascript file and seeing what it reports. My upload of <code>function(hello, world) { }</code> wasn't large enough to trip the heuristic of finding shellcode, but pretty much every other item was hit. Blindly trusting automated analysis can lead you down incorrect paths.
Also, does it seem likely that the government has coerced Cloudflare into hosting a backdoored version of a javascript library? Furthermore, does it seem likely that they would burn a zero-day exploit against a javascript engine, allowing a sandbox escape strong enough to even run shellcode in the first place?
I have pinpointed the exact moment this bug was introduced.
https://github.com/MrMEEE/bumblebee/commit/6cd6b2485668e8a87485cb34ca8a0a937e73f16d
It looks like this space was deliberately inserted in there. I mean... HOW?
It wasn't in the key highlights, but "compare dirty file with version on disk" is a long-awaited feature for me. No idea how they consistently add so many features every month.
It's all about the features supported by the browser rendering engine. A long time ago, Mozilla was the most advanced, so servers had to detect which features the browser supported, like frames. "Mozilla" in the user agent was a way of signalling that.
Had to confirm my recollection was accurate. Yep. It was all about the rendering engine capabilities.
Then everyone started to spoof that too and it eventually became meaningless.
Short answer https://stackoverflow.com/questions/1114254/why-do-all-browsers-user-agents-start-with-mozilla
Longer history https://www.nczonline.net/blog/2010/01/12/history-of-the-user-agent-string/
AWS rents out GPU based instances:
https://aws.amazon.com/ec2/Elastic-GPUs/
p2.16xlarge -- 16 GPUs in one instance. A SHA-1 computation farm is within anyone's reach, you don't have to be a government or even a large corporation.
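Back-of-the-envelope, with made-up round numbers (the real GPU hash rates and the exact work factor of the SHAttered attack vary; every constant below is an assumption):

```python
# Rough arithmetic only; all constants are illustrative assumptions.
shattered_work = 2 ** 63.1   # approximate SHA-1 computations reported for the SHAttered attack
hashes_per_gpu = 1.5e9       # assumed SHA-1 rate for one mid-2010s data-center GPU
gpus = 16 * 64               # e.g. 64 p2.16xlarge-style instances with 16 GPUs each

seconds = shattered_work / (hashes_per_gpu * gpus)
print(f"~{seconds / 86400:.0f} days of farm time")
```

Under those assumptions the whole attack fits in a couple of months of rented farm time, which is the point: the cost is a budget line, not a nation-state capability.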