Inlining these solutions into your own libraries makes them less stable, less cross-platform, less future-proof, and more difficult to read. A better solution, if your goal is to reduce the size of your dependencies, would be to depend on small NPM modules for the particular features you need.
For example: domready
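A rough sketch of what that looks like (assuming Browserify or a similar bundler so require() works in the browser):

// instead of inlining a DOMContentLoaded shim, depend on the tiny domready module
var domready = require('domready');
domready(function () {
  // safe to touch the DOM here
  document.body.classList.add('ready');
});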
Yes. I am okay with them. I'm sure there are a lot of competent COBOL and PL/1 programmers who can write good modular and maintainable code in these languages.
You know, a namespace/module doesn't need to have the exact same syntax as in Java.
And btw, I now noticed you wrote that javascript doesn't have namespaces and modules. That is just wrong.
This is a namespace and module in vanilla javascript:
var Namespace = {};
Namespace.myModule = function (argument) { /* body... */ };
And with node.js and npm it's even standardized how to create modules and we can even share them really easily with other developers.
exports.myModule = function(args) { ... }
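For instance (a rough sketch, assuming the file above is saved as my-module.js):

// my-module.js
exports.myModule = function (argument) {
  return 'got: ' + argument;
};

// app.js – any other file (or another developer's project) pulls it in with require()
var lib = require('./my-module');
console.log(lib.myModule('hello')); // prints "got: hello"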
Yes, you can totally use the API with Node.js.
Evernote even released their own module for doing it:
https://npmjs.org/package/evernote
To answer the stated question, Node.js uses V8 (the JavaScript engine in Google Chrome) to execute code, and provides bindings for common systems and web programming functions. So, while a hypothetical browser-based client would be useless because of the Same-Origin Policy alluded to in that excerpt, Node has no such restriction, allowing you to use it to build an API endpoint on your server.
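As a minimal sketch using only Node's built-in http module (the route and response here are made up for illustration):

var http = require('http');
// The browser talks to this endpoint on your own origin; the server itself can
// call any third-party API without Same-Origin restrictions.
http.createServer(function (req, res) {
  if (req.url === '/api/notes') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ notes: [] }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);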
Nice first attempt, looks like you've got some of your external services up and running. My only advice is that in order to be considered somewhat Jarvis-like (I'm assuming you're referring to Jarvis from Iron Man), you'll need to get some natural language recognition built in. NLR is really tough to do, but you can fake it in a sense using AIML. AIML is an XML-like language that chat bots use to give them personality and conversational content. Looks like there is a parser available for node: https://npmjs.org/package/aiml. Good luck!
Oh yeah, nice plug on the xkcd comics ;)
You will need to build some kind of checkpoint into your processing to pause and look for messages indicating that the goal of the calculation has changed.
The process running socket.io will stay responsive since all of the CPU-heavy stuff is in separate worker processes (or threads if you want).
Most of the time though you would be sending the next task (via a queue or message) to the next available worker. What sort of long-running task do you have that eats up the CPU for so much time and then needs to be modified mid-execution?
See http://nodejs.org/api/cluster.html for sending/receiving messages to workers. There is also https://npmjs.org/package/webworker-threads if you have to create a lot of tasks quickly or something. Usually cluster with separate processes works fine. The webworker-threads API seems more convenient to me though if you don't need workers to handle incoming connections.
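A rough sketch of that checkpoint idea with the built-in cluster module (the message names here are made up):

var cluster = require('cluster');
if (cluster.isMaster) {
  var worker = cluster.fork();
  worker.on('message', function (msg) { console.log('progress:', msg); });
  // later, when the goal of the calculation changes:
  setTimeout(function () { worker.send({ type: 'goal-changed', goal: 42 }); }, 5000);
} else {
  var goal = 0;
  process.on('message', function (msg) {
    if (msg.type === 'goal-changed') goal = msg.goal; // picked up at the next checkpoint
  });
  (function step(i) {
    // ...do a CPU-heavy chunk of work toward `goal` here...
    process.send({ iteration: i, currentGoal: goal });
    setImmediate(function () { step(i + 1); }); // yield so pending messages get processed
  })(0);
}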
The new Javascript BikeTag API is reaching a stable state: https://npmjs.org/biketag
It can be used to start building/working with your own BikeTag Games! Please reach out if you have an interest in doing so and we'd be happy to help guide you through the process.
This API is still actively being developed and won't be suitable for production use until at least version 2.0.2.
Thanks! ~BikeTag Team~
I can only think of one time in the relatively recent past that I needed to perform a deep clone (it was for test fixtures, and I ended up using just-clone for it). I think this is mostly because I stay away from deeply nested objects in the first place. There's usually a shallow alternative you can use that ends up being easier to manage (obvious exceptions are things like data requests).
Here's my "guess," and I'm thinking it has more to do with GitHub's own security practices than with NPM's...
GitHub recently started security scanning repositories that contain NPM packages. They send an email out, and it's a pretty darn thorough system, because I've received emails originating from repositories that are 6 and 7 years old.
The way it works, from what I remember after reading through the GitHub notice about this practice, is that it is basically running "npm audit" on each repository.
Whether it's looking for a package.json before it does this, or how it determines what an NPM package is (maybe by using npmjs.org to get a list of packages that use GitHub as their primary repository, instead), I don't know.
As any node developer knows, running npm audit will produce a list of packages that have security risks ranging from low to critical. GitHub has simply automated that practice, and good for them for doing so.
I've yet to be *forced* to fix anything, and if they started forcing me to fix 10 year-old repositories, I'd basically just start deleting repositories. I'm sure each developer will have his/her own opinion on the matter, but I have fixed some that I consider to be code worth keeping. I've used GitHub as a junk drawer at times, so this makes sense for me. An actual organization would obviously want to keep their public-facing repositories up to date with security patches.
One problem I see with this issue, if fixing packages becomes a forced thing, is that updating some packages will break builds quite readily. Not all updated packages are equal, and a good portion will kill the software they are part of.
Markdown-it: markdown with more features, etc. https://npmjs.org/package/markdown-it
Markdown-it custom container: https://npmjs.org/package/markdown-it-container
/ … / format: regular expression, also known as regex or regexp. Flags i (case insensitive) and g (global match) apply.
i18n and l10n: internationalisation and localisation. https://en.wikipedia.org/wiki/i18n
Do you really need dependency injection? If you just want to substitute dependencies for testing, check out rewire.
You may be overengineering things if dependency injection is the best solution to your problem.
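For the testing case, rewire looks roughly like this (the module and dependency names below are hypothetical):

var rewire = require('rewire');
// load the module under test, then swap out one of its private dependencies
var checkout = rewire('./checkout');
checkout.__set__('chargeCard', function (amount, cb) { cb(null, { ok: true }); });
// now exercise checkout's exported functions against the fake chargeCard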
Not using a framework is not the same as not using libraries. Frameworks have various opinions and assumptions about structure, which can give some benefit by setting initial patterns in place, but it's easy to outgrow those assumptions. For example, suppose a framework was designed around request/response but the application needs to be updated for realtime updates. You'll spend a lot of time fighting the framework to make this happen.
With libraries, your code calls the relevant abstractions directly, so you provide the architecture and the libraries can focus on doing one thing well.
Not using a framework does not mean that you'll be reinventing wheels left and right. I've observed far more wheel reinvention in framework-land because the assumptions of frameworks tend to interfere heavily with the ability to reuse third-party abstractions from package ecosystems.
Individual modules that do one thing well are very easy to swap out or throw away when your requirements change. Frameworks, not so much. There are still huge numbers of websites stuck on rails 2 for example.
You should make a directory structure like
.
├── bower_components
│   └── smtg
├── bower.json
├── src
├── dist
├── .git
├── .gitignore
├── node_modules
│   └── smtg
├── package.json
└── README.md
Now in your .gitignore file you should have lines like
# Dependency directory
# Deployed apps should consider commenting this line out:
# see https://npmjs.org/doc/faq.html#Should-I-check-my-node_modules-folder-into-git
node_modules
bower_components
That is, whenever you make a commit, your node_modules and bower_components won't be committed. The same can be achieved in Subversion too.
Note: as mentioned by others, you should put your dependencies in package.json / bower.json, so if others pull your repo they only need to run npm install / bower install to install your module's deps.
Poor instructions aside, it seems straightforward enough. Unzip the xz package into /usr/bin (test with node -v), download the nodejs package to ~/Downloads, run curl -L https://npmjs.org/install.sh | sh (test with npm version), then run npm update npm -g.
It looks like you could create a browserify transform around what you have, so that the bootstrap.js file gets turned into an automatically-inlined expression. This way, to use your plugin, people would just need to add -t yourthing
or put { "browserify": { "transform": "yourthing" } }
into package.json with a corresponding static function call for the source code. This is easy to do with static-module that brfs and bulkify use.
bulkify in particular is worth looking at since it's very similar to what you're doing with globs already.
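If it helps, here's the rough shape of a browserify transform using plain through2 (the placeholder loadTemplates() call being inlined is made up; static-module would let you do this more robustly):

var through = require('through2');
// transform: replace a made-up loadTemplates() call with an inlined value
module.exports = function (file) {
  var source = '';
  return through(function (chunk, enc, next) {
    source += chunk;
    next();
  }, function (done) {
    this.push(source.replace(/loadTemplates\(\)/g, JSON.stringify({ main: '<div></div>' })));
    done();
  });
};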
Have you tried nodeMirror? https://npmjs.org/package/node-mirror It has a handy editor, a terminal, a debugger and even a music player.
I recently won a prize for giving a presentation on it. So maybe it's worth looking at ;)
This page presents something of a false dichotomy. There are more libraries than jquery that do what jquery does. I prefer native DOM combined with more focused, single-purpose libraries and polyfills from npm. The best thing about tiny single-purpose libraries is that you have complete freedom over exactly which pieces of functionality you want libraries for. It's not as much of an all-or-nothing proposition as it is with jquery.
Or there are modules for window.getComputedStyle and Array.isArray that work all the way down to IE6. This discourse shouldn't just be jquery vs no modules. I mostly object to how jquery is a grab-bag of unrelated functionality that should exist as completely separate reusable components.
Depends on what your platform is...
I'm currently working on single page app using AngularJS and I use Grunt and all of that for my dev workflow. There's a plugin that handles refreshing browsers when files change: https://npmjs.org/package/grunt-contrib-watch
I know Microsoft also added something like that for ASP.NET, it showed up in my browser dev console after I installed Visual Studio 2013. But I've not tried to actually see if I could get it to work.
It's likely the PHP, Ruby, Python... people have come up with something similar.
It is a tutorial from Pixi JS engine (Balls).
I had my browser maximised (1920x1200) and I'm way too close to my screen, in the dark, pretty baked.
My mind did a weird thing there; it got totally sucked into the visual flow. I really had to snap out of it. It was scary and awesome, and I think this needs exploring.
So it sort of depends on what your boss actually wants when she tells you to "develop on the live server".
One option is to keep grunt watch running so that every time I update something it resyncs only the compiled version up to the server, leaving out the node_modules and bower_packages folders. I'd also toss in a README describing how a new developer would get up and running.
Another option is syncing node_modules too, but also installing node, yeoman, bower, etc. on the production server. In this case, I wouldn't even bother to set them up locally. I would SSH in to the production server, run all my grunt tasks (including watch) there, and keep a local copy of only the development files.
Is this really news to webdevs? Watch tasks, a live reload server + live reload middleware or browser extension should be in everyone's toolbox by now.
Grunt watch with built-in LR server
Livereload extension for Chrome, or for the sophisticated:
Connect livereload middleware: just put it into your (dev) server's response chain (a simple connect or express server).
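Something like this (a sketch assuming the express and connect-livereload packages with their default livereload port):

var express = require('express');
var livereload = require('connect-livereload');
var app = express();
app.use(livereload({ port: 35729 })); // injects the livereload <script> into HTML responses (dev only!)
app.use(express.static('dist'));
app.listen(3000);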
I wrote timetrickle to respect APIs with a time-based rate limit. It should work in browsers, too, if you use Browserify, but it doesn't make much sense in browsers unless the limit is per session or IP.
It will not simply reject your call. If you make 10 API calls in a row and the API has a rate limit of 1 request per second, it will make 10 calls, 1 call every second.
It's a great tool for writing API wrappers. API calls will just take longer if the API is busy/at the limit.
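This isn't timetrickle's actual API, but the underlying idea is roughly this:

// queue calls and release one per interval instead of rejecting the excess
function makeLimiter(intervalMs) {
  var queue = [];
  var timer = null;
  return function limit(fn) {
    queue.push(fn);
    if (!timer) {
      timer = setInterval(function () {
        var next = queue.shift();
        if (next) next();
        if (!queue.length) { clearInterval(timer); timer = null; }
      }, intervalMs);
    }
  };
}
// 10 calls in a row against a 1-request-per-second API: they go out one per second
var onePerSecond = makeLimiter(1000);
for (var i = 1; i <= 10; i++) {
  (function (n) { onePerSecond(function () { console.log('API call', n); }); })(i);
}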
I'm glad to help. It depends on the language and framework you're working on. There is no go-to site for language agnostic libraries, but a bit of google-fu can go a long way, for any language. Some frameworks/languages have their own site, for example https://npmjs.org/ for Node.js, or the jQuery plugin site, but they do not provide ubiquitous coverage. The best way to approach this is probably to think about what you want to do, check if a framework or library exists that can help you get there by searching Google, Github, asking in Q&A websites like this subreddit or Stackoverflow, etc. And if that fails, go ahead and implement it, and maybe, in the future, share it so other people like you can profit from your work!
This is not the place to get help with the AUR package. Either post a comment on the AUR page or create a Manjaro forum post.
However, I see the issue: it failed to connect to registry.npmjs.org to download dependencies. Either you had an internet connection issue or npmjs.org was having temporary server issues. Try again later.
It does (as of yesterday) in the scorecard feature.
But by then it is already too late, as you've already published it to npmjs.org.
Please delete it from npm and publish it again with the correct name.
Maybe try this: https://npmjs.org/package/nft.storage? But to actually help, we would at least need to know what kind of error you're getting, as that most likely says what's wrong. The code looks correct.
Exactly, due to IP, but this is how the entire Node.js ecosystem works.
If you want to develop something commercially you still need to use a registry, and I would rather use a commercial offering than maintain an OSS version of a registry (though there are not that many players in that space).
For all our open source projects we use npmjs.org
This isn't exactly what you were looking for but there's a package called enmap. It's a data structure that automatically stores what's in a "vanilla" js map to a sqlite db. It's useful for prototyping.
This is an interesting idea; however, with the huge backlog of feature requests on MeshCommander and MeshCentral, it would be quite difficult for me to add this work. From experience with Intel AMT, it also requires quite a bit of maintenance, and each new generation requires software changes. I am certainly not opposed to integrating it into the tools, and I see there are some libraries on NPMJS.org to do iDRAC and iLO.
You should validate the data against some expected shape. I wrote a package called myzod that will even provide type hints to your editor.
Otherwise without writing more explicit validation you shouldn't just try and catch an error. Something like this would be more appropriate:
if (Array.isArray(req.body.classifications) && req.body.classifications[0]) {
  const id = req.body.classifications[0].segment.id;
  // ...
} else {
  // handle it however you like, but most likely this should be a 400 error.
}
Although you should validate the shape of the objects inside your array as well to be safe.
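With myzod the nested check could look roughly like this (the schema shape below is my guess from memory of the API, so double-check against the docs):

const myzod = require('myzod');
// hypothetical schema for the incoming body
const bodySchema = myzod.object({
  classifications: myzod.array(
    myzod.object({ segment: myzod.object({ id: myzod.string() }) })
  ),
});
try {
  const body = bodySchema.parse(req.body); // throws if the shape is wrong
  const id = body.classifications[0].segment.id; // you'd still want to check the array is non-empty
  // ...
} catch (err) {
  res.status(400).send(err.message);
}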
Best of luck!
Wow! Thanks – and don't worry, I'm not taking it personally. We're here to make each other better Node developers.
I was actually under the impression that promises always scheduled their continuations on the event loop and then moved on. But to be fair, and as you said, promises are just a wrapper around callbacks, so it really does make sense that it is implemented the way it is.
That being said, I'll just add that v1 of the package used callbacks. It was before promises were introduced and back when that was the way we did it. The change in v2 was actually the transition to using promises.
Back in v1 it actually used process.setImmediate quite extensively; every validator was placed separately on the event loop. The reason for this was the company I worked with back then. They used another validation library for their Node.js backend applications and had run into trouble with it being too pushy about blocking, so they had refactored the validation library they used to make room for I/O. The guy behind torrent-stream was my colleague back then, and he was the one who did their validation library. That was also the reason why I made isvalid asynchronous.
Now promises came along, and then it was obvious to just transition to that, and out went all the process.setImmediate calls. The async design has, though, shown itself to be a strength of the package. I've been in situations where I needed to validate some object that was referencing another object. Because of its async nature, I was able to make a pre validator that actually went and tried to fetch that referenced object, so when the validation completed, all the references had been resolved – and if the reference was invalid, the data had been rejected.
I don't know if you've read some of the other comments I did, but my goal for v3 is amongst other things to make it work synchronously also. I would like to make it work both ways.
> like a separate log in and register page?
A single page application doesn't mean there's one logical page, it means there is only one HTML page and all transitions between pages are handled with JavaScript. So, you might have one component which represents your home page, and another which represents the login page. When the user clicks the login button, the JavaScript makes the home page component disappear and the login component appear.
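A very small sketch of that idea with React (this assumes React and ReactDOM are loaded globally, and the component names are made up):

const { useState } = React;
const HomePage = ({ onLogin }) =>
  React.createElement('button', { onClick: onLogin }, 'Log in');
const LoginPage = () =>
  React.createElement('h1', null, 'Login form goes here');
// one HTML page; JavaScript decides which "page" is shown
const App = () => {
  const [page, setPage] = useState('home');
  return page === 'home'
    ? React.createElement(HomePage, { onLogin: () => setPage('login') })
    : React.createElement(LoginPage, null);
};
ReactDOM.render(React.createElement(App), document.getElementById('root'));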
> is there a library of ready made component
npm has a lot of react components you can use.
Your question/request was not clear to me. Do you have vector tiles in GeoJSON and want to serve them, or are you looking to produce PBF vector tiles and then serve those?
GDAL 2.3 supports reading and writing of vector tiles in MBTiles and as a folder of TMS PBF tiles, so you can use OGR2OGR to create your vector tiles, or use MapBox Tippecanoe, or MapBox Studio Classic Desktop. Or use the MapBox SaaS solution: upload your data, serve it to a MapBox GL WebGL map, and use the JSON stylesheet to style your data client side.
There is nothing fancy about serving vector tiles, especially if they are a folder of TMS or XYZ tiles: just upload them to a server and access them via URL/folder/Z/X/Y.pbf; even an S3 bucket works. If, however, you want to keep the vector tiles in an MBTiles SQLite database, then you'll need a tile server. There are tons; I use TileServer-PHP and TileServer-GL, both free and open source and made by Klokantech, but there are many others depending on your use cases and needs.
Another option is to serve your GeoJSON and use GeoJSON-VT to chop the GeoJSON into tiles client side and, if so desired, make them binary PBF using VT-PBF. Both of those are NodeJS libraries and you can find them on npmjs.org.
My company offers consulting, development, and data development services, so please feel free to reach out: maps at techmaven.net
Have you considered https://npmjs.org/package/rdb ? Docs at https://github.com/alfateam/rdb/blob/master/docs/docs.md . It uses SQL parameters under the hood if the input is not in the white list. You can also combine it with handwritten SQL filters.
In Java/C#, you leak resources by default, and can't really get to the same object using our symbols without them colliding. > curl https://npmjs.org/install.sh | sh. is potentially dangerous in the hands of routers that might send it to the slaves file on the site in question, can also be very useful in a language discussion you don't understand economy.
> You can install a plugin in Etherpad called "Syntax highlighting".
>
> To install the plugin, simply visit /admin/plugins on your Etherpad deployment, then search for "syntax" and click Install.
>
> For details on the plugin see https://npmjs.org/package/ep_syntaxhighlighting
Have you considered npmjs.org/package/rdb ? I am the author. It was originally closed source at timpex, but we decided to open source it. It is used in production in logistics software, primarily for the offshore industry.
It supports postgres and mySql. Simple, flexible mapper. Transaction with commit and rollback. Persistence ignorance - no need for explicit saving, everything is handled by transaction. Eager or lazy loading. Based on promises.
I am the author of npmjs.org/package/rdb - another ORM that supports transactions and promises. It is very interesting to see how different ORMs evolve and how APIs turn out.
I generally handle validation with joi, which lets me define schemas I can use independently of the client-side JS throughout the entire stack.
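For example, a small sketch of a shared joi schema (field names are made up; this uses the validate() API of recent joi versions):

const Joi = require('joi');
// one schema, reusable anywhere in the stack
const signupSchema = Joi.object({
  email: Joi.string().email().required(),
  password: Joi.string().min(8).required(),
});
const { error, value } = signupSchema.validate({ email: 'a@b.co', password: 'hunter22222' });
if (error) console.error(error.details);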
Because React and immutability are a great fit, I never use traditional models. Instead I treat computed properties as "views" on the same data. This means I transform the raw data directly, either via separate stores or ad hoc in the component.
Alternatively I often use external helper functions, e.g. to convert raw milliseconds to a human-readable time value.
So, no, I don't use any models at all. Just immutable data structures, stores and specialised utility functions.
Disclosure: I'm the author of Fynx and am currently working on a medium-scale React project that will launch publicly next month.
I wouldn't get too invested in a particular framework. Just make sure you have a good grasp of the basic data structures, algorithms, and native browser APIs. If you use a package manager, many of the features of frameworks can be obtained piecemeal from tiny packages for each feature which can help with avoiding the all-in commitment that frameworks tend to entail.
It's kind of a command line tool. Basically, it can perform a variety of tasks. If you run
grunt
in the console, it can do a bunch of different things via plugins. That can be anything from minifying JS and compiling LESS/SASS to concatenating files or even running its own little Apache server.
Some of my favorite plugins are Watch, which watches your project directory for file changes and can run any other set of tasks when a file changes, and Concat, which allows you to take all of your JS files and package them up into one production JS file to reduce HTTP requests.
All you have to do to execute those is a simple command, such as
grunt watch
So, it's a command line tool, but you really don't have to have a lot of command line experience to use it. Does this make any sense? I'd be happy to explain any other aspects to you if you're still not getting it.
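A bare-bones Gruntfile for the watch + concat combination might look something like this (the paths are made up):

module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: { src: ['src/**/*.js'], dest: 'dist/app.js' }
    },
    watch: {
      scripts: {
        files: ['src/**/*.js'],
        tasks: ['concat'] // re-concat whenever a source file changes
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['concat']);
};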
I forked a node.js bitcoin client so we can have our own version, if you prefer :) The differences are primarily aesthetic for now, but I think it will make it more obvious for Node.js devs how to get started!
I appreciate the need to keep a consistent environment. I'd suggest that pinning the current version isn't enough, since each package might depend on others and so on. pip freeze can (for the most part) capture all the packages installed.
Ruby's bundler better captures this distinction. You keep a Gemfile that records the dependencies of the application, and a Gemfile.lock that captures the versions for deployment. Npm equivalently has shrinkwrap.
I'd suggest keeping a similar distinction for your web apps: a requirements.txt for general dependencies and a deployed_requirements.txt as the equivalent lockfile generated by pip freeze.
By screen you mean gnu screen? :)
If so, then I wouldn't really do it this way for production use. I've done it this way for a while with a Meteor application, which is based on Node.js, and then I switched to using forever.
The main reason for this is that tools like forever give you much better control over your applications. You can also more easily configure shared behavior, such as where the logs for each app should go, etc. Not to mention that it will restart the process if it crashes, which is something you'd otherwise have to do manually.
If you have multiple apps they are completely separate, but they have to be running on different ports. They can even share a database, since most databases are designed to handle concurrent connections (which is what you can have even with a single application used by multiple users) :)
I use component so the HTML gets converted to js and rolled up along with the js.
We then improved it by using a grunt component plugin to use handlebars templates and partials.
https://npmjs.org/package/grunt-component-build
https://github.com/kewah/component-builder-handlebars
So we don't lazy load them. We roll them up at build time into the js code. The component build spits out dev and production versions that take a little time configuring, but it's pretty straightforward.
I then split up the application into multiple modules, each one a component, and load them using a standard JS/CSS lazy loader.
Along with encouraging modules that don't share functionality and making modularized code intuitive, it also allows you to quickly share common resources using the package manager's dependency configurations.
Awesome! I based mine on the structure that rapgenius-js used, and I was actually going to learn Node for my project, but I had trouble getting the API to work, and I got burned out trying to work with it and Node, so I decided I'd rather build my own. I might make mine match rapgenius-js more closely and support annotations and dividing songs into verses.
You should be able to use something like the following:
db.getSiblingDB(dbName).collection.findOne({ '_id': '12345' })
Ref: http://docs.mongodb.org/manual/reference/method/db.collection.findOne/#db.collection.findOne
you should check out mongoose as a really easy to use lib for mongo https://npmjs.org/package/mongoose
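A tiny sketch of what mongoose looks like (model and field names are just examples; this uses the promise API of recent mongoose versions):

const mongoose = require('mongoose');
// an example model – adjust the fields to match your collection
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));
mongoose.connect('mongodb://localhost/mydb')
  .then(() => Item.findOne({ name: 'widget' }))
  .then((doc) => {
    console.log(doc);
    return mongoose.disconnect();
  })
  .catch(console.error);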
I've been writing javascript for a couple of years now and have been trying to experiment as much as possible and have found that the node.js community is the place where the most interesting stuff is happening. The goal of the core project is a cross platform evented I/O abstraction but what that simple platform has spawned is https://npmjs.org/ which is going absolutely bonkers with small unix-philosophy style building blocks that really makes for a fun development experience. things like https://github.com/dominictarr/scuttlebutt and https://github.com/substack/pushover are good examples of small and focused yet still interesting projects.
It's valid in the sense that it's possible (here is the Node file system api, and you can find a number of node xml parsers in the npm registry), but it seems a bit clunky. Is this computer really so old that you can't install some sort of storage engine on it?
There's a rather primitive nodejs module called json2officexml to convert a simple JSON object into an XLS XML stream. It is based on xmlbuilder.
At the moment, there's no safe, cross-browser-compatible way to write files to the local storage. All you can do is put the file inside a new window and/or iframe and hope that the client is configured to do the right thing(tm).