Nope, this only applies to "public IPFS infrastructure" such as their public gateways and nodes. So what this means is that if you use the public ipfs.io HTTP gateway, they can blacklist certain CIDs from that gateway if they were required to for legal reasons.
This doesn't affect any other nodes on the network, and even then you can usually get around it by using a different public gateway.
It's the same way that you can blacklist any CIDs you don't want from being served on your local node.
Yes. Just use the `--offline` option when starting the daemon.
https://ipfs.io/docs/commands/#ipfs-daemon
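In practice that's just:

```bash
# Start the daemon without connecting to the public network (local-only)
ipfs daemon --offline
```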
For using private networks with IPFS, see here.
I made this a while ago.
https://ipfstube.erindachtler.me/
It's not really a youtube killer, but it hosts and serves videos.
If someone made a directory/listing site to use in combination with it, that'd be great.
Non-static sites is tricky, because the dynamic behavior has to be done somewhere. You can build an external service of course, but that won't be on IPFS. I know a lot of people might have a lambda function on AWS for some specific purpose, and everything else can be static on IPFS.
One stopgap I've found is using a robust SPA framework in static mode (currently using Vue/Nuxt.js). You can do quite a bit with client-side processing, and to the user it can feel pretty dynamic.
But of course, once any kind of information that you don't want to expose to the user is involved, this is no longer sufficient.
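For what it's worth, the publish step for that kind of static build can be a two-liner. A minimal sketch, assuming a stock Nuxt 2 project that generates into `dist/`:

```bash
# Build the static site, then add the output directory to IPFS
npx nuxt generate
ipfs add -r dist
```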
As long as there is still a node which has the blog, it can't be taken down. It can only be taken down from gateways, which means you can't access it through them anymore, but if you have a local IPFS node that doesn't apply.
You can't find the rules because there aren't really any. The ipfs.io gateway will just stop access to content through their gateway if they get a DMCA notice.
They might be able to take down the domain at most, if you use dnslink, but not actually the site itself.
We're pretty excited about some of the stuff inside there. Private Threads are meant to allow you to share with your family or an entire community. They are private, invite only, and fully decentralized using pubsub over IPFS.
We'll be expanding the beta to a bunch more people on Monday! https://www.producthunt.com/upcoming/textile-photos
You don't need a .torrent file to download a torrent, either. All you need is a magnet URI, which contains a hex-encoded SHA-1 hash of the torrent's "info" section.
magnet:?xt=urn:btih:54dca0477d74d88ed051a9cd62fe5359151e7823
For example, opening the above magnet URI in your torrent client will start downloading Elementary OS.
I don't think the IPFS docs are bad; I've found just about everything I wanted. And the index.html they mentioned in the docs would be: https://ipfs.io/ipfs/bafybeia5f2yk6td7ciroeped2uwfivo333b524t3zmoderfhl3xn7wi7aa/index.html
> Read copyfree.org.
This is not something I was aware of before, and it seems similar to public domain or CC0.
> Alright, I'll inject as much right-libertarian politics as possible into my work from now on
It seems from your head post that you object to "IPFS ... pushing left-wing socialist politics" such as network neutrality.
But I wonder, is NN inherently left-wing? (and what do we mean by left-wing anyway?)
A lot of people in the USA favour NN, and many of them are on the left, that's true. But it's also true that many people think companies such as Google and Twitter should have to adhere to content neutrality, and many of these people are on the right (no doubt because Google, Twitter, and many other websites and content providers tend to censor right-wing views more than left-wing views).
But aren't net neutrality and content neutrality the same principle, just at different levels of the technology stack? I think they are. And if one is supported by the left and the other by the right, isn't that merely the result of a series of historical accidents?
> does this CID work for you guys?
Yes it does.
> making a index.html, is that what I got to look into doing?
Yes, a static website works best with IPFS.
> too complicated for making a shit posting blog?
Not really. Keep in mind that for blogs and websites you'd want them to keep the same address, not change the address with every new post or every time you edit your site. An easy way to deal with this is to use the Galacteek IPFS browser. You run a pyramid in Galacteek, which will sync each new edit of your website while keeping the exact same address. There's also a blog feature in Galacteek: you share the blog's gateway address, and any user can add that address to their regular RSS feed reader client. Galacteek provides many IPFS features and is very user-friendly.
Give up and use ipfs.io, or cloudflare-ipfs.com, it will be better than your server.
Also, stop using `ipfs pin` and start using `ipfs files`. It pins stuff, but keeps you sane.
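If you haven't used it, `ipfs files` is the Mutable File System (MFS): anything reachable from it is protected from garbage collection like a pin, but you can organize and browse it by name. A quick sketch (`photo.jpg` is just a placeholder):

```bash
# Import a file, then reference it from MFS so it survives GC
CID=$(ipfs add -Q photo.jpg)
ipfs files mkdir -p /photos
ipfs files cp "/ipfs/$CID" /photos/photo.jpg
ipfs files ls -l /photos
```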
You can use any website downloader and `ipfs add` the result (see the sketch below). Wget is a popular choice, and wpull is an interesting alternative which can capture JavaScript-rendered content and download YouTube videos.
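A minimal wget-based version might look like this (example.com is a placeholder):

```bash
# Mirror the site to disk with relative links, then add the snapshot
wget --mirror --page-requisites --convert-links https://example.com/
ipfs add -r example.com
```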
One big show stopper right now is that directories don't contain the file type, meaning that for IPFS to figure out if you are looking at a file or a directory, it has to actually download the start of the file. This renders IPFS pretty useless for large collections of files, like, say, Linux package repositories. You can try the Ubuntu mirror here; it's just impossibly slow to browse. I think directory support is one of IPFS's killer features, but with it crippled for the moment, one of the main reasons to use it is gone.
Another problem is that IPFS gives you very little feedback. Every torrent client tells you the number of seeders, how much you have downloaded and such. IPFS over HTTP just gives you spinning loading circles that tell you nothing. There is also no way to do background downloads or pin things to be downloaded later.
For small collections of small files IPFS works great. But for large files or large collections of files it just performs miserably.
IPFS is also lacking a bit in a showcase. There is nothing awesome running on IPFS that you can point to, and those few things running on it don't perform so well. If I browse https://d.tube/ for example, the IPFS thumbnails always take a lot longer to load than plain HTTP. IPFS videos, on the other hand, don't work at all. I don't even know if it's just that slow or if they don't make the videos available on IPFS despite using CIDs in the URL.
You just download it like you would any other file. For example, for this file https://ipfs.io/ipfs/QmQ2r6iMNpky5f1m4cnm3Yqw8VSvjuKpTcK1X7dBR1LkJF/cat.gif either load it in your browser and save it, or use wget or a download manager. It'd be easier to identify what you're having trouble with if you said exactly what file/url you're trying to download.
In Chrome 68 I get this stack trace in the console, and the offer never shows up when I click Create:
```
Uncaught RangeError: Maximum call stack size exceeded.
    at HTMLDivElement.trigger (jquery-2.0.0.js:4773)
    at Object.trigger (jquery-2.0.0.js:4502)
    at HTMLDivElement.<anonymous> (jquery-2.0.0.js:5055)
    at Function.each (jquery-2.0.0.js:590)
    at init.each (jquery-2.0.0.js:237)
    at init.trigger (jquery-2.0.0.js:5054)
    at init.jQuery.fn.(anonymous function) [as focus]
    at HTMLDocument.<anonymous> (bootstrap.js:910)
    at HTMLDocument.dispatch (jquery-2.0.0.js:4618)
    at HTMLDocument.elemData.handle (jquery-2.0.0.js:4302)
```
The idea here is to enable a network of nodes which will help route IPFS or Swarm files while at the same time being environmentally friendly. Help protect our planet and help fight the centralisation of power.
The result of my first experimentation can be found here: https://ipfs.io/ipfs/QmPKcCtDgvV4y3fezPwFGjQV4SVFbiVXCD6H37WTp9FLgT
or
I think you must've misread the blog post. You can use this just like you would a VPN service like NordVPN, ExpressVPN, OpenVPN, etc... That includes connecting machines on the VPN, or routing your traffic through the VPN to anonymously browse the internet. And no it doesn't go through a single common node on I2P.
For example this quote from the blog:
> Before leaving your computer, traffic over the VPN is encrypted and obfuscated via the I2P network, and a random hop is chosen from among the I2P nodes, whether or not they are running a libanonvpn application. That node is random and it will relay your traffic to it’s final destination, and I2P has a high amount of address churn among nodes. These properties make it much more difficult to block. (Read more)
The content in question is the Pro Git book (https://git-scm.com/book/en/v2), blocked by Cloudflare and Infura. There is very little chance that a free book distributed under a Creative Commons non-commercial license would be DMCA-blocked on IPFS gateways by somebody holding the rights.
maybe im missing something but for example:
if you visit this link:
http://library (DOT) lol/main/26141CF37FD3C420EA43427B0ACF54C4
then look at what the IPFS link is: it's an IPFS hash link made available through the ipfs.io gateway. This means that the file is pinned out there by someone, and now we can enjoy it, and it downloads much quicker than the old links on libgen, I read. I personally don't ever download files that may be copyrighted, because I'm a law-abiding citizen who has no fun.
Yes, I read that, again and again. I read all of that at ipfs.io.
It sounds good, but some basic questions are not answered.
How can people find a website or files? If everyone must have a hash to find a site or file, then it is a little bit complicated. Where do you find the hash of a specific site or file? How would people even know that a hash is needed? And so on.
So is there a search function, like Google, to find things on IPFS? I haven't found such a search function.
Not only for webpages, for everything! Your IP is not anonymous. You can use a VPN. I'm not sure if they support Tor yet.
Take a look at galacteek: https://github.com/pinnaculum/galacteek galacteek demo: https://ipfs.io/ipfs/bafybeib3yj72wtimt3h4yt6lxewufwjy4tay6r5qddzfffqr4hbgjydkqe
IPFS hashes your files, so multi-sharing is possible.
I've hosted a few vids myself and it's working fine for me with the dhtclient option set. You could set up an IPFS hash website, like a tracker website. Users that download the files would 'theoretically' multi-share those files too.
I haven't tested, because there's not really anything built for us to monitor this right now (like BitTorrent clients have). If you want to play with IPFS I recommend galacteek: https://github.com/pinnaculum/galacteek
Heres a demo vid about galacteek: https://ipfs.io/ipfs/bafybeib3yj72wtimt3h4yt6lxewufwjy4tay6r5qddzfffqr4hbgjydkqe
Note: there's some built-in documentation inside galacteek, watch the vid!
New edit: your IP address shows up in the DHT, so IPFS is secure but not anonymous. Keep that in mind.
Hey zenground0 here, thanks for posting this.
If you are interested in participating this coming Friday have a look at the annotated paper. It's by far best to look at the linked pdf with adobe acrobat or something else that lets you hover over the thought bubbles and see the notes in line (otherwise it's very annoying to match bubbles with notes). The notes call out good topics of discussion and places for further inquiry and add supplementary explanations for the parts of the paper I found a bit tougher.
Peace
Well there's your problem. Windows actively suppresses and represses its users. Everyone thinks they aren't smart if they are using Windows. Windows doesn't ALLOW you to be smart. It doesn't want you to be smart. Smart people would be able to tell what garbage it is, and that would be bad for business. So it suppresses you to the point where you can't even get in there enough to tell how garbage it is. It is your enemy, not your friend.
Linux is your friend. It invites you into its home and gives you 10 times as much power as Windows for almost no effort, and there is 10 times more power than that left on tap if you do decide to put in effort. I recommend Linux Mint Cinnamon as a great starter distro for someone coming from Windows. https://www.linuxmint.com/
"I regret switching to linux." -nobody ever.
You publish a CID and serve the file. Once you get viewers they also serve the file as long as it lives on their device. Dtube may remove the links to your content but not the content itself.
You can find this info on their FAQ and more generally on https://ipfs.io
Check out Sia:
"Sia, in a nutshell, is the concept for a new way to store data “in the cloud”. It takes advantage of users all over the world with spare hard drive space who wish to rent it out in order to create a huge, decentralized platform for distributed cloud storage. Data is broken into pieces, encrypted, and distributed to many hosts in order to maintain security and redundancy.
Sia aims to compete with (or totally replace) conventional cloud storage platforms like Amazon S3."
After starting from scratch and spending ~5 hours, this is the final outcome: https://ipfs.io/ipfs/QmbYBnP7XEuVriCTr3ddcwWAVU9DRY1k75wmR4JczhE9pf/
GitHub repository can be found here: https://github.com/VictorBjelkholm/self-editing-website
Sure, but any site available via ipfs.io will be lacking proper crawling instructions, so the searchability via Google might still turn out to be worse than that of a traditional website.
For practical purposes I've just been storing the hash, and then dynamically constructing URLs. We're not really even at the "short-term" bit of this upgrade path yet.
If I was going to store a full URL I'd use the public gateway, https://ipfs.io/ipfs/$hash
I've never done this, but you can change the bootstrap nodes to create your own ipfs network.
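Roughly, the CLI side would look like the sketch below (the multiaddr is a made-up placeholder); for a genuinely private network you'd also want a shared swarm key, as I understand it:

```bash
# Wipe the default bootstrap list and point at your own node instead
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/203.0.113.7/tcp/4001/p2p/QmYourPeerIDGoesHere
```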
Ah, you're the developer. Sorry about that, I removed my question because I thought you were just reposting the article.
Here's what I've got so far: /ipfs/QmTE6mvceQNxw8KG3PFuPFfQbaaeJdWATEkpezpR7nH5zJ/
It's quite buggy, but there's not much I can do without the non-minimized JS (and I don't know if you keep that on-server, or what URL that would be).
And earlier, I kind of meant "do you think IPFS will be given rock-star treatment", i.e. treated as a genuine protocol for websites in this project: setting up a DNS record so that /ipns/dtube.video points to the up-to-date site, all functionality that can be in IPFS actually being in IPFS, etc.
I ask this because I honestly think having the front-end for a site like this be really supported in IPFS would be a great win for the protocol. :)
IPFS, like BitTorrent, requires some bandwidth and processing power to serve files to the network, and there are users on slow computers with metered connections. If your blogging platform's client is a downloadable app, you'll have to modify the IPFS client code so users can opt in to only using P2P to download files, and not to serve them.
Also, you can use an HTTP gateway, like the one at https://ipfs.io/ipfs, but if your app depends on one, it will have a point of failure.
ipfs doesn't have a built-in watch functionality but setting up https://ipfs.io/ipfs/QmPVRXVjf5eLf9ZxXJvePazmTbpWyr83wZGiwPGdmsT1u7 in a cron script on whatever interval you want will achieve what you're wanting.
Stuff does get cached without pinning. I don't know how that's managed. I believe that seed nodes ensure that content from their new peers gets cached, so that it shows up faster at <https://ipfs.io/>. That's rather a user-experience feature :) But there are no guarantees about persistence. And I don't believe that there's any tit-for-tat caching formalism.
Anyway, I gather that Ethereum integration will handle stuff like that. So eventually there will be markets for bandwidth, storage, etc. So for example, you could have backup services bidding to pin your stuff. And you know what peers are providing each hash that you've added. There won't be centralized servers per se. Rather, there'll be nodes that provide services.
> Where is the database of all the addresses hosted
IPNS is a part of IPFS, it is kept on a distributed hash table, so it gets distributed and kept alive by everyone using IPFS.
You can read more about it on here: https://ipfs.io/ipfs/QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/example#/ipfs/QmQwAP9vFjbCtKvD8RkJdCvPHqLQjZfW7Mqbbqx18zd8j7/ipns/readme.md
There is also some technical explanations here: https://github.com/ipfs/go-ipfs/issues/1716
> who keeps it in order
Things you publish are stored under your peerId, and only you can publish things with your peerId, so that keeps everything in order.
Hey, I'm the author of the Atari thing, you might want to change the link to the /ipfs/ version. I'm having some problems with the publishing of the /ipns/ hash: https://ipfs.io/ipfs/QmacAqRVhJX9eS7YJX1vY3ifFKF9CduDqPEgaCUSa4x5xb/
I would recommend looking at Syncthing. It would distribute the files (no limit) to your machines, and they would all peer off of each other. Think of it as Dropbox without a middle-man.
Another way to think about it is that the GPL is toxic (and this does not mean it is bad, so hang on). Basically, everything that touches GPL code needs to be GPL. For libraries or things that need to be embedded, this can be problematic. The LGPL, on the other hand, allows people to embed or link libraries without the embedding or linking applications themselves having to be licensed under the LGPL. MIT is even more permissive, or one might say more "open", than the GPL.

Had IPFS been licensed under the GPL, it would have greatly limited its adoption. One project that, in my opinion, fell victim to this is toxcore (https://tox.chat/download.html). A great initiative, but one that can't be touched by most companies, so its adoption has stagnated and is limited to a handful of IM implementations. At one point they wanted to change the license to LGPL or MIT, but a handful of developers stopped it, and one is enough.

Mono is another project that had a lot of components licensed under the GPL or other restrictive licenses, which prevented it from being embedded. Thankfully, after Xamarin was acquired by Microsoft, they changed it to MIT. By doing so they have opened it up to all new kinds of possibilities.

IPFS isn't just an open source application; like toxcore, it is a set of open protocols which can be implemented by anyone, and something like the GPL prevents people from doing that. (Open) protocols should never be GPL-licensed; implementations thereof... well, that is a different debate. Personally I am grateful they decided to go with MIT.
There's Sweet IPFS, but I had some troubles recently and I'm using IPFS Lite now. Not quite a full node, but it works okay.
Why not just use IPFSDroid? I've used it for quite some time with good battery usage. The main downside of it is that it usually lags a bit behind with the IPFS client version that comes with it.
Finally a usable and published crate! IPFS-Pinner uses a Rust client internally, but once the pin endpoint PR is merged, I'll probably switch over to this!
Mullvad is reasonably privacy-minded. And of course if it follows you back to an anonymously paid rented server which is administered via Tor that's a harder nut to crack.
If you're trying to hide from the spooks then IPFS is the wrong tool anyway.
Sounds like an SFTP server. On Windows I use freeSSHd. I'm able to access my desktop computer via my phone even when I'm away. It's possible to give your friends a read-only account, but Windows's user management is more difficult than Linux's and it might be a headache. On Linux one would use sshd.
https://ipfs.io/ipfs/bafkreiggetbbfknvixw2dccmigr3gm3h4idlhrz3e33o5umonrjiqb3q54
I like it! But it seems to be adding a slash to the end of the file that isn't visible in the editor. Is it possible to load a markdown file from IPFS hash?
It doesn't seem you understand IPFS quite yet.
IPFS is a protocol for exchanging data, the same way HTTP is. The difference is that HTTP usually works as a central host serving up content to clients, whereas IPFS is a P2P network where anyone can ask for the same files from anyone else.
If you've ever used BitTorrent software, IPFS is a lot like that. If you have a file on your IPFS node, anyone else can ask for that file from you, or anyone else who has the same file, because the CID is a hash of the content of the file. No node is ever dependent on another, so there's no real feasible way to censor a file, other than everyone on the network just deciding not to host it.
There are, however, two aspects within the world of IPFS that are "centralized". First are the bootstrap nodes. These are nodes that you need to connect to the network, because otherwise, how do you know who to connect to? But after initially setting up IPFS these aren't needed anymore, and it's also possible to just replace these bootstraps with your own or someone else's (so it's possible to create a "private" IPFS network).
The second is the "Public Gateways" (see: the official gateway, cloudflare's gateway ). These are websites that allow you to see content on IPFS, within the browser. Although technically IPFS does not require these gateways, I'm willing to bet that a large amount of traffic on the network is funneled through these highly centralized public gateways, and it's fairly easy for a DMCA notice to have a whole range of CIDs blocked… but the IPFS content itself can't be blocked if you're running your own node, and accessing the content there.
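A tiny demo of the content-addressing point (assuming both adds use the same default settings):

```bash
# Identical bytes always produce the same CID, which is why any
# node holding the file can serve it interchangeably
echo "hello ipfs" > a.txt
cp a.txt b.txt
ipfs add -Q a.txt
ipfs add -Q b.txt   # prints the same CID as a.txt
```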
That works, but it gives way fewer nodes than I expected. For example, when I run this command on `QmNvTjdqEPjZVWCvRWsFJA1vK7TTw1g9JP6we1WBJTRADM`, the root of the IETF RFC archive, I get a single result of `QmNo8fc5DrtGKv1AH5KJmhKC5FsdVrzosqSgbQ1LgnXE1T`, and running it on `bafybeidlkqhddsjrdue7y3dy27pu5d7ydyemcls4z24szlyik3we7vqvam`, the example file from the NFT minting guide, returns zero lines, which definitely can't be right.
Biggest immediate question: Can it interoperate with Gnus and scorefiles?
(The things I most miss from Usenet were literally the tools that let you make it reflect your interests and needs and not what someone else thought you ought to prefer.)
The problem is if your site relies on a centralized API to deliver content, you have the power to censor, and any company that becomes big enough will inevitably censor.
There's no search on Subby. You subscribe to someone's account (an Ethereum address), and you get their content directly from the Ethereum provider of your choice. There's no middle man API, and the Ethereum provider is not a single point of failure because you can change it.
It looks to me like BitChute uses WebTorrent, at least on this video: https://www.bitchute.com/video/F5KvHmk4pF4p/. And it uses the same HTML5 video technique as Subby, which means it's restricted to the same codecs as us. I'm guessing they encode it using their centralized server when you upload your video.
The next thing we're releasing is also support for web torrent streaming, but without any centralized API.
Looking at DTube it does look like it plays from an IPFS gateway (but it's their gateway). We will probably start doing that as well, thanks. We will let the user add a gateway of their choice though so there's no central point of failure.
Quick calculations on the web disagree with you:
Might be related to the issue with the content-length header. I believe if you remove the filename from the path (and leave only the CID: /QmVdFJJBiQkVKFcvXu4WzySbZ7KnCW6uGWLJqZz5FnRWjk/file.mp4) the gateway won't return the video MIME type and should let you simply download the file.
Posting the errors or the ipfs/ipns keys would be helpful.
You can pin the content pointed to by IPNS, e.g. pinning the https://ipfs.io/ website is `ipfs pin add /ipns/ipfs.io`. If that content changes, the new CID will need to be individually pinned. Most people do this by having the "publish" step of updating their website first run an `ipfs pin add ...` command. Check out DNSLink, which relies on traditional DNS but can be more performant in some circumstances.
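A sketch of that, resolving the name first so the pin targets a concrete snapshot (re-run it after each update, since the pin tracks the old CID):

```bash
# Resolve the IPNS name to its current /ipfs/<cid> target, then pin it
ipfs pin add "$(ipfs name resolve /ipns/ipfs.io)"
```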
a. it feels inelegant and counterproductive to rely on a single website for the lifetime validity of my links. if something happens with ipfs.io then the whole thing is dead. would have been better off just hosting it in a centralised place
b. yeah i suspected as much :/
When I import files and pin them, the public links on e.g. ipfs.io start working within a few minutes. Not sure if pinning the file is necessary as well.
https://ipfs.io/ipfs/QmSnfoYUM8s69iMzbevhfu8rYhogHbjL6keeU5iFeWkaiw
Are you still having issues?
> How does running a pyramid in Galacteek work? Sounds like extra work as you need an extra software (galacteek) and all
Easy: watch the demo video.
Yes, you could offer your files over IPFS, but unless your users also run an IPFS node to download them with, the files would only be available at https://ipfs.io/ipfs/hash-of-the-file, which would ruin the distributed delivery. (You could facilitate this by offering some sort of asset management system that helps catalog your assets while letting users pull them from the network.)
IPFS moves data around with a BitTorrent-like protocol, so you might be better off publishing torrent files for each of your assets and letting your users decide if they want to download a torrent manager to get better download speeds and contribute to others requesting the file via torrent.
Based on what you've said, I'd go with the custom asset management system with a built-in IPFS node and a toggle for those on a metered connection. It would reduce your bandwidth requirements greatly and allow for near infinite scalability while providing faster downloads to your users.
You should have a look at Galacteek
If you create an auto-sync pyramid, you can edit your site and the IPFS address will stay the same. Share the IPFS address once and edit the site as you want. Share the IPNS gateway address if you want your site to be available from any regular web browser.
Give Galacteek, the IPFS browser, a try.
I agree. Hashes are great for multi-sharing. It's easy to maintain: the hash doesn't change as long as the file doesn't change, so there's no need to worry about trackers or file storage locations. I've been running MuWire for about a year now, which uses a hash-based file system, and it's working perfectly for multi-sharing.
I recommend galacteek which is a complete ipfs tool: https://github.com/pinnaculum/galacteek
I made a galacteek demo vid: https://ipfs.io/ipfs/bafybeib3yj72wtimt3h4yt6lxewufwjy4tay6r5qddzfffqr4hbgjydkqe
I fail to see how it's not intuitive. The basics could hardly be any easier: you do `ipfs add file.txt` and get a hash back. That's it; others can then use the hash to retrieve the data. And if somebody isn't running IPFS themselves, you can hand them a link via one of the gateways, e.g. something like:
https://ipfs.io/ipfs/QmRoFHdpQ5ERkpKrpPDooKya2vJwbKRFGbPUbXTbTjBnm2
Of all the blockchain/merkle-tree/checksum stuff out there, IPFS is by far the easiest to get started with, as it's nothing more than calculating a checksum of your data and then using that to access it. No monetization, no ownership, just plain old data. With IPNS, IPLD, pubsub and such, things get a little more complicated, but still quite manageable.
There is also a web interface and a desktop app, but I don't really use them, as the command line tool already does everything I need.
Check out the BitSwap part of libp2p. Basically my node is more likely to send data to your node if I know you've been providing data too.
I grabbed a link to a IPFS whitepaper that mentions it here.
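From what I remember of the paper (treat the exact constants as my recollection, not gospel), each node tracks a per-peer debt ratio and decides sends probabilistically:

```
r = bytes_sent / (bytes_received + 1)        # debt ratio toward a peer
P(send | r) = 1 - 1 / (1 + exp(6 - 3r))      # probability of serving a request
```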
"Thus, BitSwap nodes send blocks to their peers optimistically, expecting the debt to be repaid. But leeches (free-loading nodes that never share) must be protected against."
As for finding where the hash is, you don't even have to do that. IPFS provides that you can ask your peers to find it for you. It's part of the system being distributed.
Maybe try this: https://npmjs.org/package/nft.storage? But to actually help, we would at least need to know what kind of error you're getting, as that most likely says what's wrong. The code looks correct.
I even bought 15 more Steem Power with Ethereum, but it didn't help. I found out it's a Steem feature which prevents spamming. Here is more information: https://steemit.com/steemit/@rycharde/why-are-so-many-users-hitting-their-bandwidth-limit-solved-it-what-you-can-do
The reason I hit this limit is most probably because I made a script which follows everyone on Steemit, and I ran it. It made something like 24,000 successful requests to write to the blockchain. So it seems I will need to wait probably 5 days until my limit resets.
Please do not use that blog post for anything other than a lesson in what not to do. That blog post is the most futile, over-engineered and expensive exercise I think I've ever seen regarding IPFS. He creates a Rube Goldberg monster running on Azure, and in the end he puts it behind the Cloudflare gateway. You have to be really generous to call that "decentralized". And all of that to serve 6000 pages in a day; that's one request every ~14 seconds.
You want to "decentralize your blog"? Fine, get any low-power device you can connect to your home (a cheap router that you can install OpenWRT on with nginx, a Raspberry Pi Zero, any junk rootable phone you have in your drawer, anything) and just serve it from your home network. And if by any chance you are expecting any kind of traffic that is not measured in requests/minute, then you can even put something like Cloudflare in front of it.
But let's say that you really want to use IPFS. Ok. Install IPFS on your laptop, generate your website and then use a simple pinning service (such as https://www.eternum.io/) to ensure you have at least one copy online when you turn off your laptop. Or if you have a NAS that you always have on, install it there. Get all that money you'll be saving by not having a 3-node cluster running on Azure and donate to your favorite cause.
Whatever you do, do not take that blog post as a good guide on how to architect anything decentralized. It is just a masturbatory piece written by someone with way too much time on their hands.
You could use a base tag, but if you do something like `<base href="/ipfs/hash">`, you'd need to update this tag every time you update your site. If you used `<base href="/ipns/hash">`, it would be quite difficult to browse a previous version of your site, because all relative links and assets would point to the newest version. There are also some gotchas with the `base` tag noted here. These aren't impossible issues to overcome, but my preference is to be able to easily point to the root of my site using relative links.
That depends on what you mean.
If I 'pin' a file to my IPFS node, then it's easy to search for that later, and you can do things like this bash script to search through the pinned items (a rough sketch follows).
I've never tried to search through things I had simply viewed, if that's what you mean, but it could be (and probably is) possible.
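Something along these lines (an untested sketch; `"$1"` is the filename you're searching for, and it will be slow with many pins):

```bash
# List recursively-pinned roots, then grep their directory listings
for cid in $(ipfs pin ls --type=recursive --quiet); do
  ipfs ls "$cid" 2>/dev/null | grep -i -- "$1" && echo "  ^ found in $cid"
done
```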
I tried this on some files that I hosted on a remote IPFS server and also looked up on Cloudflare and the main ipfs.io/ipfs gateways, so it should be in at least a couple of places, but it just ran for a minute or so and then returned without listing anything. Is there some kind of timeout that needs to be increased?
Time to code the stuff: 10 minutes
Finding a service to pin the 1kb file: 30 minutes.
Encrypt the file here: https://ipfs.io/ipfs/QmUJmmbWoXKReuRcEp1xp4f1aYFifBg1N5C59o1QZvV5hj/encrypt.html BTW, this is hardcoded for PNG files. I have to provide the MIME type, and I'm not sure if the browser will do the right thing with JPG.
Check out https://ipfs.io/ipns/ipfs.doubleplus.io. It's a fully decentralized social network I've been working on where all files are hosted on IPFS. The application itself is also hosted through IPFS.
I'd be happy to hear your feedback.
I figured it out. For me it was port forwarding port 8080 to my machine on my router and changing the .ipfs/config file according to this tutorial.
Now my react app is visible here: https://ipfs.io/ipfs/QmYH5SU5bfgoQMwCLCEN1JGVhYYMaVtAwv4FsJay5EMuXW/
> say 'The Big Lebowski' then at an arbitrary later date add and link specific copies, say, 'The Big Lebowski (1080p)' and 'The Big Lebowski (720p)' and be able to access them through the original abstracted node of the generic 'The Big Lebowski'.
This is what IPNS is for. Use the `ipfs name publish` command followed by the Qm hash of the folder you want to publish to IPNS. You then point people to https://ipfs.io/ipns/<yourpeerID>. Each time you publish a new hash, this address will point to that new hash.
If you want to have multiple hashes published at the same time, you don't want to use your peerID to publish it. Instead, you can combine your peerID with a string (say, 'dog-site') to generate a new unique address to publish with:
```
ipfs key gen --type=rsa --size=2048 dog-site
ipfs add -r ./my-dog-site
ipfs name publish --key=dog-site <hash of dog folder>
```
I made a small gallery with this concept. If it's properly pinned, it can be viewed here
> My understanding is that IPFS does temporarily cache and serve the content that you've downloaded, but it's not permanent unless you pin it.
I'm looking through the whitepaper right now and it seems to me that incentives play a significant role in whether a file gets backed up by your peers or not. Filecoin payments are one direct method of incentivizing others to backup your files, but the whitepaper talks more about what I think is essentially file popularity, which the whitepaper describes on page 4 as files that peers "want." E.g. "BitSwap peers are looking to acquire a set of blocks (want_list), and have another set of blocks to offer in exchange (have_list). ... In the case that a node has nothing that its peers want (or nothing at all), it seeks the pieces its peers want."
Based on that, it seems to me that files that are often requested will be the most incentivized, and thus rarely requested files will not be likely to get backed up very often. Is that your understanding as well? Because if so I don't see how it is likely that content will be automatically cached and served unless it's popular. New content in particular will have 0 requests, so it will rarely get backed up, at least initially. Or am I missing something?
> If I am someone who uploads content, let's say an MP3 fille containing some audio, does that file automatically get saved and backed up on numerous computers across the world?
First, there is no "uploading stuff to IPFS." You can only put it on your own computer, which, assuming you're using an IPFS client, is connected to several peers on the network. Your peers may or may not be incentivized to make a backup of your file. If they are incentivized, they will make backups; if not, they won't. It's not exactly automatic, but there are a variety of ways to incentivize nodes to backup your data. I'm still looking into it myself based on the whitepaper, but right now it looks like a major incentive is a given file's popularity. If a bunch of people want to download your file, your peers are incentivized to increase its availability in order to increase their credit on the network. You can also pay other nodes to backup your file using Filecoin.
> Won't there wind up being an overwhelming amount of junk cluttering and filling up hard drives everywhere?
Users can clear out their node's cache at any time. If they are running low on diskspace, they may clear out files that have lower incentives (e.g. no longer popular, no longer paid for, etc).
Only a directory will provide you with a filename + extension. An object itself does not contain any of that information.
You can selectively download / request a limited amount of data through the API though
/ipns/docs.ipfs.io/reference/api/http/#api-v0-cat
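For instance, something like this against a local daemon (default API port; `<CID>` is a placeholder, and newer daemons want POST):

```bash
# Ask the local HTTP API for just the first 1024 bytes of an object
curl -X POST "http://127.0.0.1:5001/api/v0/cat?arg=<CID>&offset=0&length=1024"
```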
I've plugged Lens into a pre-trained Inceptionv5 tensorflow model to get some basic image recognition:
Indexing an image: https://ipfs.io/ipfs/QmX2ckYuTvP3ECbQmBhwYaqeatcwQiBY6yMDcwwVj2Ndkt
Searching for the image: https://ipfs.io/ipfs/QmdQDUcCKqgk7oJBbvSFrDDxYAsgnKYbkXcPQQVS7PKTxM
Neat idea; however, there is a small issue with this, in that it requires write access to the underlying IPFS gateway in order to store the content. I don't think you'll find many, if any, gateways that allow this, as can be seen from the following error:
https://ipfs.io/ipfs/QmUYYh6AKmQU6wMcHBi3FC1eB67DnXqQGcshMi2MSxRQGG
My site is not so cool... but I have a few videos on how to use IPFS in the quickest ways. This is not for developers or professionals, just easy stuff. I try to focus on using HTML to share content.
https://ipfs.io/ipns/QmfQ1YxZTCRw63DXhsMKGzHqncWaEwJBX6rFxcn1Rf5bmT/
You could use a tool like Syncthing to transfer files from your phone to your home machine, and then have a script that automatically runs "ipfs add" on a particular directory every 5 minutes (a sketch below).
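The cron half could be as simple as this hypothetical crontab entry (paths are placeholders; assumes the IPFS daemon runs as the same user):

```bash
# Crontab entry: re-add the synced folder every 5 minutes
*/5 * * * * ipfs add -rQ "$HOME/Sync" >> "$HOME/ipfs-sync.log" 2>&1
```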
Yeah so I just tried removing those two non-working servers (that have been returning 502 for a number of days) which still doesn't get the example to work for me.
Link is here: https://codepen.io/konsumer/pen/xqNBJX?editors=0010
For an example of decentralized eCommerce, see https://openbazaar.org/. I think it uses IPFS for storage. It also has a distributed escrow system for payment, and likely something like a distributed rating system for users and shops. Note that you still have to run something on your server.
you're doing it right. S3 is extremely expensive.
I solved that problem with a dedicated server on Hetzner that has 8 gigs in mirrored RAID, for 70 euros per month with 1 GBit/s bandwidth. Works fine, but you must have a backup somewhere; Hetzner is not 100% reliable.
https://www.hetzner.com/dedicated-rootserver/matrix-ex
Not sure what service you are using to host on IPFS, but I recommend checking out fleek: https://fleek.co/
All this will be as simple as using something like Vercel (https://vercel.com/) for hosting your site, only it will be built on decentralized technology. Whatever service you use, though, it needs to manage uploading a new version of your site to IPFS and then linking up your domain name (whether via IPNS or not) to that new IPFS hash. If you are only using the IPFS client, this would still be a manual process for you to perform.
You do not have to use IPNS, you can just keep track of the current top level hash inside your application.
The problem comes when you need to scale it out horizontally over multiple machines; in reality you need a very fast distributed key-value store, something like etcd3, used by Kubernetes/CoreOS to scale out to 2000 individual servers.
What do you mean when you say it "can't scale"?
I've been looking through a lot of the tools built for it already. It also appears d.tube and others use it. There are always limitations on hardware, but what do you think the problem points are with IPFS vs traditional servers?
TPB only indexes torrents that were contributed by members. TPB is not going out on the general internet trying to find any & all existing torrents. In theory someone could set up an IPFS (IPNS?) site that only lists content/IPFS sites that are manually contributed there; that would be similar to what TPB is & does.
ipfs-search is an IPFS DHT indexer. If you're looking for a torrent comparison, look at a site like https://btdig.com/about/, which is a torrent DHT indexer. No one is manually contributing anything to the site; the site's own code crawls the DHT network for content to index into its search engine.
bumped into this project recently. it looks more flexible than ipfs and has that "mutability" feature that i'd like to have been built into ipfs.
there's also a browser beta https://beakerbrowser.com/
showing how you can create sites "on the fly" something i wish was easier on ipfs
SOCKS is not a good way to communicate with I2P; using one of I2P's other APIs would be better, which requires new code. You haven't addressed the privacy concerns I raised. I don't understand your problem with this. You could choose not to enable it, and I could choose to, and we could both go our separate ways, mine privately.
At the current pace I would say "never". Things have gotten progressively worse in the last 10 years and there isn't even much of a hint of things turning the other way. And it's not like this is a new problem, Freenet has been working on it for like 20 years, but still didn't make much of a dent in the public consciousness.
Even assuming that IPFS succeeded, that's really just the tip of the iceberg. You also have to replace search, recommendations, comments, moderation, bookmarking, spam filtering and all that, basically the whole web needs to be reinvented in a decentralized way, file hosting is really just the very first step.
For example, if Youtube switched to IPFS but still kicked everybody they didn't like off their platform, nothing would be fixed. On the other hand, if subscriptions were RSS feeds on IPFS instead of being part of Youtube, things would be much improved, as Youtube would no longer be in control of what you watch. But that's not happening, and not just because of Youtube, but because browser developers have fundamentally failed to improve their browsers. Even something as basic as RSS support was never properly integrated into any of the major browsers. Bookmarks haven't received any update in a good 25 years; that's crazy. No surprise that everybody is relying on the subscription mechanism built into those web services.
Brave and LBRY look to be on a good track, but it's really baby steps and far away from an actual solution.
> I understand that the more nodes on the IPFS network, the better it can function.
That's already a wrong assumption. Each node puts a small burden on the entire network and makes everything a bit slower.
You can host a pinning service, where people can actively tell you to host their data. But there is no automatic way to share resources.
One of the darknets has a way to automatically replicate content from the network. I think it was freenet, maybe i2p, definitely not TOR.
There's a thing (a user I think) on ipfs.io that has some zip files. I would like to download them, decompress them, then view the images inside of them. To do this, I need to get the actual zip archive as a file onto my desktop. I've done a ton of searching online and I can't figure out how to do this.
There are a couple of tools for this already actually.
https://github.com/jbenet/ipfs-screencap is one them.
My personal favourite is what I have bound to PrtSc, which is just this:

```bash
echo "https://ipfs.io/ipfs/$(maim -s | ipfs add -Q)" | xsel -b
```
That one-liner takes a screenshot based on your selection, adds the image data to IPFS, and copies the public gateway URL to your clipboard.
Digging through the IPFS documentation, it appears that the filestore can be part of the solution, in that it allows IPFS to keep the data in readable files on disk, while only metadata is stored in ~/.ipfs. Unfortunately, this is still experimental, and only supported for `ipfs add` and not for `ipfs get`.
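If you want to experiment anyway, the relevant knobs exist today (experimental, so subject to change):

```bash
# Enable the experimental filestore, then add without copying the
# data into ~/.ipfs (blocks reference the original file on disk)
ipfs config --json Experimental.FilestoreEnabled true
ipfs add --nocopy big-file.iso
```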
The title might be a little misleading.
You cannot access https://ipfs.io/ aka the project website.
That is how they prevent the majority of people finding out about IPFS.
Connecting to the IPFS network itself, currently, works just fine.
Sooner or later they will invest in stopping you from doing that as well. At first that might mean you'll still be able to connect to other peers in China but not outside of China, and perhaps later on you might not be able to connect to peers outside your own network at all. That is, of course, until it turns out money can be made from it, and then it will be opened up.
Many sites like Google, Twitter, etc. are blocked by the Chinese government, and ipfs.io is one of them, so you will have to get a proxy to access them.
Thanks for the suggestions.
I suppose it took so long for my pinned content to get noticed by the ipfs.io gateway that I always ended up running `ipfs dht provide` and then having it show up eventually, several minutes later. It could just have been a correlation/causation fallacy though. I'll do some tests to see if it makes a difference.